Durum Wheat Whole-meal Spaghetti with Tomato Peels: How By-product Particles Size Can Affect Final Quality of Pasta
The goal of the study is to investigate the impact of the incorporation of a by-product (tomato peels) on durum wheat whole-meal spaghetti. To this aim, increasing amounts of tomato peels flour were added to the pasta dough until the overall sensory quality reached its acceptability threshold (peels flour at 15%, TP). Moreover, the effect of different particle sizes of the added tomato peels on the sensory quality of pasta was also evaluated. An increase in particle size caused a decline in pasta sensory quality: samples enriched with fine particles showed high sensory quality, a more acceptable cooking quality and the lowest value of starch digestibility. The utilization of fine particles of tomato peels therefore seems useful to enhance spaghetti quality, allowing fortified pasta with acceptable sensory properties to be obtained.
Introduction
Over the last decades consumer food demands have changed considerably. For this reason, foods today are not intended only to satisfy hunger and provide necessary nutrients, but also to prevent nutrition-related diseases and enhance the physical and mental well-being of consumers [1,2]. In this regard, functional foods offer an outstanding opportunity to improve the quality of products. Pasta, in particular, is an important basic food widely consumed across the world and was among the first foods to be authorized by the FDA (Food and Drug Administration) as a good vehicle for the addition of bioactive compounds, such as antioxidant compounds and dietary fibre [3,4]. However, pasta enriched with bioactive compounds of vegetable origin is still very limited [5,6]. Padalino [7] carried out studies to improve the nutritional properties of pasta by adding artichoke, asparagus, pumpkin, zucchini, tomato, carrot, broccoli, spinach, eggplant and fennel, all very rich in phenolics and carotenoids that can impart health benefits, being able to scavenge reactive oxygen species and protect against degenerative diseases like cancer and cardiovascular disease.
Tomatoes (Lycopersicon esculentum L.) are known as an excellent source of many nutrients and secondary metabolites, such as minerals, vitamins C and E, β-carotene, lycopene, flavonoids, organic acids, phenolics and chlorophyll [8], especially in the peels. Al-Wandawi [9] reported that tomato peels contain high levels of lycopene and β-carotene compared to pulp and seeds. When tomatoes are processed into products like ketchup, salsa and sauces, 10-30% of their weight becomes waste or pomace [10]. In the fruit and vegetable industry, processing generally leads to one third of the product being discarded. This can be costly for the manufacturer and may also have a negative impact on the environment. Many studies have shown that by-products generally have high nutritional value, could be used as food ingredients, gelling and water-binding agents, and could provide a valid solution for pollution problems connected with food processing [11]. To the best of our knowledge, no reports are available on the use of tomato peels-based flour in pasta processing. Hence, the aim of this work was to study the impact of tomato peels addition on the chemical composition, cooking and sensory quality of whole-meal durum wheat spaghetti. Specifically, the study was organized in the following steps. In the first one, the tomato peels flour amount added to the dough was progressively increased until reaching the sensory threshold (15% of flour addition). The next experimental step was aimed at investigating the influence of peels particle size on the texture properties, cooking quality, and sensory and nutritional characteristics of the final enriched pasta.
Raw materials
Durum wheat seeds Pr22 were provided by the C.R.A. (Foggia, Italy). The whole-meal flour was produced by grinding the seeds with a stone mill (Mod MB250 Partisani). Tomato skins of different cultivars (Ulisse, Docet, Ercole, Player, Herdon, Fuzzer and Komolix), obtained in the crop year 2012-2013 from industries in Campania and Apulia (Southern Italy), were used. Tomato skins were dehydrated by exposure to sunlight and then in an oven (40-50°C), and the flour was produced with a hammer mill (16/BV-Beccaria s.r.l., Cuneo).
Spaghetti preparation
Whole-meal flour of durum wheat was mixed with water (30% w/w) in a rotary shaft mixer (Namad, Rome, Italy) at 25°C for 20 minutes to uniformly distribute the water. In the first experimental phase, the tomato peels flour (particle size<500 µm) was added to the wheat flour at various concentrations: 10%, 15%, 20% and 25% (w/w). In a subsequent experimental phase, the sample with 15% addition (15-TP) was prepared using tomato peels flour of different particle sizes: 63 µm (15-TP/FPS), 125 µm (15-TP/MPS) and 250 µm (15-TP/CPS). Spaghetti based only on whole-meal flour were also manufactured and used as the reference sample (CTRL). In all the steps, the dough was extruded with a 60VR extruder (Namad). Subsequently, the pasta was dried in a dryer (SG600; Namad). The process conditions were in accordance with Padalino [12].
Sensory analysis
Dry spaghetti samples were submitted to a panel of fifteen trained tasters (six men and nine women, aged between 28 and 45) in order to evaluate the sensory attributes. The panelists were also trained in sensory vocabulary and identification of particular attributes by evaluating durum wheat commercial spaghetti [13]. They were asked to indicate color and resistance to break of uncooked spaghetti. Elasticity, firmness, bulkiness, adhesiveness, fibrous nature, color, odor and taste were evaluated for cooked spaghetti. To this aim, a nine-point scale, where one corresponded to extremely unpleasant, nine to extremely pleasant and five to the threshold acceptability, was used to quantify each attribute [14]. On the basis of the above-mentioned attributes, panelists were also asked to score the overall quality of the product using the same scale.
Chemical determination
Dry spaghetti samples were ground to a fine flour on a Tecator Cyclotec 1093 laboratory mill (International PBI, Hoganas, Sweden) (1 mm screen, 60 mesh). Moisture and ash content (%) were measured according to the AACC method [15]. Protein content (% N × 5.7) was analyzed using the micro-Kjeldahl method according to the AACC method [15]. Total dietary fiber (TDF), water-soluble dietary fiber (SDF) and water-insoluble dietary fiber (IDF) contents were determined by means of the total dietary fiber kit (Megazyme International Ireland Ltd., Wicklow, Ireland) based on the method of Lee [16]. The available carbohydrates (ACH) were determined according to the method of McCleary [17] as described in the ACH assay kit (Megazyme). All nutritional analyses of the flour and spaghetti samples were made in triplicate.
For the carotenoids determination, spaghetti were homogenized in a blender and a 10 g aliquot was added to 100 ml of solvent mix (hexane:acetone:methanol; 2:1:1; v/v/v) and sonicated continuously for 10 min (Misonix Ultrasonic Liquid Processor, NY, U.S.A.). The extraction was repeated until the sample became colorless. The combined extract was transferred to a separating funnel and 5 ml of distilled water was added to separate polar and nonpolar phases. The nonpolar hexane layer containing the carotenoids was collected and concentrated to dryness in a rotary evaporator (Heidolph, Germany). The residue was dissolved in 10 ml of hexane. Lycopene and β-carotene were determined according to Fish [18] by a spectrophotometric method using an Agilent 8453 UV-Vis spectrophotometer. The concentration of lycopene was calculated at λ=503 nm using the molar extinction coefficient ε=17.2 × 10⁴ M⁻¹cm⁻¹. For β-carotene, the absorbance was measured at λ=450 nm and the quantification was carried out using a standard curve. All the nutritional analyses were made in triplicate and the results were expressed as mean ± standard deviation (SD).
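As a worked illustration of this step, the Beer-Lambert law A = εcl links the measured absorbance to concentration. The minimal sketch below assumes a standard 1 cm cuvette path length and the molar mass of lycopene (C40H56, ≈536.9 g/mol); neither value is stated in the text, only the extinction coefficient is.

```python
# Lycopene concentration from absorbance via Beer-Lambert (A = epsilon * c * l).
# Assumptions (not given in the text): 1 cm cuvette, lycopene molar mass 536.9 g/mol.
EPSILON = 17.2e4     # M^-1 cm^-1, molar extinction coefficient at 503 nm (from the text)
PATH_CM = 1.0        # cm, assumed cuvette path length
MW_LYCOPENE = 536.9  # g/mol (C40H56)

def lycopene_ug_per_ml(absorbance_503: float) -> float:
    """Convert absorbance at 503 nm to lycopene concentration in ug/ml of extract."""
    molar = absorbance_503 / (EPSILON * PATH_CM)  # mol/L
    return molar * MW_LYCOPENE * 1e3              # g/L -> ug/ml

print(lycopene_ug_per_ml(0.50))  # e.g. A = 0.50 -> ~1.56 ug/ml of extract
```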
Cooking quality
The optimal cooking time (OCT) was evaluated according to the AACC approved method [15]. The cooking loss, i.e., the amount of solid substance lost to the cooking water, was determined according to the AACC approved method 66-50. The swelling index and the water absorption of the cooked pasta (grams of water per gram of dry pasta) were determined according to the procedure described by Padalino [12].
Moreover, the spaghetti samples cooked to the OCT were submitted to hardness and adhesiveness analysis by means of a Zwick/Roell model Z010 Texture Analyzer (Zwick Roell Italia S.r.l., Genova, Italia) equipped with a stainless steel cylinder probe (2 cm diameter). The hardness (mean maximum force, N) and adhesiveness (mean negative area, N·mm) were measured according to the procedure described by Padalino [12]. Six measurements were performed for each spaghetti sample.
In vitro digestion
The digestion was carried out as described by Chillo [19] with slight modifications. Briefly, dry spaghetti samples (5 g) were broken into 5.0 × 1.0 cm lengths and weighed accurately. The samples were added to fifty milliliters of boiling water and immediately placed in a covered boiling water bath to cook the spaghetti to the OCT. The spaghetti were tipped into a digestion vessel with 50 ml of distilled water and 5 ml maleate buffer (0.2 M, pH 6.0, containing 0.15 g CaCl2 and 0.1 g sodium azide per liter) in a heating block at 37°C (GFL 1092; GFL Gesellschaft für Labortechnik, Burgwedel, Germany) and allowed to equilibrate for 15 min. Digestion was started by adding 0.1 ml amyloglucosidase (A 7095; Sigma Aldrich, Milan, Italy) and 1 ml of 2 g per 100 g pancreatin (P7545; Sigma Aldrich) in quick succession, and the vessels were stirred at 130 rpm. An amount of 0.5 ml of the digested samples was taken at 0, 20, 60 and 120 min for the released glucose analysis. The sample digested for 120 min was homogenized with an Ultra Turrax (Ika Werke, Staufen, Germany).
Analysis of digested starch
The samples removed during digestion were added to 2.0 ml of ethanol and mixed. After 1 h, the ethanolic sub-samples were centrifuged (2000 g, 2 min) (Biofuge fresco; Heraeus, Hanau, Germany). Finally, the reducing sugar concentration was measured colorimetrically (λ=530 nm) using a Shimadzu UV-Vis spectrophotometer (model 1700; Shimadzu Corporation, Kyoto, Japan). Glucose standards of 10 mg/ml were used. Amyloglucosidase (0.25 ml) (EAMGDF, 1 ml per 100 ml in sodium acetate buffer 0.1 M, pH 5.2; Megazyme International Ireland Ltd., Wicklow, Ireland) was added to 0.05 ml of the supernatant and incubated at 20°C for 10 min. Afterwards, 0.75 ml DNS solution (10% 3,5-dinitrosalicylic acid, 16% NaOH and 30% Na-K tartrate; Sigma Aldrich) was added to the above solution, heated to 100°C for 15 min and allowed to cool at 15°C for 1 h. Then, 4 ml of distilled water (15°C) was added to the solution. The results were plotted as glucose release (mg) per g of sample vs. time. The starch digestibility was calculated as the area under the curve (0-120 min) for the tested products, and expressed as a percentage of the corresponding area for white bread [19].
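To make the last step concrete, the area under the glucose-release curve can be approximated by trapezoidal integration over the four sampling times. The sketch below uses illustrative glucose values, not data from the study.

```python
import numpy as np

def trapezoid_auc(y, t):
    """Trapezoidal area under the curve y(t) sampled at times t."""
    y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

# Glucose release (mg per g of sample) at the four sampling times;
# the numbers are made up for illustration only.
t = [0, 20, 60, 120]                   # min
pasta = [0.0, 18.0, 35.0, 48.0]
white_bread = [0.0, 30.0, 55.0, 70.0]  # reference product

sd = 100.0 * trapezoid_auc(pasta, t) / trapezoid_auc(white_bread, t)
print(f"starch digestibility = {sd:.1f}% of white bread")
```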
Statistical analysis
Experimental data were compared by one-way analysis of variance (ANOVA). Duncan's multiple range test, with the option of homogeneous groups (P<0.05), was carried out to determine significant differences between spaghetti samples. STATISTICA 7.1 for Windows (StatSoft, Inc., Tulsa, OK, USA) was used for this purpose.
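For readers without access to STATISTICA, the same kind of analysis can be reproduced with open tools; a minimal sketch with made-up hardness readings follows. Note that Duncan's multiple range test is not available in scipy/statsmodels, so Tukey's HSD is shown here as a common stand-in for the post hoc pairwise comparisons.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative hardness readings (N) for three samples; values are invented.
ctrl = [6.1, 6.3, 6.0, 6.2, 6.4, 6.1]
fps  = [6.8, 6.9, 6.7, 7.0, 6.8, 6.9]
cps  = [6.4, 6.5, 6.3, 6.6, 6.4, 6.5]

f_stat, p_value = f_oneway(ctrl, fps, cps)  # one-way ANOVA across the groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc grouping (Tukey's HSD as a stand-in for Duncan's test, alpha = 0.05)
data = np.concatenate([ctrl, fps, cps])
labels = ["CTRL"] * 6 + ["15-TP/FPS"] * 6 + ["15-TP/CPS"] * 6
print(pairwise_tukeyhsd(data, labels, alpha=0.05))
```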
Results and Discussion
As reported above, the experimental plan was organized in two subsequent steps, the first aimed at finding the best concentration of tomato peels to be added to the dough and the second at studying the effects of peel particle size on the quality of spaghetti. The results of each step are detailed in separate paragraphs.
Step 1 -Optimization of tomato peels flour addition
The sensory properties of dry spaghetti samples are listed in Table 1. The results highlighted that, in general, the overall quality of spaghetti made with whole-meal flour (CTRL) without any peels addition was higher in comparison to the samples supplemented with tomato peels flour, above all at concentrations higher than 15%. In particular, poor colour and resistance to break were found in the uncooked spaghetti. Regarding the cooked spaghetti, the addition of tomato peels flour influenced pasta elasticity and firmness, due to the high fibre content. Incorporation of vegetable matter rendered a firmer texture to the pasta sample due to the non-starchy nature of vegetables. In addition, the TDF content of tomato peels (mainly insoluble fibers) was found to be higher than that reported for other vegetables [20]. Similar results were observed by Yadav et al. [6], who found an increase in the firmness of pasta (100% durum wheat) enriched with vegetable flour. The low elasticity value was also due to the inclusion of tomato peels fibres, which promoted the formation of discontinuities or cracks inside the pasta strand, weakening its structure. Spaghetti fortified with tomato peels flour resulted less adhesive than the CTRL sample, even if the differences between samples were not significant. Stickiness did not increase in the different pasta samples, most probably because fibre addition is generally recognized to have a positive effect on stickiness [21]. Spaghetti exclusively made with whole-meal wheat or containing amounts of tomato peels flour up to 15% (w/w) appeared with a pleasant brown colour, whereas the spaghetti samples made using more than 15% tomato peels flour presented an intense orange colour, which is different from common pasta and was considered unacceptable. Rekna [5] also observed a reduction in colour intensity of cooked pasta enriched with vegetable flour, probably due to pasta swelling and to the conversion of pigments resulting in an increase in yellowness. Colour is the key factor for assessing the visual quality and market value of food products [22]. Svec [23] also studied the colour impact of non-traditional cereals and reported that up to 10% addition the colour remained acceptable. In addition, spaghetti samples enriched with tomato peels flour had a very intense taste and odour as compared to the CTRL sample. Therefore, on the basis of the sensory acceptability, the spaghetti samples enriched with tomato peels flour at 15% were selected for the subsequent work of pasta optimization.
Step 2 -Effects of tomato peels particle size on spaghetti quality
Sensory analysis: The results of the sensory properties of dry spaghetti are listed in Table 2. As can be inferred, the overall quality of uncooked and cooked spaghetti declined as the particle size increased, thus demonstrating that spaghetti enriched with the fine particles (15-TP/FPS) showed the greatest overall quality. In terms of firmness, no differences among samples were recorded, even though a small increase was found in pasta with fine particles because these particles contain more protein than coarse particles of tomato peels. Padalino [12] also found that the high protein content of pea flour increased pasta hardness due to the low hydration of starch granules. Most probably, during cooking, the protein can bind most of the water molecules, leaving less water to swell the starch phase [24]. From the data reported in Table 2 it is also clear that particle size has a marked effect on the fibrous nature of spaghetti. As compared to the other samples, the 15-TP/FPS showed a low fibrous sensation, due to its lower dietary fibre content (Table 3), and presented low adhesiveness and bulkiness. One possible explanation of the observed results is that with fine particles a more stable network can be realized, able to bind starch granules and vegetable flour and avoid solids loss during cooking [5]. No effects of particle size on pasta odour were noted. Samples 15-TP/FPS and 15-TP/MPS in particular showed a pleasant orange colour. Concerning the taste, the most prized samples were again 15-TP/FPS and 15-TP/MPS, due to the low fibrous sensation during mastication. Table 3 summarizes the protein and dietary fibre content of spaghetti. It is clear that the particle size of the peels had no effect on protein and available carbohydrates content. A certain drop in dietary fibres for samples supplemented with fine particles (12.69%), in comparison to medium and coarse sizes (14.78%), was observed. It is plausible to suggest that flour with coarse particles better retains functional compounds, such as dietary fibres [25]. Table 3 also reports the results of the in vitro starch digestibility. The results suggested that the particle size influenced the starch digestibility (SD): a significant decline of SD can be seen in spaghetti enriched with the coarse particle size. These differences could be due to the fact that the sample 15-TP/CPS had the greatest dietary fibre content, and dietary fibres are known to reduce the glycaemic response of pasta [26]. These results are also in agreement with Padalino [7].
Cooking quality: The optimum cooking time, the cooking loss, the swelling index and the water absorption of the spaghetti samples are presented in Table 5. From this table it emerges that the particle size of the peels flour also affects cooking quality. In fact, the OCT decreased as the particle size increased. Specifically, the OCT for samples supplemented with fine and medium particles was higher (9.00 and 8.30 min, respectively) than for the 15-TP and 15-TP/CPS samples, likely due to the protein matrix-starch granule network, which was affected by the vegetable flour fibres. The physical disruption of the gluten matrix caused by fibre addition and the reduction in gluten content due to tomato peels flour addition may facilitate water penetration into the pasta core. The shorter cooking time of pasta supplemented with vegetable flour could be explained by the faster reconstitution of fine vegetable matter distributed in the pasta matrix [5]. Table 5 also shows a decline in cooking loss for samples 15-TP/FPS and 15-TP/MPS, as compared to the other pasta samples. As reported for the sensory analysis, these results could be mainly due to the better binding of starch granules and vegetable flour with fine particle size in the gluten network [5]. Concerning the swelling index, significant differences were observed with the increase of particle size. Specifically, the 15-TP/CPS sample showed the highest swelling index. The 15-TP/CPS also presented a greater water absorption than the other samples, but without any significant differences. These results could be explained by the reduction of protein resulting from the increase of the average particle size. In fact, the sample with the coarse particles showed a slight drop in protein content, and protein is known to counteract starch granule swelling during cooking, due to competition between protein and starch for water availability [24]. Regarding the adhesiveness, the 15-TP/FPS sample recorded the smallest value (0.51 N), and concerning the hardness, the same sample recorded the highest value, in accordance with sensory quality and cooking loss. This result also suggests that the high protein content of the sample with fine particles of peels increased the hardness of the pasta because of the low hydration of starch granules.
Conclusions
In this work, the impact of tomato peels-based flour addition on the chemical composition, cooking and sensory quality of whole-meal durum wheat spaghetti was studied. In the first experimental step, the amount of tomato peels flour added to the pasta dough was progressively increased until the sensory quality reached the threshold (tomato peels flour concentration 15%). In a second step, the influence of particle size on the sensory quality of pasta with 15% tomato peels flour was investigated. The results indicated that the increase of particle size caused a decline in the overall quality of the samples, even though a slightly better nutritional composition was recorded. Specifically, the spaghetti enriched with the fine particles showed the greatest sensory score, due to the low fibrous, low adhesiveness, low bulkiness and high hardness values, and showed a significant increase of starch digestibility. Therefore, our findings suggest that whole-meal spaghetti with fine particles represents fortified pasta with good sensory properties, very comparable to the control samples, and good cooking quality. This example of pasta fortification can offer a broad spectrum of new products with desired properties and encourage the use of agronomic by-products for further studies and new food applications.
Table 5: Cooking quality of spaghetti enriched with tomato peels flours at different particle sizes studied in step 2. Means in the same column followed by different superscript letters differ significantly (P<0.05).
"year": 2015,
"sha1": "30fc43a54242a6d23064c51f5c7ca08002499904",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/durum-wheat-wholemeal-spaghetti-with-tomato-peels-how-byproduct-particles-size-can-affect-final-quality-of-pasta-2157-7110-1000500.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9fee57c8cedb2d6e6bf9bb7d38b85d43804e7d6c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Planetary polar explorer – the case for a next-generation remote sensing mission to low Mars orbit
We propose the exploration of polar areas on Mars by a next-generation orbiter mission. In particular, we aim at studying the seasonal and regional variations in snow deposits, which – in combination with measurements of temporal variations in rotation and gravity field – will improve models of the global planetary CO2 cycle. Monitoring of polar scarps for rock falls and avalanche events may provide insights into the dynamics of ice sheets. The mapping of the complex layering of polar deposits, believed to contain an important record of climate history, may help us understand the early climate collapse on the planet. Hence, we propose an innovative next-generation exploration mission in polar circular Low Mars Orbit, which will be of interest to scientists and challenging to engineers alike. Schemes will be developed to overcome atmospheric drag forces acting upon the spacecraft by means of an electric propulsion system. Based on the experience of missions of similar type in Earth orbit, we believe that a two-year mission in circular orbit is possible at altitudes as low as 150 km. Such a mission opens new opportunities for novel remote sensing approaches, not requiring excessive telescope equipment or power. We anticipate precision altimetry, powerful radars, high-resolution imaging, and magnetic field mapping.
Why study polar caps on Mars?
Owing to the presence of an atmosphere, ice reservoirs, and widespread evidence for past liquid water on its surface, Mars is the most Earth-like planet. While terrestrial climate change and its impact on the environment is of great concern to scientists and society alike, important lessons may be learned from studies of our neighbor planet.
Analyses of the Martian orbit and rotation reveal that the planet experienced notable variations in solar irradiation. There is ample evidence that Mars once had a dense atmosphere supporting liquid water bodies on the surface, which represented possible habitats for life forms at that time. Today, most of the atmosphere and water have vanished. Understanding this early climate collapse and identifying the current whereabouts of water on Mars are among the foremost issues in Solar System exploration and in research on the origin and evolution of life [32].
Telescope observations as early as those by Giovanni Domenico Cassini in 1666 revealed that Mars has pronounced polar caps (Fig. 1). Data from spacecraft orbiting Mars (starting with Mariner 9; [31]) indicate that these are several kilometers thick and show layered structures, suggesting a complex sedimentation process of ice and dust (Fig. 2). Hence, these "polar layered deposits" (PLD) represent an important record for our understanding of the planet's climate history [3, 45]. Currently, the polar caps play an important role in the Martian climate through an active exchange of volatiles with the atmosphere. Some of the most critical questions are: What are the physical characteristics of the PLDs and what record of climate change is expressed in their stratigraphy?
The appearance of the polar caps varies with season, owing to deposition (i.e., snow cover) and sublimation of large volumes of CO2. This redistribution of CO2 (involving almost one third of the total atmospheric CO2) is associated with a planet-wide atmospheric circulation. The changing mass loads also cause measurable effects on Mars' gravity and rotation. However, a full understanding of the dynamics of this scenario is still lacking.
The recent identification of a subglacial water lake by Mars Express adds to the intriguing nature of the polar caps and to the prospects for exciting new discoveries in the future. Not surprisingly, a dedicated series of Mars Polar Science Conferences has periodically summarized our knowledge of the polar regions of Mars and identified key science questions that are critical to advance beyond the present stage [5-8, 14, 47]. Moreover, the study of the polar caps is mandatory to address fundamental science questions related to the recent and ongoing evolution of Mars volatiles and climate [29, 30]. As a key future element to study the polar caps, essential for a better understanding and characterization of Mars polar areas, a next-generation remote sensing mission will be required.
Mars polar caps
The visual appearance, structure, and dynamics of Mars' North and South polar caps are complex - and they show puzzling differences. Both caps represent unique laboratories, useful for studies of other planetary polar environments, including the terrestrial Arctic and Antarctic.
North and South
The North polar cap covers an area approximately 1100 km in diameter, the Southern cap being significantly smaller (400 km). The caps are more than 2-3 kilometers thick in places (Fig. 2), consisting mostly of water ice with only a small component of dust, as data from the Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS) on Mars Express suggest (Fig. 2). The South polar cap, which is exposed to longer and colder winter seasons than its northern counterpart, is covered by an additional (~10 m) layer of CO2 (dry ice).
The morphologies of both polar caps are complex and far from understood. At the margins, the caps rise gently; in other places, they are characterized by steep scarps. Spiral-shaped troughs on both polar caps ( Fig. 1) are probably related to prevailing wind patterns associated with Mars' rotation and Coriolis forces.
Seasons
Just like Earth's, the rotation axis of Mars is tilted with respect to its orbital plane, and the planet undergoes a seasonal cycle of changing solar irradiation on the northern and southern hemispheres - leaving the polar areas in winter darkness. Owing to the notable eccentricity of Mars' orbit, the orbital speed and the solar distance of the planet vary during the year, adding to the seasonal effect of changing irradiation levels. As a consequence, southern winter seasons are particularly long and cold.
In addition to the permanent ice, the polar caps are covered by thin sheets of CO2 ice in the winter season, which sublimate during the spring. The overall statistics of data from the Mars Orbiting Laser Altimeter (MOLA) revealed a maximum seasonal snow depth of Mars polar deposits of 1.5-2 m. However, the precise areal distribution and local depths of the deposits are unknown. The effect implies a substantial redistribution of atmospheric CO2 during the seasons, involving one third of the total atmospheric mass (3 × 10¹⁵ kg; [43]). The seasonal redistribution of these loads affects the moment of inertia of the planet and causes measurable effects on Mars' gravity (i.e., the gravitational flattening) and rotation (i.e., length-of-day variation). More precise measurements of this interaction between surface and atmosphere may help us improve models of global mass transport and circulation of the atmosphere.
Layered Deposits
Mars has experienced dramatic climate changes in the past. Numerical simulations of the coupling of Mars' spin and orbit reveal that obliquity and eccentricity varied substantially over comparably short time scales of 10 million years, causing changing solar irradiation and climate (Fig. 3, left). The seasonal deposition of ice and dust varied accordingly, leaving a record in the polar areas. Indeed, exposed walls of troughs and scarps in the polar caps show marked layers of deposits of dusty ice (see Fig. 3, right), which can often be traced over hundreds of kilometers. Surface textures and unconformities probably reflect physical properties of the layers (such as dust content or ice grain size). Periods where layers were eroded were followed by times when new layers were deposited. By studying the characteristics of the layers, we may understand how the Martian climate has changed, similar to how scientists on Earth study ice cores from the Arctic and Antarctic.
Subglacial water lakes?
Orosei et al. [35] reported the detection of a subglacial lake beneath the southern polar ice cap (at ~1.5 km depth), spanning 20 km horizontally, as implied from a bright radar echo obtained by the MARSIS radar on Mars Express. While liquid water bodies cannot be sustained at the surface at the given temperature and pressure of the atmosphere, the survival of brines (saline waters) below the surface is conceivable.
Owing to the limited data coverage of MARSIS, it is plausible that subsurface water may be found in other locations as well. Lake Vostok in Antarctica is a well-known terrestrial analog. It is a most intriguing idea that such subglacial lakes, in spite of extreme environmental conditions, represent possible habitats for life forms.
Activity on polar scarps
Most of the polar cap margins are characterized by steep scarps. The High Resolution Imaging Science Experiment (HiRISE) on board the Mars Reconnaissance Orbiter monitored present activity along the scarps. Benefitting from its near-polar orbit, HiRISE has observed and revisited tens-of-kilometers-long scarps, where it revealed evidence for avalanches and block falls occurring [40,41].
The ice cap margins are the first to lose the CO2 snow layers that cover the PLD in winter and sublimate in spring [19,37]. The diurnal temperature variations result in thermal stress-induced fracturing of both the lower part of the PLD and the underlying sandy Basal Unit (BU), leading to block falls [4]. This active mass wasting is thought to cause measurable scarp retreat [20].
The process competes with the gradual (~0.22-0.9 mm/yr) accumulation of dust on PLDs [36,46,48] and with viscous flow of the ice sheets [48], as is suggested by shape and slopes observed in local parts of the PLDs [24]. Consequently, accurate estimates of morphology of the ice caps and the erosion rates on scarps are a key to understanding which process controls the current evolution of scarps and the dynamics of the polar caps.
Mars Gravity and Rotation
The gravity field of Mars has been studied from radio science observations by spacecraft in orbit about the planet, including Mars Global Surveyor, Mars Odyssey, Mars Reconnaissance Orbiter, and Mars Express. Genova et al. [17] recently obtained a gravity field expressed in spherical harmonics to degree and order 120 (Goddard Mars Model 3, GMM-3), the quality of which, however, is not globally uniform. At the south pole the effective degree strength is about 100, while at the north pole it is only as high as 85 (Fig. 4).
The time variability of the gravity field, i.e., periodic variations due to solar and Phobos tides, atmospheric loads, and seasonal mass relocations from pole to pole, is of particular interest. Genova et al. [17] have considered annual, semi-annual, and tri-annual models for the zonal harmonics C20, C30, C40, and C50. However, at both poles of Mars the resolution of the gravity field is clearly not sufficient to perform a detailed analysis of the properties of the ice caps. Gravity solutions by Zuber et al. [52], Konopliv et al. [27] and Genova et al. [17] suggested that lowlands within the north polar layered terrains do not correlate with gravity anomalies in these regions. This lack of correlation might be caused by heterogeneous material in the lower crust or upper mantle, or by mascons of sedimentary and/or volcanic deposits [17]. This enigma can only be solved by a detailed study of the gravity field anomalies at the poles at higher resolution.
The rotation of Mars was precisely measured by radio tracking of landers on the surface. Data from Mars Pathfinder revealed significant variations in the length of day as early as 1997 [15]. Further continuous tracking of landers has revealed nutations related to the Martian seasonal cycle [26]. The InSight mission, currently operating on the Martian surface, is likely to detect nutations associated with the core of Mars [9, 16].
Mars altimetry
MOLA's range measurements have been used to construct a precise topographic map of Mars, an important reference data set up to the present day, with many applications to studies in geophysics, geology and atmospheric circulation. The map showed the full variety of terrain types on the Martian surface (Fig. 5), including, e.g., the low northern hemisphere, the Tharsis province, Valles Marineris, the southern highlands and both polar caps. Measurements of topography also contributed to understanding the pathways of liquid water flowing on early Mars.
MOLA detected seasonal variations of snow deposits at the polar caps of 1.5 m to 2 m maximum during the Martian winter [43,53], at an effective resolution of only 10 cm. MOLA also detected cloud structures in the planet's atmosphere. Clouds were identified that were reflective at the Laser wavelength of 1064 nm, while others were opaque - absorbing the return pulse [33]. Formation and migration of the clouds could be tracked and interpreted with respect to the seasonal cycles on Mars.
Unfortunately, there was little information in the "shape" of returning Laser pulses, known to contain precious information on atmospheric structure, not to mention surface slopes and surface roughness. In the worst case (e.g., in ice-covered polar areas) shots were saturated and yielded shot arrival time only. Occasionally, cloud cover prevented the mapping of ground topography. As the orbit was not perfectly polar (i ≈ 93°), areas at high latitudes could only be mapped by occasional tilting of the spacecraft, thus resulting in coverage gaps, as well as limited knowledge on instrument pointing, coordinates of the ground spot, and height accuracy.
Radar data
Radar sounding is well established in planetary science, and the technique has been successfully applied to Mars. The Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS; [22]) on board ESA's Mars Express mission mapped the thickness of the ice sheet (Fig. 2). MARSIS is complemented by the SHAllow RADar sounder (SHARAD; [42]) on board NASA's Mars Reconnaissance Orbiter (MRO; [54]).
Unfortunately, owing to the different characteristics of MARSIS and SHARAD in terms of frequency and power, signal attenuation and the visibility of subsurface layering differ considerably between the two instruments, the full reasons for this remaining unclear. Thus, the radar data leave significant ambiguities in a scientifically outstanding question. For example, the basal layer is poorly resolved and can be traced with confidence only in the MARSIS data. Also, the reflector identified and interpreted as a subsurface lake [35] is only visible in MARSIS data.
Imaging data
Owing to seasonal changes in illumination and low-Sun conditions, imaging of polar areas is far from trivial. While the polar regions are completely covered by images with scales of tens to hundreds of meters (e.g., [38, 52]), the situation is different for image data sets with higher resolution. A mosaic based on CTX images with a scale of 5 m/pixel covers all of Mars between 88°S and 88°N [10], thus missing some small parts of the polar caps. The availability of very high-resolution data from the HiRISE camera (25-30 cm/px) (see Fig. 6), however, is limited to about 2% of the Martian surface. While the repeat HiRISE imagery for change monitoring is locally very good (e.g., [13]), many other areas expected to show seasonal and interannual changes are poorly covered. Additional images at scales of tens of cm and ultrahigh-resolution images at scales of ~5 cm/px [29] would dramatically enhance our ability to reveal stratigraphic details and thus the climate record stored in the polar layered deposits. An even higher cadence of repeat imaging distributed over the entire Martian year (except when the polar caps are not illuminated in winter) at selected areas with a high potential for changes would enable a better understanding of the surface processes acting in this highly dynamic environment.
Next generation remote sensing mission
We propose a Next-Generation Remote Sensing (ESA M-Class) mission to Mars, involving a spacecraft in circular Low Mars Orbit (LMO) (< 150 km), supported by electric propulsion. Due to the atmospheric drag, spacecraft in circular orbits about the planet (e.g., MRO or the ExoMars Trace Gas Orbiter) typically orbit at altitudes of about 400 km. In contrast, Mars Express moves in a highly eccentric orbit, during which the spacecraft approaches the planet to within 250 km, however only for short phases during the periapsis pass (e.g., [21]). Remote sensing can benefit from such a mission in LMO in terms of better data resolution, while sounding instruments may profit from high signal strength. We anticipate high-resolution imaging (not requiring excessive telescope equipment), as well as radar sounding and Laser altimetry (not requiring excessive power). Magnetic field mapping will enjoy high signal strength and high spatial resolution of the data.
We may carry out a quick back-of-the-envelope analysis of the atmospheric friction to be anticipated and of its compensation. The thin atmosphere of Mars and the resulting drag reduce the altitude of a spacecraft. In order to avoid a de-orbiting of the spacecraft, we use an electric propulsion system which compensates the drag force. We consider a simple scale height model, which yields Mars' atmospheric density for a given spacecraft height h of the type

ρ(h) = ρ0 · exp(−h/H),

with ρ0 being the reference density at the reference surface (0.0001 kg/m³ < ρ0 < 0.001 kg/m³) and H the atmospheric scale height (11.1 km). We may also consider explicit models of the Martian atmosphere, which is known to vary with latitude, time of day and season (e.g., the Mars Climate Database, MCD v5.3 [http://www-mars.lmd.jussieu.fr/mars/access.html]). To determine the drag force acting on the spacecraft, we may use

FD = ½ · ρ(h) · v(h)² · CD · A,

where A is the spacecraft cross section, v(h) is the spacecraft velocity (for LMO: 3.45-3.55 km/s) and CD is the drag coefficient (typical dimensionless number: 2.0).

Fig. 6 Example of HiRISE image of PLDs (cf. also Fig. 3). Credit: NASA Jet Propulsion Laboratory
We assume a spacecraft cross section of 10 m² to determine the resulting drag force to be compensated by electric propulsion. For example, the BepiColombo transfer module is equipped with two thrusters, with a thrust of up to 125 mN each. The DAWN spacecraft is equipped with three engines, each of which delivers a thrust of 90 mN. We determined at which altitude the electric propulsion system would be capable of compensating the atmospheric drag. As a result, we find that stable orbits at an altitude of 150 km can comfortably be achieved and sustained for a mission of 1-2 Mars years.
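The back-of-the-envelope numbers above can be reproduced directly; the sketch below evaluates the scale-height drag model at 150 km altitude and compares the result with the quoted thruster capabilities. It is a rough check only - the reference density ρ0 is taken at the low and high ends of the range given above, and real Martian densities vary strongly with season and dust load.

```python
import math

H = 11.1e3   # m, atmospheric scale height (from the text)
A = 10.0     # m^2, assumed spacecraft cross section
C_D = 2.0    # drag coefficient (typical value from the text)
v = 3.5e3    # m/s, orbital speed in LMO (3.45-3.55 km/s)
h = 150e3    # m, target orbit altitude

def drag_force(rho0: float) -> float:
    """Drag force F_D = 0.5 * rho(h) * v^2 * C_D * A with rho(h) = rho0 * exp(-h/H)."""
    rho = rho0 * math.exp(-h / H)
    return 0.5 * rho * v**2 * C_D * A

for rho0 in (1e-4, 1e-3):  # kg/m^3, bounds of the reference surface density
    print(f"rho0 = {rho0:g} kg/m^3 -> F_D = {drag_force(rho0) * 1e3:.0f} mN")
# Output: roughly 17 mN to 170 mN, i.e. within reach of one or two
# BepiColombo-class (125 mN) or DAWN-class (90 mN) ion thrusters.
```

At the low end of the density range a single thruster of the quoted class suffices; at the high end, or during dust storms, two thrusters or a temporarily higher orbit would be needed, consistent with challenge (3) below.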
The proposed mission relies on important ESA heritage: the GOCE (Gravity field and steady-state Ocean Circulation Explorer) satellite was launched in 2009 to map the Earth's gravity field at unprecedented detail at that time. The satellite's unique arrow shape and fins (Fig. 7) helped keep GOCE stable as it flew through the upper atmosphere at a comparatively low altitude of 255 kilometers. An ion propulsion system compensated for the variable deceleration due to air drag. The spacecraft's primary instrumentation was a highly sensitive gravity gradiometer. After running out of propellant, the satellite made an uncontrolled atmospheric reentry on 11 November 2013. We expect that the proposed mission can be realized within the M-class as defined by ESA. It has to overcome a number of challenges.

Fig. 7 GOCE spacecraft, artist's conception. Credit: ESA - AOES Medialab; GFZ
(1) We must optimize the spacecraft shape to minimize drag. Just like in the case of GOCE, winglets may be used to support attitude control of the craft and possibly even to provide some lift.
(2) We must optimize the orbit to warrant access to solar power, required to support electric propulsion. We may study Sun-synchronous orbits, in particular the terminator orbit. As in this case the Sun position vector is perpendicular to the spacecraft motion vector, the drag force acting upon the solar panels is minimized.
(3) We must consider Mars' atmospheric structure and its temporal/spatial variations. In particular, dust storms may pose a challenge to the mission, as atmospheric density increases dramatically during such events and may require the spacecraft to move temporarily into a higher orbit.
Instruments
Radio science
We aim at a refinement of Mars' gravity field. Here, radio tracking of the spacecraft using state-of-the-art radio science equipment (e.g., an ultra-stable oscillator, USO) is critical. We aim at recovering gravity field parameters of degree and order > 200. Specifically, we aim at measurements of time-varying effects of the field (notably, the variation of zonal harmonics), representing atmospheric circulation and the seasonal mass redistribution. The drag of the atmosphere adds a substantial non-gravitational perturbation to the spacecraft orbit. Hence, this drag must be measured by an accelerometer to allow the correction of this effect. The BepiColombo spacecraft, where radio science is combined with observations of spacecraft motion by an accelerometer, may be a good example. Measurements of the spacecraft drag may be used for studies of Mars' atmospheric density structure.
Next generation laser altimeter and atmospheric lidar
While data from MOLA are now more than 20 years old (data acquisition began in March 1999 and lasted until June 30, 2001), a next-generation Laser altimeter should focus on seasonal changes for understanding the climate cycles on Mars. The measurements should include the precise determination of height variations of the polar caps for at least two Martian years. Volumes of seasonal deposits and rates of sublimation of snow in polar areas shall be determined to improve models of the global CO2 cycle. This shall be accompanied by measurements of wind speed and the tracking of clouds. For studying the Martian atmosphere, cloud heights, the opacity of the atmosphere and the vertical distribution of dust shall be measured, in particular during dust storms.
These diverse scientific objectives require a flexible Laser system, suitable for both precise range measurements to the ground as well as atmospheric sounding, including precise measurements of attenuation and reflectivity in the atmosphere. A Laser altimeter suitable for the described tasks could be based on LOLA (Lunar Orbiting Laser Altimeter; [44]) or the ICESat-2 Laser altimeter ATLAS (Advanced Topographical Laser Altimeter System). We propose a high (order of kHz) shot rate and multiple parallel altimeter tracks from one orbit pass (Fig. 8), using a beam splitter [28]. The typical pulse width is on the order of several ns and the pulse energy is around 1 mJ per shot, thus requiring a single-photon detection scheme. Shot statistics will be used to measure surface albedo, roughness, and atmospheric structure parameters. The system shall operate at multiple Laser wavelengths to support atmospheric sounding. While 1064 nm and 532 nm are typical for ranging tasks and geodetic applications, Laser light at wavelengths of 2.1 μm and 1.6 μm is known to be sensitive to CO2 absorption. However, the appropriate Lasers are yet to be flown in space applications.
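For a sense of the sampling density such a system yields, the along-track shot spacing follows directly from the ground-track speed and the shot rate. A small sketch, assuming a 1 kHz rate and a hypothetical 100 µrad beam divergence (neither value is fixed in the text):

```python
# Along-track sampling of a kHz-class altimeter in a 150 km circular LMO.
v_ground = 3.4e3     # m/s, approximate ground-track speed (orbital speed ~3.5 km/s)
shot_rate = 1.0e3    # Hz, "order of kHz" shot rate
altitude = 150e3     # m, orbit altitude
divergence = 100e-6  # rad, assumed full beam divergence (illustrative only)

spacing = v_ground / shot_rate     # ~3.4 m between consecutive shots
footprint = altitude * divergence  # ~15 m ground footprint diameter
print(f"shot spacing ~{spacing:.1f} m, footprint ~{footprint:.0f} m")
```

With these assumed numbers, consecutive footprints overlap strongly along track, which is what makes the super-resolution gridding described below possible.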
Newly developed onboard noise cancellation techniques improving signal detection and range acquisition can be expected for this kind of development [1]. The main ground data processing tasks include the formation of super-resolution gridded topographic models, benefitting from the multiple overlap of Laser spots, and Laser crosstrack analysis (or self-registration; [49]) to solve for spacecraft orbit corrections and Mars rotation model parameters. The instrument will benefit very much from the low orbit height, resulting in a smaller ground footprint size and higher return signal strength.

Fig. 8 Multi-beam patterns of the LOLA and ATLAS instruments on board LRO and ICESat-2, respectively

Imaging Experiment
Multi-temporal mapping of polar scarps shall be carried out to study the dynamics of ice sheets and to identify time-varying features, such as rock falls and avalanche events. Modern machine-learning and change detection methods are an important key for the analysis of the image data.
Radar
The past decades of planetary exploration relied significantly on space-borne radars to explore the subsurface of moons and comets [18,23,25,39]. Prominent examples are the Lunar Radar Sounder (LRS; [34]) on board the Selenological and Engineering Explorer (SELENE) spacecraft and the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) instrument [25], part of the Rosetta mission.
A radar on a future Mars remote sensing mission will need to be a versatile instrument, as demonstrated by RADAR [11] on board the Cassini mission, which combined the functionality of an imaging radar (e.g., [12]) and an altimeter (e.g., [51]). By using multiple frequencies, as foreseen for the Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON), we will be able to characterize the subsurface mechanical and thermal structure, to probe surface composition [2], and to support altimetry [50]. Such a radar allows deeper insights into the extent and structure of the Martian polar deposits to be obtained. Searching for reflectors from putative subsurface lakes at appropriate frequencies, allowing for better resolution, would be among the primary goals of a future instrument to characterize the abundance of subsurface water on Mars. Orbiter concepts involving lower altitudes would allow for better signal-to-noise ratios, hence permitting higher frequencies at a similar penetration depth, or deeper penetration for continuing the search for Martian aquifers.
Conclusion
For the next-generation exploration of Mars, we propose dedicated studies of the polar caps, which are critical for our understanding of global atmospheric circulation and contain critical information on the climate record of the planet. This may be carried out by a new M-class remote sensing mission in a polar Low Mars Orbit (< 150 km), the spacecraft being equipped with next-generation onboard instruments, e.g., multi-beam Laser altimeters, a powerful sounding radar and high-resolution imaging experiments (Fig. 9). Such a mission may support our understanding of the early climate collapse on Mars, the whereabouts of water, and may even support the understanding of climate change on Earth.
Funding Open Access funding enabled and organized by Projekt DEAL.
Conflict of interest
No funding was received to assist with the preparation of this manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2022,
"sha1": "06417a7fffe203871d83c21a9a5786b3e924b4d3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10686-021-09820-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "16cd626011c98874f96e5b54999088d06a6f8b10",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": []
} |
A Novel Multimodal Biometric Person Authentication System Based on ECG and Iris Data
Security mechanisms such as keys, PINs, and passwords, presently employed in almost all fields, have certain limitations: passwords and PINs can be easily forgotten, and keys can be lost. To overcome such security issues, new biometric features have shown outstanding improvements in authentication systems as a result of significant developments in biological digital signal processing. Currently, multimodal authentication has gained huge attention in biometric systems, which can be either behavioural or physiological. A multimodal biometric system combines data from several biometric modalities, which increases the performance of each biometric system and makes it more resistant to spoof attempts. Apart from the electrocardiogram (ECG) and iris, there are many other biometric traits that can be captured from the human body, including face, fingerprint, gait, keystroke dynamics, voice, DNA, palm vein, and hand geometry. Electrocardiograms (ECG) have recently been employed in unimodal and multimodal biometric recognition systems as a novel biometric technology. When compared to other biometric approaches, ECG has the intrinsic quality of reflecting a person's liveness, making it difficult to fake. Similarly, the iris also plays an important role in biometric authentication. Based on these considerations, we present a multimodal biometric person authentication system. The proposed method includes preprocessing, segmentation, feature extraction, feature fusion, and an ensemble classifier where majority voting is used to obtain the final outcome. The comparative analysis shows an overall performance of 96.55%, 96.2%, 96.2%, 96.5%, and 95.65% in terms of precision, F1-score, sensitivity, specificity, and accuracy.
Overview
Over the last 20 years, researchers have noticed an incredible growth in the usage of digital data in the form of audio, video, text, images, and various other types of raw and unprocessed data. This growth is accelerated by the extensive development and use of Internet-based devices. But the growing popularity of these gadgets brought with it a number of problems, including processing, data security, and storage capacity. Since there is currently a vast amount of data available online, it is crucial to maintain secure access to the data to shield critical information from numerous threats [1].
Biometric authentication is one of the promising techniques to deal with the different types of attacks on the data.
Nevertheless, recent technological growth has made it easier for hackers to create fraud technologies that can mimic physical and traditional security methods, such as PINs and passwords, which are easily falsified, while keys are frequently misplaced. However, biometric identification, which verifies a person's identity through physiological and behavioural traits, has gained popularity in these applications [2]. Unlike legacy methods like passwords and tokens, biometrics cannot be copied, moved, lost, forgotten, altered, or faked [3]. These days, a wide range of biometric technologies is in use across many industries, including face, fingerprint, iris, gait, palm vein, hand geometry, DNA, and keystroke dynamics recognition. But thanks to recent technological developments, fraud systems that can impersonate these anatomical and behavioural characteristics have been created by hackers [4].
Thus, these techniques become vulnerable to various spoofing attacks. Moreover, recent studies have stated that a conventional unimodal (single) biometric authentication system may be unreliable, because the characteristics of a single biometric trait can be contaminated, which may leave security systems exposed to certain threats [5].
It is simple to counterfeit a single biometric attribute. For instance, it is simple to duplicate a fingerprint by employing a finger's fictional ridge pattern. Furthermore, when only one sensor is utilized, a person's face can be faked utilizing neural texture algorithms or deepfake techniques. However, forging many data points will be extremely difficult when multiple sensors are utilized. Additionally, multiple biometrics can offer wide user acceptance, guaranteed correctness, and spoof-proofness [2,6]. Researchers have devised biometric authentication systems using multiple traits to mitigate the security challenges associated with unimodal systems. These systems make use of two or more distinct biometrics, allowing their qualities to be combined to provide a strong attribute set that may be utilized for matching [6]. A multimodal biometric system is more dependable for a range of real-time applications since it can identify people effectively in a variety of situations and is noise-resistant. A variety of combinations are included in the multimodal schemes that are currently in use, including ear and iris recognition [7], ECG, fusion of palm, iris, and finger veins [5], and many more [1]. In this study, we have used iris and ECG data for authentication, since they are useful in determining a person's liveness, while attackers can alter other biometrics like veins, palm prints, and fingerprints. Additionally, each individual has distinct physical traits, which is why each user's ECG signal is unique. In terms of authentication accuracy, recent techniques based on ECG authentication have demonstrated notable performance. Fusion techniques have an impact on the accuracy of multimodal biometric systems. These fusion procedures can be applied at several levels, including the feature, rank, score, sensor, and decision level. Fusion at the other levels can produce results with a notable degree of matching precision, but fusion at the sensor level is heavily reliant on the quality of the acquired data, such as the iris and ECG data.
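As a generic illustration of one of these levels (not the authors' specific method), score-level fusion typically normalizes each matcher's score and combines them with a weighted sum. In the minimal sketch below, the weights, the min-max normalization, and the acceptance threshold are common textbook choices rather than values from this paper.

```python
def min_max_normalize(score: float, lo: float, hi: float) -> float:
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(ecg_score: float, iris_score: float,
                w_ecg: float = 0.5, w_iris: float = 0.5) -> float:
    """Weighted-sum score-level fusion of two normalized matcher scores."""
    return w_ecg * ecg_score + w_iris * iris_score

# Illustrative raw scores and ranges (made up for the example).
s_ecg = min_max_normalize(0.72, lo=0.0, hi=1.0)
s_iris = min_max_normalize(38.0, lo=10.0, hi=60.0)  # e.g. similarity derived from a distance
fused = fuse_scores(s_ecg, s_iris)
decision = "accept" if fused >= 0.5 else "reject"   # threshold is an assumption
print(f"fused score = {fused:.2f} -> {decision}")
```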
Feature extraction is a major step in biometric authentication, and a number of techniques and algorithms have been introduced recently for it. Wang et al. [8] reported that the heart rate has a significant impact on any ECG processing system; thus, the authors focused on short-term ECG signal identification. Specifically, short-term ECG signals contain fewer heart-rate readings, because during verification the ECG signal is acquired only for a short duration. Thus, it becomes a challenging task to match the attributes of ECG signals. In [8], the authors developed a principal component analysis network (PCA-Net) for feature extraction to identify the potential features. Similarly, Huang et al. [9] focused on ECG signal processing for biometric authentication. In this work, the authors used an improved local binary pattern (LBP)-based feature extraction scheme, which helps to extract latent semantic attributes from LBPs. The obtained semantic LBP features are then processed through a collective nonnegative matrix factorization learning process. Moreover, the labels are also incorporated to retain the intra- and inter-subject similarities and make the scheme robust against noise and variations.
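To make the LBP idea concrete for a 1D signal: each ECG sample is compared with its neighbors, the comparison bits form a code, and the histogram of codes over a beat serves as the feature vector. The sketch below shows only the basic 1D-LBP, not the improved, label-aware variant of [9]; the neighbor count is an illustrative choice.

```python
import numpy as np

def lbp_1d(signal: np.ndarray, radius: int = 4) -> np.ndarray:
    """Histogram of basic 1D-LBP codes: each sample is compared with `radius`
    neighbors on each side; the bits (neighbor >= center) form a 2*radius-bit code."""
    n_bits = 2 * radius
    codes = []
    for i in range(radius, len(signal) - radius):
        center = signal[i]
        neighbors = np.concatenate([signal[i - radius:i], signal[i + 1:i + radius + 1]])
        bits = (neighbors >= center).astype(int)
        codes.append(int(np.dot(bits, 2 ** np.arange(n_bits))))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / max(len(codes), 1)  # normalized histogram = feature vector

# Illustrative use on a synthetic "beat"
beat = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.05 * np.random.randn(200)
features = lbp_1d(beat)
print(features.shape)  # (256,) for radius=4 -> 8-bit codes
```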
Similarly, iris feature extraction and matching have been combined with ECG-based biometric authentication. As discussed before, feature fusion plays an important part in improving the accuracy of iris verification; accordingly, Prabu et al. [10] proposed a method to fuse the biometric characteristics of the hand geometry and iris of the users. This hybrid feature fusion scheme used LBP and scale-invariant feature transform (SIFT) features; finally, a learning machine (LM)-based classifier trains the model. Le-Tien et al. [11] adopted a deep learning-based scheme with a modified convolutional neural network and a Softmax classifier. This model considers segmentation, normalization, and histogram equalization. Further, a modified convolutional neural network is developed for feature extraction and recognition: feature extraction is performed by the ResNet50 network, the resulting features are added to a fully connected layer for training, and finally the Softmax layer performs the classification. Currently, multimodal biometric schemes are widely adopted due to their robust performance. Ammour et al. [12] used the combination of face and iris data to obtain the attributes for biometric classification. For iris feature extraction, a 2-dimensional log-Gabor filter, which is a multiresolution filter, is applied; facial features are then extracted by applying singular spectrum analysis (SSA).
In addition, Aleidan et al. [13] suggested an ensemble strategy based on VGG16 pretrained transfer learning (TL) and long short-term memory (LSTM) networks to identify individuals using ECG characteristics; the suggested system attained a 98.7% accuracy rate.
Hezil and Boukrouche proposed a biometric authentication technique using human ears and a palm recognition system, both of which provide relevant evidence required for security. Local binary patterns (LBP), Weber local descriptors, and binarized image analysis were used in this work. The authors used the IIT Delhi and IIT2 ear databases for authentication with a feature-level fusion technique, and the results obtained using these multiple modalities (ear and palm) were significant [14]. In the paper "Cascade Multimodal Biometric System Using Fingerprint and Iris Patterns," the authors used multimodal biometric techniques based on fingerprint and iris patterns. The CASIA-Iris V1 database and the FVC 2000 and 2002 fingerprint databases were used to carry out the work. The Canny edge detection technique is used to detect the edges of the iris image, a Log-Gabor filter is used for iris feature extraction, and a minutiae feature extraction algorithm is used to detect and extract finger features [15]. The results showed a good accuracy of 99.86%, whereas the accuracy was only 99.2% when the iris alone was used and 99.36% when the fingerprint alone was used.

Multimodal schemes in general provide reliable performance and robustness against several kinds of noise and fraudulent technologies. As mentioned before, multimodal biometrics can apply fusion at several stages, such as the sensor, feature, score, rank, and decision levels. In the proposed work, the focus is on feature-level fusion and decision-level fusion to increase the detection accuracy. The methodology carried out in this work is as follows:

(a) Preprocessing phase: a combined image preprocessing and ECG signal preprocessing phase is developed to increase the data quality.

(b) Segmentation and feature extraction: an efficient approach for ECG signal segmentation is developed, in which the peaks and intervals of the ECG signals are detected, and various features of the iris image are extracted.

(c) Feature fusion module: in this stage, we present a feature fusion approach where ECG and iris features are combined and redundant features are discarded.

(d) Finally, a decision-level fusion method along with a score-level fusion model is presented to obtain the similarity between the ECG and iris inputs.

The remainder of the paper is organized as follows: Section 2 provides a brief literature review of existing multimodal biometric authentication techniques, Section 3 explains the proposed model, Section 4 presents the outcome of the proposed approach and a comparative analysis with existing techniques, and finally, Section 5 gives the concluding remarks and future directions of the research.
Literature Review
A large number of works have been carried out in the area of biometric authentication, and recent advances have reported the superior performance of multimodal schemes over conventional unimodal ones. In this section, we describe recent methods for multimodal authentication systems.
Multimodal biometrics is a grouping of numerous methods using a number of sensors. Regouid et al. [7] presented a biometric system with multiple traits, where a number of inputs such as ECG, iris, and ear data are combined to collect the biometric samples of the users. The preprocessing phase consists of segmentation and normalization of the ECG, iris, and ear signals. The feature extraction phase consists of a combination of 1D, shifted-1D, and 1D-multiresolution local binary patterns (LBP); the input signals are first transformed into 1D signals. The obtained features are classified using K-nearest neighbour and radial basis function (RBF) classifiers. In [5], the authors also described the challenges of unimodal systems, such as low accuracy and unreliability in preventing attacks, and introduced a fusion model based on a weighted voting strategy. In [23], Mehraj and Mir used histogram of oriented gradients features to obtain the handcrafted features; this model further uses transfer learning (TL) through AlexNet, ResNet, DenseNet, and Inception models. Zhang et al. [24] used face and voice data for biometric authentication: improved local binary pattern feature extraction is applied to the image data, and voice activity detection is used for the audio signals; an adaptive fusion scheme then selects salient attributes from the face and voice data and fuses them to obtain robust recognition. Su et al. [25] proposed a biometric technique using finger vein and ECG data, based on feature fusion obtained by applying discriminant correlation analysis (DCA). Jardine et al. [26] used face and ear biometrics for authentication. Rather than using the data in raw form, they apply a steerable pyramid transform to decompose the data into scales and orientations, from which the texture features of the images are obtained. These descriptors (statistical features, directional patterns, and local phase quantization) are applied to generate the most discriminative texture features, and the fused features are classified by applying a K-nearest neighbour scheme. Mohan and Ganesan [27] presented multimodal biometric classification using electromyography (EMG), finger vein, and hand vein data for person recognition. The fusion scheme uses an optimization strategy combining elephant herding and deer hunting optimization, with the fusion performed on the basis of weight factors.
Vyas et al. [28] presented a coding-based method known as bit-transition code, which is applied to the combination of the palm print and iris modalities. A Gabor filter is used for preprocessing, generating symmetric and asymmetric parts, which are then encoded. Further, score-level fusion is applied to the individual palm and iris scores to produce the final decision. Chanukya and Thivakaran [29] presented a preprocessing, feature extraction, and classification strategy for multimodal biometrics: the preprocessing step includes median filtering; shape and texture features are then extracted from the enhanced image; finally, an optimal neural network model is applied, whose weights are selected with the help of the firefly optimization algorithm. Hammad and Wang [30] used a method based on convolutional neural networks (CNN) to authenticate people using fingerprint and ECG data. The convolutional neural networks obtain robust features from both the ECG and the fingerprint and generate the pattern; a Q-Gaussian multisupport vector machine (QG-MSVM) classifier is then applied for authentication.
Previous research [31] has demonstrated the efficacy of the discrete wavelet transform (DWT) in characterizing variations in electrocardiogram (ECG) patterns across different individuals. That study leverages the DWT to decompose ECG signals into multiple scales, each representing a specific level of signal coarseness; these scales serve as an initial feature set for subsequent feature selection. By selectively choosing the scales associated with the QRST complex, it becomes possible to retain identity-related information while minimizing interference effects to the greatest extent achievable. However, this approach lacks a robust feature extraction module and relies on limited attributes. Dar et al. [32] presented an ECG-based authentication approach which comprises several stages, including ECG preprocessing, feature extraction, feature reduction, and classifier performance evaluation. The ECG segmentation involves detecting the R-peaks, but the system is not reliant on fiducial detection and avoids excessive computational complexity. Feature extraction combines the discrete wavelet transform (DWT) of the cardiac cycle with heart rate variability (HRV)-based features; to decrease the dimensionality of the feature set, a best-first search method is employed; the classification stage uses random forests. In [33], the authors introduced the second order difference plot (SODP), a nonlinear analysis technique for time-series data, which enables the identification of features by statistically analysing the distribution of waves. In that study, SODP features were extracted using various quantification methods for human identification based on ECG signals, and a novel quantification approach, called logarithmic grid analysis, was introduced specifically for ECG-based human identification using the SODP. In [34], the authors introduced a temporal-frequency autoencoding approach for authentication. The approach begins by transforming the raw data into the wavelet domain, enabling a multilevel time-frequency representation. To remove noise components while preserving identity-related information, a feature selection method built on prior knowledge is applied to the transformed data. Following that, a stacked sparse autoencoder is employed to learn intrinsic discriminative features from the selected data, and the identification task is finally accomplished using a Softmax classifier.
The paper [35] demonstrated the use of IoT in biometric authentication with safety and security, using a fingerprint scanner to acquire the user's fingerprint details, with attendance saved to the cloud automatically. In [19], the authors proposed a biometric authentication method based on deep learning, using a convolutional neural network (CNN) and a long-term memory network, with a customized function used to evaluate the authenticity of the person. ECG beats are identified based on the R-waves and then fed into the CNN and long-term memory network to train on the data set, after which a final decision is taken.
The Proposed Methodology
Unimodal systems face several challenges in authenticating users; thus, multimodal schemes have been introduced recently. However, achieving noteworthy accuracy remains a challenging task. Here, we introduce a new method for multimodal authentication by developing an innovative feature extraction technique for ECG and iris data. Further, a feature fusion and classification model is presented to learn the patterns and classify them according to their labels.
Iris Feature Extraction.
The proposed work uses the MMU iris dataset. When an iris needs to be captured in real time, lighting conditions will not disturb the accuracy, since the iris data is captured in a closed chamber with sufficient lighting. In the first phase, the iris data is segmented and its features are extracted. Once the iris is segmented, we apply a combination of Gabor filtering and scale-invariant feature transform features to obtain robust features from the segmented iris. In order to segment the iris, we first approximate the centre point of the iris; the inner and outer iris boundaries are then identified to segment the desired region, as shown in Figure 1.
Generally, iris data-capturing devices extract a square region, and the approach here estimates the biggest dark object in the extracted region. To obtain the segmentation, we initialize from the centre of the image, denoted $P(X_I, Y_I)$. The iterative search is performed in the vertical and horizontal directions and considers a 2 × 2 region around the image centre; the resulting image centre point is denoted $P(X_0, Y_0)$. To find the boundaries of the iris, we focus on finding the radius. To this end, we present a circular edge detector module, built with convolution operations, which inspects the entire image space over the parameters $(x_0, y_0, r)$ and can be represented as

$$\max_{(r,\, x_0,\, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\, x_0,\, y_0} \frac{I(x, y)}{2\pi r}\, ds \right|$$

Using this approach, we get the inner and outer radii of the iris image, based on which we crop and segment the iris. A sample outcome is depicted in Figure 3.
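To make the search concrete, the following is a minimal Python sketch of such a circular boundary detector. This is an illustration, not the paper's implementation: the helper name `circle_mean`, the radius range, the number of boundary samples, and the Gaussian width are all illustrative assumptions, and the candidate centre is held fixed for brevity.

```python
import numpy as np

def circle_mean(img, x0, y0, r, n=64):
    """Mean intensity along the circle of radius r centred at (x0, y0)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def find_iris_boundary(img, centre, r_min=20, r_max=120, sigma=2.0):
    """Return the radius maximising the Gaussian-blurred radial derivative
    of the circular line integral, searched around an approximate centre."""
    x0, y0 = centre
    radii = np.arange(r_min, r_max)
    means = np.array([circle_mean(img, x0, y0, r) for r in radii])
    deriv = np.abs(np.diff(means))               # |d/dr| of the line integral
    kernel = np.exp(-np.arange(-3, 4) ** 2 / (2 * sigma ** 2))
    smoothed = np.convolve(deriv, kernel / kernel.sum(), mode="same")  # G_sigma(r) *
    return radii[int(np.argmax(smoothed))]
```

Running this twice, with an inner and an outer radius range, would give the pupil and limbus radii used to crop the iris; in practice the operator is also evaluated over a small grid of candidate centres around $P(X_0, Y_0)$.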
The obtained segmented image is then used for feature extraction, for which we apply combined Gabor and scale-invariant feature transform feature extraction. The transfer function of the one-dimensional Gabor filter, whose real parts are shown in Figure 4, is parameterized by the central frequency $\omega_0$. To improve the robustness of the proposed method, we transform the polar coordinates to Cartesian coordinates; the resulting frequency-domain form is parameterized by the central frequency $f_0$, the bandwidth controllers $\sigma_u$ and $\sigma_v$ for $u_1$ and $v_1$, respectively, and the filter orientation $\theta$. Further, we use the odd-symmetric component of the filter, which outperforms the even-symmetric one. We then apply the scale-invariant feature transform (SIFT) scheme to obtain scale-invariant attributes, which keeps the features robust irrespective of the image acquisition conditions. Figure 5 depicts the outcome of scale-invariant feature transform detection.
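The Gabor/SIFT combination can be sketched with OpenCV as below. This is an illustrative sketch rather than the paper's code: the kernel size, frequencies, orientations, and the mean/variance pooling of the filter responses are assumptions, and `cv2.SIFT_create` requires opencv-python 4.4 or later.

```python
import cv2
import numpy as np

def gabor_features(iris, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the segmented iris with a small bank of Gabor kernels and keep
    the mean/variance of each response as a compact texture descriptor."""
    feats = []
    for theta in thetas:
        kern = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(iris, cv2.CV_32F, kern)
        feats.extend([resp.mean(), resp.var()])
    return np.array(feats)

def sift_features(iris):
    """Detect scale/rotation-invariant key points and their descriptors."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(iris, None)
    return keypoints, descriptors

# Usage sketch:
# iris = cv2.imread("segmented_iris.png", cv2.IMREAD_GRAYSCALE)
# gabor_vec = gabor_features(iris)
# kps, desc = sift_features(iris)
```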
A vast quantity of useful descriptive image features is derived using the SIFT detector. These characteristics are unaffected by scale, rotation, or lighting, and the key points are usually found in high-contrast places, often on the margins of objects. This helps to generate robust features from the image irrespective of its rotation, orientation, and scale. The SIFT feature extraction process is as follows:

(i) In the first step, a scale-space extrema extraction process is applied, in which interest points of the iris image are extracted across varied scales with scale and rotation invariance; this is achieved with the help of the difference-of-Gaussian function.

(ii) In the second stage, key point localization, an important part of SIFT, is performed; this produces the position and scale of the resulting interest points.

(iii) The next stage extracts the image gradients and performs orientation assignment based on these points.

(iv) After that, the feature description is carried out: the image gradients with local patterns are measured in the neighbourhood of each key point at the given scale.

3.2. ECG Feature Extraction. The proposed multimodal approach considers iris and ECG data for authentication. The previous section described the extraction of the important iris features; in this section, we consider the feature extraction for the ECG signals. The proposed approach uses wavelet transform-based feature extraction along with principal component analysis; moreover, peaks and intervals are detected. Figure 6 shows the R, S, and T-wave peak detection. Further, we apply the wavelet transform to the input ECG signal to obtain the detail coefficients. The continuous wavelet transform is expressed as

$$W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt$$

Here, $x(t)$ denotes the original input signal, $\psi(t)$ is the wavelet basis function, $a$ is the dilation, and $b$ is the translation factor. With the help of this, we obtain the wavelet coefficients of the signal. In this work, we used the Symlet 8 wavelet function because it generates a more symmetrically supported wavelet, closer in shape to the ECG waveform than the other wavelet functions. Further, we perform a 2-level decomposition, as depicted in Figure 7.
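A hedged sketch of this stage, using SciPy for peak detection and PyWavelets for the Symlet-8 decomposition, is shown below. The 500 Hz sampling rate matches the ECG-ID database described later, while the peak-detection thresholds and the per-beat window length are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def ecg_wavelet_features(ecg, fs=500):
    """Detect R peaks, cut one beat around each peak, and decompose each
    beat into approximation/detail coefficients with the sym8 wavelet."""
    # R peaks: tall, well-separated maxima (>= 0.4 s apart at rest)
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                            height=np.percentile(ecg, 95))
    half = int(0.3 * fs)                    # +/- 300 ms window per beat
    features = []
    for r in r_peaks:
        if r - half < 0 or r + half > len(ecg):
            continue
        beat = ecg[r - half:r + half]
        a2, d2, d1 = pywt.wavedec(beat, "sym8", level=2)  # 2-level decomposition
        features.append(np.concatenate([a2, d2]))
    return np.array(features)
```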
According to this process, the input original signal $a_0$ passes through the high-pass filter $g$, and the same signal is processed through the low-pass filter $h$. This stage produces the high-frequency subband component $d_1$ and the low-frequency subband $a_1$, respectively. This decomposition process can be expressed as

$$a_{k}[j] = \sum_{m} h[m - 2j]\, a_{k-1}[m], \qquad d_{k}[j] = \sum_{m} g[m - 2j]\, a_{k-1}[m]$$

Here, $a_{k,j}$ and $d_{k,j}$ are the $k$th-level approximation and detail coefficients, respectively. Later, we apply principal component analysis to the obtained coefficients, computing the covariance matrix $C$ from the input data $x_i$ and the mean vector $\mu$ as

$$C = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^{T}$$

In the next phase, we compute the eigenvalues and eigenvectors of the obtained covariance matrix. The eigenvalues are arranged from largest to smallest, and the eigenvectors are arranged to match the eigenvalues. Thus, the principal components can be stated as

$$y = l^{T}(x - \mu)$$

Here, $l$ is the arranged eigenvector matrix corresponding to the sorted eigenvalues.
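The PCA step can be sketched in NumPy as follows, directly mirroring the covariance/eigendecomposition description above; the number of retained components is an illustrative assumption.

```python
import numpy as np

def pca_project(X, n_components=10):
    """Project the wavelet-coefficient matrix (rows = beats, columns =
    coefficients) onto its leading principal axes."""
    mu = X.mean(axis=0)                    # mean vector
    C = np.cov(X - mu, rowvar=False)       # covariance matrix of the data
    eigvals, eigvecs = np.linalg.eigh(C)   # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]      # largest eigenvalues first
    L = eigvecs[:, order[:n_components]]   # arranged eigenvector matrix l
    return (X - mu) @ L                    # principal components y = l^T (x - mu)
```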
Ensemble Classifier.
Here, the final stage of the proposed method is presented, where we perform ensemble classification to learn the patterns from the multimodal inputs and predict the final outcome. To perform this task, we generate classification trees with the help of a decision tree classifier; the obtained predictions are then processed through majority voting to get the final prediction. Figure 8 depicts the procedure of the ensemble classifier. The building of a decision tree follows a divide-and-conquer process. The training set contains $T$ training data belonging to $k$ classes $C_1, C_2, \ldots, C_k$. During tree construction, if $T$ consists of a single class, the node is considered a leaf. If no cases are present in $T$, the node is also a leaf and is assigned the majority class of its parent node. On the other hand, if $T$ contains a mixed group of classes, a test is carried out and $T$ is split into multiple subsets $T_1, T_2, T_3, \ldots, T_n$. This procedure is repeated until every subset and its corresponding class are obtained. Later, we apply the majority voting algorithm to obtain the final predicted output:

$$\hat{C}(x) = \arg\max_{i} \sum_{j} \chi_A\big(C_j(x) = i\big)$$

where $\chi_A$ is the characteristic function evaluating whether $C_j(x) = i \in A$, and $A$ denotes the set of unique class labels.
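A minimal sketch of such a majority-voting tree ensemble, using scikit-learn decision trees over bootstrap resamples, is given below. The tree count, depth, and bootstrap scheme are illustrative assumptions (the paper does not specify them here), and the labels are assumed to be encoded as non-negative integers (eg, 0 = Imposter, 1 = Genuine).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_ensemble(X, y, n_trees=15, seed=0):
    """Fit several decision trees, each on a bootstrap resample of the
    fused ECG/iris feature set."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))       # bootstrap resample
        trees.append(DecisionTreeClassifier(max_depth=8).fit(X[idx], y[idx]))
    return trees

def predict_majority(trees, X):
    """Final label = class receiving the most votes across the trees."""
    votes = np.stack([t.predict(X) for t in trees])      # (n_trees, n_samples)
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])
```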
Data Availability Statement and Outcomes
The ECGs were obtained from 44 male and 46 female volunteers, with the number of records per user varying from 2 to 20. Each record contains a raw signal and a filtered signal. For this experiment, we considered 45 users' data from each record and arranged them with their labels. To measure the performance, the dataset is split into 70% for training and 30% for verification purposes.
Figure 9 demonstrates a sample outcome of ECG processing, displaying the different phases of ECG signal processing. The raw signal is processed through the ECG processing module, where signal filtering and segmentations such as T-wave and S-wave detection are performed.
Furthermore, iris image processing is presented in Figure 10; the segmented iris is then used for feature extraction.
In this data, 2 class labels are used: "Genuine" and "Imposter". The ensemble classifier's performance is measured in terms of specificity, sensitivity, precision, F1-score, and accuracy. The obtained values are compared with those of other standard classifiers, namely the bagged ensemble, decision tree, and random forest, alongside the proposed ensemble classifier. For each experiment, we considered both individual and combined biometrics to obtain the classification performance. Table 1 shows the general representation of the confusion matrix.
The confusion matrix of the decision tree classifier is presented in Table 2, and the obtained performance is presented in Figure 11 and Tables 1-4.
Based on these values, we compute the precision, F1-score, sensitivity, specificity, and accuracy of the ECG- and iris-based biometric verification scheme, as shown in Table 5. The accuracy of the proposed ensemble approach is higher than that of the random forest (86.36%), decision tree (87.50%), and bagged ensemble (90.48%) classifiers, which clearly shows that the proposed ensemble model outperforms them.
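For reference, these metrics follow directly from the confusion-matrix counts; a small sketch, assuming "Genuine" is the positive class, is:

```python
def classification_metrics(tp, fp, fn, tn):
    """Binary (Genuine vs Imposter) metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall on the Genuine class
    specificity = tn / (tn + fp)            # recall on the Imposter class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dict(precision=precision, f1=f1, sensitivity=sensitivity,
                specificity=specificity, accuracy=accuracy)
```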
Figure 12 depicts the comparative analysis with respect to accuracy for the combined multimodal scenario across the different classifiers.
The above analysis is compared with the other approaches (random forest, decision tree, and bagged ensemble), and the approach proposed here performs notably well, with a precision, F1-score, sensitivity, specificity, and accuracy of 96.55%, 96.2%, 96.2%, 96.5%, and 95.65%, respectively. Figure 13 shows the accuracy performance as the number of subjects increases.
According to this experiment, we present a comparative analysis in terms of overall verification accuracy; the average accuracies obtained are 87%, 84%, 87%, 88.55%, and 94.23% for the decision tree, random forest, bagged ensemble, and proposed ensemble, respectively. The bagged ensemble classifier is mainly used to estimate random subsets of the original database; it then combines the separate individual predictors to obtain the final result. Further, we present a comparative study of ECG-based authentication systems, in which the performance of the proposed method is compared with the existing techniques of [31-33] and [34], as shown in Table 6.
Conclusion
This paper focused on the development of an authentication system using multiple modalities. Conventional schemes are built on unimodal systems, which are not suitable due to poor reliability; thus, multimodal systems are now widely adopted in authentication applications. Specifically, the ECG plays an important role due to its inherent liveness detection, while iris images provide unique discriminative features. In this work, we therefore focused on ECG and iris data to build a multimodal authentication system. The ECG signal and iris data are processed through several phases of preprocessing and feature extraction: for the ECG, we used wavelet- and principal component analysis-based morphological features, and the iris features are extracted by Gabor filtering and scale-invariant feature transform (SIFT) feature extraction. Finally, a majority voting-based decision tree ensemble classifier produces the final outcome. The proposed technique attains better accuracy for ECG-ID database classification because it uses a combined feature extraction approach that extracts robust features.
Data Availability

The MMU iris dataset [36] and the ECG-ID database [37] used in this study are publicly available; access details are given below.
Figure 1: Panel (a) is the internal boundary representation of the iris image, and panel (b) is the external boundary representation of the iris image.

Figure 4: Real parts of the Gabor filter-based feature extraction.

Figure 10: Iris original image and segmented image.

Figure 11: General representation of the confusion matrix.

Table 1: Confusion matrix, random forest.

Table 2: Confusion matrix, decision tree.

Table 5: Performance chart with respect to the different classifiers.

Table 6: Comparison of results with respect to the accuracy of existing systems.
The proposed approach was implemented using MATLAB running on a Windows 11 platform with 8 GB RAM and a 4 GB NVIDIA graphics card. In this experiment, two different datasets, the MMU iris dataset [36] and the ECG-ID database from PhysioNet [37], are used. A brief description of these datasets follows. (i) MMU iris dataset (MMU1): this database contains eye images used to train models for iris-based biometric attendance systems. The patterns obtained for each eye are distinctive to each individual, which aids in classifying a person. The dataset contains 460 photos, including 5 shots of each person's left and right iris. This data is available at https://www.kaggle.com/datasets/naureenmohammad/mmu-iris-dataset. (ii) ECG-ID database: this database contains a total of 310 recordings of ECG signals captured from 90 people. Each recording has the following information: (a) an ECG lead I signal recorded for 20 seconds, digitized at 500 Hz with 12-bit resolution; (b) 10 annotated beats; (c) a header (.hea) file containing information such as age, gender, and date of recording. This data is available in the ECG-ID Database v1.0.0 (http://physionet.org).
"year": 2024,
"sha1": "05a6f53dfcad697084e1d4986a4964217638c4fe",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2024/8112209",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b91d7343aadb5afc1517a8dded4bcb1b5b0dd0a8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Stress-Related Alterations of Visceral Sensation: Animal Models for Irritable Bowel Syndrome Study
Stressors of different psychological, physical or immune origin play a critical role in the pathophysiology of irritable bowel syndrome, participating in symptom onset, clinical presentation and treatment outcome. Experimental stress models applying a variety of acute and chronic exteroceptive or interoceptive stressors have been developed to target different periods throughout the lifespan of animals and to assess the vulnerability, trigger and perpetuating factors determining the influence of stress on visceral sensitivity and on interactions within the brain-gut axis. Recent evidence points towards adequate construct and face validity of experimental models developed with respect to animals' age, sex and strain differences, and to specific methodological aspects, such as non-invasive monitoring of the visceromotor response to colorectal distension, as being essential for the successful identification and evaluation of novel therapeutic targets aimed at reducing stress-related alterations in visceral sensitivity. The underlying mechanisms of stress-induced modulation of visceral pain involve a combination of peripheral, spinal and supraspinal sensitization, depending on the nature of the stressors, and dysregulation of the descending pathways that modulate nociceptive transmission or the stress-related analgesic response.
Introduction
Alterations of visceral sensation, such as enhanced perception of physiological or experimental visceral stimuli along with hypervigilance to them, are at the origin of visceral hypersensitivity, a phenomenon commonly considered to play a major role in the pathophysiology of irritable bowel syndrome (IBS).1-7 Epidemiological studies have implicated stress of psychosocial, physical or immune origin as a trigger of the first onset or exacerbation of IBS symptoms.8-10 Early adverse life events in the form of emotional, sexual, or physical abuse are major predisposing factors for the development of IBS later in life.11,12 Childhood trauma, especially in genetically predisposed individuals, is thought to induce persistent changes in the brain arousal response system that impact the neuroendocrine hypothalamic-pituitary-adrenal (HPA) axis.12 In adult IBS patients, acute stress episodes, chronic social stress, anxiety disorders, and maladaptive coping styles determine the illness experience, health care-seeking behavior and treatment outcome.12,13 Stress-related psychosocial factors such as somatization, neuroticism, and hypochondriasis are also important predictors of the development of post-infectious IBS.14,15 Emotional or physical stressors may cause disturbances at every level of the brain-gut axis, including the central, autonomic and enteric nervous systems, and affect the regulation of visceral perception and the emotional response to visceral events.16 Over the past 15 years, various animal models have been developed to gain insight into the underlying mechanisms of visceral hypersensitivity and the influence of stress on visceral pain pathways.1,17-20 While in humans the evaluation of visceral sensitivity is predominantly based on the conscious perception of gut distension, the measurement of this subjective response cannot be performed in animal studies. Objective evaluation of responses to visceral stimulation in clinical studies includes the assessment of reflex activity (eg, a somatic nociceptive cutaneo-muscular flexion reflex can be inhibited by painful visceral stimulation) or evoked central processes (eg, changes in activation of the anterior cingulate cortex involved in pain inhibition).21,22 Indeed, during the last decade, functional imaging techniques have been applied successfully to examine the human brain response to noxious visceral stimuli.23 In experimental animals, the patterns of brain and spinal circuitries activated by various stressors and colorectal distension (CRD) under basal or hypersensitive states were mapped early on in a number of studies using the induction of Fos protein expression as a direct marker of neuronal cell activation, together with double immunohistochemical labeling to identify the phenotype of Fos-positive spinal and supraspinal neurons.24-31 Recently, preliminary reports have applied imaging techniques to gain insight into the brain circuits activated by visceral stimulation in rodents. Similarities in some regional brain activations induced by CRD have been found when comparing Fos expression and functional magnetic resonance imaging.32 In addition, this comparative study indicates that both methods are complementary, as Fos immunohistochemistry provides a higher spatial resolution than imaging, while imaging displays a higher sensitivity to detect a large number of brain areas.
The development of imaging in conscious animals, removing the additional stress linked to the conditions of functional imaging monitoring, will enable bridging of the gap between the multidimensional nature of the human pain experience and preclinical studies.33 In this review, we will outline some of the most relevant preclinical models that have been developed, comment on their contribution to our understanding of the mechanisms of stress modulation of visceral pain, and assess the clinical relevance of these preclinical studies for unraveling potential molecular targets to alleviate visceral pain symptoms in IBS.
Stress Pathways: Corticotropin Releasing Factor Signaling as an End Point Effector
First coined by the endocrinologist Hans Selye, the term "stress" defines the physiological adaptive responses to real or perceived emotional or physical threats ("stressors") to the organism's homeostasis.34 When exposed to an acute threatening challenge, the body engages a "fight or flight" response35 driven by sympathetic activation, leading to rapid heart rate and respiration, increased arousal and alertness, and inhibition of acutely non-adaptive vegetative functions (feeding, digestion, growth and reproduction).34 Concurrently, a negative feedback loop is activated to terminate the stress response and bring the body back to a state of homeostasis or eustasis,36 engaging neural, neuroendocrine and immune components, a process called allostasis37 or "stability through changes".37,38 However, persistence or chronicity of the stressors can overload this adaptive system, which then becomes defective or excessive. The organism is no longer brought back to basal homeostasis, leading to a state of allostatic load37,39 or "cacostasis".36 This state lies at the origin of a variety of stress-related diseases that develop in the context of a vulnerable genetic, epigenetic and/or constitutional background.36 The pathogenesis of stress-induced disorders affects the whole body, including the viscera, of which the gastrointestinal (GI) tract is a sensitive target.36,40 Over the past decades, important components of the stress-activated pathways whereby the brain translates stimuli into the final integrated bodily response have been identified through the characterization of the corticotropin releasing factor (CRF) signaling system. This is composed of the 41-amino acid peptide CRF and the related peptides urocortin 1, urocortin 2 and urocortin 3, along with the CRF receptors CRF1 and CRF2 and their variants, which display specific affinities for CRF and related agonists.41 The development of selective CRF receptor antagonists has also largely contributed to delineating the role of activation of the CRF receptor subtypes in the stress response.42,43 In particular, convergent reports indicate that activation of the CRF1 receptor underlies the multifaceted components of the stress response.40,44,45 CRF/CRF1 signaling plays a primary neuroendocrine role in stimulating the HPA axis, leading to the release of adrenocorticotropic hormone and corticosterone in rodents and cortisol in humans.43,46 In addition, the CRF signaling system also acts as a neurotransmitter/neuromodulator to coordinate the behavioral, immune, and visceral efferent limbs of the stress response.44,45,47-49 It does so via activation of the locus coeruleus and its noradrenergic projections to the forebrain, which contribute to arousal and alertness, as well as modulation of the forebrain, hindbrain and spinal sites regulating autonomic nervous system activity, leading to stimulation of the sympathetic nervous system and release of catecholamines50-52 and of sacral parasympathetic activity, while decreasing vagal efferent output,53-55 which influences immune and visceral function.56,57 In addition, the brain CRF/CRF1 signaling pathway is involved in the stress-related induction of anxiety/depression44,45,58 and alterations of colonic motor function and visceral pain, while both central and peripheral CRF2 receptor activation may exert a counteracting influence.59-63 Moreover, recent experimental and clinical studies point to an equally important contribution of the peripheral CRF/CRF1 signaling locally expressed in the gut to the GI stress response.
Visceral Pain Pathways
Pain perception in peripheral tissues depends on signal transmission from the site of pain origin to the CNS. Nociceptors (receptors activated by noxious stimuli)65 located in 2 sets of primary small afferent fibers (C and Aδ afferents) innervating the viscera, which project to distinct regions in the CNS,66 are the primary pathways of pain transmission. From the esophagus to the transverse colon, the GI tract innervation is provided by vagal afferent fibers originating in the nodose ganglia and projecting centrally to the nucleus of the solitary tract. Pelvic nerve afferent fibers, which originate in the lumbosacral dorsal root ganglia and project centrally to the lumbar 6-sacral (L6-S1) segments of the spinal cord, innervate the remaining part of the large bowel (descending and sigmoid colon, rectum). The entire GI tract is also innervated by afferent fibers contained in the splanchnic nerves, projecting to the thoracic 5-lumbar 2 (T5-L2) segments of the spinal cord.67 Even though visceral afferents constitute only 10% of all afferents, they are able to monitor changes in the gut milieu and participate in the transmission of visceral sensory information.68,69 Of note, vagal afferents do not encode painful stimuli; however, changes in their activity can modulate nociceptive processing in the spinal cord and the brain.68,70,71 Upon entering the dorsal horn, visceral primary afferents carried by the pelvic and splanchnic nerves terminate in spinal cord laminae I, II, V and X72 and converge onto spinal neurons in the lumbosacral and thoracolumbar segments, respectively. The lumbosacral segments process reflex responses to acute visceral pain, while the involvement of the thoracolumbar segments in normal visceral sensation is uncertain;73 however, both segments process inflammatory stimuli.73 Subpopulations of neurons within the dorsal horn project to discrete nuclei within the thalamus (ie, the ventral posterior lateral thalamus) as well as other structures in the brain stem (parabrachial nucleus, periaqueductal gray, nucleus tractus solitarius). From the thalamus, the information is conveyed to cortical areas involved in sensory processing (such as the somatosensory cortex) or those involved in processing emotional or affective information (such as the anterior cingulate gyrus and insular cortex).65,74 In addition to this ascending system, which enables pain perception, other neural circuits originating from supraspinal sites can influence nociceptive activity in the spinal cord and in primary afferents, a system referred to as descending pathways.75 There are 2 types of descending control pathways: inhibitory pathways, which produce analgesia (periaqueductal gray, locus coeruleus), and facilitatory pathways, which produce hyperalgesia (rostroventral medulla with its ON and OFF cells).
Visceral Pain Monitoring in Rodents
The primary readout and standard assay for the measurement of visceral pain in rodents consists of monitoring abdominal muscle contractions, or the visceromotor response (VMR), to controlled isobaric distensions of the distal colon by an inflatable balloon.78 The VMR can be assessed directly as electromyographic (EMG) signals monitored via surgically implanted recording electrodes in the external or internal abdominal muscles, which are either externalized through the skin (abdomen, neck)79-81 or connected to radiotelemetric implants in the abdominal cavity.82,83 Although the method is of significant value in the field of visceral pain research, it has experimental shortfalls such as damage to the EMG electrodes, loss of signal and electrical interference, which are of particular concern in chronic experimental settings. Additionally, EMG surgery involves skin and/or muscle incision, depending on the technique used (subcutaneous abdominal electrodes or intraperitoneal cannula), and chronic implantation of a foreign body. Even though no data are available in the literature on the impact of chronic EMG electrodes placed into the abdominal wall, such an intervention could induce a host-tissue response with local micro-inflammation (neutrophils, lymphocytes and macrophages), as has been shown for other types of implants in the skin and peritoneum.84,85 A recent report suggests that the preconditioning of animals (EMG surgery, post-surgical delivery of antibiotics and single housing) has a considerable impact on their visceral pain responses, particularly in the context of stress studies.86 Other approaches consist of recording manometric changes in the pressure of a balloon inserted into the distal colon86,87 or changes in pressure inside the colonic lumen.19 These manometric approaches avoid surgery and post-surgical treatments such as antibiotics and analgesics, which can affect visceral pain responses, and still provide an objective and sensitive measure of abdominal contractions (Fig. 1). However, they require the animals to be partially restrained in Bollman cages, a context to which they need to be habituated and which by itself may bring a component of stress.

Figure 1. (Adapted from references 19 and 88.) (A) Original and rectified representative electromyographic (EMG) and intraluminal colonic pressure (ICP) traces recorded simultaneously in the same mouse in response to CRD (45 mmHg, 10 seconds). When both the raw EMG (upper line) and ICP (second line from the bottom) signals are analyzed in Spike 2 by computing "DC Remove" over 1 second, to exclude all slow events longer than 2 seconds (ie, colonic smooth muscle contractions), and the "root mean square amplitude", to extract the area under the curve of the signal, the resulting EMG and phasic ICP signals (middle lines) present a significant overlap. (B) Mice equipped with EMG electrodes or not were exposed to water avoidance stress for 1 hour per day for 10 days and tested with ICP for the visceromotor response (VMR) to CRD. (C) Intraperitoneal injection of the selective corticotropin releasing factor receptor subtype 1 agonist cortagine induced visceral hypersensitivity in C57BL/6 mice tested with ICP for the VMR to CRD. Data are expressed as mean ± SEM, n = 10-14 per group as specified in the graph legends. *P < 0.05 compared with baseline, **P < 0.05 vs vehicle.

Figure 2. Experimental stress models have been developed that target different periods throughout the lifespan of animals to assess the vulnerability, trigger and perpetuating influences of stress on visceral sensitivity. During early life, trauma due to maternal neglect (neonatal maternal separation stress) or injury (neonatal chronic colonic inflammation or pain) can enhance the susceptibility of individuals to develop altered visceral pain responses at adulthood. During adulthood, life-threatening stressors (post-traumatic stress disorder model), psychosocial stressors (acute and chronic stress) or physical stressors (intestinal infection or inflammation, antibiotic administration and surgery) have all clearly been established as triggering factors for the development of visceral hypersensitivity in rats and mice. Lastly, the use of specific strains of rodents known to exhibit various levels of anxiety, depression or stress hyper-responsiveness (Wistar-Kyoto and Flinders Sensitive Line) helps mimic the influence of perpetuating factors on symptoms of visceral pain. WAS, water avoidance stress; PRS, partial restraint stress; PTSD, post-traumatic stress disorder; DSS, dextran sodium sulfate.
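The "DC Remove" plus root-mean-square processing described in the Figure 1 legend can be approximated in a few lines of Python. This is only a rough analogue of the Spike 2 analysis: the sampling rate and the RMS window length are assumptions, and only the 1-second DC-remove window comes from the legend.

```python
import numpy as np

def vmr_signal(raw, fs=1000, dc_win_s=1.0, rms_win_s=0.1):
    """Suppress slow (>2 s) colonic contractions with a 1-s moving-average
    "DC remove", then compute the running RMS amplitude of the residual."""
    dc_win = int(dc_win_s * fs)
    kernel = np.ones(dc_win) / dc_win
    detrended = raw - np.convolve(raw, kernel, mode="same")    # DC remove
    rms_win = int(rms_win_s * fs)
    sq = np.convolve(detrended ** 2, np.ones(rms_win) / rms_win, mode="same")
    return np.sqrt(sq)                                         # RMS amplitude
```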
Behavioral approaches such as operant behavioral assays78 were also used in early studies and capitalized on the learning and fear behaviors of animals in response to painful CRD. Visual monitoring of the abdominal withdrawal reflex89 has also been applied in a few studies; while it has the great advantage of being one of the least invasive techniques employed to date, it is a very subjective method. Indirect endpoints such as Fos or extracellular signal-regulated protein kinase induction in the CNS29,62,90-92 and functional brain imaging of integrated brain responses to nociceptive stimuli33,93 have also been utilized in some studies. These approaches allow for direct assessment of the neuronal circuitries recruited by the visceral pain stimulus and, in the case of functional brain imaging, are very similar to the monitoring of CRD responses in healthy and IBS human subjects. Unfortunately, in animals these brain mapping techniques require euthanasia and limit the assessment to specific time points.
However, as more stringent brain imaging approaches are developed in rodents, they will open new avenues to parallel human studies.94
Experimental Stress Models and Visceral Pain
By convention, stressors are categorized into exteroceptive (psychological or neurogenic) and interoceptive (physical or systemic) classes,95,96 and both have been used in animal models to investigate the relationship between stress and visceral pain.97 Dual visceral pain responses, hyperalgesia and analgesia, have been described in rodents exposed to exteroceptive stressors. Though only recently investigated, the analgesic response bears very relevant implications for the understanding of visceral pain-associated pathologies (detailed in section "Stress-induced visceral analgesia: how does it help us to model and understand visceral hypersensitivity?"). In contrast, interoceptive stressors have been most consistently associated with the development of stress-induced hyperalgesia. Stress modulates visceral pain in IBS patients as well as in healthy subjects;9,98 therefore, experimental animal models involving exposure to various clinically relevant stressors have been developed to recapitulate features of IBS symptoms, of which hyperalgesia to sigmoid distensions is a hallmark.99,100 Moreover, clinical studies have established that the stress-related modulation of IBS symptoms9 may occur through (1) permanent enhancement of stress responsiveness, (2) transient symptom exacerbation and/or (3) symptom perpetuation. Consequently, existing experimental stress models target different periods throughout the lifespan of animals to assess the vulnerability, trigger and perpetuating factors determining the influence of stress on visceral hypersensitivity (Fig. 2).
Stress in the Perinatal Period: Genetic/Epigenetic Factors
Twin studies in IBS patients showed higher (but relatively low) concordance rates in monozygotic than in dizygotic twins, suggesting that although genetic factors are not dominant, they play a role in the occurrence of IBS.101 There is also a growing literature reporting associations between functional genetic polymorphisms and IBS, at the level of the serotonin transporter gene (associated with diarrhea in female IBS patients) or the α2-adrenoreceptor gene (associated with constipation); more recently, additional gene polymorphisms have been unraveled, supporting the potential permissive role of genetics in IBS pathophysiology.102-105 Of interest, it has been postulated that epigenetic factors, related to heritable changes in gene expression that occur without alteration of the gene sequence, determine the manner in which gene activity may be altered either transiently or permanently in response to environmental challenges.106 Such epigenetic modifications could account for symptom persistence, familial clustering and the transgenerational impact of IBS. However, experimental studies have not yet capitalized on reported strain differences in stress responsiveness, anxiety and depression in rodents107-110 to assess and compare how genetic predisposition, together with perinatal (maternal prenatal stress) or early-life stressors (neonatal maternal separation), could affect visceral pain responses at adulthood in the context of epigenetic modifications. There is only one preliminary report suggesting that strain may determine the duration of the visceral hyperalgesia induced by chronic heterotypic stress (detailed in section "Stress in the adult period: trigger factors, Psychosocial stressors"). The influence of genes on the vulnerability of rodents to exhibit visceral hypersensitivity has, however, been assessed in relation to anxiety behavior at adulthood in rat strains with different anxiety/depression backgrounds110 (detailed in section "Genetic models of anxiety & depression").
Stress in the Early-life Period: Vulnerability/Trigger Factors
Early life events and childhood trauma caused by biopsychosocial factors (neglect, abuse, loss of a caregiver or life-threatening situations) enhance the vulnerability of individuals later in life to develop affective disorders (depression, anxiety and emotional distress) and put them at greater risk of developing IBS.12,99 In the context of epigenetic modifications, experimental studies showed that early developmental trauma decreases glucocorticoid receptor expression by hypermethylation of a key regulatory component and consequently affects the feedback regulation of the HPA axis, with an impact on endocrine/behavioral adaptation and the susceptibility to stress-related disorders.112 In addition, experimental studies indicate that the newborn's gut, through stress-related changes in intestinal permeability, may be exposed to a variety of factors resulting in mucosal inflammation and tissue irritation, which could have long-term consequences at adulthood, even though no longitudinal clinical studies exist showing that gut irritation in early life is a risk factor for IBS development at adulthood.97 Moreover, postnatal microbial colonization has also been suggested as a potential factor programming the HPA-axis response to stress in mice.113 An experimental model commonly used to mimic early stress/childhood trauma is neonatal maternal separation in rodents. This is achieved by isolating pups from the dam for 2-3 hours/day during the first 2 weeks after birth, from postnatal day (PND) 1-2 to PND 14.17,114-116 At adulthood, rats previously subjected to neonatal maternal separation exhibit visceral hypersensitivity to CRD under basal conditions, which is further exacerbated by exposure to an acute psychological stressor in the form of water avoidance stress (WAS), consisting of placing rodents on a small platform surrounded by water for 1 hour.117,118 Other models used repeated intermittent colonic irritation during the neonatal period (PND 8-21), either in the form of daily noxious CRD (two 60-mmHg, 60-second distensions separated by a 30-minute rest period) or daily intracolonic injection of mustard oil (5%), which increases pain behavior to CRD from postnatal week 5 up to postnatal week 12.89,119
Likewise, an acute somatic injury (saline or carrageenan injection into the hind paw) performed during the critical period of postnatal development, ie, before PND 14, produces visceral analgesia to CRD in adult rats.120 Based on these studies and the extensive evidence originating from somatic pain studies,121,122 it appears that neonatal insults, whether acute or repeated, somatic or visceral, occurring during the development of the organism contribute to the induction of a state of visceral hypersensitivity in adulthood, which may reflect long-term changes in visceral sensory processing.120
Psychosocial stressors
Psychosocial stressors (eg, threats to social status, social esteem, respect and/or acceptance within a group, or threats to self-worth) activate stress circuits within the emotional motor system and induce neuroendocrine (CRF and cortisol) and autonomic (norepinephrine and epinephrine) responses that result in the modulation of gut sensory, motor and immune function as well as intestinal permeability.9 In experimental studies, the 2 main acute stressors prominently used in visceral pain studies are WAS for 1 hour and partial restraint stress for 2 hours, a stressor with a stronger psychological component than WAS, which entails taping the forelimbs of rats in order to prevent their movements.123-125 Exposure of male Wistar rats to WAS for 1 hour leads to the development of a delayed visceral hyperalgesia to CRD, appearing 24 hours after the end of the stress,126 while exposure to partial restraint stress induces an immediate hyperalgesia to CRD in male127 and female Wistar rats.115 However, in the context of clinical studies in which daily chronic stress predicts the intensity and severity of subsequent symptoms in IBS patients,4-6,99,128,129 a variety of chronic stress models, divided into 2 categories, have recently been developed in adult rodents. The first category consists of exposing animals repeatedly (over a few days to weeks) but intermittently (once or twice per day) to one or several different stressors, with the aim of mimicking the daily exposure to psychosocial stress that humans can encounter through their personal and professional interactions. The second category consists of continuous exposure to stressors in the form of a change in internal state (for instance, inflammation) or external milieu (for instance, single housing or social crowding, which mimics the effect of the social milieu in humans), or the use of genetic rodent strains with constitutive stress hyper-reactivity (Wistar Kyoto, Flinders Sensitive Line). In particular, repeated intermittent exposure to WAS is one of the first "chronic" stress models to have been adapted to the study of visceral hypersensitivity81 and is presently widely used.88,130,131 In initial studies, in which the visceral pain response was monitored using EMG recording that entails surgical implantation of electrodes, male Wistar rats exposed to 10 consecutive days of WAS for 1 hour daily developed visceral hypersensitivity to CRD lasting up to 30 days after the end of the last WAS session.81,130 In our laboratories, however, we found that when naïve male and female Wistar rats were exposed to a similar WAS schedule and their VMR was monitored by intraluminal colonic solid-state manometry, a technique that does not require surgery, the animals developed visceral analgesia to CRD.132 Similar results have been obtained in C57BL/6 mice,88 and analgesic vs hyperalgesic responses were established to be dependent upon the preconditions (surgery and single housing) associated with the method of VMR recording (Fig. 1).88,133 Therefore, the impact of a repeated mild stressor such as 1-hour daily exposure to WAS on the visceral pain response to CRD is largely influenced by the basal state of the animals before the repeated stressor is applied (detailed in section "Stress-induced visceral analgesia: how does it help us to model and understand visceral hypersensitivity?" and reference 88). Repeated exposure to unpredictable sound stress has also recently been shown to provide a model of delayed visceral hyperalgesia in male Sprague-Dawley rats.134
Because habituation can occur in response to repeated exposure to a homotypic stressor,135,136 heterotypic stress models using different, alternating modalities to induce stress have recently been developed. However, male Wistar rats exposed randomly to a combination of cold restraint stress (45 minutes), WAS (1 hour) or forced swimming (20 minutes), 1 stressor per day for 9 consecutive days, develop visceral hypersensitivity only at 8 hours, but not at 24 hours or 7 days, after the end of the last stressor.137 Interestingly, however, the same regimen of alternating stressors in a different strain of rats, Sprague-Dawley, led to a sustained visceral hypersensitivity lasting up to 2 weeks after the end of the stressor (S. Sarna and J. Winston, pers. comm.), suggesting that the strain, and therefore the genetic background of the animals, affects the visceral pain responses to repeated intermittent exposure to different stressors.
Life-threatening stressors
Retrospective clinical studies indicate that living through or witnessing a traumatic event, such as war, an environmental disaster, rape, physical abuse or a serious accident in adulthood, can lead to post-traumatic stress disorder (PTSD).138-144 There is evidence of an increased prevalence of GI symptoms, in particular IBS, in PTSD sufferers, including war veterans.138-142 Additionally, patients with IBS who have experienced traumatic events may be at higher risk of other co-morbid psychiatric disorders than IBS patients without a trauma history.141 In adult rats, a relatively short-lasting session of shocks or a social confrontation with a predator or aggressive conspecific animals induces long-lasting (weeks to months) conditioned fear responses to trauma-related cues and a generalized behavioral sensitization to novel stressful stimuli that persists or grows stronger over time.145-148 Repetitive balloon distension of the distal colon causes increased cardiovascular 'pseudoaffective' reflexes in pre-shocked rats compared with controls 2 weeks after a single session of foot shocks.145-148 Of note, female rats appear to show a different pattern of sensitized behavioral responsiveness to the same challenge, possibly pointing to sex-related alterations in the neuronal substrates involved.149
Interoceptive stressors
In approximately 10% of patients with IBS, the onset of symptoms begins with an infectious intestinal illness.150 Bile salt malabsorption resulting from infectious damage by organisms such as Salmonella and Campylobacter within the terminal ileum and right colon may also underlie some forms of post-infectious IBS.151 Inflammation, antibiotic treatments, bladder infection and surgery may also contribute to the symptoms in some patients. Some experimental models of interoceptive stressors that have been used to mimic these clinical conditions are described below.

Post-infectious irritable bowel syndrome model. Prospective studies have shown that 3% to 36% of enteric infections lead to persistent new IBS symptoms, depending on the infecting organism. In addition, the co-existence of adverse psychological factors at the time of infection is an important determinant of the susceptibility to develop post-infectious IBS.152 Other risk factors include female sex and psychological characteristics such as anxiety, depression and somatization.152 While viral gastroenteritis seems to have only short-term effects, bacterial enteritis and protozoan and helminth infestations are followed by prolonged post-infectious IBS.152 The vast majority of human post-inflammatory hypersensitivity symptoms are observed after bacterial infection (Campylobacter, Shigella, Salmonella or Escherichia coli infections).
In preclinical models, long-lasting visceral hyperalgesia has been observed in mice after transient intestinal inflammation induced by Trichinella spiralis infestation 153,154 or in rats infested by Nippostrongylus brasiliensis. 155 Recently, however, it was found that male C57BL/6 mice infected with Citrobacter rodentium, an attaching-effacing murine enteropathogen similar in its mechanisms of infection to enteropathogenic Escherichia coli, do not spontaneously develop visceral hypersensitivity symptoms assessed by the increase in EMG response to CRD 156 unless exposed to a stressor (WAS, 1 hr/day for 9 days) during the time of infection (S. Vanner and N. Cenac, pers. comm.).
Post-inflammatory irritable bowel syndrome model. Despite some controversies on the origin of the symptoms, 157,158 "IBS-like" symptoms appear to be common in patients in remission from ulcerative colitis. 15,159 In rats, chemical irritants applied to the colon, such as acetic acid, 160 induce post-inflammatory visceral hypersensitivity. Acute colitis induced by dextran sodium sulfate (DSS) has been associated with increased responsiveness to CRD on days 5 or 60 after the induction of colitis in male Balb/c mice, while chronic colitis induced by DSS (3 cycles of 5% DSS for 5 days/cycle and 15 days of normal drinking water in between each cycle) has not. 167 These results are in contrast with another study showing that 4% DSS in drinking water for 5-7 days induced colitis but failed to cause the development of visceral hypersensitivity in response to CRD in C57BL/6 or Balb/c mice when tested on days 5, 12, 16, 20, 30, 40 or 51 after the induction of colitis. 170 These disparate findings suggest that inflammation alone may not always lead to visceral hypersensitivity, and that the type and severity of the inflammatory insult determine whether it will result in the development of post-inflammatory hypersensitivity. The interaction between colonic inflammation and the development of visceral pain has to be substantiated in future investigations. 166 Antibiotics. Patients treated with antibiotics for non-GI complaints are 3 times more likely to report functional bowel symptoms. Antibiotic use disrupts the intestinal microbiota, weakens the host's intestinal homeostasis and the integrity of intestinal defenses, 171 and has been associated with IBS. 172 In support of this hypothesis, administration to Balb/c mice of an oral combination of non-absorbable antibiotics (neomycin, bacitracin and pimaricin), which disturbed the commensal intestinal microflora, resulted in visceral hypersensitivity to CRD in these animals. 173 Paradoxically, clinical studies support that specific antibiotics (rifaximin or neomycin) are an effective treatment option in non-constipated IBS patients over a 3-month period 174,175 or even longer, 176 thereby confirming the role of dysbiosis in the development of IBS symptoms. 177 Surgery and somato-visceral convergence. Despite controversies, studies suggest that IBS is associated with an increased risk of abdominal and pelvic surgeries. 178 A surgical procedure, as both a visceral and psychological stressor, can initiate a series of events that disturb GI function and interactions within the brain-gut axis and/or alter the gut microbiota, which consequently may lead to the generation of IBS symptoms. 179 Hind paw (plantar) incision or injection of low pH (4.0) sterile saline into the gastrocnemius muscle of adult male Sprague-Dawley rats induces a significant visceral hyperalgesia to CRD that lasts up to 2 weeks after the somatic injury occurred. 180,181 As a model of postsurgical pain, the plantar incision model is particularly relevant because surgical procedures are relatively common and possible visceral hypersensitivity may thus also be a relatively common postsurgical event. 179 The impact of somato-visceral convergence has to be considered in experimental models of visceral pain where animals are surgically equipped within the abdominal wall with EMG electrodes 84 (detailed in the section "Stress-induced visceral analgesia: how does it help us to model and understand visceral hypersensitivity?"). Viscero-visceral interactions: neonatal cystitis. A significant overlap is observed between IBS and urinary symptoms, in particular those resulting from interstitial cystitis (IC). 182
Like IBS, IC predominantly affects female patients (90%) and shows a high comorbidity rate with psychological disorders. By analogy to IBS, an increased number of mast cells has been found in bladder biopsies in IC. 183 Recurrent urinary tract infections during childhood correlate with the development of chronic pelvic pain, a condition that often overlaps with IBS. 184 In an animal model of bowel-bladder cross-sensitization, acute chemical irritation of the bladder causes a significant decrease in colorectal sensory thresholds to CRD. 185 Very recently, the induction of neonatal cystitis in female Sprague-Dawley rats at PND 14 was shown to result in colonic hypersensitivity to CRD during adulthood, 186 supporting a potential key role for viscero-visceral convergence in IBS and comorbid disorders such as IC and chronic pelvic pain. 182
Stress in the Adult Period: Perpetuating Factors
There is a strong overlap between IBS and psychiatric disorders, as established by the high percentage (54% to as much as 94%) of IBS patients meeting the criteria for at least 1 primary psychiatric disorder, most notably mood and anxiety disorders. 182 Although comorbid psychiatric disorders do not seem to be directly connected with the occurrence of IBS, they strongly influence how the symptoms are experienced, the individual illness behavior, and ultimately the outcome. The influence of cognitive aspects as well as motivational and emotional components on the processing of sensory information is mediated by an extensive neuroanatomical network with a pivotal role of the insular and anterior cingulate cortices. 9,187,188 Autonomic dysfunction, in particular the decreased parasympathetic activity and increased sympathetic outflow observed in psychiatric disorders as well as in IBS, 16,189,190 has also been suggested to have a relevant impact on the neurally mediated regulation of colonic sensory-motor and immune function. 191 The neuroimmune cross-talk involving stress-induced changes in vagal nerve activity and/or sensitization of mast cells seems to play a critical role in altering visceral sensitivity and the intestinal barrier. 192
Genetic models of anxiety and depression
In a comparative study using 3 strains of rats known to have varying levels of baseline anxiety, the high-anxiety Wistar-Kyoto rats had increased VMR to CRD compared to low-anxiety Sprague-Dawley and Fisher-344 animals, suggesting a direct link between anxiety and visceral hypersensitivity. 111 In addition, compared to low-anxiety strains of rats, the sensitivity of high-anxiety rats was highly exacerbated by peripheral sensitization of the colon with a small dose of acetic acid. 111 Of note, Wistar-Kyoto rats are also considered a model of depression, 193,194 as are rats from the Flinders Sensitive Line, which exhibit increased cholinergic sensitivity compared to control rats of the Flinders Resistant Line. 195,196 Similar to Wistar-Kyoto rats, Flinders Sensitive Line rats exhibit increased VMR to CRD as well as a blunted corticosterone response to acute noise stress compared to Flinders Resistant Line rats, suggesting a link between depression, HPA axis dysfunction and visceral hyperalgesia. 197
Genetic models of chronic stress
Genetic models in which the stress pathways were chronically blocked by deletion of CRF 1 receptors showed a decrease in anxiety and colonic sensitivity to CRD. 198 Conversely, genetic models of chronic stress relying on the over-expression of the CRF stress system in mice 199 are available and could be useful to study IBS-like manifestations, but the visceral sensitivity of these transgenic animals has not been assessed yet. However, as CRF over-expressing mice display phenotypes of Cushing's syndrome, 200 new promising genetic models with more selective conditional and/or region-targeted genetic manipulations, including RNAi gene silencing technology to modify CRF-related genes, are continuously being developed. [201][202][203][204][205][206] These models will be suitable to explore specific stress circuitries in the context of targeted chronic CRF expression/deletion and their impact on visceral pain modulation, an area that so far has lagged behind.
Stress-Induced Visceral Analgesia: How Does It Help Us to Model and Understand Visceral Hypersensitivity?
While extensively described in the somatic pain field, 207 to date the activation of descending inhibitory pathways in stress-related visceral responses has received less attention. Opioids have been implicated in the descending inhibition of visceral sensitivity following an acute stress, as evidenced by the fact that naloxone unmasked WAS-induced hyperalgesia to CRD in normal Long-Evans rats and exacerbated the pain response to CRD in maternally-separated rats. 117 In another study, a non-opioid, neurotensin-dependent visceral analgesic response was observed 6 hours after exposure to an acute session of WAS in Sprague-Dawley rats, with males exhibiting stronger analgesia than females; this analgesia was present in wild-type but not in neurotensin knock-out mice. 208 In another experimental model, a daily short period (15 minutes) of separation from PND 2 to 14 decreased the VMR to CRD performed immediately after WAS and prevented the development of hyperalgesia 24 hours after WAS in adult male Long-Evans rats. 209 These data suggest a potential upregulation of endogenous pain-modulatory systems by this short maternal separation stress. 209 Similar findings in adult rats have been recently reported, such that Wistar rats handled daily for 9 days develop visceral hypoalgesia in response to CRD that becomes significant 7 days after the last handling. 137 These studies point to the type of stress itself contributing to the differential recruitment of these descending inhibitory pathways. Importantly, however, we recently demonstrated that mice that had undergone surgery for the placement of EMG electrodes on the abdominal wall, and were subsequently single-housed to avoid deterioration of the implanted electrodes by cage-mates, developed visceral hyperalgesia in response to repeated WAS (1 hr/day, 10 days), while mice tested for visceral pain using non-invasive solid-state intraluminal pressure recording and kept group-housed developed a strong visceral analgesia under otherwise similar conditions of repeated intermittent WAS. 88 As mentioned before, surgery per se is known to induce a long-lasting visceral hyperalgesia, and recent reports suggest that previous injury or exposure to opioids in male rats can switch the influence of stress on pain responses from analgesia to hyperalgesia. 210 Collectively these data demonstrate that the state of the animal tested (naïve vs exposed to surgery), its social environment (group housing vs single housing, cage enrichment or not), the handling performed by the investigator, the methods used to record VMRs (EMG requiring surgery and post-surgery antibiotics vs manometry requiring neither), as well as the sex of the animals can significantly affect the response to exteroceptive stressors. Therefore, these preconditions should be carefully detailed in describing the experimental conditions and taken into consideration in the design, conduct and interpretation of the data when investigating the influence of stress on visceral sensitivity in experimental animals. Based on recent clinical findings demonstrating that IBS patients have compromised engagement of the inhibitory descending pain modulation systems, 21,211,214 gaining a deeper understanding of the mechanisms involved in the expression of stress-induced visceral analgesia, or the lack thereof, is a promising avenue to be explored and may lead to new therapeutic targets for IBS.
Therefore, the use of non-invasive methods of monitoring the VMR that alleviate the surgical, antibiotic and housing impacts on the modulation of visceral pain by repeated stress represents a step forward in gaining insight into the underlying mechanisms, in particular the neural substrates and neurochemistry of stress-related analgesia, as established in the somatic field. 207
Sex Differences in Stress-Induced Alterations of Visceral Sensitivity
Women are more susceptible to stress-related disorders, which is consistent with the female predominance among IBS patients (women to men ratio about 2:1). 215,216 Sex differences in the stress response and in stress-induced pain modulation have been documented in a number of human studies. 217 Clinical trials have also revealed important sex-related differences in the therapeutic efficacy of some serotonergic drugs used in IBS treatment (eg, alosetron, a 5-HT 3 receptor antagonist), suggesting a conceivable link between estrogens and serotonergic mechanisms in the modulation of stress-related visceral hypersensitivity. 218,219 Contrasting with this clinical evidence, most of the preclinical studies assessing stress-related alterations in visceral sensitivity have been conducted in male rodents. 208,220 However, the few studies performed in females indicate that sex hormones have a significant effect on visceral sensitivity in rodents. [220][221][222][223][224] Therefore, addressing the influence of sex and sex hormones in the modulation of visceral pain by stress appears critical to develop novel therapies relevant to sex differences in IBS. 216,225
Mechanisms Involved in Stress-Induced Modulation of Visceral Pain
Maladaptive neuroplastic changes leading to a persistently increased perception of and responsiveness to noxious stimuli, or a response to normally non-noxious stimuli, are key for the expression of pathological pain (hyperalgesia and allodynia). Such neuroplastic changes can occur in primary afferent terminals (peripheral sensitization), but also in the spinal cord (central sensitization) and in the brain (supraspinal pain modulation) or in descending pathways that modulate spinal nociceptive transmission. Such alterations in the processing of sensory information are all considered possible mechanisms of visceral hypersensitivity in IBS patients. 66,226
Peripheral Sensitization: Corticotropin Releasing Factor System, Mast Cells, Gut Microbiota and Ion Channels
Several reports in both humans and rodents have well documented the key role played by peripheral CRF signaling, via CRF 1 receptors, in the development and expression of visceral pain. 19,60,[227][228][229][230][231] Stress and peripheral administration of CRF induce mast cell degranulation in the colon in experimental animals and humans, 232,233 which contributes to the development of visceral hypersensitivity (Fig. 1) via the release of several preformed or newly generated mediators 118,[234][235][236][237] (histamine, tryptase, prostaglandin E 2 , nerve growth factor) that can stimulate or sensitize sensory afferents 66,238 by activating a number of ion channels widely expressed in colonic afferents, [239][240][241][242] such as the N-methyl-D-aspartate receptor, 242 proteinase-activated receptors, 236 and transient receptor potential vanilloid 1, [243][244][245] to name a few. Stress can also disrupt the intestinal epithelial barrier, thereby increasing the penetration of soluble factors (antigens) into the lamina propria and leading to nociceptor sensitization, 235,246 a phenomenon which appears to be a prerequisite for the development of visceral hypersensitivity in both humans and rodents. [246][247][248] Alterations of epithelial permeability following stress involve the activation of the peripheral CRF system and may [249][250][251][252][253] or may not be dependent on mast cell activation 238,253 in a time-dependent manner. In addition to inducing a leaky epithelial barrier, stress can also change the composition of the intestinal and fecal microbiota of rodents. [254][255][256] This can in turn have a significant impact on the host and affect behavior, visceral sensitivity and inflammatory susceptibility. 257-261
Spinal Cord Plasticity and Glia Activation: Central Processing of Peripheral Pain Perception
Once peripheral sensitization has occurred, it activates the release of mediators in the spinal cord, including growth factors 262,263 (nerve growth factor or brain-derived neurotrophic factor), and upregulates some key ion channels and receptors, such as acid-sensing ion channel 1A and the neurokinin 1 receptor, [264][265][266][267] contributing to the phenomenon of spinal sensitization which has been associated with visceral hypersensitivity. Very recently, spinal cord glia activation has been suggested as another potential mechanism through which spinal sensitization may occur in response to stress, linked to the development and maintenance of visceral hypersensitivity. [268][269][270] Candidate molecules involved in glia activation signaling include neurotransmitters such as substance P or glutamate, but also purinergic agents, opioids, chemokines and glucocorticoids (for review see reference 268 ). Glutamate uptake through spinal glutamate transporters is critical for maintaining normal sensory transmission under physiologic conditions. 271,272 A potential deficiency in glutamate reuptake by astrocytes, associated with the activation of spinal cord glia, 273 has recently been suggested to play a role in spinal sensitization and the development of visceral hypersensitivity in rats. 274 Together, these data strongly support the concept that the transmission of visceral nociceptive signals may be enhanced in various conditions of spinal microglia activation. 275
Supraspinal Pain Modulation: A Fine-tuning between Pain Facilitation and Inhibition
Various supraspinal sites are involved in the modulation of visceral pain signals. Rectosigmoid distension in humans activates sensory regions (insula and somatosensory cortex) and limbic and paralimbic regions (including the anterior cingulate cortex, amygdala and locus coeruleus). 276 Many of these brain regions were also found to be significantly activated by CRD in rats. [25][26][27]33,277 The anterior cingulate cortex mediates key emotional-aversive aspects of pain and may also have a mnemonic role, allowing the transient storage of information during pain processing. 189,278 Wistar-Kyoto rats, high-anxiety rats exhibiting visceral hypersensitivity, 111 have greater prefrontal cortex activation in response to CRD compared to Sprague-Dawley rats. 91 Another key limbic structure that has been implicated in the affective component of pain is the central amygdala. It is involved in the processing of visceral information, attention and emotion, and in integrating the physical and psychological components of the stress response. 279 It has also been found to contribute to visceral hypersensitivity in rats. [280][281][282][283] Of relevance in the context of the stress response, CRF gene expression in the amygdaloid nucleus is upregulated in a mouse model of visceral pain, and such a response is attenuated under conditions of anesthesia. 283,284 Likewise, the locus coeruleus is a well-established target of stress that expresses CRF 1 receptors, receives CRF innervation from the nearby Barrington nucleus, and increases firing in response to CRD in a manner mediated by CRF 1 receptor activation, as shown by the use of CRF receptor antagonists and the responsiveness of LC neurons to both CRD and central injection of CRF. 53,[285][286][287][288][289][290] Therefore, these limbic and pontine sites are well positioned to co-ordinate gut-brain interactions, with visceral information from the gut impacting cortical and limbic activities under conditions of stress-CRF 1 signaling activation, which may modulate the visceral pain responses. 60,76,291 Thalamic relay nuclei have a key role in gating, filtering and processing sensory information en route to the cerebral cortex and are subject to similar activity-induced plasticity processes as the spinal cord. [292][293][294] Upregulation of the CRF 1 receptor in the thalamus is associated with visceral hyperalgesia in the rat model of neonatal maternal separation stress. 275 Lastly, spinal visceral nociceptive reflexes are subject to facilitatory modulation from the rostroventral medulla, providing the basis for a mechanism by which visceral sensations can be enhanced from supraspinal sites 295,296 under stress conditions associated with the development of visceral hyperalgesia. 297 Compromised engagement of descending pain inhibitory pathways, as observed in maternally-stressed rats, may also contribute to increased visceral pain responses in those animals.
Therapeutic Implications-Treatment Targeting Stress Reduction in Irritable Bowel Syndrome
The modulatory role of stress-related brain-gut interactions in IBS pathophysiology, in particular the neuroimmune modulation associated with psychological factors and emotional state, 16,189 has been confirmed by the encouraging outcome of non-pharmacologic and pharmacologic treatment modalities aimed at reducing stress perception. [298][299][300] A broad range of evidence-based mind-body interventions, including psychotherapy, cognitive behavioral therapy, hypnotherapy, relaxation exercises or mindfulness meditation, has been shown to improve stress-coping strategies, both at a cognitive level (catastrophic or self-defeating thoughts) and at a behavioral level (problem solving, especially interpersonal problems). 300,301 The symptomatic improvement appears to result from the modulation of the stress response, the restoration of autonomic nervous system balance, and changes in the brain activation pattern in response to visceral stimuli. In addition to psychological mind-body approaches, clinical trials confirm the effectiveness of centrally-targeted pharmacological interventions such as antidepressants and anxiolytics, or combinations of drugs from both groups, in the treatment of chronic pain disorders. 299,302,303 Many other pharmacological agents with anxiolytic and/or antidepressant properties, such as serotonergic and opioidergic agents, cannabinoid receptor 1 (CB 1 ) and somatostatin receptor agonists, and CRF 1 , tachykinin and cholecystokinin receptor antagonists, have recently been shown to modulate stress-induced visceral hyperalgesia in animal models (for a detailed review see reference 304 ). Preliminary data suggest that the anxiolytic activity of γ-aminobutyric acid-ergic agents (gabapentin) and the α2δ ligand (pregabalin) may also be efficient in reducing central sensitization in hyperalgesia in the clinical setting, 305 as shown in experimental models. 306 New centrally acting agents providing analgesic effects include dextofisopam (a 2,3-benzodiazepine receptor modulator) and quetiapine (an atypical antipsychotic agent). 307 Recent developments showing the critical interdependence between the composition and stability of the microbiota and GI sensory-motor function indicate a novel approach to IBS treatment with the use of probiotics, prebiotics and antibiotics. 260,308 Specific modulation of the enteric microbiota in the context of neuroimmune interactions within the brain-gut axis opens a new promising strategy for stress-related disorders, particularly in the aspects of comorbidity in functional GI disorders such as IBS. 257 However, some of the encouraging data from animal models concerning the efficiency of agents such as a CRF 1 receptor antagonist, 309 a CB 1 /CB 2 receptor antagonist 310 or a somatostatin receptor agonist (octreotide) 311 in alleviating stress-induced visceral hypersensitivity are yet to be confirmed in clinical trials, especially with regard to global symptom improvement and well-being. For example, CRF 1 receptor antagonists are being investigated in Phase II/III clinical trials for depression, anxiety and IBS. 42 In fact, a recent clinical trial confirmed CRF 1 receptor antagonist efficacy in an anxiety model in healthy participants (7.5% CO 2 model). 312 Some observed discrepancies between preclinical models and clinical trials may result from the limited correlation between the readout of animal studies, based on pseudoaffective reflex responses or unlearned behaviors, and symptoms in IBS patients, which reflect a subjective pain experience highly modulated by cortical influences. 1
As discussed in this review, the methods used to monitor visceral sensitivity in rodents, by introducing some bias in the observed responses, could also potentially contribute to the lack of clinical translation of some drugs. Improvement of animal models of visceral pain, in their construct and face validity, particularly through the development of non-invasive methods to monitor visceral sensitivity, together with a recently emerging algorithm of drug screening based on pharmacological brain imaging techniques, opens promising avenues for establishing an adequate approach to identifying effective treatments for IBS symptoms as well as IBS-related quality of life impairment.
"year": 2011,
"sha1": "43b745b6d634c2554f2b664882a2338bc5170f99",
"oa_license": "CCBYNC",
"oa_url": "http://www.jnmjournal.org/journal/download_pdf.php?doi=10.5056/jnm.2011.17.3.213",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "43b745b6d634c2554f2b664882a2338bc5170f99",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparative Study of Heparin-binding Proteins Profile of Murrah Buffalo (Bubalus bubalis) Semen
(2014) Comparative study of heparin-binding proteins profile of Murrah buffalo (Bubalus bubalis) semen, Veterinary World 7(9): 707-711.
Abstract
Aim: The experiment was conducted to study the total seminal plasma protein (TSPP) and heparin-binding proteins (HBPs) in relation to the initial semen quality of buffalo bulls. Materials and Methods: Semen from two Murrah buffalo bulls (bull no. 605 and 790) with mass motility of ≥3+ was used for the study and categorized into three groups (Group I: mass motility 3+, Group II: mass motility 4+ and Group III: mass motility 5+). Seminal plasma was separated from semen by centrifugation. HBPs were isolated and purified on a heparin-agarose affinity column using a modified elution buffer. TSPP and isolated HBPs concentrations were estimated by Lowry's method. The purified HBPs were resolved on sodium dodecyl sulfate polyacrylamide gel electrophoresis to compare the protein profiles of the two bulls. Results: Both TSPP and HBPs values were significantly higher (p<0.01) in bull no. 605 when compared to bull no. 790 in all three groups. A 31 kDa HBP was more intensely present in bull no. 605, which may indicate its superiority over bull no. 790 in relation to fertility potential. Conclusion: TSPP and HBPs show variation in concentration with respect to initial semen quality. Furthermore, the presence of a fertility-related 31 kDa HBP in one of the bulls may be an indication of high fertility of that bull. In future, in-vivo and in-vitro correlative studies on a larger basis are needed to establish fertility-related HBPs in semen, which might provide criteria for the selection of buffalo bulls with high fertility potential.
Introduction
Traditional semen quality tests in routine use to evaluate semen quality provide limited information about the potential fertility of bulls and do not provide a high correlation with, or even consistent results on, bull fertility. An alternate approach to testing semen quality with more accuracy is the protein fertility marker. A correlation between seminal plasma proteins and fertility of the male has been reported in some domestic animals such as the cow bull [1], stallion and boar [2], goat [3], and ram [4]. Specific seminal plasma proteins have been identified as potential markers of human male fertility/infertility [5].
Bovine seminal plasma (BSP) contains factors that may have either beneficial and/or detrimental effects on sperm function. Some of these factors are contributed by the accessory sex glands. These factors may be proteinaceous or non-proteinaceous, but the nature and characteristics of most of them are not well understood [6]. Over the past few decades, special attention has been paid to proteins present in seminal plasma and to their potential roles in sperm maturation events [7].
BSP contains a number of proteins, referred to as heparin-binding proteins (HBPs), responsible for the heparin-binding ability of spermatozoa [8]. These peptides are testosterone dependent, are produced by the seminal vesicles, prostate and Cowper's glands, and bind to sperm during ejaculation as sperm traverse the male reproductive tract [9]. Seminal plasma from a number of mammalian species contains HBPs, which mediate sperm capacitation [10] and have been associated with fertility due to their modulatory role during the acrosomal reaction [11]. The presence of HBPs in sperm membranes was indicative of the fertility potential of bulls [12]. HBPs modulate the capacitation and zona-binding ability of buffalo cauda epididymal spermatozoa [13].
The total seminal plasma protein (TSPP) concentration in buffalo bulls ranged from 28 to 36 mg/mL [14,15], and the concentration of buffalo HBPs was reported in the range of 1.47 to 2.61 mg/mL [15][16][17]. Eight major buffalo HBPs were reported in the molecular weight range of 13-71 kDa [13]. To date, studies on buffalo HBPs in terms of their concentration and number in relation to initial semen quality are scarce.
In the present study, TSPP and HBPs were studied among the ejaculates of a bull and between the ejaculates of bulls with different mass motility. The present study may be helpful in buffalo bull HBPs proteomic research, and in the characterization and identification of fertility-related proteins in the future.
Climate and experimental animals
Bareilly is located at 28°10' North latitude and 78°23' East longitude, at an altitude of 172 m above mean sea level, and has a moderate climate. Summer temperatures rise up to 40°C while winter temperatures drop to 8°C. The rainy season lasts from June to September with moderate humidity. Two healthy adult Murrah buffalo bulls maintained at the Germ-Plasm Center of the Animal Reproduction Division, IVRI, Izatnagar, Bareilly, UP, India were utilized for the study.
Chemicals
All chemicals were reagent grade and purchased from Sigma-Aldrich (St. Louis, MO, USA) and Merck, India.
Experimental animals and semen collection
Two healthy adult (4-6 years) Murrah buffalo bulls maintained under standard management conditions were used for semen collection for the entire period of study. Semen was collected during morning hours following the standard procedure using an artificial vagina.
Semen evaluation
Semen samples were evaluated for mass motility immediately after collection. Ejaculates having mass motility 3+ (0-5 point scale) and above were selected for the study and divided into three groups (Group I: mass motility 3+, Group II: mass motility 4+ and Group III: mass motility 5+). Semen samples were aliquoted in 2 mL microcentrifuge tubes supplemented with protease inhibitor cocktail (P2714, Sigma Aldrich, USA).
Seminal plasma separation
Each ejaculate was centrifuged initially at 4000 g for 20 min at 25°C and the supernatant was collected. The supernatant was again centrifuged at 10000 g for 60 min at 4°C to remove suspended spermatozoa and debris, if any. Clear seminal plasma (supernatant) was then stored at −20°C until further processing.
Estimation of protein concentration
TSPP and purified HBPs concentration were estimated as per the method given by Lowry [18].
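In practice, Lowry quantification reduces to fitting a linear standard curve to known protein standards and inverting it for the unknowns. The Python sketch below illustrates this step; the standard concentrations, absorbance readings and dilution factor are hypothetical values chosen for illustration, not data from this study.

```python
import numpy as np

# Hypothetical BSA standard curve: concentration (mg/mL) vs. absorbance at 750 nm
standards_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
standards_abs = np.array([0.00, 0.11, 0.22, 0.34, 0.44, 0.55])

# Fit a straight line A = m*c + b to the standards
slope, intercept = np.polyfit(standards_conc, standards_abs, 1)

def protein_concentration(absorbance, dilution_factor=1.0):
    """Invert the standard curve to estimate protein concentration (mg/mL)."""
    return dilution_factor * (absorbance - intercept) / slope

# Example: a 1:50 diluted seminal plasma sample reading A750 = 0.33
print(f"{protein_concentration(0.33, dilution_factor=50.0):.2f} mg/mL")
```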
Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE)
The isolated and purified HBPs were analyzed by 18% SDS-PAGE as described [19], using Coomassie Brilliant Blue G-50 stain. The apparent molecular mass was determined using a molecular weight marker (PG-PMT2922, Genetix Biotech Asia Ltd., India) and a gel documentation and analysis system (Gel-Doc XR, Bio-Rad, USA).
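Apparent molecular weights are conventionally read off a calibration of log10(MW) against the relative mobility (Rf) of the marker bands. The sketch below shows this calculation; the marker weights and mobilities are hypothetical illustration values, not measurements from this gel.

```python
import numpy as np

# Hypothetical marker ladder: molecular weights (kDa) and relative mobilities (Rf)
marker_kda = np.array([97.0, 66.0, 45.0, 29.0, 20.0, 14.3])
marker_rf = np.array([0.10, 0.22, 0.38, 0.55, 0.70, 0.85])

# For SDS-PAGE, log10(MW) is approximately linear in Rf
slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), 1)

def apparent_mw(rf):
    """Estimate the apparent molecular weight (kDa) of a band from its Rf."""
    return 10 ** (slope * rf + intercept)

# Example: an HBP band migrating at Rf = 0.52
print(f"{apparent_mw(0.52):.1f} kDa")
```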
Statistical analysis
To minimize variation in the subjective evaluation, each sample was evaluated in duplicate and the average was analyzed using SPSS version 18.0 (IBM Software Company, USA). Analysis of variance (ANOVA) was performed for statistical analysis of the data, using PROC GLM of SAS 9.3 software (SAS Institute Inc., USA), for the estimation of means and standard errors of the variables under investigation. ANOVA and the t-test were used for comparisons between the bulls for the variables TSPP and the respective HBPs.
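The between-bull and between-group comparisons described above can be reproduced with standard routines; a minimal sketch in Python follows (the study itself used SPSS and SAS). The numeric values are hypothetical placeholders, not the actual ejaculate data.

```python
import numpy as np
from scipy import stats

# Hypothetical duplicate-averaged TSPP values (mg/mL) per ejaculate, Group I
tspp_bull_605 = np.array([30.5, 30.8, 30.6, 30.7, 30.6])
tspp_bull_790 = np.array([28.4, 28.6, 28.5, 28.5, 28.6])

# Between-bull comparison within a group (two-sample t-test)
t_stat, p_val = stats.ttest_ind(tspp_bull_605, tspp_bull_790)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Between-group comparison within one bull (one-way ANOVA over Groups I-III)
group1 = np.array([30.5, 30.8, 30.6])
group2 = np.array([31.6, 31.7, 31.7])
group3 = np.array([32.4, 32.6, 32.6])
f_stat, p_anova = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```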
TSPP
The mean values of TSPP concentration in bull no. 605 and 790 in Groups I, II and III were 30.64±0.12, 31.66±0.09, 32.53±0.19 and 28.51±0.09, 29.49±0.15, 30.45±0.17 mg/mL, respectively (Table-1). The TSPP concentration was significantly (p<0.01) higher in Group III when compared to Groups I and II in both bulls. TSPP levels in all groups of bull no. 790 were significantly (p<0.01) lower than in bull no. 605, which might be due to a lower level of plasma testosterone [20] and the health status of the accessory sex glands [21]. The TSPP values in the present study ranged from 28.51 to 32.53 mg/mL, in agreement with previously reported values (28-33 mg/mL) [14-16]. Variation in the concentration of TSPP in split ejaculates from the same Holstein bull and within the bull was also observed [22]. The variation in buffalo TSPP concentration might be due to individual bull variation, age, inherent character, size, season and the health status of the seminal vesicles [21], among other factors.
HBPs concentration and electrophoretic profile
The overall mean values of HBPs concentration in bull no. 605 and 790 in Groups I, II and III were 3.11±0.07, 3.32±0.06, 3.46±0.08 and 2.51±0.08, 2.91±0.05, 3.10±0.03 mg/mL, respectively (Table-1). Bull no. 605 showed a significantly (p<0.01) higher HBPs concentration when compared to bull no. 790 in all three groups. The HBPs level was significantly (p<0.01) higher in Group III for bull no. 605, and significantly (p<0.01) higher in both Groups II and III than in Group I for bull no. 790.
In the present study, the overall mean values of HBPs ranged from 2.84 to 3.30 mg/mL and were greater than the previously reported concentrations of 2.61 mg/mL [13], 2.01 mg/mL [17] and 1.15-2.18 mg/mL [23,24]. Variation in the concentration of BSP in split ejaculates from the same Holstein bull and within the bull was also observed [22]. HBPs in seminal plasma may positively influence fertility [1].
SDS-PAGE analysis of the isolated HBPs was done to assess the number of purified HBPs (Figure-1) in each bull. In the present study, the HBPs showed 11-14 protein bands on SDS-PAGE in the molecular weight range of 11-129 kDa (11, 14, 15, 16, 20, 28, 29, 31, 43, 48, 53, 71, 92 and 129 kDa), of varying intensity in the Coomassie-stained gel. Previous reports indicated 6 to 9 buffalo HBP bands [13,17,24]. In bull no. 605, three HBP bands were absent, at the 20, 92 and 129 kDa positions; in bull no. 790, only one HBP band was absent, at the 29 kDa position. Variation in the intensity of the 14, 15, 16, 31 and 71 kDa HBPs was also observed between bull no. 605 and 790 (Figure-1), reflecting the variation in their respective concentrations in the seminal plasma. The absence or presence of a particular HBP in the seminal plasma may be associated with the fertility potential of a bull. The presence of a 31 kDa HBP (fertility-associated antigen) has been reported to be associated with better conception in cattle bulls [25]. In the present study also, the 31 kDa HBP was more intensely present in bull no. 605, which might indicate its superiority over bull no. 790 in relation to fertility potential.
The variation in the concentration and number of HBPs within and between the bulls might be due to the testosterone-dependent secretory activity of the accessory sex glands [9], the health status of the accessory sex glands [26] and the inherent character of an individual bull. Furthermore, aggregation products of low molecular weight proteins or degradation products of high molecular weight proteins [13] may alter the HBPs profile of a bull.
Variation in TSPP and HBPs concentration and in the number of HBPs between bulls may be indicative of the low or high fertility potential of a bull. To the authors' best knowledge, no literature is available in this regard. To establish any such correlation would require screening a large number of ejaculates from a number of bulls. Thus, further study on a larger basis is needed, which might be helpful in the selection of buffalo bulls with high fertility.
Conclusion
In the present study, one of the bulls (no. 605) showed significantly (p<0.01) higher concentrations of TSPP and HBPs in relation to initial semen quality. Furthermore, the greater intensity of the 31 kDa HBP in the same bull might reflect its better fertility potential. In the future, in-vivo and in-vitro correlative studies on a larger basis are needed to establish fertility-related HBPs in semen, which might provide criteria for the selection of buffalo bulls with high fertility potential.
Table-1 footnote: Means bearing different superscripts (a, b and x, y) in a row differ significantly (p<0.01); means bearing different superscripts (A, B and C) in a column differ significantly (p<0.01). n=number of ejaculates, MM=mass motility, TSPP=total seminal plasma protein, HBPs=heparin-binding proteins, SE=standard error.
"year": 2014,
"sha1": "4cb1c154b06543c00c8715afcb4872a27f1d7662",
"oa_license": "CCBY",
"oa_url": "http://www.veterinaryworld.org/Vol.7/September-2014/16.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4cb1c154b06543c00c8715afcb4872a27f1d7662",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
On the origin of the holographic principle
It was recently suggested that quantum mechanics and gravity are not fundamental but emerge from information loss at causal horizons. On the basis of this formalism, the holographic principle is also shown to arise naturally from the loss of information about bulk fields observed by an outside observer. As an application, Witten's prescription is derived.
I. INTRODUCTION
The holographic principle [1,2], including the AdS/CFT correspondence [3], asserts an unexpected connection between the physics in a bulk and quantum field theory (QFT) on its boundary surface. The origin of this mysterious connection is still unknown. On the other hand, many studies of black hole physics after Bekenstein and Hawking have implied a deep connection between gravity and thermodynamics [4]. For example, Jacobson suggested that Einstein's equation describes thermodynamics at local Rindler horizons, and Padmanabhan [5] proposed that classical gravity can be derived from the equipartition energy of horizons. Verlinde recently proposed an intriguing idea [6] linking both Einstein's gravity and Newton's mechanics to entropic forces. All these works emphasized the strange connection between thermodynamics and gravity, the origin of which is also still mysterious [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. Since thermodynamic entropy can be interpreted as a measure of information or information loss, the connection implies a close relationship between information and gravity.
In a series of works [23][24][25][26], based on information theory, my colleagues and I suggested that information loss (or quantum entanglement) at causal horizons is the key to understanding the origins of dark energy [23], black hole mass [24] and even Einstein's gravity [26]. Many other studies in quantum information science have supported the idea that information is fundamental. For example, one can obtain a quantum mechanical state discrimination bound from the condition that information propagation is not superluminal (see, for example, [27]). Landauer's principle in quantum information science implies that information is physical [28]. Zeilinger and Brukner [29,30] suggested that quantum randomness arises from the discreteness of information. 't Hooft also suggested that quantum mechanics has a deterministic theory that includes local information loss [31].
Since the number of degrees of freedom (DOF) in a bulk theory is proportional to the volume of the bulk, whereas the number of DOF in a boundary theory is proportional to its surface area, it seems impossible to prove a generic holographic principle within the context of QFT. It might be possible to prove the principle on the basis of new principles that can explain the origin of both gravity and quantum mechanics.
In [32] I showed that quantum mechanics is not fundamental but emerges from information loss at causal horizons. If gravity and quantum mechanics can be derived by considering information loss at causal horizons (see [33] for a review), it is natural to think that the holographic principle has a similar origin. This paper suggests that the principle can be derived by applying information theory to causal horizons.
In Sec. II, the derivation of quantum mechanics from information theory is reviewed. In Sec. III, a derivation of the holographic principle is presented, and Witten's prescription is derived as an application. Sec. IV contains conclusions.
II. QUANTUM FIELD THEORY FROM INFORMATION LOSS
In this section, I review the way in which QFT, and hence quantum mechanics, arises from the application of information theory at causal horizons. It is important to understand that our theory is based on neither QFT nor Einstein's gravity. On the contrary, they could be derived from the postulates below. Inspired by the works described in Sec. I, we can choose the following postulates as new general guiding principles on which any physical law should be based. We also assume the metric nature of spacetime (however, we do not assume Einstein's equation) and ignore any fluctuation of spacetime in this paper. Postulate 2 implies that for a given observer, there could be causal horizons that block information about a region the observer cannot access.
Although postulates 1 and 2 are familiar, postulate 3 deserves more explanation. It implies that the interpretation of a physical reality depends on the information an observer can access, and hence there is no objective reality independent of observers. This sounds counterintuitive, but it is exactly what ordinary quantum mechanics says. For example, it is possible that a pure qubit state $|\psi\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$ seen by one observer can be a maximally mixed state for another observer who could not access information about the state. An important point here is that both descriptions of the same qubit are perfectly valid and not in contradiction. Similarly, regardless of measurements done by an observer inside a causal horizon, quantum states of matter inside the horizon seen by an outside observer are maximally mixed. The two descriptions of the matter, including the observer's state itself, should be coincident. On average, no observer has priority. Considering the surface action terms in relativity, Padmanabhan also pointed out that physical theories must be formulated in terms of variables any given observer can access [34].
One special conclusion derived from the three postulates together is that physics inside a causal horizon should respect (or be consistent with) the ignorance of an outside observer about the inside region. This naturally introduces the notion of a horizon entropy arising by definition from information loss. Some authors have argued that this information loss is the origin of black hole entropy [35].
Using the above postulates, I showed in [32] that quantum mechanics is not fundamental but emerges from the application of classical information theory to causal horizons. The path integral (PI) quantization and quantum randomness can be derived by considering information loss for accelerating observers of fields or particles crossing Rindler horizons. This implies that information is one of the fundamental roots of all physical phenomena. I also investigated the connection between this theory and Verlinde's entropic gravity theory.
Let us briefly review the information theoretic formalism suggested in Ref. [32]. Consider an accelerating observer $\Theta_R$ with acceleration $a$ in the $x_1$ direction in a flat spacetime with coordinates $x = (t, x_1, x_2, x_3)$ (see Fig. 1). The Rindler coordinates $\xi = (\eta, r, x_2, x_3)$ for the observer are
$$t = r \sinh(a\eta), \quad x_1 = r \cosh(a\eta). \qquad (1)$$
There is another observer $\Theta_M$ inside the Rindler horizon. Now, consider a field $\phi$ crossing the Rindler horizon and entering the future wedge $F$. The configuration for $\phi(x)$ in $F$ is just a scalar function of the coordinates $x$, not a classical field.
FIG. 1. The accelerating observer $\Theta_R$ has no accessible information about the field $\phi$ in the causally disconnected region $F$. The observer can only estimate a probabilistic distribution of the field, which turns out to be equal to that of a quantum field seen by a Minkowski observer $\Theta_M$ (dashed line).
As the field enters the Rindler horizon of the observer $\Theta_R$, the observer receives no further information about future configurations of $\phi$. All that the observer can guess about the evolution of $\phi$ is the probabilistic distribution $P[\phi]$ of $\phi$ beyond the horizon. The information already known about $\phi$ constrains $P[\phi]$. According to our postulates, the physics in the wedge $F$ is determined not by deterministic classical physics but by the evolution of information. The maximum ignorance about the field can be mathematically expressed by maximizing the Shannon information entropy, so to calculate the probability distribution $P[\phi]$ we should use Boltzmann's theorem of maximum entropy, subject to the known constraints
$$\langle f_j \rangle = \sum_i P[\phi_i]\, f_j[\phi_i] = \bar{f}_j. \qquad (2)$$
Here, $f_j$ $(j = 1 \cdots N_j)$ is a functional of $\phi$ and $\bar{f}_j$ is its statistical expectation value with respect to $P[\phi]$. According to the theorem, by maximizing the Shannon entropy with the constraints in Eq. (2), one can obtain the following probability distribution:
$$P[\phi_i] = \frac{1}{Z} \exp\Big(-\sum_j \lambda_j f_j[\phi_i]\Big), \qquad (3)$$
where the $\lambda_j$ are Lagrange multipliers. One constraint may come from energy conservation, $\sum_i P[\phi_i]\, H[\phi_i] = E$, where $H[\phi_i]$ is the Hamiltonian as a function of the field configuration $\phi_i$ and $E$ is its expectation, and another trivial one is the unity of the probabilities, $\sum_{i=1}^{n} P[\phi_i] = 1$. Then, the probability with these constraints estimated by the Rindler observer should be
$$P[\phi_i] = \frac{e^{-\beta H[\phi_i]}}{Z}, \qquad (4)$$
where $\beta$ is a Lagrangian multiplier, and the partition function is
$$Z = \sum_i e^{-\beta H[\phi_i]}. \qquad (5)$$
Thus, the thermal nature of quantum fields is a natural consequence of classical information theory, when information loss with constraints occurs.
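To make Eqs. (2)-(5) concrete, the following Python sketch numerically maximizes the Shannon entropy over a toy set of configuration energies, subject to normalization and a fixed mean energy, and checks that the optimum has the Boltzmann form of Eq. (4). The energy levels and target E are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy set of field configurations phi_i with energies H[phi_i] (arbitrary values)
H = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
E_target = 1.2  # constrained mean energy <H> = E

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))  # minimizing this maximizes Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},         # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, H) - E_target}, # energy constraint
]
p0 = np.full(H.size, 1.0 / H.size)
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * H.size,
               constraints=constraints)

# Boltzmann check: log p should be linear in H with slope -beta, as in Eq. (4)
beta = -np.polyfit(H, np.log(res.x), 1)[0]
print("max-ent p:", np.round(res.x, 4), "  inferred beta:", round(beta, 3))
```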
As an example, let us consider a scalar field with Hamiltonian
$$H = \int d^3x \left[\frac{\pi^2}{2} + \frac{(\nabla\phi)^2}{2} + V(\phi)\right] \qquad (6)$$
and a potential $V$. For the Rindler observer with the coordinates $(\eta, r, x_2, x_3)$, the proper time interval is $ar\,d\eta$ and the Hamiltonian becomes
$$H_R = \int dr\, d^2x_\perp\, ar \left[\frac{\pi^2}{2} + \frac{(\partial_r\phi)^2}{2} + \frac{(\nabla_\perp\phi)^2}{2} + V(\phi)\right],$$
where $\perp$ denotes the plane orthogonal to the $(\eta, r)$ plane. Then, $Z$ becomes Eq. (2.5) of Ref. [36];
$$Z_R = \sum_i e^{-\beta H_R[\phi_i]}. \qquad (7)$$
Notice that $Z$ (and hence $Z_R$) here is not a quantum partition function but a statistical one corresponding to the uncertain field configurations beyond the horizon. The equivalence of this form of $Z_R$ and a quantum partition function for a scalar field in the Minkowski spacetime (say $Z_Q$) is shown in Ref. [36], which is the famous Unruh effect. (See [37] for a review.) A continuous version of Eq. (7) in QFT is
$$Z_R = \int D\phi\, \exp\left(-\int_0^\beta d\eta \int dr\, d^2x_\perp\, ar \left[\frac{(\partial_\eta\phi)^2}{2(ar)^2} + \frac{(\partial_r\phi)^2}{2} + \frac{(\nabla_\perp\phi)^2}{2} + V(\phi)\right]\right). \qquad (8)$$
By further changing integration variables as $\bar{r} = r\cos(a\eta)$, $\bar{t} = r\sin(a\eta)$ and choosing $\beta = 2\pi/a$, the region of integration is transformed into the full two-dimensional flat space, which leads to the Unruh temperature $T_U = \hbar a/2\pi k_B$, where $k_B$ is the Boltzmann constant. Then, the partition function becomes that of a Euclidean flat spacetime;
$$Z^E_Q = \int D\phi\, e^{-I_E[\phi]}, \qquad (9)$$
where $I_E$ is the Euclidean action for the scalar field in the inertial frame. Since both $Z_R$ and $Z_Q$ can be obtained from $Z^E_Q$ by analytic continuation, they are physically equivalent [37].
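As a quick sense of scale, the Unruh temperature derived above can be evaluated numerically (restoring $c$, $T_U = \hbar a/2\pi c k_B$); the acceleration used in the sketch below is an arbitrary example value.

```python
from math import pi
from scipy.constants import hbar, c, k as k_B

def unruh_temperature(a):
    """Unruh temperature T_U = hbar*a / (2*pi*c*k_B) for proper acceleration a (m/s^2)."""
    return hbar * a / (2.0 * pi * c * k_B)

# Even an enormous acceleration of 1e20 m/s^2 corresponds to only ~0.4 K
print(f"{unruh_temperature(1e20):.3f} K")
```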
Recall that Eq. (7) was derived without using any quantum physics. Since quantum mechanics can be thought of as the single particle limit of QFT, this implies that quantum mechanics emerges from the application of information theory to Rindler horizons and is not fundamental in our formalism.
Near any static horizon with a more generic static metric $ds^2 = -f^2 dt^2 + \gamma_{\alpha\beta}\, dx^\alpha dx^\beta$, the metric reduces to the Rindler form [38] $ds^2 \simeq -f^2 dt^2 + df^2/\kappa^2 + dL_\perp^2$, where $\kappa$ is the surface gravity and $dL_\perp^2$ is the metric for the orthogonal directions. Therefore, we expect the information theoretic interpretation of QFT to be valid for more generic static metrics.
This information theoretic approach is more than a reinterpretation of quantum mechanics. For example, it could explain the origin of quantum randomness and PI quantization, which were assumptions in ordinary quantum mechanics. Note that by extremizing Z R this approach also explains the origin of the thermodynamic relation dE = T dS in gravitational systems [33], which leads to entropic gravity and Jacobson's thermodynamic model of Einstein's gravity. Another bonus is the explanation of the well-known analogy between QFT and classical statistical mechanics; QFT is essentially a statistical system in disguise. Surprisingly, this formalism also gives rise to a derivation of the holographic principle, which will be presented in the next section.
III. HOLOGRAPHIC PRINCIPLE FROM INFORMATION LOSS
The information theoretic derivation of quantum mechanics in the previous section makes it simple to understand the physical origin of the holographic principle. Consider a $(d+1)$-dimensional bulk region $\Omega$ with a $d$-dimensional boundary $\partial\Omega$ that is a one-way causal horizon (see Fig. 2), such as a black hole horizon, Rindler horizon or cosmic horizon. Imagine an outside observer $\Theta_O$ (like $\Theta_R$) who cannot access information about matter or spacetime in the region because of the horizon. The situation in $\Omega$ is maximally uncertain to the outside observer, and the best the observer could do is to estimate the probability of each possible field configuration of $\phi$ in $\Omega$, which turns out to be the probability amplitude in the PI [32]. During this estimation, $\Theta_O$ would use the maximal information available to her/him. The previous section showed that the observer's ignorance leads to quantum fluctuations of fields in $\Omega$. Thus, paradoxically, the outside observer's ignorance is an essential ingredient for any physics in $\Omega$.
FIG. 2. Consider a bulk $\Omega$ with a causal horizon $\partial\Omega$ and an inside observer $\Theta_I$. The outside observer $\Theta_O$ has no information about the field configuration $\phi(x)$ in $\Omega$ except for its boundary values $\phi_0(X)$ and derivatives. Thus, according to the information theoretic interpretation, the physics in $\Omega$ is completely described by the boundary physics on $\partial\Omega$, which is just the holographic principle.
According to the postulate 2, there is no non-local interaction that might allow super-luminal communication. Therefore, we restrict ourselves to local field theory in this paper. For a local field, any influence on Ω from the outside of the horizon should pass the horizon. This means that, according to postulate 3, all the physics in the bulk Ω is fully described by the DOF on the boundary ∂Ω, which is just the essence of the holographic principle! In other words, information loss due to a horizon gives rise to quantum randomness in the bulk, and at the same time allows the outside observer Θ O to describe the physics in the bulk using only the DOF on the boundary. That is the best Θ O can do by any means, and the general equivalence principle demands that this description is sufficient for understanding the physics in the bulk, which is the holographic principle.
Therefore, the following version of the holographic principle is a natural consequence of the information theoretic formalism of QFT based on the three postulates.
Theorem 1 (holographic principle). For local field theory, physics inside a causal horizon can be described completely by physics on the horizon.
One may think of this as a derivation, albeit a simple one, of the holographic principle from the information theoretic postulates. Note that this derivation is generic, because the arguments we used in this section rely on neither the specific form of the metric nor any symmetries the fields may have.
What else can the information theoretic formalism tell us about the holographic principle? First, the holographic principle cannot be applied to a general surface that is not a causal horizon. The range of application of the principle was a long-standing problem. Second, according to postulate 2, there should be a finite length scale $l$, and a finite area $l^2$ that can contain a bit of information. Thus, the total entropy $S$ that the surface $\partial\Omega$, and hence the bulk $\Omega$, can have is proportional to the surface area $A$;
$$S \propto \frac{A}{l^2}.$$
The area law naturally emerges too. Third, it implies that the black hole entropy represents the uncertainty of field configurations inside the black hole horizon. How can we relate bulk physics with boundary physics? To demonstrate the plausibility of theorem 1, I derive Witten's prescription of the holographic principle as an example. (Assume that the scalar field $\phi$ has a Lagrangian $L(\phi, \partial\phi)$.) The conjecture says
$$Z_{\partial\Omega}[\phi_0] = Z_\Omega[\phi|_{\partial\Omega} = \phi_0], \qquad (10)$$
where $Z_{\partial\Omega}[\phi_0] \equiv \langle \exp(-\int d^dX\, \phi_0 \lambda) \rangle_{\partial\Omega}$ is the generating functional on $\partial\Omega$ with $\phi_0$ as a source for a boundary field $\lambda$.
$Z_\Omega[\phi|_{\partial\Omega} = \phi_0]$ is the partition function for the bulk field $\phi$ on $\Omega$ which approaches $\phi_0$ at the boundary. In Sec. II, we used only one nontrivial constraint, on $E$. More generally, other constraints may exist regarding the boundary field values $\phi = \phi_0$ on $\partial\Omega$, which the outside observer $\Theta_O$ can measure in principle. (When it is impossible to assign a field value on $\partial\Omega$, one may consider a stretched horizon instead of the horizon itself.) Thus, the field value $\phi_0$ and its derivatives at each point on the boundary $\partial\Omega$ could be the maximal information (besides $E$ and the form of $H(\phi)$) that the observer $\Theta_O$ can measure or change to influence the physics in $\Omega$ and that constrains the probabilities for the field in $\Omega$. For simplicity, we assume a Dirichlet boundary condition.
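As a brief numerical aside on the area law $S \propto A/l^2$ noted above: if $l$ is taken to be of order the Planck length, one recovers the Bekenstein-Hawking magnitude of horizon entropy. The sketch below evaluates this for a solar-mass Schwarzschild horizon; the choice of mass is purely illustrative.

```python
from math import pi, log
from scipy.constants import hbar, c, G

def horizon_entropy_bits(mass_kg):
    """Bekenstein-Hawking entropy S = k_B * A / (4 l_p^2) of a Schwarzschild
    horizon, returned in bits, i.e. S / (k_B * ln 2); l_p^2 = hbar*G/c^3."""
    r_s = 2.0 * G * mass_kg / c**2   # Schwarzschild radius (m)
    area = 4.0 * pi * r_s**2         # horizon area (m^2)
    l_p2 = hbar * G / c**3           # Planck length squared (m^2)
    return area / (4.0 * l_p2) / log(2)

print(f"{horizon_entropy_bits(1.989e30):.2e} bits")  # one solar mass: ~1.5e77 bits
```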
Alternatively, one can describe the bulk physics by using only quantities defined on the boundary. Imagine that there is only one boundary field $\lambda$ that has an action $S_\lambda$ and an interaction term $\phi_0 \lambda$. Then, the partition function for the boundary field is
$$Z_\lambda[\phi_0] = \int D\lambda\, \exp\Big(-S_\lambda(\lambda) - \int d^dX\, \phi_0 \lambda\Big).$$
The effective number of DOF of $\lambda$ should be equal to that of $\phi$, because theorem 1 implies that the boundary physics with $(S_\lambda(\lambda), E, \phi_0)$ has all the information about the bulk field having $(E, \phi_0, H(\phi))$ as parameters. For a given $E$ and $H$, the classical field $\phi_0$ is the only free parameter describing both the bulk partition function and the boundary partition function. The partition function for a thermal system should contain all the information of the system. Since $\lambda$ should describe the physics of $\phi$, $Z_\lambda[\phi_0] = Z_\Omega[\phi|_{\partial\Omega} = \phi_0]$ should hold for $\lambda$ having $\phi_0$ as a source. Here, $S_\lambda$ is not arbitrary but should be such that $Z_\lambda[\phi_0]$ well reproduces $Z_\Omega[\phi|_{\partial\Omega} = \phi_0]$. In other words, there is a duality mapping $(\phi_0, H(\phi)) \longleftrightarrow (\phi_0, S_\lambda(\lambda))$. $\lambda$ could be hypothetical (mathematical) rather than physical. In short, Witten's prescription is a natural consequence of the information theoretic formalism. However, the derivation above does not guarantee the existence of $(\lambda, S_\lambda)$ for arbitrary bulk fields. Since a description is not a physical rule, the derivation is enough for our purpose.
Of course, the saddle point approximation of the bulk partition function becomes
$$Z_\Omega[\phi|_{\partial\Omega} = \phi_0] \simeq \exp(-I_E(\phi)),$$
where $I_E(\phi)$ is the Euclidean classical action with the boundary condition $\phi|_{\partial\Omega} = \phi_0$ in the curved spacetime, as usual. Then, Witten's prescription yields a relationship between QFT and gravity.
To be concrete, let us consider a derivation of the prescription for the Rindler metric in detail as an example. To show the equivalence, we divide the surface $\partial\Omega$ into $N_j$ small patches and discretize the bulk field $\phi$. We also assume that the field satisfies a Dirichlet boundary condition at the horizon. By repeating the calculation leading to Eq. (4) with additional constraints on the expectation values $\sigma_j$ of the boundary field values $\phi_{0j}$ at the $j$-th patch, one can easily obtain the probability distribution
$$P[\phi_i] = \frac{1}{Z} \exp\Big(-\beta H[\phi_i] - \sum_j \lambda_j\, \phi_{0j}(\phi_i)\Big),$$
where $\phi_{0j}(\phi_i)$ represents the boundary field value at the $j$-th patch corresponding to a specific bulk field configuration $\phi_i$, and $\lambda_j$ is the Lagrange multiplier field at patch $j$.
The index $j$ denotes the position on $\partial\Omega$ and shall be promoted to the $d$-dimensional coordinate $X$ in the continuum limit. Since the number of independent $\lambda_j$ values is $N_j$ and $\lambda_j$ couples to the boundary field, we can naturally think of $\lambda_j$ as another scalar field on the boundary $\partial\Omega$. Taking a continuum limit and repeating the procedure leading to Eq. (8), we obtain a partition function of the form
$$Z^E_Q[\sigma] = \int D\phi\, \exp\Big(-\int_\Omega L_0 - \int_{\partial\Omega} d^dX\, \lambda(X)\, \phi_0(X)\Big),$$
where $L_0$ is the Euclidean Lagrangian and $\lambda(X)$ and $\sigma(X)$ are promoted continuous versions of $\lambda_j$ and $\sigma_j$, respectively. Note that this term is just the right-hand side of Eq. (10), $Z_\Omega[\phi|_{\partial\Omega} = \sigma]$, with $\phi_0(X) = \sigma(X)$ identified as a classical boundary value for $\phi$ at $\partial\Omega$. This identification is physically reasonable because, strictly speaking, our theory and ordinary QFT contain no genuine classical field. The classical field is an approximate concept valid only in a specific limit.
Next, we need to show that Z^E_Q[σ] is equivalent to Z_∂Ω[σ]. Since σ(X) is a c-number function defined on the boundary, it has no dynamics on ∂Ω. Thus, one can think of σ(X) as a source function linked with some boundary field. Considering the action in Z^E_Q, the simplest candidate for the boundary field is the Lagrange multiplier field λ(X) itself. This dual field may have an action S_λ(λ) on ∂Ω. Therefore, the partition function for the boundary field is Z_λ[σ] = ∫ Dλ exp[−S_λ(λ) − ∫_∂Ω σλ]. Within the conventional QFT formalism, it seems impossible to prove the general equivalence of Z_λ[σ] and Z^E_Q[σ] because of the difference in their spacetime dimensions. Here, we must recall theorem 1. According to the theorem, all the information in the bulk should be contained in the boundary, and σ should be the only messenger available (except for E) between the bulk and the boundary. (We also need to link S_λ to H.) In other words, regarding φ in Ω and ∂Ω, the outside observer can change only σ. Furthermore, the relevant fields describing the effectively same system (the bulk and boundary scalars) are φ and λ. Thus, the two partition functions as functionals of σ should be equal, i.e., Z_λ[σ] = Z^E_Q[σ], and Witten's prescription Eq. (10) holds for the Rindler metric. The generating functional and the Euclidean nature of Witten's prescription arise naturally in the formalism.
IV. CONCLUSIONS
In summary, this paper shows that the holographic principle, like quantum mechanics and gravity, is not fundamental but emerges from information loss at causal horizons. The derivation is generic because we assumed neither supersymmetry nor string theory. This suggests the universality of the holographic principle applied at causal horizons and validates the application of the principle to other quantum systems such as condensed matter. The principle is intimately related to quantum mechanics. The derivation of the holographic principle in this paper is not a simple transformation of the principle to the postulates, because with the postulates one can derive quantum mechanics and Einstein's gravity as well as the principle. Since quantum mechanics and holography originate in information loss at causal horizons, information seems to be the common root of physics. This could open a new route to unifying gravity and quantum mechanics.
In our future work, we need to verify the equivalence of the information theoretic formalism and QFT in a more generic curved spacetime. To check the usefulness of this formalism, it is desirable to show the relationships between partition functions for other spacetimes, especially the AdS/CFT correspondence. | 2011-08-01T06:58:30.000Z | 2011-07-18T00:00:00.000 | {
"year": 2011,
"sha1": "fe52ad72349d6f9ee6c5d76a7602047f44b0dee0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fe52ad72349d6f9ee6c5d76a7602047f44b0dee0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
31934895 | pes2o/s2orc | v3-fos-license | Interacting winds in massive binaries
Massive stars feature highly energetic stellar winds that interact whenever two such stars are bound in a binary system. The signatures of these interactions are nowadays found over a wide range of wavelengths, including the radio domain, the optical band, as well as X-rays and even gamma-rays. A proper understanding of these effects is thus important to derive the fundamental parameters of the components of massive binaries from spectroscopic and photometric observations.
Introduction
Whenever two massive stars (spectral types O or Wolf-Rayet) form a physical binary system, their stellar winds interact. Historically, this concept was first introduced by Prilutskii & Usov (1976) and Cherepashchuk (1976), who proposed that such systems should be bright X-ray sources. Following the detection of X-ray emission from massive stars with the EINSTEIN satellite, and the finding that many of the X-ray brightest massive stars are indeed binaries (Chlebowski & Garmany 1991, Pollock 1987b), interest in this subject grew rapidly. Over the last decades, enormous progress has been achieved in our understanding of this phenomenon. It is now well established that the signature of wind-wind interactions is not restricted to the X-ray domain alone, but concerns most parts of the electromagnetic spectrum. Therefore, a proper understanding of wind-wind interactions is needed to consistently interpret multi-wavelength observations of massive binaries. To first order, stellar winds colliding at highly supersonic velocities produce an interaction zone limited by two hydrodynamical shocks. The shape of the contact discontinuity between the two winds is set by ram pressure equilibrium, which is mainly ruled by the so-called wind-momentum ratio R = (Ṁ_1 v_∞,1 / Ṁ_2 v_∞,2)^{1/2} (Stevens et al. 1992), where Ṁ and v_∞ are the mass-loss rate and the terminal wind velocity, respectively. At the shock fronts, the huge kinetic energy of the incoming flows is converted into heat, leading to a substantial increase of the plasma temperature in the post-shock region. The physical properties of the plasma in the wind interaction zone then depend on the efficiency of radiative cooling (Stevens et al. 1992). If cooling proceeds very slowly compared to the flow time of the winds, the wind interaction zone is said to be in the adiabatic regime. This regime is mainly encountered in long-period binary systems where the pre-shock densities of the winds are rather low. In this case, the X-ray luminosity is expected to scale as Ṁ² v^{-3.2} r^{-1}, where r is the separation between the stars and v the pre-shock velocity (Stevens et al. 1992). Conversely, if radiative cooling operates very quickly, the wind interaction zone is radiative. This situation frequently occurs in short-period binary systems. In this case, the X-ray luminosity should scale as Ṁ v².
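As a minimal numerical illustration of the quantities just quoted, the following Python sketch computes the wind-momentum ratio and the proportionalities of the adiabatic and radiative X-ray luminosities. The wind parameters are arbitrary example values, not those of any particular system, and only relative scalings (not absolute luminosities) are evaluated.

```python
import numpy as np

# Illustrative wind parameters (not from any specific system):
# mass-loss rates in Msun/yr and terminal velocities in km/s.
Mdot1, v1 = 1e-5, 2000.0   # star 1
Mdot2, v2 = 2e-6, 2500.0   # star 2

# Wind-momentum ratio R = (Mdot1*v1 / (Mdot2*v2))**0.5 (Stevens et al. 1992).
R = np.sqrt((Mdot1 * v1) / (Mdot2 * v2))
print(f"wind-momentum ratio R = {R:.2f}")

def Lx_adiabatic(Mdot, v, r):
    """Adiabatic regime: L_X proportional to Mdot**2 * v**-3.2 / r."""
    return Mdot**2 * v**-3.2 / r

def Lx_radiative(Mdot, v):
    """Radiative regime: L_X proportional to Mdot * v**2."""
    return Mdot * v**2

# In an eccentric adiabatic system, L_X should vary as 1/r along the orbit:
r = np.linspace(1.0, 3.0, 5)              # separation in arbitrary units
print(Lx_adiabatic(Mdot1, v1, r) / Lx_adiabatic(Mdot1, v1, r[0]))
```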
Following the pioneering work of Stevens et al. (1992), much insight into the physics of colliding winds has been gained via hydrodynamical simulations. State of the art simulations account for 3-D effects, such as the Coriolis force (Parkin & Pittard 2008, Pittard & Parkin 2010) and clumpy winds (Pittard 2007). These models now allow a detailed confrontation with actual observations. Generally speaking, the wind interaction will have several consequences that are prone to produce observational signatures. The most obvious examples are the loss of spherical symmetry of the winds, the increase of density and temperature in the post-shock plasma, as well as the acceleration of particles through diffusive shock acceleration. A number of interacting wind systems have now been monitored over a broad range of wavelengths. In addition to high-resolution optical spectroscopy, modern X-ray spectroscopy (with XMM-Newton and Chandra) has tremendously contributed to the progress in this field. At the longer wavelength end, Very Long Baseline Interferometer radio observations allowed us, for the first time, to directly see the wind interaction zone in several systems.
X-ray emission as a tracer of hot gas in interacting wind systems
In principle, shock-heated plasma in massive binaries can produce copious amounts of X-rays, exceeding the level of intrinsic X-ray emission from the binary components by a large factor. Indeed, a number of colliding wind binaries have been found to be exceptionally bright in X-rays (e.g. WR 25, Pollock & Corcoran 2006). Yet, recent results demonstrated that the vast majority of massive binaries show a rather 'normal' level of X-ray emission (e.g. De Becker et al. 2004, Nazé 2009, Nazé et al. 2010). The reason why large X-ray over-luminosities are restricted to a minority of massive binaries is still to be understood. Possible explanations include a plasma cooling so efficiently that it emits mostly at longer wavelengths, reduction of the wind pre-shock velocity via radiative braking (Gayley et al. 1997), reduction of the mass-loss rates, and so on.
In those cases where substantial X-ray emission is produced, the observable X-ray flux is often strongly variable with orbital phase. In circular systems, phase-locked variability is due to variations of the occultation and the circumstellar absorption towards the wind interaction zone (see e.g. the predictions of the hydrodynamical simulations of Pittard & Parkin 2010 for O + O binaries with P_orb ≤ 10 days). Such a variation is encountered for instance in the 10.7 day binary HDE 228766 (e = 0). In this system, the primary is a normal O7 star, whilst the secondary is a more evolved Of+/WN8ha transition object (Rauw et al. 2002). The secondary star is thus expected to have a denser, and more opaque, wind. Recent XMM-Newton observations indeed confirm this picture, with the soft X-ray emission being strongly suppressed at the conjunction phase with the secondary in front (see Fig. 1, Rauw et al. in preparation). These differences in absorption can then be used to quantify the mass-loss rates of the winds. In long-period eccentric systems, one expects to observe considerable variations of the intrinsic level of X-ray emission with the changing separation between the stars. Indeed, if the wind interaction zone is adiabatic, one expects to observe an X-ray flux that varies as 1/r, where r is the orbital separation. Combining data from an XMM-Newton and Swift campaign, Nazé et al. (2012) found such a behaviour in the case of Cyg OB2 #9 (O5.5 I + O3-4 III, e = 0.71, P_orb = 860 days). For eccentric O + O systems with intermediate orbital periods, the wind interaction zone remains at least partially radiative and can switch between essentially adiabatic (near apastron) and mainly radiative (near periastron). In such systems, the properties of the wind interaction zone depend on its history, and Pittard & Parkin (2010) therefore found a strong hysteresis in the X-ray fluxes as a function of orbital separation: the X-ray emission is predicted to be larger and harder at phases prior to periastron passage than at symmetric phases after periastron. A good example of such a behaviour is Cyg OB2 #8a (O6 I + O5.5 III, P_orb = 21.9 days, e = 0.24, De Becker et al. 2006). An intense XMM-Newton and Swift campaign on this system (Cazorla et al. 2013) revealed a strong hysteresis effect (Fig. 2), in good agreement with theoretical expectations. Whilst theoretical models of interacting winds have successfully reproduced many features seen in the X-ray observations, there remain a number of discrepancies. Probably the most important one concerns the X-ray luminosities. To illustrate this point, let us consider the case of the WN7ha + O binary WR 22 (P_orb = 80.3 days, e = 0.56). Parkin & Gosset (2011) presented 3-D hydrodynamical simulations of the wind-wind interaction in this system. The wind of the WN7ha star overwhelms that of its companion, and the hydrodynamical simulations indicate that, near periastron, the wind collision collapses onto the surface of the O-star. However, the simulations overestimate the X-ray flux by more than two orders of magnitude compared to the level reported by Gosset et al. (2009), based on phase-resolved XMM-Newton observations of WR 22. Also, the simulations fail to reproduce the observed spectral shape.
Parkin & Gosset (2011) suggest that part of the discrepancy could be solved if the wind-wind collision remains attached to the surface of the O-star throughout most of the orbital cycle, and the mass-loss rates of both stars are reduced whilst simultaneously increasing the wind momentum ratio in favour of the WN7ha star.
In some cases, the failure to reproduce the correct X-ray luminosity might be related to the fate of clumps. Indeed, it is now well established that stellar winds of massive stars are clumpy. In this context, Zhekov (2012) analysed the X-ray emission from a sample of seven short-period WR + O binaries. The wind interaction zones in these systems were expected to be radiative. However, Zhekov (2012) found no correlation between the X-ray and wind luminosities. Instead, the X-ray luminosities follow the scaling law expected for adiabatic wind interaction zones, thus suggesting that the wind interactions are in fact adiabatic. This can only be the case if only part of the wind (i.e. the homogeneous, inter-clump component) contributes to the X-ray emission of the wind-wind collision. In this scenario, the clumps would pass freely through the interaction zone, without being destroyed. This contrasts with the results of the simulations of Pittard (2007) who found that the clumps dissolve as they cross the shock. However, the latter simulations were done for a long-period (wide) binary system, and simulations of shorter period systems are needed to further understand the implications of the results of Zhekov (2012).
Evidence for wind interactions in the optical domain
The loss of spherical symmetry of the winds as well as the overdensity of the post-shock plasma affect the line profiles of optical, UV and IR emission lines that form in the wind (e.g. Stevens 1993). If the shocked material cools efficiently, the shock region collapses, resulting in very large overdensities of the material in the wind interaction zone with respect to the ambient winds. Therefore, emission lines that are produced through recombination (and thus have strengths proportional to ρ²) form partially in the wind interaction zone. The phase-dependence of the line profiles can be used to constrain the properties of the wind interaction, as well as the orbital inclination, via a model originally devised by Lührs (1997).
The orbital motion introduces a curvature of the wind collision region at some distance from the stars. This feature can be observed directly in the so-called pinwheel nebulae, where dust is formed in the outer parts of a highly curved interaction zone of a massive binary containing a carbon-rich WC star (Tuthill 1999). Forming dust in the hostile environment of a massive binary is extremely difficult: close to the stars, the harsh UV radiation field is too strong, and at larger distances, where the UV field is diluted, the density of the wind has dropped to too low a level for dust formation. Only a wind-wind interaction with efficient cooling can provide the necessary increase in density that allows dust to form. In some systems this happens only around specific orbital phases. This is the case for instance in WR 140 (WC7 + O5.5, P_orb = 7.94 years, e = 0.88, Williams 2008), where episodes of strong IR emission, attributed to dust formation, occur at periastron passage. In some other systems, dust is seen persistently. An example is WR 70 (WC9 + B0 I, Williams et al. 2012), which displays a persistent, but variable, circumstellar dust emission. These authors found a best-fit period of 1030 days (2.82 years) to the IR variations, which could thus reflect the orbital period, although the behaviour is not strictly regular. In WR 70, the fraction of carbon atoms of the WC9 wind going into dust production varies between about 11 and 46% (Williams et al. 2012).
In eccentric binary systems, the curvature of the wind interaction zone changes with orbital phase, being minimal at apastron and maximal at periastron passage. An extreme example of such a situation is found in η Car, a highly eccentric binary (e ∼ 0.9) with an orbital period of 5.54 years. The primary star is an LBV with a mass-loss rate of 10^-3 M_⊙ yr^-1 and a low wind velocity of 500-600 km s^-1. The secondary is unseen, but is likely a mid-O supergiant with Ṁ ∼ 10^-5 M_⊙ yr^-1 and v_∞ ∼ 3000 km s^-1. The wind-wind interaction results in an extended cavity carved by the secondary wind in the wind of the LBV primary component. This cavity is bordered by high-density shells of the shocked primary wind that form extended spirals. Okazaki et al. (2008) used smoothed particle hydrodynamics simulations to reproduce η Car's X-ray light curve as observed with RXTE. These simulations were subsequently used by Madura et al. (2012) to successfully reproduce the phase dependence of the spatial and radial velocity distribution of the [Fe iii] λ4659 line as observed with HST/STIS. This high-ionization forbidden line emission arises in the extended primary wind and wind-wind collision regions that are photoionized by the hot secondary component and have the right density and temperature for producing the forbidden line. The shape and extent of this emission region change with orbital phase (Gull et al. 2011), the photoionization region being most compact and the [Fe iii] emission being suppressed near periastron passage (Madura et al. 2012). These models allowed Madura et al. (2012) to show that the orbital axis is closely aligned with the symmetry axis of the Homunculus nebula.
When the components of η Car are approaching periastron, the strength of the He ii λ4686 emission line rises suddenly up to EW ∼ −2.5 Å. Around periastron, the equivalent width sharply drops to zero, before rising again and then declining (Teodoro et al. 2012). The most likely place where this line is formed is the shocked primary wind, indicating that the sharp decline is likely the result of an occultation of the emission region by the thick primary wind. The intrinsic luminosity of this line is up to 250-300 L_⊙, larger than the X-ray luminosity in the 2-10 keV band (Teodoro et al. 2012). The variations of the He ii λ4686 emission line are delayed by about 16.5 days with respect to the variations of the X-ray flux. This is attributed to the flow time from the apex of the wind-wind collision zone (where most of the hard X-rays are emitted) to the He ii λ4686 emission region. Such a delay is expected as a result of the time needed for the post-shock plasma to cool down sufficiently for the formation of the optical recombination line.
Relativistic particles in colliding wind binaries
Diffusive shock acceleration in hydrodynamical shocks can accelerate particles up to relativistic energies through the first-order Fermi mechanism (Bell 1978a, 1978b). When relativistic electrons interact with a magnetic field, they produce synchrotron radio emission (Eichler & Usov 1993).
From an observational point of view, a subset of massive stars display such a non-thermal radio emission associated with a wind-wind interaction (De Becker & Raucq 2013, and references therein). This non-thermal radio emission is often variable as a result of changing line-of-sight optical depth and, in eccentric systems, of changing intrinsic emission (White & Becker 1995, Blomme et al. 2010, 2012). VLBI observations made it possible to resolve the emitting region of the non-thermal radio emission in several systems, revealing an arc-like morphology consistent with a wind-wind interaction (Dougherty et al. 2005, Ortiz-León et al. 2011).
The presence of relativistic electrons, combined with the enormous supply of UV photons by the binary components, should result in strong inverse Compton scattering emission in the hard X-rays and soft γ-rays (Pollock 1987a, Chen & White 1991). Such an emission was indeed detected in η Carinae (Leyder et al. 2010) and WR 140 (Sugawara et al. 2010). In addition, γ-ray emission associated with π^0 decay was also detected in η Car (Farnier et al. 2011, Reitberger et al. 2012), showing that particle acceleration in interacting wind regions is not restricted to electrons.
An open issue in this context is why non-thermal radio emission is seen for only a subset of the known interacting wind systems.
Conclusions
Interacting winds are inherent to massive binary systems. They produce a number of remarkable observational signatures, and much progress has been achieved over recent years. Further insight into the dynamics of the phenomenon will be gained with future instrumentation, such as the integral-field high-resolution X-ray spectrometer aboard the Athena+ mission, currently proposed to ESA (Sciortino et al. 2013). | 2014-01-15T07:54:46.000Z | 2013-02-01T00:00:00.000 | {
"year": 2014,
"sha1": "2087ea40702231172c85e234a6b2576c89663192",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1401.3508",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2087ea40702231172c85e234a6b2576c89663192",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
9520764 | pes2o/s2orc | v3-fos-license | Association of Cortactin, Fascin-1 and Epidermal Growth Factor Receptor (EGFR) Expression in Ovarian Carcinomas: Correlation with Clinicopathological Parameters
Cortactin, fascin-1 and EGFR are recognized as important factors in tumor progression. We tested the hypothesis that cortactin, fascin-1 and EGFR expression correlates with clinicopathological parameters of the four most common ovarian surface epithelial carcinomas: serous cystadenocarcinoma, mucinous cystadenocarcinoma, endometrioid adenocarcinoma, and clear cell carcinoma. Immunohistochemical analysis of cortactin, fascin-1 and EGFR was performed using tissue microarrays of 172 specimens comprising 69 serous cystadenocarcinomas, 44 mucinous cystadenocarcinomas, 45 endometrioid adenocarcinomas and 14 clear cell carcinomas. All ovarian carcinomas showed significant expression of cortactin, fascin-1 and EGFR in terms of staining intensity, percentage of positive tumor cells and immunostaining scores. In addition, higher immunostaining scores of fascin-1 correlated with more advanced cancer stages (TNM), poorer histological differentiation and poorer survival rate in mucinous cystadenocarcinoma. Similarly, higher immunostaining scores of cortactin correlated with T stages and histological differentiation of serous cystadenocarcinoma. The immunostaining scores of EGFR did not correlate with TNM stages, tumor differentiation or prognosis in the four ovarian surface epithelial carcinomas. Our findings suggest that cortactin and fascin-1 may serve as good biomarkers in evaluating the aggressiveness of ovarian serous and mucinous cystadenocarcinoma, and pharmacological inhibitors of fascin-1 activity may slow tumor progression and prolong survival time in patients with mucinous cystadenocarcinoma.
Introduction
Cancer of the ovary represents about 30% of all cancers of the female genital organs. In developed countries, ovarian cancer is about as common as cancer of the corpus uteri (35%) and invasive cancer of the cervix (27%). The age-adjusted incidence rates vary from fewer than two new cases per 100,000 women in most of Southeast Asia and Africa to over 15 cases per 100,000 in Northern and Eastern Europe. Migration of tumor cells outside the ovarian capsule accounts for a significant percentage of treatment failures in patients with ovarian malignancies [7,18,27], and degradation of the basement membrane by matrix metalloproteinases (MMPs) is one of the most critical steps in various stages of tumor disease progression, including tumor angiogenesis, tumor growth, local invasion and subsequent distant metastasis. In metastatic ovarian carcinoma, cancer cells produced more MMP-2 and MMP-9 and had the poorest prognosis [10]. In one study, the effect of fascin on cell invasion also depended on activation of MMP-2 and MMP-9 [42]. Although the detailed pathway is not established, we hypothesized that relationships exist between cortactin, fascin-1 and EGFR expression and ovarian carcinomas.
Cortactin is an actin-binding protein that activates the Arp2/3 complex to regulate the actin cytoskeleton [9,36] and inhibits de-branching of dendritic actin networks [39]. The gene responsible for cortactin expression lies in the chromosome 11q13 region and is frequently amplified in some human cancers, such as breast and head/neck carcinomas and gastric adenocarcinoma [23,31,33].
The epidermal growth factor receptor (EGFR; ErbB-1; HER1 in humans) is a tyrosine kinase receptor, a cell-surface receptor for members of the epidermal growth factor family (EGF-family) of extracellular protein ligands. EGFR is a member of the ErbB family of receptors, a subfamily of four closely related receptor tyrosine kinases: EGFR (ErbB-1), HER2/c-neu (ErbB-2), HER3 (ErbB-3) and HER4 (ErbB-4). EGFR exists on the cell surface and is activated by binding of its specific ligands, including epidermal growth factor and transforming growth factor α (TGFα). EGF receptor activation induces a variety of changes in intracellular physiology, including activation of the Na+/H+ transporter [30], oncogene expression [12], and stimulation of DNA synthesis and cell proliferation [8], among other changes.
Mutations involving EGFR can lead to its constitutive activation, which can result in uncontrolled cell division, a predisposition for cancer. Consequently, mutations of EGFR have been identified in several types of cancer, and the identification of EGFR as an oncogene has led to the development of anticancer therapeutics directed against EGFR, including gefitinib and erlotinib for lung cancer, and cetuximab for colon cancer. Previous studies have suggested that high levels of EGFR expression are a marker of poor prognosis in ovarian cancer patients [4][5][6]. In contrast, a published study by Henzen-Logmans et al. showed that EGFR over-expression occurs in only about 12% of ovarian carcinomas [16]. Schilder and colleagues detected mutations in the TK domain region in 2 of 56 (3.6%) ovarian adenocarcinomas and observed that a patient on the clinical trial with a mutation in the catalytic domain of EGFR responded to gefitinib, suggesting a way to pre-select a subset of patients whose tumors may be more responsive to this EGF receptor-targeted therapy. Although such mutations are relatively rare, this finding could have a dramatic impact on clinical care and a profound effect for some ovarian cancer patients. However, the relationship between cortactin, fascin-1 and EGFR expression and the clinicopathological parameters of the four most common ovarian carcinomas remains vague.
In this study, we tested the hypothesis that higher expression of cortactin, fascin-1 and EGFR in patients with the most common ovarian carcinomas correlates with clinicopathological parameters and is associated with advanced cancer stages. Specifically, we set out to test whether increased cortactin, fascin-1 and EGFR immunostaining scores correlate with advanced histological grades, advanced clinical stages and a poorer survival rate for ovarian carcinoma patients.
Materials and methods
Paraffin-embedded tumor tissues were obtained and tissue microarray slides were constructed. Tissue microarrays consisted of samples from 172 patients with primary ovarian tumors, including 69 serous cystadenocarcinomas, 44 mucinous cystadenocarcinomas, 45 endometrioid adenocarcinomas and 14 clear cell carcinomas.
The histopathological differentiation or clinical stage was determined according to the TNM (WHO criteria) and FIGO staging systems. Stage T1 was defined as a tumor limited to the ovaries. Stage T2 was defined as a tumor involving one or both ovaries with pelvic extension. Stage T3 was defined as a tumor involving one or both ovaries with microscopically confirmed peritoneal metastasis outside the pelvis. One core tissue sample was taken from a selected area of each paraffin-embedded tumor tissue, and tissue microarray slides were constructed. Each representative core sample in the tissue microarray slide was 2 mm in diameter. The pathological diagnosis in each case was reviewed by two experienced pathologists. None of the patients had received chemotherapy before surgery.
Immunohistochemistry
Tissue microarray sections were dewaxed in xylene, rehydrated in alcohol, and immersed in 3% hydrogen peroxide for 10 minutes to suppress endogenous peroxidase activity. Antigen retrieval was performed by heating (100 °C) each section for 30 minutes in 0.01 mol/L sodium citrate buffer (pH 6.0). After 3 rinses in phosphate buffered saline (PBS) for 5 minutes, each section was incubated for 1 hour at room temperature with a mouse monoclonal anti-human fascin-1 antibody (NeoMarkers, Freemont, CA, USA, 1:100), a polyclonal mouse anti-human cortactin antibody (1:100; Santa Cruz Biotechnology, Santa Cruz, CA) and a mouse monoclonal anti-human EGFR antibody (DAKO, clone E30, 1:25) diluted in PBS. After 3 washes in PBS for 5 minutes, each section was incubated with horseradish peroxidase-labeled rabbit anti-mouse immunoglobulin (DAKO, Carpinteria, CA, USA) for 1 hour at room temperature. After 3 additional washes, peroxidase activity was visualized with a solution of diaminobenzidine (DAB) at room temperature.
To evaluate immunoreactivity and histological appearance, all tissue microarray experiments were repeated twice and slides were examined and scored by 2 experienced pathologists. The intensity of cytoplasmic and membrane immunostaining of tumor cells was scored on a scale of 0 (no staining) to 3 (strongest intensity), and the percentage of tumor cells with cytoplasmic or membranous staining at each intensity was estimated. The percentage of cells (from 0 to 100) at each intensity was multiplied by the corresponding immunostaining intensity (from 0 to 3), and the products were summed to obtain an immunostaining score ranging from 0 to 300.
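A minimal Python sketch of the immunostaining (H-score) computation described above; the example percentages are invented for illustration, and the function name is our own.

```python
def immunostaining_score(pct_by_intensity):
    """H-score: sum over intensities of (percentage of cells at intensity i) * i.

    pct_by_intensity maps intensity (0-3) to the percentage of tumor cells
    (0-100) stained at that intensity; percentages must sum to 100.
    The resulting score ranges from 0 to 300.
    """
    assert abs(sum(pct_by_intensity.values()) - 100) < 1e-9
    return sum(pct * intensity for intensity, pct in pct_by_intensity.items())

# Example: 20% unstained, 30% weak, 40% moderate, 10% strong staining.
print(immunostaining_score({0: 20, 1: 30, 2: 40, 3: 10}))  # -> 140
```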
Statistical analysis
All results are expressed as mean ± standard error of the mean (SEM). Immunostaining scores of cortactin, fascin-1 and EGFR for the different types of ovarian carcinomas were compared to the scores for normal ovarian epithelia. Statistical analysis was performed using Student's t-test, and the Pearson product-moment correlation test was used to analyze the relationships between expression of these three biomarkers and the clinicopathological parameters of the four most common ovarian carcinomas. Statistical significance was defined as a P value of less than 0.05. In addition, survival time was calculated from the date of surgery to the date of death. Sixty-nine serous cystadenocarcinoma and forty-four mucinous cystadenocarcinoma patients in this study received 5-year follow-up; subjects were divided into two groups in order to relate survival times to cortactin and fascin-1 immunostaining scores. Statistical analysis of survival time was done using the Kaplan-Meier survival test.
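A minimal sketch of the Kaplan-Meier (product-limit) survival estimate used above, implemented in plain Python/numpy; the follow-up times and event indicators are invented placeholders, not the study's data.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate.

    time:  array of follow-up times (e.g., months); event: 1 = death, 0 = censored.
    Returns the distinct event times and the estimated survival S(t) at each.
    """
    time, event = np.asarray(time), np.asarray(event)
    t_events = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in t_events:
        at_risk = np.sum(time >= t)           # subjects still under observation
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk           # product-limit update
        surv.append(s)
    return t_events, np.array(surv)

# Invented follow-up data (months) for two hypothetical immunoscore groups:
t_hi = [10, 14, 31, 33, 40, 60]; e_hi = [1, 1, 1, 1, 1, 0]
t_lo = [25, 40, 54, 60, 60, 60]; e_lo = [1, 1, 1, 0, 0, 0]
print(kaplan_meier(t_hi, e_hi))
print(kaplan_meier(t_lo, e_lo))
```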
Results
Immunohistochemical staining patterns of the different ovarian carcinomas are presented in Table 1 and representative samples are illustrated in Fig. 1.
Immunoscores of fascin-1 correlate with histological grades and clinical stages of mucinous cystadenocarcinoma
Among the 172 ovarian tumors, fascin-1 immunostaining scores were significantly higher in the four ovarian epithelial carcinomas than in normal ovarian epithelium (73 ± 10 for serous cystadenocarcinoma; 26 ± 7 for mucinous cystadenocarcinoma; 63 ± 13 for endometrioid adenocarcinoma and 35 ± 12 for clear cell carcinoma; all P values < 0.05) (Table 1). Staining of fascin-1 was either very low or absent in normal ovarian epithelial cells. Thirty-seven (84%) of 44 mucinous cystadenocarcinomas were well differentiated, 4 (9%) were moderately differentiated, and 3 (6%) were poorly differentiated. Additional information, including TNM and AJCC clinical staging distribution, is listed in Table 2. Using the Pearson product-moment correlation test, higher immunostaining scores of fascin-1 showed a positive correlation with histological grading, AJCC clinical stage, T stages and N stages, but not with M stages (Table 2; Fig. 2). In addition, no significant relationships were seen between fascin-1 immunostaining scores and clinicopathological parameters in the other three ovarian epithelial carcinomas: serous, endometrioid and clear cell carcinoma (data not shown).
Immunoscores of cortactin correlate with histological grades and T stages of serous cystadenocarcinoma
The cortactin immunostaining scores were significantly higher in the four ovarian epithelial carcinomas than in normal ovarian epithelium (detailed scores and staging distribution are listed in Table 3). Using the Pearson product-moment correlation test, higher immunostaining scores of cortactin showed a positive correlation with histological grading and T stages, but not with AJCC, N and M stages (p < 0.01; Table 3; Fig. 3). In addition, no significant relationships were seen between cortactin immunostaining scores and clinicopathological parameters in the other three ovarian epithelial carcinomas: mucinous, endometrioid and clear cell carcinoma (data not shown).
Correlation of EGFR immunoscores in four ovarian carcinomas
In our study, no significant relationships were seen between EGFR immunostaining scores and clinicopathological parameters in the four most common ovarian carcinomas. Our results suggest that EGFR may not be a good biomarker for evaluating the aggressiveness of ovarian epithelial carcinomas.
Relationship of fascin-1 and cortactin expression with survival time
We divided the 69 serous cystadenocarcinoma and 44 mucinous cystadenocarcinoma cases that had received 5-year follow-up into two groups based on cortactin and fascin-1 immunoscores, respectively. For cortactin immunoscores, patients with higher expression (n = 34, immunostaining score ≥ 250) formed group one, and the remaining cases with lower immunoreactivity (n = 35, immunostaining score < 250) formed group two. Among patients with higher cortactin expression, 16 died of disease within five years and the median survival time was 45.3 months; among those with lower cortactin expression, 11 died and the median survival time was 47.4 months.
For fascin-1, the two groups comprised higher fascin-1 expression (n = 11, immunostaining score > 0) and negative fascin-1 immunoreactivity (n = 33, immunostaining score = 0). Among patients with higher fascin-1 expression, 7 died of disease within five years and the median survival time was 31.3 months; among those with negative fascin-1 expression, 9 died and the median survival time was 54.4 months.
Using cortactin and fascin-1 immunoscores as variable parameters, higher fascin-1 scores were significantly associated with higher mortality (P < 0.001; Fig. 4). However, there was no significant difference in survival rate between higher and lower expression of cortactin (Fig. 5).
Discussion
The emergence of cancer is a complex multi-step process in which the activation of oncogenes and the inactivation of tumor suppressor genes act synergistically to produce the malignant phenotype. Tumor metastasis is also a complex process that involves direct invasion by tumor cells, infiltration into lymphovascular channels, survival in the circulation, extravasation, and growth at secondary sites [3]. Although these mechanisms are poorly understood, we do know that one requirement is enhancement of cell motility. In fact, enhanced movement of cancer cells has been reported to correlate with greater metastatic potential in animal models and poorer prognosis in human cancers [24].
Our results suggest that expression of fascin-1 may be effective in predicting tumor clinicopathological parameters of ovarian mucinous cystadenocarcinoma in Chinese women. Average immunostaining scores for fascin-1 have a significant positive correlation with T, N, and AJCC stages, but not with M stage of mucinous cystadenocarcinoma. However, in our study, no M1-stage case was included, which makes it difficult to show statistical significance. In addition, higher fascin-1 immunostaining scores are significantly associated with poorer survival rate as demonstrated in our study.
Cortactin regulates the actin cytoskeleton through its involvement in several processes, including cell motility, adhesion, polarization and contraction [9,35,40]. The activation of the actin-related protein (Arp) 2/3 complex and neuronal Wiskott-Aldrich syndrome protein (N-WASP) by cortactin nucleates actin polymerization and promotes cellular motility. Cortactin is a p80/p85 multidomain actin filament-binding protein [32]; it was first identified as an src kinase substrate in chicken fibroblasts [41]. Human cortactin maps to chromosome 11q13 [22]. Amplification of chromosome 11q13 has been reported in several human carcinomas, as has increased expression of cortactin [37]. Over-expression of cortactin induces cell motility and migration, inhibits cell-cell adhesion, and accelerates tumor spreading [36]. In addition, the effects of cortactin may be related to expression of E-cadherin and its effects on intercellular adhesion [15,20,38]. Cortactin over-expression has been shown to be associated with tumor invasion and metastasis in esophageal and head/neck squamous cell carcinomas [22,29]. However, direct evidence is still lacking to establish a relationship between cortactin over-expression and tumor progression and metastasis in ovarian carcinomas. Our current results demonstrate that cortactin is over-expressed in the four most common ovarian carcinomas in Chinese women, and higher immunostaining scores of cortactin are associated with more advanced T stage and histological differentiation in serous cystadenocarcinoma (Table 3, Fig. 3).

EGFR is one of a family of receptors that help regulate cell growth, division, and death. Normal epithelial cells contain two copies of the EGFR gene and produce low levels of EGFR protein on the cell surface. In a variety of cancers, increased amounts of EGFR protein are present in tumor tissue. This can be due to amplification (too many copies of the gene are produced), over-expression (an increased amount of the protein is produced), and/or decreased protein destruction. Tumors with increased EGFR protein tend to grow more aggressively, are more likely to metastasize, and are more resistant to standard chemotherapies. Patients with these tumors tend to have poorer outcomes.
In our study, EGFR was over-expressed in all four ovarian carcinomas in Chinese women, but over-expression of EGFR was not associated with aggressive clinicopathological parameters in these four ovarian carcinomas. To our knowledge, this is the first report to evaluate the association between cortactin, fascin-1 and EGFR expression and tumor progression in the four most common ovarian epithelial carcinomas.
In conclusion, higher fascin-1 immunostaining scores are associated with poorer tumor differentiation, more advanced TNM stages and shorter survival time in mucinous cystadenocarcinoma. Similarly, we found a correlation between higher immunostaining scores of cortactin and T stages and histological differentiation in serous cystadenocarcinoma. Accordingly, our results support the hypothesis that fascin-1 and cortactin are important factors in the migration or invasion of some ovarian epithelial carcinomas. Although some mechanisms of tumor progression remain unknown, we demonstrated that fascin-1 is a satisfactory biomarker for predicting clinical outcome in mucinous cystadenocarcinoma. Therefore, the development of pharmacological agents targeting the fascin-1 pathway may prolong survival time and slow tumor progression in patients with mucinous cystadenocarcinoma. | 2018-04-03T00:42:47.117Z | 2008-08-22T00:00:00.000 | {
"year": 2008,
"sha1": "e3c5eb1e7ac00fca3c6f40fcc21d385c40093fa1",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/dm/2008/284382.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96a563c1881c7ae41c2c020edff7fdee7f542577",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85822990 | pes2o/s2orc | v3-fos-license | Research on the Repeated Sequences among tRNA Sequences
Many theories hold that present-day tRNA sequences evolved from short RNA hairpins containing a simple stem-loop structure. To find these significant fragment sequences, the repeated sequences of different lengths within 3420 tRNA sequences were counted and analyzed. The results show that: 1) the probability of occurrence P(i) of a given repeated sequence i follows a power-law distribution when the length K of repeated sequences is longer than four bases, and in this case the total number N(n) of repeated sequences occurring n times also follows a power-law distribution; 2) the length-K sequence that repeats the most times is simply a length-(K-b) sequence extended by b bases on its left or right side (b varies between 1 and K-1); 3) the same repeated sequences are found at nearly identical sites in different tRNA sequences when the length K of repeated sequences is longer than five bases. A hypothesis for the origin and evolution mechanisms of tRNA sequences is then proposed and discussed.
Introduction
As we know, repeated sequences are widespread in genomes, accounting for a large proportion of eukaryotic genomes in particular. Studies have shown that repeated sequences are of great biological significance: they are important for chromosomes to maintain their spatial structure, for gene expression and for gene recombination [1] [2]. In recent years the study of repeated sequences has become a hot topic, and many molecular biologists are trying to reveal the structure, function and evolution mechanisms of genes by researching repeated sequences. Repeated sequences have also been applied in many fields; for example, short tandem repeats are expected to become second-generation molecular markers.
All modern tRNA sequences evolved from common ancestral short RNA hairpins [3]-[6], but the evolutionary mechanism remains an open question. Genes with important functions are normally conserved; therefore, to find the distribution and content of these important fragments in modern tRNA sequences, 3420 tRNA sequences are treated as a whole in this paper, and the repeated sequences of different lengths within all tRNA sequences are counted. Through the analysis of the repeated sequences of all tRNA sequences, the origin and evolution mechanisms of tRNA sequences are discussed further.
The Source of tRNA Sequences
The tRNA sequence database was first created in 1998 by Sprinzl [7] and is updated continually, with more and more tRNA sequences being collected. All the tRNA sequences used in our paper were downloaded from this database (http://trna.bioinf.uni-leipzig.de/DataOutput/). There are 3719 tRNA sequences in the database altogether, covering 61 different anticodons and 429 different species belonging to three kingdoms: Archaea, Bacteria and Eucarya. To accommodate the variable loop, each tRNA sequence is aligned to 99 positions, with missing bases replaced by a dash "-" by Sprinzl et al.
The Method of Counting Repeated Sequences
First, we compared all 3719 tRNA sequences and removed highly similar or identical sequences, leaving 3420 tRNA sequences for use in this paper. For a fixed length K, all K-base string sequences actually appearing among the 3420 tRNA sequences were counted. Because K-base strings can overlap and every three bases may carry coding information, we used a step of three bases when counting: counting begins at the first base and advances three bases at a time until the end of each tRNA sequence. In this way, the repeated sequences of each length were counted and analyzed (a minimal sketch of this counting procedure is given below).
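A minimal Python sketch of the counting procedure described above, assuming the aligned sequences use '-' for missing bases; the function and variable names are our own, and the toy input stands in for the 3420 aligned tRNAs.

```python
from collections import Counter

def count_kmers(sequences, k, step=3):
    """Count length-k substrings, sliding in steps of `step` bases.

    Windows containing the gap character '-' (missing bases in the
    Sprinzl alignment) are skipped.
    """
    counts = Counter()
    for seq in sequences:
        for i in range(0, len(seq) - k + 1, step):
            kmer = seq[i:i + k]
            if "-" not in kmer:
                counts[kmer] += 1
    return counts

# Toy example with two short sequences (real input: 3420 aligned tRNAs).
seqs = ["GGTTCGATTCCC", "GGTTC-ATTCCC"]
counts = count_kmers(seqs, k=4)
print(counts.most_common(3))
```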
The Repeated Sequences of Different Length with the Highest Occurrences among tRNA Sequences
For convenience of analysis, only the repeated sequences with the highest occurrences are listed in Table 1. As shown in Table 1, we can observe that: 1) the highest occurrence counts of repeated sequences decrease as the length K increases, because the total number of K-base strings actually appearing within all the tRNA sequences decreases with K; 2) the most frequent repeated sequence is "TT", occurring 7183 times, when K = 2; "GTT" occurs the most times (3282 times) when K = 3; and "GTTC" occurs the most times (2080 times) when K = 4. On closer inspection, the length-K repeated sequence with the highest occurrences is simply the most frequent length-(K-b) repeated sequence with b bases added on its left or right side (b varies between 1 and K-1). This seems to indicate that the most frequent repeated sequences of all lengths lie at the same site of the tRNA sequences, and that tRNA sequences may have used one of these core fragments as a primer for amplification during their formation.
3) The highest-occurrence sequences can be found in any arm of the tRNA when K is between 1 and 3 bases. However, they are nearly confined to the same site when K is between 4 and 6 bases, and fully confined to the same site once K exceeds 6 bases (see Figure 1). In addition, the locations of the various repeated sequences within the tRNAs were counted, and our results suggest that the same repeated sequences are found at nearly identical sites in different tRNA sequences when K is longer than five bases. 4) Repeated sequences account for approximately 82.22% of all the tRNAs, and the longest repeated sequence (AAGATTACCCAAGTCCGGCTGAAGGGATCGGTCTTGAAAACCGAGAGTCGG, containing 51 bases) occurs twice, in tRNAs with anticodon "TGA" from Mycoplasma.
In Figure 1, the locations of the most repeated sequences can be clearly observed in the secondary structure of the tRNA. These repeated sequences mainly lie in the anticodon arm and TψC arm. The repeated sequences appear to take the anticodon arm and TψC arm as a center and expand in both directions as the length K of the repeated sequence increases (see Figure 1; the arrows indicate the direction of expansion). This may indicate that the anticodon arm and TψC arm were more significant in the evolution of tRNA.
The Power-Law Behavior of the Repeated Sequences
Power-law behavior is frequently observed in different fields, such as population distributions, social interactions [8], the World Wide Web [9], and so on. Also known as Zipf's law [10], it was first widely recognized for word usage in text documents. Previous studies [12]-[14] have suggested that the number of distinct parts with a given genomic occurrence follows a power-law distribution. Power-law behavior is likewise observed in our study of repeated sequences among the 3420 tRNA sequences.
The occurrence count of a given repeated sequence i divided by the total number of repeated sequences actually appearing may be taken as the probability of appearance P(i) of the repeated sequence i among all the tRNA sequences. As shown in Figure 2, the abscissa denotes the repeated sequence i and the ordinate denotes its probability P(i). For reasons of space, only two diagrams are included here, for repeated-sequence lengths K = 6 and K = 10. Clearly, Figure 2 shows that P(i) follows a power-law distribution for K = 6 and K = 10, meaning that a few repeated sequences occur many times while most occur infrequently among all the tRNA sequences. Moreover, our results suggest that P(i) always follows a power-law distribution when the length K of repeated sequences is longer than four bases.
In Figure 3, the abscissa denotes the number of occurrences n, and the ordinate denotes the total number N(n) of repeated sequences that occur n times. As Figure 3 shows, N(n) also follows a power-law distribution as a function of n when K = 6 and K = 10, again indicating that a few repeated sequences occur many times while most occur only a few times among all the tRNA sequences. Moreover, N(n) always follows a power-law distribution when K is longer than four bases.
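A minimal sketch of how such a power law can be checked: compute the occurrence spectrum N(n) from k-mer counts and fit a straight line in log-log space. It builds on the count_kmers function sketched earlier; the least-squares fit and the synthetic Zipf-distributed counts are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from collections import Counter

def occurrence_spectrum(kmer_counts):
    """N(n): how many distinct k-mers occur exactly n times."""
    spectrum = Counter(kmer_counts.values())
    n = np.array(sorted(spectrum))
    N = np.array([spectrum[v] for v in n])
    return n, N

def powerlaw_slope(n, N):
    """Least-squares slope of log N(n) vs log n; for N(n) ~ n**(-a), returns -a."""
    slope, _ = np.polyfit(np.log(n), np.log(N), 1)
    return slope

# Synthetic demonstration: Zipf-like counts for 200 hypothetical k-mers.
rng = np.random.default_rng(1)
fake_counts = Counter({f"kmer{i}": int(c) for i, c in
                       enumerate(rng.zipf(2.0, 200))})
n, N = occurrence_spectrum(fake_counts)
print("fitted exponent:", powerlaw_slope(n, N))
```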
Conclusion and Discussion
The repeated sequences of different lengths within all the tRNA sequences were counted. Our results show that: 1) the probability P(i) of a given repeated sequence i follows a power-law distribution when the length K of repeated sequences is longer than four bases, and in this case the total number N(n) of repeated sequences occurring n times also follows a power-law distribution; 2) the length-K sequence with the highest occurrences is simply the most repeated length-(K-b) sequence extended by b bases on its left or right side (b varies between 1 and K-1); 3) the same repeated sequences are found at nearly identical sites in different tRNA sequences when the length of repeated sequences is longer than five bases. Many views have been put forward on the evolutionary relationships of tRNA sequences; for example, a new tRNA gene may survive through a point mutation in the anticodon sites [15]. Subsequently, complementary duplication was proposed as the primary mechanism, with point mutation as a supporting mechanism, for the evolution of modern tRNAs [16] [17]. Since so many repeated fragment sequences are distributed across the tRNAs, do they hide important information about tRNA evolution? How did they arise? We hypothesize that modern tRNA sequences were formed by fragment sequences acting as primers for duplication and amplification. Supposing that only a few fragment sequences existed at the earliest stage, these fragments were amplified by replication; amplification stopped once the tRNA sequences reached their present length and formed a stable structure, while sequences shorter or longer than modern tRNAs could not survive. Because fragment sequences could be affected by the natural environment or subjected to AT/GC pressure during evolution, they may have experienced random mutations (such as base substitutions, deletions and insertions), generating new fragment sequences [18] [19]. Apart from mutations, Ragan [20] suggests that lateral gene transfer can also be a source of new fragment sequences. Likewise, these new fragment sequences could serve as core primers for duplication and amplification, and each tRNA sequence must first have randomly selected some fragment sequences as core primers before turning into a stable molecular structure. In this way, naturally, the higher the occurrence of a fragment sequence, the greater its chance of being chosen as a core primer for replication. These fragment sequences underwent selective evolution over so long a period that the result is a few repeated sequences occurring many times and most occurring infrequently among all the tRNA sequences, with all tRNA sequences being highly similar in function and structure. The repeated sequences occurring many times may thus be closest to the earliest fragment sequences.
Our hypothesis on tRNA sequences on the one hand supports the theory that a primitive tRNA consisted of seven bases, presented by Crick et al. in 1976 [21], and verifies the possibility that the tRNA molecule chose a hairpin RNA as its precursor [6]; on the other hand, it not only sustains the view that a hairpin structure, via indirect duplication, produced another hairpin structure that evolved through base changes, insertions and deletions into the tRNA molecule [5], but also supports the model based on a direct duplication of a hairpin structure [22].
Figure 1. The location of the most repeated sequence in the secondary structure of tRNA sequences.
Figure 2. The probability P(i) of one given repeated sequence i versus the given repeated sequence i. (a) K = 6; (b) K = 10.
Figure 3. The total number N(n) of repeated sequences occurring n times versus the occurrences n. (a) K = 6; (b) K = 10.
Table 1. The repeated sequences of different length with highest occurrences. In Table 1, Ac represents acceptor arm, D represents D arm, An represents anticodon arm, E represents extra arm, and T represents TψC arm. | 2018-05-15T23:57:49.590Z | 2015-11-16T00:00:00.000 | {
"year": 2015,
"sha1": "5f8308b6d3ebd9c651dbd6cc768ec66594dcd00e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=61437",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5f8308b6d3ebd9c651dbd6cc768ec66594dcd00e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
210177228 | pes2o/s2orc | v3-fos-license | Achieving postprandial glucose control with lixisenatide improves glycemic control in patients with type 2 diabetes on basal insulin: a post-hoc analysis of pooled data
Background: To examine the impact on glycemic control of achieving the postprandial glucose (PPG) target with lixisenatide, a once-daily glucagon-like peptide-1 receptor agonist approved in the US, in patients with uncontrolled type 2 diabetes (T2D) on basal insulin, an agent that primarily targets fasting plasma glucose.
Methods: A post hoc pooled analysis was conducted using clinical trial data extracted from the intent-to-treat subpopulation of patients with T2D who participated in the 24-week, phase 3, randomized, double-blind, placebo-controlled, 2-arm, parallel-group, multicenter GetGoal-L (NCT00715624), GetGoal-Duo 1 (NCT00975286) and GetGoal-L Asia (NCT00866658) trials.
Results: Data from 587 lixisenatide-treated patients and 484 placebo-treated patients were included. Patients on lixisenatide were more likely to achieve a PPG target of < 10 mmol/L (< 180 mg/dL) than placebo-treated patients (P < 0.001), regardless of baseline fasting plasma glucose (FPG) levels. More importantly, those who reached the PPG target experienced a significantly greater reduction in mean HbA1c, were more likely to achieve the HbA1c target of < 53 mmol/mol (< 7.0%), and experienced weight loss. These outcomes were achieved with no significant difference in the risk of symptomatic hypoglycemia compared with placebo.
Conclusion: Compared with placebo, the addition of lixisenatide to basal insulin improved HbA1c and reduced PPG without increasing hypoglycemia risk. These findings highlight the importance of PPG control in the management of T2D and provide evidence that adding an agent that also impacts PPG to basal insulin therapy has therapeutic value for patients who are not meeting glycemic targets.
Trial registration: NCT00715624, registered 15 July 2008; NCT00975286, registered 11 September 2009; NCT00866658, registered 20 March 2009.
Due to the progressive nature of T2D, many patients eventually require the use of basal insulin to reach and maintain glycemic targets [4,7]. However, many patients and health care providers fail to intensify treatment in a timely fashion [8]. The HbA1c level reflects contributions from both fasting plasma glucose (FPG) and postprandial plasma glucose (PPG). PPG has been shown to play a predominant role in residual hyperglycemia as HbA1c levels approach 53 mmol/mol (7.0%) and FPG levels are within the target range (4.4-7.2 mmol/L [80-130 mg/dL]) [4,9]. In patients with T2D and uncontrolled hyperglycemia on oral antidiabetic drugs (OADs), treatment intensification with basal insulin resulted in HbA1c and FPG reductions, while PPG accounted for approximately two-thirds of residual hyperglycemia, suggesting an important role for PPG-targeting therapies in helping patients to achieve glycemic goals [10].
Lixisenatide is a once-daily GLP-1 RA that lowers PPG, reduces appetite, and leads to weight loss, which is typical of the GLP-1 RA class [11][12][13]. Like other GLP-1 RAs, lixisenatide is associated with a very low risk of hypoglycemia due to its glucose-dependent mechanism of action [6,12,13]. While also impacting FPG, treatment of patients with T2D with lixisenatide results in robust reductions in PPG [14]. The pronounced effects of lixisenatide on lowering PPG provide a rationale for combining lixisenatide with basal insulin to achieve additive effects on glycemic control [15].
Because of the recognized contribution of PPG to overall hyperglycemia and the impact of lixisenatide on PPG, it was hypothesized that reducing PPG to < 10 mmol/L (< 180 mg/dL), as recommended by the ADA, would also increase the likelihood of patients with T2D achieving HbA1c < 53 mmol/mol (< 7.0%). In this study, the contribution of lixisenatide to achievement of ADArecommended PPG target was investigated in patients with T2D uncontrolled on basal insulin. In addition, we evaluated whether achieving PPG targets affected HbA1c and other efficacy and safety outcomes.
Study design
This post hoc pooled analysis used clinical trial data extracted from the intent-to-treat subpopulation of patients with T2D who participated in standardized meal tests (measured 2 h after a standard liquid breakfast) as part of the 24-week, phase 3, randomized, double-blind, placebo-controlled, 2-arm parallel-group, multicenter GetGoal-L (NCT00715624) [16], GetGoal-Duo 1 (NCT00975286) [17], and GetGoal-L Asia (NCT00866658) [18] trials; all trials are registered at ClinicalTrials.gov. These were the 3 trials that evaluated the efficacy and safety of adding lixisenatide to basal insulin therapy in patients with T2D inadequately controlled on basal insulin, with or without OADs (metformin [MET], thiazolidinediones [TZD], or sulfonylureas [SU]). GetGoal-L enrolled patients from 15 countries who were inadequately controlled (HbA1c 53-86 mmol/mol [7.0-10.0%]) on an existing, stable dose of basal insulin therapy for ≥ 3 months, with or without MET [16]. Patients in GetGoal-L Asia were from Japan, the Republic of Korea, Taiwan, and the Philippines; these patients were on existing basal insulin therapy with or without an SU. Patients in GetGoal-Duo 1 were from 25 countries and were inadequately controlled (HbA1c 53-86 mmol/mol [7.0-10.0%]) on existing OAD therapy. If present, SU therapy was discontinued, and patients were initiated on basal insulin therapy with or without MET or a TZD during the run-in phase. After the 12-week run-in, patients with HbA1c ≥ 53 mmol/mol (≥ 7.0%) to ≤ 75 mmol/mol (≤ 9.0%) and FPG ≤ 7.8 mmol/L (≤ 140 mg/dL) were randomly assigned to add lixisenatide or placebo [17].
Assessments
The primary endpoint was the proportion of patients who achieved PPG < 10 mmol/L (< 180 mg/dL) at Week 24. Secondary endpoints in PPG responders and non-responders, with or without controlled FPG levels at baseline, were change in mean HbA1c from baseline to Week 24; the percentage of patients with HbA1c < 53 mmol/mol (< 7.0%) at Week 24; body weight change over 24 weeks; and the rate of symptomatic hypoglycemia during the study period (defined as typical symptoms of hypoglycemia accompanied by a self-monitored plasma glucose [SMPG] value of ≤ 60 mg/dL [3.3 mmol/L]). Hypoglycemia was recorded via patient diaries: patients recorded any hypoglycemic events daily, and these records were passed on to investigators at the next visit.
Statistical analyses
Continuous efficacy assessments were analyzed using analysis of variance, carried out separately by baseline FPG category, using last observation carried forward at Week 24, with treatment group and PPG category as fixed effects. Response rates and hypoglycemia rates were analyzed using chi-square tests. Analyses excluded measurements obtained after the use of rescue medication and/or after treatment cessation. Rescue medication (short-acting or rapid-acting insulin) was given to patients with FPG > 11.1 mmol/L (> 200 mg/dL) or HbA1c > 75 mmol/mol (> 9%) for 3 consecutive days (Weeks 0-8), when changes to diet and study medication did not resolve the high readings. This threshold changed to FPG > 10 mmol/L (> 180 mg/dL) or HbA1c > 69 mmol/mol (> 8.5%) for Weeks 8-24. Results were combined across studies using a fixed-effects meta-analysis with inverse-variance weights, calculated separately by FPG and PPG categories. All analyses were performed using SAS Version 9.2 or higher (SAS Institute Inc., Cary, NC, USA).
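For readers unfamiliar with the pooling step, a fixed-effects inverse-variance meta-analysis in its standard textbook form (the published analysis may differ in implementation details) combines the per-study treatment-effect estimates θ̂_k as

θ̂_pooled = (Σ_k w_k θ̂_k) / (Σ_k w_k),  where w_k = 1 / Var̂(θ̂_k)  and  SE(θ̂_pooled) = (Σ_k w_k)^(−1/2),

so that more precise studies receive proportionally more weight.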
Patient baseline characteristics
Data from 587 lixisenatide-treated patients and 484 placebo-treated patients were included in this analysis. Baseline characteristics between the 2 treatment groups were comparable. Females made up 53% of patients in the lixisenatide group and 50% in the placebo group. In the lixisenatide versus placebo groups, respectively, mean body weight was 82 kg versus 81 kg, and mean duration of T2D was 11.8 years versus 11.3 years. Mean HbA1c was 65 mmol/mol (8.1%) in both groups.
When baseline variables were examined in the groups stratified by baseline FPG and PPG goal achievement, the majority of variables were similar between the groups (Table 1). In the group with controlled FPG, the placebo PPG responders were less likely to be female and had a lower baseline HbA1c (Table 1). Baseline HbA1c was higher in lixisenatide non-responders with uncontrolled FPG than in responders. Both groups of non-responders with controlled FPG at baseline had higher baseline PPG, though there were no differences between the groups with uncontrolled FPG at baseline.
Impact of achieving PPG target on other efficacy and safety outcomes
Regardless of baseline FPG status, patients who reached the PPG target achieved a significantly greater mean reduction in HbA1c than patients who did not. The magnitude of HbA1c change from baseline was similar for patients with controlled and uncontrolled FPG (Fig. 2; Table 2), with greater absolute HbA1c reductions in PPG responders than in non-responders. Regardless of whether the PPG target was achieved, patients with controlled FPG experienced increased FPG, whereas patients with uncontrolled FPG experienced FPG reductions. Among patients with controlled FPG, the increase was significantly smaller in PPG responders, and among patients with uncontrolled FPG, the reductions were significantly greater among PPG responders (Table 2). Patients who reached the PPG target were also more likely to achieve the HbA1c goal of < 53 mmol/mol (< 7.0%) than patients who did not (Fig. 3).
There was no significant change in the risk of symptomatic hypoglycemia associated with achieving the PPG target (Table 2). Patients who reached the PPG target with lixisenatide treatment also experienced greater average reductions in body weight than those who did not; in contrast, all the placebo groups showed an increase in body weight (Table 2).
Discussion
The results of this post hoc pooled analysis of patient data from the GetGoal-L, GetGoal-Duo 1, and GetGoal-L Asia trials found that adding the short-acting GLP-1 RA lixisenatide to basal insulin in patients with T2D improves control of postprandial hyperglycemia, with more than half of patients achieving the ADA-recommended PPG target of < 10 mmol/L (< 180 mg/dL). Patients whose PPG reached target through treatment with lixisenatide were also more likely to achieve the ADA-recommended HbA1c target of < 53 mmol/mol (< 7.0%) than patients who continued on basal insulin plus placebo. This was true regardless of whether patients had an FPG < 7 mmol/L (< 126 mg/dL) at baseline, but the observed effect was greater in patients with controlled FPG than in patients with uncontrolled FPG.
These results are consistent with those of studies investigating other GLP-1 RAs combined with basal insulin. When exenatide, another GLP-1 RA, was added to insulin glargine, patients with T2D experienced a −1.74% change in HbA1c, which was driven entirely by reductions in PPG [19]. This is further supported by a trial comparing exenatide and lispro as additions to optimized basal insulin, which showed the non-inferiority of exenatide, with a change in HbA1c of −1.13% (−12.4 mmol/mol) [20].
In this analysis, the greater efficacy of lixisenatide compared with placebo in improving glycemic control was achieved without any increased risk of hypoglycemia and with body weight reduction, especially in patients achieving the PPG target. In a previous study, continuous glucose monitoring of patients administered lixisenatide at breakfast or at the main meal of the day demonstrated reduced glucose exposure and a 0.6% reduction in HbA1c over the 24-week study period [21]. This efficacy, achieved while mitigating unwanted side effects such as body weight gain and without increasing the risk of hypoglycemia, may help remove obstacles to treatment intensification.
Conclusion
In this pooled analysis of the three GetGoal trials, addition of lixisenatide to basal insulin improved HbA1c and reduced postprandial glucose, regardless of baseline fasting plasma glucose, with no increased risk of symptomatic hypoglycemia and while mitigating weight gain in patients with T2D. These findings support the hypothesis that achieving PPG < 10 mmol/L (< 180 mg/dL) with lixisenatide increases the likelihood of HbA1c goal achievement in patients with T2D, and they further highlight the importance of PPG control in diabetes management.
"year": 2020,
"sha1": "03e2276c5ba9732280afbc44ff760b7d0532be0d",
"oa_license": "CCBY",
"oa_url": "https://clindiabetesendo.biomedcentral.com/track/pdf/10.1186/s40842-019-0088-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03e2276c5ba9732280afbc44ff760b7d0532be0d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimal contracts with a risk-taking agent
Consider an agent who can costlessly add mean-preserving noise to his output. To deter such risk-taking, the principal can do no better than to offer a contract that makes the agent's utility concave in output.
Introduction
Contractual incentives motivate employees, suppliers, and partners to exert effort, but improperly designed incentives can instead encourage excessive risk-taking. These risk-taking motives are most obvious when they have dramatic consequences for society as a whole. For instance, following the 2008 financial crisis, Federal Reserve Chairman Ben Bernanke stated that "compensation practices at some banking organizations have led to misaligned incentives and excessive risk-taking, contributing to bank losses and financial instability" (The Federal Reserve 2009). Garicano and Rayo (2016) suggest that poorly designed incentives led the American International Group (AIG) to expose itself to massive tail risk in exchange for the appearance of stable earnings. Rajan (2011) echoes these concerns and suggests that misaligned incentives worsened the effects of the crisis.
Even without such disastrous outcomes, agents face opportunities to game their incentives by engaging in risk-taking in many other settings. Portfolio managers can choose riskier investments, as well as exert effort, to influence their returns (Brown et al. 1996, Chevalier and Ellison 1997, de Figueiredo et al. 2015). Executives and entrepreneurs work hard to innovate, but also choose whether to pursue moonshot or incremental projects (Matta and Beamish 2008, Rahmandad et al. 2018, Vereshchagina and Hopenhayn 2009). In what we will see is a related phenomenon, salespeople can both work to sell more products and choose when those sales count toward their quotas (Oyer 1998, Larkin 2014).
In addition to the obvious social costs of excessive risk, the fact that agents can game their incentives in this way has a second cost as well: the possibility of risk-taking makes it harder for firms to motivate their agents to work hard. In this paper, we focus on this incentive cost by exploring how risk-taking constrains optimal contracts in a canonical moral hazard setting. We argue that the fact that the agent can game his incentives in this way renders convex incentives ineffective. Consequently, the principal can do no better than to offer a contract that makes the agent's utility concave in output. This simple but central result spurs us to analyze optimal concave contracts, with the goal of exploring how this additional concavity constraint changes the structure of incentives, profits, and productivity.
Our model considers a principal who offers an incentive contract to a potentially liquidity-constrained and risk-averse agent. If the agent accepts the contract, then he exerts costly effort that produces a noncontractible intermediate output, the distribution of which satisfies the increasing marginal likelihood ratio property. The key twist on this canonical framework is that the agent can engage in risk-taking by costlessly adding mean-preserving noise to this intermediate output, which in turn determines the contractible final output.
Building on the arguments of Jensen and Meckling (1976) and others, Section 3 shows that the agent engages in risk-taking wherever the contract makes his utility convex in output. In so doing, the agent makes his expected utility concave in intermediate output. As long as both the principal and the agent are weakly risk-averse, the principal finds it optimal to deter risk-taking entirely by offering an incentive scheme that directly makes the agent's utility concave in output. We refer to this additional constraint-that utility be weakly concave in output-as the no-gaming constraint. Wherever the nogaming constraint binds, the optimal contract makes the agent's utility linear in output.
In Section 4, we consider the case of a risk-neutral agent and a weakly risk-averse principal. Absent the no-gaming constraint, the principal would like to offer a convex contract in this setting so as to concentrate high pay on high outcomes and thereby inexpensively motivate the agent while respecting his limited liability constraint. As a result, we show that the no-gaming constraint binds everywhere, which means that a linear (technically, affine) contract is optimal, remains so regardless of the principal's attitude toward risk (even if she is risk-loving), and is uniquely optimal if the principal is risk-averse. In particular, relative to any strictly concave contract, we show that there is a linear contract that both better motivates the agent and better insures the principal.
Section 5 explores the consequences of risk-taking in the case of a risk-averse agent and a risk-neutral principal. In this setting, the no-gaming constraint implies that the agent's utility must be concave in output. Similar to Section 4, the optimal contract makes the agent's utility linear wherever this constraint binds. Unlike that section, however, the no-gaming constraint does not necessarily bind everywhere, so the agent's payoff under the optimal contract might have both linear and strictly concave regions.
We develop a set of necessary and sufficient conditions that characterize the unique profit-maximizing contract in this setting. We cannot directly apply the techniques of Mirrlees (1976) and Holmström (1979) because the resulting contract might violate the no-gaming constraint. Instead, we identify two perturbations of a candidate contract that respects this constraint while changing either the level or the slope of the agent's utility over appropriate intervals of output. Perhaps surprisingly, we prove that it suffices to consider these two perturbations so that a contract is profit-maximizing if, and only if, it cannot be improved by them.
We use this characterization to identify how the constraint that incentives be concave shapes the optimal contract. If the limited liability constraint binds and the participation constraint is slack in this setting, then the optimal contract follows a logic similar to the case with a risk-neutral agent. The principal would like to offer a contract that makes the agent's payoff convex over any output that suggests less than the desired effort. The profit-maximizing contract therefore makes the agent's utility linear over low outputs. Unlike the case with a risk-neutral agent, however, the optimal incentive scheme might make the agent's utility strictly concave following high output, since the principal finds it increasingly expensive to give the agent higher and higher utility.
If the limited liability constraint is slack, then the optimal contract is shaped by the same trade-off between incentives and insurance that arises in classic moral hazard problems. In the absence of risk-taking, the optimal contract would equate output-by-output the principal's marginal cost of paying the agent to the marginal benefit of relaxing his participation and incentive constraints (as in Mirrlees 1976 and Holmström 1979). Where this constraint binds, however, optimizing output-by-output would violate the no-gaming constraint. Over such regions, we show that the optimal contract is ironed, in the sense that it is linear in utility over an interval; expected marginal benefits equal expected marginal costs on that interval. For instance, if the no-gaming constraint binds for low output but not for high output, then the optimal contract makes the agent's utility linear for low outputs and otherwise sets marginal benefits equal to marginal costs output-by-output. If no-gaming is slack everywhere, then the contract characterized by Mirrlees (1976) and Holmström (1979) is optimal; if it binds everywhere, then the optimal contract makes the agent's utility linear in output.
The final part of this section presents simulations of the optimal contract in a discrete approximation of the model. We show that optimal incentives are characterized by a standard convex optimization program, and we consider examples that illustrate how parameters of the model influence the optimal contract.
The unifying idea behind all of our results is that the possibility of risk-taking renders convex incentives ineffective. Section 6 extends this intuition to three other settings, all of which assume that both the principal and the agent are risk-neutral. First, we modify the agent's payoff so that he incurs a cost that is increasing in the variance of his risk-taking distribution. It turns out that this extension can be reformulated as a variant of our analysis in Section 4. We show that the unique optimal contract is strictly convex in output, but not so convex as to induce gaming, and that this contract converges to a linear contract as gaming becomes costless.
Second, we alter the timing of the model so that the agent engages in risk-taking before he observes intermediate output. We show that the possibility of ex ante risk-taking leads optimal incentives to be a concave function of the agent's effort, rather than a concave function of intermediate output. This modified no-gaming constraint binds under mild conditions, in which case a linear contract is optimal.
Finally, we exhibit a close connection between risk-taking and another type of gaming: manipulating the timing of output. To do so, we study a dynamic setting in which the principal offers a stationary contract that the agent can game by choosing when output is realized over an interval of time. For example, Oyer (1998) and Larkin (2014) document how salespeople accelerate or delay sales so as to game convex incentive schemes over a sales cycle. We show that this setting is equivalent to our risk-taking model. Thus, a linear contract is optimal, since a strictly convex contract would induce the agent to bunch sales over short time intervals and a strictly concave contract would provide subpar effort incentives.
Our analysis is inspired by Diamond (1998) and Garicano and Rayo (2016). The latter includes a model of risk-taking that is similar to ours, but it fixes an exogenous (nonconcave) contract to focus on the social costs of excessive risk. The former is a seminal exploration of optimal contracts when the agent can both exert effort and make other choices that affect the output distribution. In particular, part of Diamond (1998) argues that linear contracts are (nonuniquely) optimal in an example with risk-neutral parties, binary effort, and an agent who can choose any mean-preserving spread of output. Our Proposition 2 expands this result to settings with a risk-averse principal as well as more general effort choices and output distributions. In doing so, we identify an additional advantage of linear contracts with a risk-neutral agent: relative to any strictly concave contract, they better insure the principal and so are uniquely optimal if the principal is even slightly risk-averse.
The rest of our analysis departs further from Diamond (1998). Section 3 shows that the fundamental consequence of agent risk-taking is to constrain incentives to be concave, not necessarily linear. Linear contracts are instead a consequence of this concavity constraint binding everywhere, as it does if the agent is risk-neutral. However, as Section 5 demonstrates, the concavity constraint need not necessarily bind everywhere if the agent is risk-averse, in which case the optimal contract may make utility strictly concave in output. Our analysis shows how risk-taking affects contracts in a classic moral hazard setting. Section 6 explores how a similar logic shapes optimal contracts in several related settings.
Our model of risk-taking is embedded in a classic moral hazard problem. With a risk-neutral agent, our model builds on Innes (1990), Poblete and Spulber (2012), and other papers in which limited liability is the central contracting friction. With a risk-averse agent, we build on Mirrlees (1976) and Holmström (1979) if the limited liability constraint is slack, and on Jewitt et al. (2008) if it binds. Within the classic agency literature, our analysis is conceptually related to papers that study principal-agent relationships in which the agent both exerts effort and makes other decisions. Classic examples include Lambert (1986) on how agency problems in information-gathering can lead to inefficient investment in risky projects and Holmström and Ricart i Costa (1986) on project selection under career concerns. Malcolmson (2009) presents a general model of such settings, but differs from our analysis by assuming that decisions are contractible. Other papers consider settings in which the principal also chooses actions other than the agent's wage contract, such as an endogenous performance measure; see, for example, Halac and Prat (2016) and Georgiadis and Szentes (2019).
A growing literature studies agent risk-taking. Some papers in this literature assume that an agent chooses from a parametric class of risk-taking distributions in either static (Palomino and Prat 2003, Hellwig 2009) or dynamic (DeMarzo et al. 2013) settings. We differ by allowing our agent to choose any mean-preserving spread of output, which means that our optimal contract must deter a more flexible form of gaming. Therefore, we join other papers that study nonparametric risk-taking, again in either static (Robson 1992, Diamond 1998, Hébert 2018) or dynamic (Ray and Robson 2012, Makarov and Plantin 2015) settings. We differ from these papers by identifying concavity as the key constraint on the optimal incentive scheme if the agent can costlessly take on risk and then characterizing optimal incentives given this constraint. More broadly, our work is related to a longstanding literature that argues that optimal contracts must both induce effort and deter gaming. A seminal example is Holmström and Milgrom (1987), who display a dynamic environment in which linear contracts are optimal. Ederer et al. (2018) show how opacity (i.e., randomization over compensation schemes) can be used to deter gaming. Others, including Chassang (2013), Carroll (2015), and Antić (2016), depart from a Bayesian framework and prove that simple contracts perform well under min-max or other non-Bayesian preferences. In contrast, our paper considers contracts that deter gaming in a setting that lies firmly within the Bayesian tradition. While Carroll's paper considers a max-min rather than a Bayesian solution concept, its intuition is related to ours. In that paper, Nature selects a set of actions available to the agent so as to minimize the principal's expected payoffs. As in our setting, Nature might allow the agent to take on additional risk to game a convex incentive scheme. However, Nature might also allow the agent to choose a distribution with less risk to game a concave incentive scheme, while we allow the agent to add risk but not reduce it. That is, we model a moral hazard problem in which output is intrinsically risky and that risk cannot be completely hedged away. This difference is most striking if the agent is risk-averse, in which case Carroll's optimal contract makes the agent's utility linear in output, while ours might make utility strictly concave. One advantage of our approach is that our model results in a canonical contracting problem with an additional concavity constraint. Consequently, our technology would be straightforward to embed in Bayesian models of other applications.
Model
We consider a game between a principal (P, "she") and an agent (A, "he"). The agent has limited liability, so he cannot pay more than M ∈ ℝ to the principal. Let [y̲, ȳ] ≡ Y ⊆ ℝ be the set of contractible outputs, with y̲ < 0. The timing is as follows.
(i) The principal offers an upper semicontinuous contract s(y) : Y → [−M, ∞).2
(ii) The agent accepts or rejects the contract. If he rejects, the game ends, he receives u_0 and the principal receives 0.
(iii) If the agent accepts, he chooses effort a ≥ 0.
(iv) Intermediate output x is realized according to F(·|a).
(v) The agent privately observes x and chooses a mean-preserving spread G_x of x.
(vi) Final output y is realized according to G_x, and the agent is paid s(y).
The principal's and agent's payoffs are equal to π(y − s(y)) and u(s(y)) − c(a), respectively. We assume that π(·) and u(·) are strictly increasing and weakly concave, with u(·) onto, and that c(·) is infinitely differentiable, strictly increasing, and strictly convex. We also assume that F(·|a) has full support for all a ∈ [0, ȳ), satisfies E_{F(·|a)}[x] = a, and is infinitely differentiable with a density f(·|a) that is strictly increasing in a in the sense of the monotone likelihood ratio property (MLRP), with f_a(·|a)/f(·|a) uniformly bounded for all a.3 This game is similar to a canonical moral hazard problem, with the twist that the agent can engage in risk-taking by choosing a mean-preserving spread G_x of intermediate output x. Let G denote the set of mappings x → G_x. Without loss, we can treat the agent as choosing a and G ∈ G simultaneously.
Intermediate output has different interpretations in different settings. For instance, chief executive officers (CEOs) typically have advance information about whether they will hit their earnings targets in a given quarter, and they can cut maintenance or research and development expenditures if they are likely to fall short, taking on tail risk for the appearance of higher earnings (Rahmandad et al. 2018). Similarly, portfolio managers are typically compensated based on their annual returns and can adjust the riskiness of their investments over the course of the year so as to game those incentives (Chevalier and Ellison 1997). After the agent observes x but before y is realized, we have a setting with both a hidden type and a hidden action. The principal might therefore benefit from asking the agent to report x before y is realized. By punishing differences between this report and y, the principal might be able to dissuade at least some gambling.4 We do not allow such mechanisms in our analysis. This restriction makes sense if the principal cannot intervene between the realization of x and the outcome of gambling, as is the case if x is realized at a random time and gambling is instantaneous. We think that this is the economically correct modeling assumption in many settings. For instance, financial advisors realize their expected returns and choose their investment strategies over time, rendering it impossible to identify a single moment at which intermediate output has been realized but final output has not. The spirit of the model is that the principal cannot catalog the precise moments or ways in which an agent might engage in risk-taking.

2 One can show that the restriction to upper semicontinuous contracts is without loss: if the agent has an optimal action given a contract s(·), then there exists an upper semicontinuous contract that induces the same equilibrium payoffs and distribution over final output.
3 We assume that ȳ is sufficiently large that the principal never offers a contract that induces the agent to choose a = ȳ. Together with y̲ < 0 and a ≥ 0, this also ensures that the agent can always choose a nondegenerate distribution G_x.
Risk-taking and optimal incentives
This section explores how the agent's ability to engage in risk-taking constrains the contract offered by the principal.
We find it convenient to rewrite the principal's problem in terms of the utility v(y) ≡ u(s(y)) that the agent receives for each output y. If we define u̲ ≡ u(−M), then an optimal contract solves the following constrained maximization problem:

max over (a, v(·), G ∈ G) of E_{F(·|a),G}[π(y − u^{-1}(v(y)))]  (Obj)
subject to (a, G) ∈ argmax over (a′, G′) of E_{F(·|a′),G′}[v(y)] − c(a′),  (IC)
E_{F(·|a),G}[v(y)] − c(a) ≥ u_0,  (IR)
v(y) ≥ u̲ for all y ∈ Y.  (LL)

The main result of this section is Proposition 1, which characterizes how the threat of gaming affects the incentive schemes v(·) that the principal offers. The principal optimally offers a contract that deters risk-taking entirely, but doing so constrains her to incentive schemes that are weakly concave in output. Define G^D so that for each x ∈ Y, G^D_x is degenerate at x.
4 Allowing these types of mechanisms does not necessarily eliminate the agent's gaming incentives. The supplementary file on the journal website, http://econtheory.org/supp3660/supplement.pdf, includes a supplemental section that studies this case and shows that gaming continues to constrain incentives. Indeed, if both parties are risk-neutral, then linear contracts are optimal.
The proof of Proposition 1 is in Appendix A. For an arbitrary incentive scheme v(·), define v^c(·) : Y → ℝ as its concave closure:

v^c(y) ≡ sup{αv(y_1) + (1 − α)v(y_2) : α ∈ [0, 1], y_1, y_2 ∈ Y, αy_1 + (1 − α)y_2 = y}.  (1)

At any outcome x such that the agent does not earn v^c(x), he can engage in risk-taking to earn that amount in expectation (but no more). But then the principal can do at least as well by directly offering a concave contract, and if either the agent or the principal is strictly risk-averse, then offering a concave contract is strictly more profitable than inducing risk-taking. Given Proposition 1, we can write the optimal contracting problem as one without risk-taking but with a no-gaming constraint that requires the agent's utility to be concave in output, with the caveat that our solution is one of many if (but only if) both parties are risk-neutral over the relevant payments:

max over (a, v(·)) of E_{F(·|a)}[π(y − u^{-1}(v(y)))]  (Obj)
subject to (IC), (IR), (LL), and
v(·) concave on Y.  (NG)

For a fixed effort a ≥ 0, we say that v(·) implements a if it satisfies (IC)-(NG) for a, and it does so at maximum profit if it maximizes (Obj) subject to (IC)-(NG). An optimal v(·) implements the optimal effort level a* ≥ 0 at maximum profit. Mathematically, the set of concave contracts is well behaved. Consequently, we can show that for any a ≥ 0, a contract that implements a at maximum profit exists and is unique if either π(·) or u(·) is strictly concave.
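To make the concave closure concrete, the following small sketch (our illustration, not code from the paper) computes v^c for a tabulated incentive scheme via an upper-hull pass; the grid and the example scheme v(y) = max(0, y)^2 are hypothetical choices.

```python
import numpy as np

def concave_closure(y, v):
    """Upper concave envelope of the points (y_i, v_i) (a sketch).

    Keeps only hull points whose slopes are decreasing (a monotone-chain
    upper hull), then interpolates back onto the grid. Wherever v^c > v,
    the agent can gamble between the two nearest hull points to earn
    v^c(y) in expectation, as in the text.
    """
    hull = [0]
    for i in range(1, len(y)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # pop k if the slope j->k is no steeper than the slope k->i
            if (v[k] - v[j]) * (y[i] - y[k]) <= (v[i] - v[k]) * (y[k] - y[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(y, y[hull], v[hull])

y = np.linspace(-1.0, 1.0, 201)
v = np.maximum(0.0, y) ** 2   # a convex, bonus-like scheme that invites gaming
vc = concave_closure(y, v)    # the agent's payoff after optimal risk-taking
```

For this convex example, the envelope is the chord from (−1, 0) to (1, 1): the agent gambles between the extreme outputs and earns strictly more than v(y) at every interior y.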
Lemma 1 (Existence and uniqueness). Fix a ≥ 0 and suppose that u̲ > −∞. Then there exists a contract that implements a at maximum profit, and it does so uniquely if either π(·) or u(·) is strictly concave.
This result, which follows from the theorem of the maximum, is an implication of Proposition 9 in Appendix D. Existence is guaranteed by (NG); for example, without this constraint, no profit-maximizing contract would exist with risk-neutral parties. If at least one player is strictly risk-averse, then Jensen's inequality implies that a convex combination of two different contracts that implement a also implements a and gives the principal a strictly higher payoff, which proves uniqueness.
Optimal contracts for a risk-neutral agent
Suppose the agent is risk-neutral, so u(s) = s, v(·) = s(·), and u̲ = −M. In this setting, the key friction is the agent's limited liability constraint, which might prevent the principal from simply "selling the firm" to the agent.
For any effort level a, define

s^L_a(y) ≡ c′(a)(y − y̲) − w_a,  where w_a := min{M, c′(a)(a − y̲) − c(a) − u_0}.

Intuitively, s^L_a(y) is the least costly linear contract that implements a. Note that for a linear contract, (IC) can be replaced by its first-order condition because expected output is linear in effort and the cost of effort is convex.
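As a quick numerical illustration (ours, under assumed primitives): with c(a) = a²/2, the least costly linear contract above can be computed directly; the parameter values below are hypothetical.

```python
def linear_contract(a, y_lo, M, u0, c, c_prime):
    """Least costly linear contract s^L_a implementing effort a (sketch).

    Implements s(y) = c'(a) * (y - y_lo) - w_a with
    w_a = min(M, c'(a) * (a - y_lo) - c(a) - u0), per the definition above.
    """
    w_a = min(M, c_prime(a) * (a - y_lo) - c(a) - u0)
    return lambda y: c_prime(a) * (y - y_lo) - w_a

# Hypothetical primitives: c(a) = a^2 / 2, so c'(a) = a.
s = linear_contract(a=1.0, y_lo=-1.0, M=0.5, u0=0.0,
                    c=lambda a: a ** 2 / 2, c_prime=lambda a: a)
print(s(-1.0), s(0.0), s(1.0))  # pay at the worst, a middling, and a high output
```

In this example w_a = M, so the limited liability constraint binds: the agent pays M to the principal at the worst output and is rewarded one-for-one above it.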
Define the first-best effort a^FB ∈ ℝ_+ as the unique effort that maximizes a − c(a) and so satisfies c′(a^FB) = 1. We prove that an optimal contract is linear and implements no more than first-best effort.
Proposition 2 (Risk-neutral agent). Let u(s) ≡ s. If a* is optimal, then a* ≤ a^FB and s^L_{a*}(·) is optimal.
The proofs for all results in this section can be found in Appendix A. To see the intuition for Proposition 2, consider s^L_{a^FB}(·), which both implements a^FB and provides full insurance to the principal. If s^L_{a^FB}(·) satisfies (IR) with equality, then it is clearly optimal. Suppose instead that (IR) is slack for s^L_{a^FB}(·), in which case (LL) must bind. Suppose that (a*, s*(·)) is optimal and s*(y) ≠ s^L_{a*}(y) for at least some y. To prove the result, we construct a linear contract that satisfies (IC)-(NG) and performs better than s*(·). Toward this goal, define ŝ(·) to be the linear contract that agrees with s*(·) at y̲ and gives the agent the same utility as s*(·) if he optimally responds to that contract. As shown in Figure 1, ŝ(·) must single-cross s*(·) from below, effectively moving payments from low to high outputs. Since F(·|a) satisfies MLRP, this shift in pay from low to high outputs motivates more effort: ŝ(·) implements some â ≥ a*.
The effort â might be either larger or smaller than first-best effort, a^FB. If â ≥ a^FB, then it must be that ŝ(·) ≥ s^L_{a^FB}(·). But then s^L_{a^FB}(·) implements a^FB, perfectly insures the principal, and entails smaller payments than ŝ(·). The principal therefore prefers s^L_{a^FB}(·) to s*(·).
If instead â < a^FB, then the slope of ŝ(·) must be strictly less than 1, which means that the principal's wealth under ŝ(·), y − ŝ(y), is increasing in y. Consequently, the principal prefers high outputs, and so she likes that â ≥ a*. Moreover, ŝ(y) > s*(y) exactly when output is high and so her marginal utility is low (and vice versa), which means that ŝ(·) also insures the principal better than s*(·). Therefore, the principal prefers ŝ(·) to s*(·). She a fortiori prefers s^L_{â}(·), which lies weakly below ŝ(·), to s*(·). We conclude that any optimal contract s*(·) must coincide with the linear contract that implements a*: s*(·) ≡ s^L_{a*}(·). Lemma 1 implies that s^L_{a*}(·) is uniquely optimal if the principal is even slightly risk-averse. If she is risk-neutral, then s^L_{a*}(·) is optimal but not uniquely so; in particular, any contract with a concave closure equal to s^L_{a*}(·) would give the same expected payoff as s^L_{a*}(·).
For any a > 0, the agent's promised utility under s^L_a(·) depends on y̲, the worst possible outcome over which the agent can gamble. In particular, s^L_a(·) starts at y̲ and has a strictly positive slope, so that the agent's expected compensation increases without bound as y̲ decreases. That is, as the agent's ability to take on left-tail risk becomes arbitrarily severe, motivating effort while deterring risk-taking becomes arbitrarily costly to the principal. Consequently, the optimal effort level converges to 0 as y̲ becomes arbitrarily negative.6 The possibility of risk-taking unambiguously harms the principal. However, the agent might either benefit or be harmed by risk-taking. The reason is that risk-taking both increases the agent's rent for a fixed effort level and changes the optimal effort level, which changes the agent's rent. Consequently, we can find examples in which the agent earns higher rent when we impose (NG), as well as examples in which he earns strictly lower rent.
In some applications, the principal might have risk-seeking preferences over output, for instance because she also faces convex incentives. For example, Rajan (2011) argues that, anticipating the possibility of bailouts, shareholders of financial institutions might have had an incentive to encourage risk-taking prior to the 2008 financial crisis. We can model such settings by allowing π(·) to be any strictly increasing and continuous function. Proposition 1 does not directly apply in this case because the principal might strictly prefer the agent to at least sometimes engage in risk-taking. Nevertheless, we can modify the argument from Proposition 2 to show that a linear contract is optimal.

6 If the principal is risk-neutral, then we can prove the stronger result that effort is strictly increasing in y̲: as the agent's ability to take left-tail risks becomes more severe, the principal responds by inducing lower effort. See the supplementary file on the journal website, http://econtheory.org/supp3660/supplement.pdf.
Corollary 1 (Risk-neutral agent, risk-loving principal). Let u(s) ≡ s and let π(·) be an arbitrary continuous and strictly increasing function with concave closure π^c(·). If a* is optimal, then a* ≤ a^FB and s^L_{a*}(·) is optimal.
To see the proof of Corollary 1, note that the principal's expected payoff cannot exceed her expected payoff under π^c(·), for reasons similar to Proposition 1. Therefore, the contract that maximizes E_{F(·|a)}[π^c(x − s(x))] subject to (IC)-(NG) provides an upper bound on the principal's payoff. But Proposition 2 asserts that s^L_{a*}(·) is optimal in this problem because π^c(·) is concave. Given s^L_{a*}(·), the agent is indifferent among distributions G ∈ G, so he is willing to choose G such that the principal's expected payoff attains this upper bound.
Optimal contracts if the agent is risk-averse
This section characterizes the unique contract that implements a given a > 0 at maximum profit in a setting with a risk-averse agent and a risk-neutral principal. In Section 5.1, we develop necessary and sufficient conditions that characterize the profit-maximizing contract in this setting. We explore the implications of this characterization in Section 5.2; in Section 5.3, we show how to numerically derive the optimal contract in a discrete approximation of the model.
We impose two simplifying assumptions to make the analysis tractable. First, letting w̲ denote the infimum of the domain of u(·), we assume that lim_{w↓w̲} u′(w) = ∞ and lim_{w↑∞} u′(w) = 0. Second, we replace (IC) with the weaker condition that local incentives are slack at the implemented effort level a > 0:

∫_Y v(y) f_a(y|a) dy ≥ c′(a).  (IC-FOC)

Replacing (IC) with (IC-FOC) entails no loss under mild regularity conditions on F(·|·). Given (NG), Proposition 5 of Chade and Swinkels (2016) shows that the agent's expected utility is concave in effort as long as expected output is concave in effort and F_aa(·|a) is never first negative and then positive. For a fixed effort a ≥ 0, define the principal's problem

(P): max over v(·) of E_{F(·|a)}[y − u^{-1}(v(y))] subject to (IR), (IC-FOC), (LL), and (NG).

For a ≥ 0 and y ∈ Y, define the likelihood function

l(y|a) ≡ f_a(y|a)/f(y|a).

Define ρ(·) as the function that maps 1/u′(·) into u(·); that is, for every z in the range of 1/u′(·), ρ(z) = u((u′)^{-1}(1/z)). Then ρ^{-1}(v(y)) equals the marginal cost to the principal of giving the agent extra utility at y.
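A worked special case (our example, not the paper's): for u(w) = √w we have 1/u′(w) = 2√w, so setting z = 2√w gives w = z²/4 and

ρ(z) = u(z²/4) = z/2,  hence  ρ^{-1}(v) = 2v = 1/u′(u^{-1}(v)),

confirming that ρ^{-1}(v(y)) is the marginal dollar cost of a util at y. With this utility, ρ is linear, so the curvature of ρ(λ + μl(·|a)) below is inherited entirely from the likelihood function l(·|a).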
If u̲ > −∞, then Lemma 1 implies that a unique solution to (P) exists. If u̲ = −∞, then one can show that a unique solution exists as long as u′(·) is not too convex. In particular, we can define the concavity of a positive function h(·), con(h), as the largest number t such that h^t/t is concave. If h is concave, then con(h) ≥ 1, while if h is log concave, then con(h) ≥ 0. For the case u̲ = −∞, an optimal contract exists as long as con(u′) ≥ −2, which is weaker than u′(·) being log concave.7 Our results in this section apply in either setting. Unless otherwise noted, proofs for this section can be found in Appendix B.
Given the program (P), let λ and μ be the shadow values on (IR) and (IC-FOC), respectively. For a fixed a ≥ 0 and an incentive scheme v(·) that implements a, define

n(y) ≡ ρ^{-1}(v(y)) − λ − μl(y|a)

as the net cost of increasing v(·) at y, taking into account how that increase affects (IR) and (IC-FOC). In particular, increasing v(y) increases the principal's cost at rate ρ^{-1}(v(y))f(y|a), relaxes (IR) at rate f(y|a), which has implicit value λ, and relaxes (IC-FOC) at rate f_a(y|a), which has implicit value μ. Taking the difference between these costs and benefits and dividing by f(y|a) yields n(y).
Let us ignore (LL) for the moment. Absent (NG), the optimal contract would set n(y) = 0 output-by-output and so v(·) = ρ(λ + μl(·|a)). Indeed, this incentive scheme (with the appropriate λ and μ) is the Holmström-Mirrlees (HM) contract characterized in Mirrlees (1976) and Holmström (1979). However, setting n(y) = 0 at each y might violate (NG). In the following section, we develop necessary and sufficient conditions for a profit-maximizing contract. These conditions guarantee that the contract cannot be improved by a set of perturbations that respect (NG) and affect an interval of an incentive scheme. We show that these perturbations are enough to pin down the optimal contract.
A characterization
We begin our characterization by defining several features of v(·) that will be useful for our construction.
Definition 1. Given v(·):
(i) An interval [y_L, y_H] ⊆ Y is a linear segment of v(·) if v(·) is linear on [y_L, y_H] but not on any strictly larger interval. Point y is free if it is not in the interior of any linear segment.
(ii) A free y ∈ (y̲, ȳ) is a kink point of v(·) if two linear segments meet at y, and is a point of normal concavity otherwise.
Consider the following two perturbations, formally defined in Appendix B and illustrated in Figure 2. Raise increases the level of v(·) by a constant over an interval, while tilt increases the slope of v(·) by a constant over an interval. Raising an interval typically introduces nonconcavities into v(·) at both endpoints of the interval. Tilting it a positive amount may introduce a nonconcavity at the lower end of the interval, and tilting it a negative amount may introduce a nonconcavity at the upper end of the interval. Appendix B shows that for small perturbations, we can repair these nonconcavities on an arbitrarily small interval as long as the relevant endpoints are free. Raise and tilt affect both (IR) and (IC-FOC). However, Appendix B uses the fact that F(·|a) satisfies MLRP to show that these two perturbations have noncollinear effects on (IR) and (IC-FOC), which means that we can construct combinations of them to affect each constraint separately. Therefore, as long as there exists at least one free point ŷ < ȳ such that v(ŷ) > u̲, we can use raise and tilt on [ŷ, ȳ] to establish the shadow values λ and μ of relaxing (IR) and (IC-FOC). If no such point exists, then v(·) is linear and v(y̲) = u̲.

Figure 2. Raise and tilt. These perturbations require care around y_L and y_H to ensure that concavity is preserved. For this reason, we need both y_L and y_H to be free for raise. For tilt up, we need y_L to be free, while y_H must be free for tilt down.

7 For a (rather complicated) proof of existence for u̲ = −∞, see the supplementary file on the journal website, http://econtheory.org/supp3660/supplement.pdf. This condition is satisfied, for instance, for u(w) = w^α with α < 1/2. See Prékopa (1973) and Borell (1975) for details.
A profit-maximizing incentive scheme v(·) cannot be improved by either raise or tilt on any valid interval. That is, raising v(·) on an interval [y_L, y_H] with both endpoints free must have a nonnegative expected net cost:

∫_{y_L}^{y_H} n(y) f(y|a) dy ≥ 0.  (2)

If v(y_L) > u̲, then we can raise v(·) by a negative amount on [y_L, y_H], in which case (2) holds with equality. Similarly, if y_L is free, then tilting v(·) on [y_L, y_H] must have nonnegative expected net cost,

∫_{y_L}^{y_H} n(y)(y − y_L) f(y|a) dy + (y_H − y_L) ∫_{y_H}^{ȳ} n(y) f(y|a) dy ≥ 0,  (3)

where the first term represents the fact that tilt increases the slope of v(·) from y_L to y_H and the second represents the resulting higher level of v(·) from y_H to ȳ. If y_H is free, then applying negative tilt yields the reverse inequality:

∫_{y_L}^{y_H} n(y)(y − y_L) f(y|a) dy + (y_H − y_L) ∫_{y_H}^{ȳ} n(y) f(y|a) dy ≤ 0.  (4)

Our characterization combines these perturbations with the usual complementary slackness condition that λ = 0 if (IR) is slack (so that (LL) binds). We say that v(·) is GHM (generalized Holmström-Mirrlees) if there exist λ ≥ 0 and μ, with λ = 0 whenever (IR) is slack, such that for any interval [y_L, y_H]: (i) if both y_L and y_H are free, then (2) holds, with equality if v(y_L) > u̲; (ii) if y_L is free, then (3) holds; (iii) if y_H is free, then (4) holds.
Our main result in this section characterizes the unique incentive scheme that implements any a > 0 at maximum profit.
Proposition 3 (Risk-averse agent, risk-neutral principal). Suppose u(·) is strictly concave and π(y) ≡ y. Then for any a > 0, v(·) implements a at maximum profit if and only if it is GHM.
The necessity of GHM follows from the arguments above. To establish sufficiency, we first show that if any ṽ(·) implements a at higher profit than v(·), then there exists a local perturbation that improves v(·). Then we show that among local perturbations, it suffices to consider tilt and raise on valid intervals. This result follows because any perturbation that respects concavity can be approximated arbitrarily closely by a combination of valid tilts and raises. Therefore, if any perturbation improves the principal's profitability, then so must some individual tilt or raise.
One implication of Proposition 3 is that net cost equals 0 for any output where both (LL) and (NG) are slack.
Corollary 2. Suppose u(·) is strictly concave and π(y) ≡ y. For any a > 0, let v(·) solve (P) and suppose y ∈ (y̲, ȳ) is free. Then n(y) ≤ 0, and n(y) = 0 if y is a point of normal concavity.
At any point of normal concavity y, we can find two free points that are arbitrarily close to y. Proposition 3 implies that (2) holds with equality between these points; taking a limit as these points approach y yields n(y) = 0. If y is a kink point, then we cannot perturb v(·) around y and preserve concavity. However, there is a sense in which (NG) binds on the linear segments on either side of y: Lemma 3 in Appendix B proves that absent (NG), the principal would want to increase payments near the ends of a linear segment and decrease them somewhere in the middle of that segment. Therefore, n(y) ≤ 0 at the endpoints of any linear segment, which includes any kink point.
Implications of the no-gaming constraint
This section builds on Proposition 3 to illustrate how risk-taking affects the trade-off between insurance and incentives that lies at the heart of this moral hazard problem. For a broad class of settings, we show that optimal incentives are linear in output where (NG) binds and otherwise equate the marginal costs and benefits of incentive pay at each output.
Intuitively, if setting n(y) = 0 at some y would violate (NG), then this constraint binds, and so the optimal contract is locally linear in utility. These linear segments are "ironed" in the sense that they set net cost equal to 0 in expectation, even if they do not do so point-by-point. Outside of these ironed regions, (NG) is slack and so n(y) = 0 output-by-output.
We demonstrate this intuition if ρ(λ + μl(·|a)) is first convex and then concave, which we argue is a natural case to consider.
Lemma 2. Suppose u(·) and F(·|a) are analytic and con(ρ′) + con(l_y) > −1. Then for any λ and μ, there exists y_I such that ρ(λ + μl(·|a)) is convex on [y̲, y_I) and concave on (y_I, ȳ]. The proof of Lemma 2 can be found in Appendix D.2. The requirement that con(ρ′) + con(l_y) > −1 is relatively mild. It is automatic if ρ′ and l_y are log concave, but it also holds, for example, if l_y is strictly log concave and the agent's utility function is from a broad class that satisfies hyperbolic absolute risk aversion, including u(w) = log w. The following proposition characterizes the optimal contract if ρ(λ + μl(·|a)) is first convex and then concave and (LL) is slack.
10 The supplementary file on the journal website, http://econtheory.org/supp3660/supplement.pdf, gives conditions under which an optimal contract exists even if u̲ = −∞. Under those conditions, this existence proof also shows that (LL) is slack if u̲ > −∞ is sufficiently negative.
Proposition 4 (Slack (LL)). Fix a > 0 and π(y) ≡ y, suppose that (LL) is slack, and suppose that ρ(λ + μl(·|a)) is first convex and then concave. Then there exists y* ∈ Y such that the profit-maximizing contract v*(·) is linear in utility on [y̲, y*], with expected net cost equal to 0 on that interval, and satisfies n(y) = 0 for all y ∈ [y*, ȳ].

Under this condition, then, the profit-maximizing contract v*(·) is linear in utility for low output and otherwise sets n(y) = 0 output-by-output. Moreover, on the linear region of v*(·), expected net costs equal 0. See Figure 3 for an illustration.
In the extremes, if ρ(λ + μl(·|a)) is convex everywhere, then the profit-maximizing contract is linear,11 while the profit-maximizing contract equals ρ(λ + μl(·|a)) if the latter is concave. Intuitively, ρ(λ + μl(·|a)) is likely to be convex if the principal would like to "insure against downside risk" by offering low-powered incentives for low output and "motivate with upside risk" by giving steeper incentives for high output. For instance, ρ(·) tends to be more convex if prudence is large relative to absolute risk aversion, which means that risk aversion declines sufficiently quickly as compensation increases. Conversely, ρ(λ + μl(·|a)) is likely to be concave if the principal would like to motivate with downside risk and insure against upside risk.
Proposition 4 focuses on the case where (LL) is slack, but (NG) has a similar effect if (IR) is slack so that (LL) binds. In that case, the principal would like to pay the agent as little as possible for any y with l(y|a) < 0, since paying for low output both increases the agent's rent and tightens (IC-FOC) (Jewitt et al. 2008). But paying the agent as little as possible for low output and rewarding high output would violate (NG), so this constraint binds following low output.

11 This case obtains if, for example, l(·|a) is convex and ρ(·) is convex on the range of λ + μl(·|a). Note that ρ(·) cannot be convex over its entire domain, because ρ(0) = −∞.
Proposition 5 (Slack (IR)). Fix a ≥ 0 and π(y) ≡ y. Let v*(·) solve (P) and suppose that (IR) is slack. Then there exists y_0 such that v*(·) is linear on [y̲, y_0].

To see why, note that if (IR) is slack and v*(·) were strictly concave for y < y_0, then making it "flatter" on [y̲, y_0] by taking a convex combination of it with the linear segment that connects v(y̲) and v(y_0) improves the agent's incentives and decreases the principal's expected payment. So the profit-maximizing v*(·) is linear on [y̲, y_0], though it can be strictly concave for higher output.
Numerical examples
In this section, we present simulations of the profit-maximizing contract for a version of the model with discrete outputs. Fix N ∈ ℕ and partition Y using grid points y̲ = y_0 < y_1 < ⋯ < y_N = ȳ. We constrain output to satisfy y ∈ {y_1, …, y_N}, where the probability that y = y_i is p_i(a) ≡ ∫_{y_{i−1}}^{y_i} f(z|a) dz. For any a > 0, the profit-maximizing contract in this discrete setting solves the discrete version of (P), denoted (P_N), where v_i represents the agent's utility following output y_i. The benefit of this discrete setting is that the analog of the no-gaming constraint,

(v_{i+1} − v_i)/(y_{i+1} − y_i) ≤ (v_i − v_{i−1})/(y_i − y_{i−1}) for i = 2, …, N − 1,  (5)

is linear in the v_i. Therefore, the contracting problem is a convex optimization program that can be solved using standard techniques; a solver sketch appears at the end of this section. It can be shown that the solution of (P_N) and the corresponding payoffs converge to the solution of the original problem as N → ∞.

Figure 4 gives an example that fixes the effort level and varies the lower bound on the agent's utility, u̲. In each panel, the contract that solves (P_N), the no-gaming contract, is denoted by a dashed line, while the contract without risk-taking, the HM contract, does not impose (5) and is denoted by a solid line. For all of our examples, N = 1,000.13

Consider the left panel of Figure 4. In this example, the no-gaming constraint binds, and so the no-gaming contract is linear in utility for an interval of outputs including the lowest one. This result echoes our observation that the optimal contract resembles an ironed version of the contract without risk-taking. Note, however, that the no-gaming constraint affects the optimal contract even for outputs where (NG) is slack. This global effect arises because the no-gaming constraint distorts the multipliers λ and μ and so changes the net cost of paying the agent following any output realization. In both panels, the limited liability constraint binds, and so the no-gaming contract makes the agent's utility linear following low output, as Proposition 5 suggests. Increasing u̲ makes the liability constraint bind over a wider range of outputs and so expands the region over which the agent's utility is linear.

Figure 5 uses a similar example to illustrate how the profit-maximizing contract changes with the agent's outside option, u_0. As u_0 increases, the limited liability constraint becomes "less binding" in the sense that it binds for a smaller range of outputs. Again consistent with Proposition 5, the no-gaming contract is linear over a smaller range of outputs as u_0 increases.

Thus far in this section, we have characterized the profit-maximizing contract for a fixed effort level. We can numerically solve for the (approximately) optimal effort level by solving (P_N) for a fine grid of efforts. Of course, the possibility of gaming unambiguously increases the total incentive cost of inducing any fixed effort level. However, imposing the no-gaming constraint has an ambiguous effect on the incentive cost of inducing increased effort. Consequently, the possibility of risk-taking can either increase or decrease the optimal effort level. Indeed, Figure 6 illustrates examples in which each of these possibilities obtains.

13 These examples assume that u(ω) = 2
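The following sketch shows how (P_N) can be set up in an off-the-shelf convex solver. It is our illustration rather than the authors' code: the primitives (u(w) = √w so u^{-1}(v) = v², c(a) = a², a truncated normal for F(·|a)) and all parameter values are hypothetical, and the derivative of p_i(a) is taken numerically.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

# Hypothetical primitives: u(w) = sqrt(w) so u^{-1}(v) = v^2; c(a) = a^2.
N, y_lo, y_hi, sigma = 200, -1.0, 3.0, 0.75
a, u0 = 1.0, 0.1                         # implemented effort and outside option
grid = np.linspace(y_lo, y_hi, N + 1)    # cell boundaries y_0 < ... < y_N
mid = 0.5 * (grid[:-1] + grid[1:])       # representative outputs y_1, ..., y_N

def cell_probs(a):
    """p_i(a): probability mass of each cell under a truncated normal."""
    p = np.diff(norm.cdf(grid, loc=a, scale=sigma))
    return p / p.sum()

p = cell_probs(a)
eps = 1e-4
dp = (cell_probs(a + eps) - cell_probs(a - eps)) / (2 * eps)  # d p_i / d a

v = cp.Variable(N)                              # utilities v_i = u(s(y_i))
wage = cp.sum(cp.multiply(p, cp.square(v)))     # E[u^{-1}(v)], convex in v
cons = [p @ v - a**2 >= u0,                     # (IR)
        dp @ v >= 2 * a,                        # (IC-FOC), since c'(a) = 2a
        v >= 0]                                 # (LL) with lower bound u(0) = 0
# (NG): discrete concavity, Eq. (5) -- slopes must be nonincreasing
h = np.diff(mid)
cons += [(v[i+1] - v[i]) / h[i] <= (v[i] - v[i-1]) / h[i-1]
         for i in range(1, N - 1)]

prob = cp.Problem(cp.Minimize(wage), cons)
prob.solve()
print("expected wage:", wage.value, "expected profit:", p @ mid - wage.value)
```

Minimizing the expected wage for fixed a is equivalent to maximizing the principal's profit E[y] − E[s(y)], and every constraint above is linear in v, so the program is convex as claimed in the text.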
Extensions and reinterpretations
This section considers three extensions, all of which assume that both the principal and the agent are risk-neutral. Section 6.1 changes the agent's utility so that he must incur a cost to gamble. Section 6.2 alters the timing so that the agent gambles before observing intermediate output. Section 6.3 reinterprets the baseline model as a dynamic setting in which, rather than gambling, the agent can choose when output is realized so as to game a stationary contract. Proofs for this section can be found in Appendix C.
Costly risk-taking
In many settings, the agent might have to bear a cost to engage in risk-taking. A portfolio manager, for example, might spend time and effort to identify investments that allow for risk-taking without being detected. If larger gambles are harder to hide from investors, then the manager's cost is increasing in the dispersion of the risk-taking distribution. In this section, we adapt the arguments in Propositions 1 and 2 to a model with costly risk-taking. The resulting contracts are strictly convex, providing a rationale for such contracts in practice.
Consider the model from Section 2, and suppose that the agent must pay a private cost E_{G_x}[d(y)] − d(x) to implement distribution G_x following the realization of x, where d(·) is smooth, strictly increasing, and strictly convex, with d(y̲) = 0. For example, this cost function equals the variance of G_x if d(y) = y². More generally, d(·) captures the idea that the agent must incur a higher cost to take on more dispersed risk. The principal's and agent's payoffs are y − s(y) and s(y) − c(a) − d(y) + d(x), respectively.16 For any contract s(·), define ṽ(y) ≡ s(y) − d(y) and c̃(a) ≡ c(a) − E_{F(·|a)}[d(x)], so that conditional on effort, the agent's expected payoff equals E[ṽ(y)] − c̃(a). Then the principal's payoff equals π̃(y) − ṽ(y), where π̃(y) ≡ y − d(y) is strictly concave. As in Section 3, the agent chooses G_x so that his expected payoff equals ṽ^c(x). Since π̃(·) is strictly concave, the principal prefers to deter risk-taking by offering a contract that makes the agent's payoff ṽ(·) concave. Consequently, we can modify the proof of Proposition 2 to show that the principal's optimal contract makes ṽ(·) linear. The optimal s(·) equals ṽ(·) + d(·) and is, therefore, strictly convex.

16 We are grateful to Doron Ravid for suggesting this formulation of the cost function.
Proposition 6 (Costly risk-taking). Assume c̃(·) is strictly increasing and strictly convex. For optimal effort a* ≥ 0, define

s*(y) ≡ c̃′(a*)(y − y̲) + d(y) − w̃_{a*},  where w̃_{a*} := min{M, c̃′(a*)(a* − y̲) − c̃(a*) − u_0}.

Then s*(·) is optimal. This result follows a logic similar to Proposition 2, where the optimal s*(·) ensures that ṽ(·) is linear. Intuitively, s*(·) is the most convex contract that deters the agent from gambling. Note that the principal earns more if risk-taking is costly, since she can offer somewhat convex incentives without inducing gaming.
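To verify that this convex s*(·) nonetheless deters gambling (a check we spell out using the definitions above), note that after observing x the agent's continuation payoff from any G_x is

E_{G_x}[s*(y) − d(y)] + d(x) = E_{G_x}[ṽ(y)] + d(x) = c̃′(a*)(E_{G_x}[y] − y̲) − w̃_{a*} + d(x),

which depends on G_x only through its mean E_{G_x}[y] = x, so every feasible gamble leaves the agent exactly indifferent and the degenerate distribution is optimal.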
Risk-taking before intermediate output is realized
If the agent engages in risk-taking before observing intermediate output, then he gambles to "concavify" his expected utility given effort. This section gives conditions under which linear contracts are optimal for this alternative timing.
Consider the following game.
Move 1. The principal offers a contract s(·).
Move 2. The agent accepts or rejects the contract. If he rejects, the game ends, he receives u_0 and the principal receives 0.
Move 3. The agent chooses an effort a ≥ 0 and a distribution G(·) ∈ Δ(Y) subject to the constraint E_G[x|a] = a.17
Move 4. The outcome of the gamble x ∼ G(·) is realized, and final output is realized according to y ∼ F(·|x). We assume that F(·|x) has full support, with E_{F(·|x)}[y] = x, and a density f(·|x) that satisfies strict MLRP in x.
The principal and agent earn y − s(y) and s(y) − c(a), respectively, where c(·) is strictly convex. By choosing G(·), the agent essentially randomizes his level of effort. This feature means that the contract cannot increase the agent's expected payoff following effort a without also increasing the expected payoff of exerting less effort and randomizing between x = a and some lower x. The agent will therefore engage in risk-taking whenever his expected payoff as a function of effort is convex. One advantage of this model is that our tools extend naturally to it, a feature that is not shared by every model of ex ante risk-taking. 18 As an example of the kind of risk-taking that fits this setting, suppose the principal is an investor and the agent is an entrepreneur who chooses among many possible 17 With some notational inconvenience, one can extend this argument to more general mappings from a to E G [x|a].
18 For example, if the agent could instead choose the distribution of an additively separable noise term that affects output, then linear contracts would not necessarily be optimal.
As an example of the kind of risk-taking that fits this setting, suppose the principal is an investor and the agent is an entrepreneur who chooses among many possible projects. The entrepreneur can exert more effort to identify better projects, but he can also work less hard and choose a riskier project that succeeds wildly in some environments but fails miserably in others. The inherent riskiness of the project is then captured by the entrepreneur's choice of G(·), while F(·|x) represents residual uncertainty that remains even if the entrepreneur picks the "safest" project that he has identified.^19 Given s(·) and x, the agent's expected payoff equals V_s(x) ≡ ∫ s(y) f(y|x) dy. As in (1), let V^c_s(·) be the concave closure of V_s(·). Analogous to Proposition 1, the agent will optimally choose G(·) so that his expected payoff, viewed as a function of his chosen effort, equals its concave closure. We prove that a linear contract solves this problem.
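Since the concave closure drives every result in this section, a small numerical sketch may help. This is our own illustration (the grid, the sample payoff, and the function name are hypothetical): it computes the closure on a grid as the upper concave envelope, i.e. the best expected payoff attainable by randomizing over points with a fixed mean.

```python
import numpy as np

def concave_closure(x, v):
    """Upper concave envelope of points (x_i, v_i), evaluated at the x_i.

    The closure at x is the largest value obtainable by randomizing
    over points whose mean is x (a monotone-chain upper-hull sweep).
    """
    order = np.argsort(x)
    x, v = x[order], v[order]
    hull = []  # indices of hull vertices, left to right
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # Drop i1 if it does not lie strictly above the chord i0 -> i
            cross = (x[i1] - x[i0]) * (v[i] - v[i0]) - (v[i1] - v[i0]) * (x[i] - x[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # Piecewise-linear interpolation between the kept hull vertices
    return np.interp(x, x[hull], v[hull])

xs = np.linspace(0.0, 1.0, 201)
vs = np.maximum(xs - 0.5, 0.0) ** 2        # convex, option-like payoff
print(np.max(concave_closure(xs, vs) - vs))  # > 0: gambling strictly helps
```

Any gap between the closure and the original payoff marks the region where the agent strictly gains by gambling.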
Proposition 7 (Ex ante risk-taking). If a* ≥ 0 is optimal in the program (6), then a* ≤ a^FB and s^L_{a*}(·) is optimal.
To see the argument, relax the optimal contracting problem by assuming that the principal can choose V^c_s(·) directly, subject only to the constraints that V^c_s(·) is concave and V^c_s(·) ≥ −M. This relaxed problem is very similar to (Obj)-(NG), except that V^c_s(·) is a function of effort rather than of intermediate output. Nevertheless, a linear V^c_s(·) is optimal for reasons similar to Proposition 2. But V^c_s(·) is linear if V_s(·) is linear, and V_s(·) is linear if s(·) is linear because E_{F(·|x)}[y] = x. Hence, s^L_{a*}(·) induces the optimal V^c_s(·) from the relaxed problem and so is optimal.
Manipulating the timing of output^20
In this section, we argue that risk-taking is very similar to another common form of gaming: manipulating when output is realized over time. To make this point, we consider a model in which the principal offers a stationary contract that the agent can game by shifting output across time, rather than by engaging in risk-taking. This model turns out to be equivalent to the setting in Section 4.
19 If ∫_z^ȳ F_xx(y|x) dy ≥ 0 for all z ∈ Y and x, then a riskier G(·) leads to a riskier distribution over final output (in each case, in the sense of second-order stochastic dominance).
20 We are grateful to Lars Stole for suggesting this interpretation of the model.
Consider the following game.
Move 1. The principal offers a stationary contract s(·).
Move 2. The agent accepts or rejects. If he rejects, he earns u_0 and the principal earns 0.
Move 3. The agent chooses an effort a ≥ 0.
Move 4. Total output x is realized according to F(·|a) ∈ Δ(Y).
Move 5. The agent chooses a mapping from time t to output at time t, y_x : [0, 1] → Y, subject to ∫_0^1 y_x(t) dt = x.
Crucially, the principal must offer a stationary contract s(·) in this model. Without this restriction, the principal could eliminate gaming incentives entirely, for instance, by paying only for cumulative output at t = 1. While stationarity is a significant restriction, we believe it is realistic in many settings: as documented by Oyer (1998) and Larkin (2014), contracts tend to be stationary over some period of time (such as a quarter or a year).
This problem is equivalent to one in which, rather than choosing the realized output y_x(t) at each time t, the agent instead decides what fraction of the time t ∈ [0, 1] to spend producing each possible output y ∈ Y. In particular, define G_x(y) as the fraction of time for which y_x(t) ≤ y.^21 Then G_x(·) is a distribution that satisfies E_{G_x}[y] = x, and the agent's and principal's payoffs are E_{G_x}[s(y)] − c(a) and E_{G_x}[y − s(y)], respectively. That is, intertemporal gaming plays exactly the same role as gambling in our baseline model.
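A small numerical illustration of this equivalence (our sketch; the contract and the numbers are hypothetical): splitting the unit of time between a low output w and a high output z, with the time weight chosen so average output stays at x, is exactly a two-point gamble, and under a convex contract it beats producing x steadily.

```python
import numpy as np

def best_two_point_split(s, x, lo, hi, grid=200):
    """Best time-average pay from producing w for a fraction p of the time
    and z for the rest, with p chosen so that p*w + (1-p)*z = x."""
    best = s(x)  # benchmark: produce x at every instant
    for w in np.linspace(lo, x, grid):
        for z in np.linspace(x, hi, grid):
            if z > w:
                p = (z - x) / (z - w)              # fraction of time at w
                best = max(best, p * s(w) + (1 - p) * s(z))
    return best

bonus = lambda y: max(y - 0.5, 0.0)                # convex, option-like pay
print(bonus(0.4), best_two_point_split(bonus, 0.4, 0.0, 1.0))  # 0.0 vs ~0.2
```

Bunching output at the extremes recovers the concave closure of s(·), matching Proposition 8 below.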
Proposition 8 (Intertemporal gaming). The optimal contracting problem in this setting coincides with (Obj_F)-(LL_F) with u(y) ≡ y and π(y) ≡ y. Hence, if a* ≥ 0 is optimal, then a* ≤ a^FB and s^L_{a*}(·) is optimal.
Intuitively, the agent will adjust his realized output so that his total payoff equals the concave closure of s(·). He does so by smoothing output over time if s(·) is concave, or by bunching it in a short interval if s(·) is convex. This behavior is consistent with Oyer (1998) and Larkin (2014), who find that salespeople facing convex incentives concentrate their sales. Conversely, Brav et al. (2005) find that CEOs and chief financial officers smooth earnings to avoid the severe penalties that come from falling short of market expectations.
21 Formally, G_x(y) = L({t | y_x(t) ≤ y}), where L(·) denotes the Lebesgue measure.
Concluding remarks
Risk-taking fundamentally constrains how a principal motivates her agents. This paper argues that risk-taking blunts convex incentives, which has significant effects on optimal incentive provision. Apart from Corollary 1, the agent does not engage in risk-taking under our optimal contract. Therefore, our analysis focuses on the incentive costs of risk-taking, rather than any direct costs that risk-taking imposes on society.
Nevertheless, our framework provides a natural starting point for considering why contracts might not deter risk-taking. Corollary 1 suggests one reason: the principal might be risk-seeking, for instance, because her own incentives are nonconcave. A second reason is implicit in our assumption that the principal can commit to an incentive scheme. Commitment might be difficult in some settings, for instance, because output can serve as the basis for future compensation (Chevalier and Ellison 1997, Makarov and Plantin 2015). More generally, an agent's competitive context shapes the incentives he faces, which in turn determine the kinds of risks he optimally pursues; see Fang and Noe (2016) for a step in this direction. Our model provides a foundation on which to study the consequences of risk-taking behavior for markets, organizations, and society.
Appendix A: Proofs for Sections 3 and 4
For notational convenience, we use the indefinite integral to indicate an integral over [y̲, ȳ] in all of the appendices. Proofs are ordered based on where the corresponding results appear in the text. Some proofs depend on later results; we point out each of these dependencies as they arise (see footnotes 22 and 24).
A.1 Proof of Proposition 1
Fix a ≥ 0 and let v(·) implement a at maximum profit. We first claim that following each realization x, the agent's payoff equals v^c(x) and the principal's payoff is no larger than π(x − u^{−1}(v^c(x))).
Fix x ∈ Y. Since v is upper semicontinuous, there exist p ∈ [0, 1] and z_1, z_2 ∈ Y such that p z_1 + (1 − p) z_2 = x and p v(z_1) + (1 − p) v(z_2) = v^c(x). Since the agent can choose G̃_x to assign probability p to z_1 and 1 − p to z_2, his expected equilibrium payoff satisfies E_{G_x}[v(y)] ≥ v^c(x). But v^c is concave and v^c(y) ≥ v(y) for any y ∈ Y, so by Jensen's inequality, E_{G_x}[v(y)] ≤ E_{G_x}[v^c(y)] ≤ v^c(x) for every feasible G_x. The agent's payoff following x therefore equals v^c(x), and, hence, the contract v^c(·) satisfies (IC_F)-(LL_F) for effort a and the degenerate distribution G.
Next consider the principal's expected payoff. Since π(·) is concave, applying Jensen's inequality and the previous result yields
E_{G_x}[π(y − u^{−1}(v(y)))] ≤ π(x − E_{G_x}[u^{−1}(v(y))]) ≤ π(x − u^{−1}(v^c(x))),
where the first inequality is strict if π is strictly concave and the second is strict if u is strictly concave (so that −u^{−1} is also strictly concave). Therefore, the principal weakly prefers the contract v^c(·), and strictly so if either π(·) or u(·) is strictly concave.
A.2 Proof of Lemma 1
Existence follows from Proposition 9 in Appendix D.^22 To prove uniqueness, suppose at least one of π(·) or u(·) is strictly concave, and suppose that two contracts v(·) and ṽ(·) both implement a ≥ 0 at maximum profit, with v(x) ≠ ṽ(x) for some x ∈ Y. Since v(·) and ṽ(·) are upper semicontinuous and concave, they must differ on an interval of positive length. But then the contract v*(·) ≡ (1/2)(v(·) + ṽ(·)) satisfies (IC_F)-(LL_F) for effort a, and, by Jensen's inequality, the principal's payoff under v* is at least the average of her payoffs under v(·) and ṽ(·), with at least one of the underlying inequalities strict; this contradicts the optimality of v(·) and ṽ(·).
A.3 Proof of Proposition 2
For any contract s, write U(s) = max_a {E_{F(·|a)}[s(y)] − c(a)}. Fix an optimal pair (a*, s*), where s*(·) implements a*. Recall that for each a, s^L_a is the lowest-cost linear contract that implements a and that s^L_{a^FB} has slope 1.
Assume first that U(s^L_{a^FB}) ≤ U(s*). Then
E_{F(·|a*)}[π(y − s*(y))] ≤ π(a* − c(a*) − U(s*)) ≤ π(a^FB − c(a^FB) − U(s^L_{a^FB})) = E_{F(·|a^FB)}[π(y − s^L_{a^FB}(y))].
The first inequality is Jensen's and is strict unless either y − s*(y) is constant or the principal is risk-neutral. The second inequality uses U(s*) ≥ U(s^L_{a^FB}) and a* − c(a*) ≤ a^FB − c(a^FB), and is strict unless a* = a^FB and U(s*) = U(s^L_{a^FB}). The final equality uses that y − s^L_{a^FB}(y) is a constant. For (a*, s*) to be optimal, these inequalities must hold with equality, so a* = a^FB, s^L_{a^FB}(·) is optimal, and, moreover, s* = s^L_{a^FB} if the principal is risk-averse.
Assume instead that U(s^L_{a^FB}) > U(s*). Then, since U(s*) ≥ u_0, it follows that s^L_{a^FB}(y̲) = −M. For each a, let ŝ_a(·) be the linear contract ŝ_a(y) = s*(y̲) + c′(a)(y − y̲) that equals s*(y̲) at y̲ and implements a. Note that ŝ_{a^FB}(y) ≥ s^L_{a^FB}(y) for any y, so U(ŝ_{a^FB}) ≥ U(s^L_{a^FB}) > U(s*).
Since U(ŝ_a) is continuous in a and U(ŝ_{a^FB}) > U(s*) ≥ U(ŝ_{a*}), there exists â ∈ [a*, a^FB) such that U(ŝ_â) = U(s*). Since s^L_â lies weakly below ŝ_â,
E_{F(·|a*)}[s^L_â(y)] ≤ E_{F(·|a*)}[ŝ_â(y)] = U(ŝ_â) + c(â) − c′(â)(â − a*) ≤ U(s*) + c(a*) = E_{F(·|a*)}[s*(y)].
Here, the first equality uses that E_{F(·|a)}[ŝ_â(y)] is linear in a and that ŝ_â(·) implements â, and the second inequality uses that c(·) is convex. Choose ŷ so that s^L_â(·) crosses the concave contract s*(·) from below at ŷ, where if s^L_â(y) < s*(y) for all y, then ŷ = ȳ. Since â < a^FB and, hence, s^L_â(·) has slope strictly less than 1, it follows that π′(y − t) ≥ π′(ŷ − s^L_â(ŷ)) for all y < ŷ and t > s^L_â(y), and strictly so if π(·) is not linear. Similarly, π′(y − t) ≤ π′(ŷ − s^L_â(ŷ)) for all y > ŷ and t < s^L_â(y), and strictly so if π(·) is not linear. That is, the marginal cost to the principal of paying the agent is no less than π′(ŷ − s^L_â(ŷ)) for y < ŷ, and no more than this amount for y > ŷ.^23 But then, since E_{F(·|a*)}[s^L_â(y)] ≤ E_{F(·|a*)}[s*(y)] and s^L_â(y) < s*(y) if and only if y < ŷ, Beesack's inequality yields
E_{F(·|a*)}[π(y − s^L_â(y))] ≥ E_{F(·|a*)}[π(y − s*(y))],
and strictly so unless the principal is risk-neutral or s^L_â(·) and s*(·) agree. Finally, since the slope of s^L_â(·) is strictly less than 1 and â ≥ a*,
E_{F(·|â)}[π(y − s^L_â(y))] ≥ E_{F(·|a*)}[π(y − s^L_â(y))],
and strictly so unless â = a*. To conclude the proof, note that since (a*, s*) is optimal, each of these inequalities is an equality and, hence, a* = â ≤ a^FB. If the principal is risk-averse, then s* = s^L_â as well. If the principal is risk-neutral, then s^L_â(·) is optimal but not uniquely so.
23 The relevant version of Beesack's inequality states that if a function h(·) single-crosses 0 from below and satisfies ∫ h(x) dx = 0, then for any increasing function g(·), ∫ h(x) g(x) dx ≥ 0, and strictly so if g(·) is strictly increasing and h(·) is not everywhere 0. See Beesack (1957), available online at https://www.jstor.org/stable/2033682.
A.4 Proof of Corollary 1
Fix a > 0 and consider the problem (Obj_F)-(LL_F) with an arbitrary π(·) and u(s) ≡ s. Let π^c(·) denote the concave closure of π(·), and modify (Obj)-(NG) so that the principal's utility equals π^c(·). Since π^c(y) ≥ π(y) for any y, the principal's payoff in this modified problem must be weakly larger than under the original problem. But π^c(·) is concave and s^L_a(y̲) = −M, so Proposition 2 implies that s^L_a(·) implements a at maximum profit in this modified problem, and the principal's expected payoff equals E_{F(·|a)}[π^c(x − s^L_a(x))] there. Now consider the contract s^L_a(·) in the original problem (Obj)-(NG). For any distribution G_x with mean x, E_{G_x}[s^L_a(y)] = s^L_a(x) because s^L_a is linear. Therefore, as in Proposition 1, there exists some G^P_x such that E_{G^P_x}[π(y − s^L_a(y))] = π^c(x − s^L_a(x)). Furthermore, conditional on x, the agent's expected payoff under any such gamble still equals s^L_a(x), so he is willing to comply. The principal's expected payoff if she offers s^L_a therefore equals E_{F(·|a)}[π^c(x − s^L_a(x))], her payoff from the modified problem. So s^L_a a fortiori implements a at maximum profit for any a ≥ 0.
Appendix B: Proofs for Section 5
First we prove some preliminary properties of optimal incentive schemes. If u > −∞, Lemma 1 has shown that any profit-maximizing incentive scheme v(·) must be unique; the supplementary file on the journal website (http://econtheory.org/supp3660/supplement.pdf) shows the same for u = −∞. We prove that v(·) must be monotonically increasing and satisfy (IC-FOC) with equality.
Suppose v(·) is concave and not everywhere increasing. Then we can find ỹ ∈ Y such that if we replace v(y) by the constant v(ỹ) to the right of ỹ, the resultant contract is concave, gives the same utility to the agent, is cheaper, and, using MLRP and Beesack's inequality, makes (IC-FOC) slack. So any optimal v(·) must be increasing.
Suppose v(·) does not satisfy (IC-FOC) with equality. Then a convex combination of v and the contract that gives utility constant and equal to max{u u 0 + c(a)} ≥ 0 implements a, is strictly cheaper than v, and satisfies (IC-FOC) with equality. So any optimal v(·) must satisfy (IC-FOC) with equality.
Consider an interval [y_L, y_H]. The initial impact of raising the agent's utility on this interval is given by ∫ n(y) r_{y_L y_H}(y) f(y|a) dy. Similarly, tilting this interval has an initial impact given by ∫ n(y) t_{y_L y_H}(y) f(y|a) dy. In Section B.1.2, we will carefully define the perturbations raise and tilt and show that they respect concavity.
Our first result proves two useful properties of any contract that is GHM.
Lemma 3. Let v be GHM and let [y_L, y_H] be a linear segment of v. Then, for each ŷ ∈ (y_L, y_H), there is ỹ ∈ (ŷ, y_H) such that n(ỹ) ≤ 0. If v(y_L) > u, then such a ỹ exists in (y_L, ŷ) as well. Moreover, somewhere on (y_L, y_H), n(y) ≥ 0.
Proof. Note that for y > y_H, t_{ŷ y_H}(y) = y_H − ŷ = (y_H − ŷ) r_{y_H ȳ}(y). Since v satisfies (IC), since a > 0, and since v is concave and weakly increasing, v must be strictly increasing near y̲. Hence, since y_H > y̲, v(y_H) > u. We thus have ∫ n(y) r_{y_H ȳ}(y) f(y|a) dy = 0 by Definition 2(i). Hence, by Definition 2(iii), we have
0 ≥ ∫ n(y) t_{ŷ y_H}(y) f(y|a) dy = ∫ n(y) t_{ŷ y_H}(y) f(y|a) dy − (y_H − ŷ) ∫ n(y) r_{y_H ȳ}(y) f(y|a) dy = ∫_ŷ^{y_H} n(y) t_{ŷ y_H}(y) f(y|a) dy,
and so at some point ỹ ∈ (ŷ, y_H), the integrand is weakly negative. Since t_{ŷ y_H}(ỹ) > 0, it follows that n(ỹ) ≤ 0.
Similarly, note that if v(y_L) > u, then ∫ n(y) r_{y_L ȳ}(y) f(y|a) dy = 0 by Definition 2(i), and so, by Definition 2(ii),
0 ≤ ∫ n(y) t_{y_L ŷ}(y) f(y|a) dy = ∫ n(y) [t_{y_L ŷ}(y) − (ŷ − y_L) r_{y_L ȳ}(y)] f(y|a) dy,
where, since the bracketed term is strictly negative on (y_L, ŷ), it follows that n(y) is somewhere weakly negative on (y_L, ŷ).
Finally, since ∫ n(y) r_{y_L y_H}(y) f(y|a) dy ≥ 0 and since we have established that n(y) is weakly negative somewhere on (y_L, y_H), we must also have n(y) weakly positive somewhere on the same interval.
B.1 Proof of Proposition 3
The discussion prior to the statement of Proposition 3 proves necessity, given well-defined perturbations that satisfy concavity and well-defined shadow values. This section begins by formally defining the relevant perturbations, showing that they preserve concavity, and then showing how they can be used to establish shadow values for (IR) and (IC-FOC). We then turn to sufficiency.^24
B.1.1 Preliminaries
Definition 2 and Proposition 3 are phrased in terms of free points. But not every free point is a convenient place to define a perturbation. Instead, for any given v, let C_v be the set of points y at which there exists a supporting plane L such that L(y′) > v(y′) for all y′ ≠ y.
Clearly, any kink point (see the discussion immediately before Corollary 2) is an element of C_v. The next claim shows that for every other free point, there is an arbitrarily close-by element of C_v.
Claim 1. Let ŷ be any point of normal concavity. Then, for each δ > 0, there is a point in {(ŷ − δ, ŷ + δ) \ ŷ} ∩ C_v. From this, it follows that for each ε > 0, there exist y_L < y_H such that y_L, y_H ∈ C_v and y_L, y_H ∈ [ŷ − ε, ŷ + ε].
Proof. We show first that for each δ > 0, there is a point in {(ŷ − δ, ŷ + δ) \ ŷ} ∩ C_v. To see that this suffices for the second part, apply the result first to find a point y_1 in {(ŷ − ε, ŷ + ε) \ ŷ} ∩ C_v. Apply the result again to find y_2 in {(ŷ − δ, ŷ + δ) \ ŷ} ∩ C_v, where δ = (1/2)|y_1 − ŷ|, and finally take y_L and y_H as the smaller and larger of y_1 and y_2.
So fix δ > 0. Since ŷ is not in the interior of a linear segment and not a kink point, there is at least one side of ŷ, without loss of generality the right side, such that v(·) is not linear on (ŷ, ŷ + δ). Let S(·) be the correspondence that, for each y, assigns the set of slopes of supporting planes at y, and let s(·) be any selection from S(·). Note that since v is concave, for any y′′ > y′, max{S(y′′)} ≤ min{S(y′)} and, hence, s is decreasing. Assume first that there is a point ỹ ∈ (ŷ, ŷ + δ) where s(·) jumps downward, say from s′ to s′′ < s′. Then the supporting plane at ỹ with slope (s′ + s′′)/2 qualifies. Assume instead that s(·) is continuous on (ŷ, ŷ + δ). It cannot be everywhere constant, since v(·) is not linear on (ŷ, ŷ + δ). Hence, since s(·) is continuous, there is a point ỹ at which it is strictly decreasing, so that, specifically, s(ỹ) < s(y) for all y < ỹ and s(ỹ) > s(y) for all y > ỹ. The supporting plane at ỹ with slope s(ỹ) then qualifies.
To see why Claim 1 is helpful, assume that some part of Definition 2 is violated. For example, assume some optimal contract has a pair of free points y_L and y_H such that ∫ n(y) r_{y_L y_H}(y) f(y|a) dy < 0. If either y_L or y_H is a kink point, then it is also an element of C_v. If not, then we can apply Claim 1 to replace each relevant point by a sufficiently close-by element of C_v such that the strict inequality is maintained. Hence, it is enough to prove Proposition 3 when each restriction to a free point is tightened to a restriction to C_v.
B.1.2 Formal definition and properties of the perturbations
This section defines raise and tilt, being careful, in particular, to maintain concavity at the endpoints of the perturbed interval. We need to consider as many as three perturbations at once, where, given the previous discussion, we require the relevant points to be in C_v. First, we have some small amount ε_p of a perturbation p, where p could be r_{y_L y_H} or t_{y_L y_H}, in each case with ε_p positive or negative. Second, for some ŷ ∈ C_v, we need to consider some amount ε_t of t_{ŷ ȳ} and ε_r of r_{ŷ ȳ}. Intuitively, we use t_{ŷ ȳ} and r_{ŷ ȳ} to establish shadow values for (IC-FOC) and (IR), and then, for any particular perturbation p, we consider the three deviations together, where one uses t_{ŷ ȳ} and r_{ŷ ȳ} to undo the effect of p on (IC-FOC) and (IR).
Fix y_L, y_H, and ŷ. A priori, ŷ may have arbitrary position relative to y_L and y_H, and, moreover, in the case where p is t_{y_L y_H}, one of y_L or y_H may not be in C_v, depending on whether ε_p is negative or positive. Define y_0 < y_1 < · · · < y_K, K ≤ 4, as the elements of the set {y̲, y_L, y_H, ŷ, ȳ} ∩ C_v. For any given ε = (ε_p, ε_t, ε_r), let d(·; ε) : [y̲, ȳ] → R be given by
d(·; ε) = ε_p p(·) + ε_t t_{ŷ ȳ}(·) + ε_r r_{ŷ ȳ}(·).
If y_L and y_H are both elements of {y_0, ..., y_K}, as must be true if p is r_{y_L y_H}, then it follows that d is linear on each interval of the form (y_{k−1}, y_k). Assume that y_H ∉ {y_0, ..., y_K}. Then it must be that p is t_{y_L y_H} with ε_p ≥ 0. In this case, if y_H ∉ (y_{k−1}, y_k), then d(·; ε) is linear on (y_{k−1}, y_k), while if y_H ∈ (y_{k−1}, y_k), then, since ε_p ≥ 0, d(·; ε) is concave with two linear segments on (y_{k−1}, y_k). Finally, assume y_L ∉ {y_0, ..., y_K}. Then p is t_{y_L y_H} with ε_p ≤ 0 and, once again, if y_L ∉ (y_{k−1}, y_k), then d(·; ε) is linear on (y_{k−1}, y_k), while if y_L ∈ (y_{k−1}, y_k), then, since ε_p ≤ 0, d(·; ε) is once again concave with two linear segments on (y_{k−1}, y_k).
For each k, let L^−_k(·; ε) be the line that coincides with the linear segment of d(·; ε) immediately to the right of y_{k−1}, and let L^+_k(·; ε) be the line that coincides with the linear segment immediately to the left of y_k (these are the same line if d is linear on (y_{k−1}, y_k)), and let
d_k(y; ε) = L^−_k(y; ε) for y ≤ y_{k−1}, d_k(y; ε) = d(y; ε) for y ∈ (y_{k−1}, y_k), and d_k(y; ε) = L^+_k(y; ε) for y ≥ y_k.
Note that d_k is concave and that, as |ε| ≡ |ε_p| + |ε_t| + |ε_r| → 0, d_k converges uniformly to the function that is constant at 0.
For each k, let L_k be a supporting line to v at y_k, where, since y_k ∈ C_v, we can choose L_k such that L_k(y) > v(y) for all y ≠ y_k, and let v_k(·) be the concave function that agrees with v on [y_{k−1}, y_k] and follows L_{k−1} to the left of y_{k−1} and L_k to the right of y_k. Define the perturbed contract v̄(y; ε) = min_k {v_k(y) + d_k(y; ε)}. As the minimum over concave functions, v̄(·; ε) is concave. Fix k and consider any y ∈ (y_{k−1}, y_k). Since d_k(y; 0) = 0 and by the fact that, for each k′, L_{k′}(y) > v(y) for all y ≠ y_{k′}, k is the unique minimizer of v_k(y) + d_k(y; 0). From this, it follows first that v̄(y; 0) = v_k(y) = v(y) and, second, that for all ε in some neighborhood of 0 (where ε_p is restricted in sign if p = t_{y_L y_H} and one of y_L or y_H is not in C_v),
v̄_{ε_p}(y; ε) = d_{ε_p}(y; ε) = p(y), v̄_{ε_t}(y; ε) = d_{ε_t}(y; ε) = t_{ŷ ȳ}(y), v̄_{ε_r}(y; ε) = d_{ε_r}(y; ε) = r_{ŷ ȳ}(y).
But then, except on the zero-measure set of points {y_0, ..., y_K},
v̄_{ε_p}(·; 0) = p(·), (8)
v̄_{ε_t}(·; 0) = t_{ŷ ȳ}(·), v̄_{ε_r}(·; 0) = r_{ŷ ȳ}(·).
B.1.3 Shadow values
We need to establish that, starting from ε = 0, the effects of the perturbation p can be undone via t_{ŷ ȳ} and r_{ŷ ȳ}. To do so, let Q(ε) be the 2 × 2 matrix
Q(ε) = ( ∫ v̄_{ε_t}(y; ε) f_a(y|a) dy   ∫ v̄_{ε_r}(y; ε) f_a(y|a) dy
         ∫ v̄_{ε_t}(y; ε) f(y|a) dy      ∫ v̄_{ε_r}(y; ε) f(y|a) dy ).
The top row of Q tracks the rate at which ε_t and ε_r, respectively, affect (IC-FOC), while the bottom row tracks the rate at which ε_t and ε_r, respectively, affect (IR). Then, from (8), |Q(0)| has the same sign as the difference between two expectations of l(·|a). Using that (y − ŷ) is strictly increasing, the density in the first integral strictly likelihood-ratio dominates the density in the second integral. Since l(·|a) is strictly increasing, it follows that |Q(0)| is strictly positive (and remains so for all ε in some ball around 0). But then, by the implicit function theorem, for each p ∈ {t_{y_L y_H}, r_{y_L y_H}}, we can, on the appropriate neighborhood, implicitly define ε_t(·) and ε_r(·) by
∫ v̄(y; ε_p, ε_t(ε_p), ε_r(ε_p)) f(y|a) dy = c(a) + u_0 and ∫ v̄(y; ε_p, ε_t(ε_p), ε_r(ε_p)) f_a(y|a) dy = c′(a),
so that, starting from ε = 0, if we make the small perturbation ε_p to v, we can restore (IC-FOC) and (IR) by a suitable combination of small applications ε_t and ε_r of t_{ŷ ȳ} and r_{ŷ ȳ}.
Let λ be the rate of change of costs as one relaxes (IR) using t_{ŷ ȳ} and r_{ŷ ȳ}. That is, if we let (q^IC_t, q^IC_r) be the combination of ε_t and ε_r that relaxes (IC-FOC) by one unit while leaving (IR) unchanged (a solution of the linear system given by Q(0)), then the rate of change of costs as one relaxes (IC-FOC) using t_{ŷ ȳ} and r_{ŷ ȳ} is
μ = ∫ ρ^{−1}(v(y)) [q^IC_t t_{ŷ ȳ}(y) + q^IC_r r_{ŷ ȳ}(y)] f(y|a) dy.
Given the shadow values λ and μ, the argument in Section 5 (prior to Definition 2) completes the proof of necessity in Proposition 3.
B.1.4 Proof of sufficiency
We begin by proving the following useful result.
Lemma 4. Let v(·) be GHM and suppose y ∈ (y̲, ȳ) is free. Then n(y) ≤ 0, and n(y) = 0 if y is a point of normal concavity (as defined immediately before Corollary 2).
Proof. If y is a kink point, then Lemma 3 applied to the left of y implies that n(y) ≤ 0. If y is a point of normal concavity, then by Claim 1, there exist sequences of points {y^L_k}, {y^H_k} ∈ C_v such that y^L_k < y < y^H_k for all k ∈ N and lim_k y^L_k = lim_k y^H_k = y. These points are free, so (2) holds with equality on each interval [y^L_k, y^H_k]. Hence, in the limit, n(y) = 0.
Now let v, with associated λ and μ, be GHM. We show that v is optimal, arguing by contradiction. Assume v is not optimal, and let v* be a lower-cost contract satisfying (IC-FOC) and (IR)-(NG). As in the argument at the beginning of Appendix B, v* can be taken to be increasing and to satisfy (IC-FOC) exactly, and, as in the proof of Lemma 6 in Appendix D.3, v*(y̲) and v*(ȳ) can be taken to be finite.
Enumerate the closed linear segments S_1, S_2, ... of v and let S = ∪_i S_i. Let δ(y) = v*(y) − v(y), and let v̂(y; ε) = v(y) + εδ(y), so that v̂(·; 0) = v(·) and v̂(·; 1) = v*(·). Then, for each ε, v̂(·; ε) is a convex combination of the concave contracts v and v*. Hence, v̂(·; ε) satisfies (IC-FOC) and (IR)-(NG). Since u^{−1}(·) is convex, and since for each y, v̂(y; ε) is linear in ε, it follows that ∫ u^{−1}(v̂(y; ε)) f(y|a) dy is convex in ε. Thus, since v* is strictly cheaper than v, the derivative of this cost at ε = 0 is strictly negative, and so, since every point in Y \ S is a point of normal concavity (noting that we took the sets S_i to be closed, so any kink point is in S), we have
∫ n(y) δ(y) f(y|a) dy = ∫_S n(y) δ(y) f(y|a) dy < 0,
where the first equality follows by Lemma 4.
Both v and v* satisfy (IC-FOC) with equality and, hence, ∫ δ(y) f_a(y|a) dy = 0, from which it follows that ∫_{S_i} n(y) δ(y) f(y|a) dy < 0 for some linear segment S_i = [y_L, y_H]. Since v is linear on S_i and since v* is concave, δ is concave on S_i. For any given K, let Δ = (y_H − y_L)/(2K), and consider the function δ_K on [y_L, y_H] that agrees with δ on the set of points {y_L, y_L + Δ, ..., y_H} and is linear between these points. Note that δ_K is concave and continuous on [y_L, y_H], and that for each y, δ_K(y) is monotonically increasing in K with limit δ(y). Hence, we can choose K̄ large enough that ∫_{S_i} n(y) δ_K̄(y) f(y|a) dy < 0. Write δ̄ = δ_K̄ and, for k ∈ {1, ..., 2K̄}, let y_k = y_L + kΔ and let s_k be the slope of δ̄ on (y_{k−1}, y_k). Then we claim that for all y in [y_L, y_H],
δ̄(y) = δ(y_0) r_{y_0 ȳ}(y) + Σ_{k=1}^{2K̄−1} (s_k − s_{k+1}) t_{y_0 y_k}(y) + s_{2K̄} t_{y_0 y_{2K̄}}(y). (9)
To see (9), note first that for y < y_0 = y_L, both sides of the equation are 0. At y_0, each side is δ(y_0), since r_{y_0 ȳ}(y_0) = 1 and since t_{y_0 y_k}(y_0) = 0 for all k. Thus, since both sides are continuous and piecewise linear on [y_0, ȳ], it is enough that the two sides have the same derivative where defined. So fix k̄ ∈ {1, ..., 2K̄} and let y ∈ (y_{k̄−1}, y_k̄). Note that for k < k̄, t′_{y_0 y_k}(y) = 0, and for k ≥ k̄, t′_{y_0 y_k}(y) = 1. Hence, the derivative of the right-hand side is s_k̄, as desired, and so, noting that δ̄′(y) = 0 for y > y_{2K̄} = y_H, we have established (9). Since ∫ n(y) δ̄(y) f(y|a) dy < 0, we must thus have at least one of (i) δ(y_0) ∫ n(y) r_{y_0 ȳ}(y) f(y|a) dy < 0; (ii) for some k < 2K̄, (s_k − s_{k+1}) ∫ n(y) t_{y_0 y_k}(y) f(y|a) dy < 0; (iii) s_{2K̄} ∫ n(y) t_{y_0 y_{2K̄}}(y) f(y|a) dy < 0.
By Definition 2(i), and since y_0 is free, ∫ n(y) r_{y_0 ȳ}(y) f(y|a) dy = ∫_{y_0}^{ȳ} n(y) f(y|a) dy ≥ 0, and so (i) cannot hold. Since δ̄ is concave on [y_L, y_H], it follows that s_k − s_{k+1} ≥ 0, and so, since y_0 is free, it follows by Definition 2(ii) that (ii) cannot hold either. Finally, since y_0 and y_{2K̄} are both free, the integral in (iii) is, in fact, 0 by Definition 2(ii) and Definition 2(iii). We thus have the required contradiction, and v is, in fact, optimal.
B.2 Proof of Corollary 2
This result follows immediately from Proposition 3 and Lemma 4.
B.3 Proof of Proposition 4
Suppose that there exists some y_I ∈ [y̲, ȳ] such that ρ(λ + μ l(·|a)) is convex on [y̲, y_I] and concave on [y_I, ȳ], let v*(·) implement a ≥ 0 at maximum profit, and suppose v*(y̲) > u.
First, we show that v*(·) has no more than one linear segment. Since v*(·) implements a at maximum profit, it is GHM by Proposition 3. Consequently, if v*(·) had more than one linear segment, then Lemma 3 implies that n(·) must be positive, then negative, then positive over each segment. Hence, v*(·) − ρ(λ + μ l(·|a)) must be negative, then positive, then negative over each linear segment. But then ρ(λ + μ l(·|a)) would have two disjoint nonconcave regions, which is ruled out by assumption.
If y_I > y̲, then v*(·) must have a linear segment, because it cannot coincide with ρ(λ + μ l(·|a)) everywhere. We claim that this linear segment must be [y̲, ŷ] for some ŷ ≥ y_I. If the linear segment starts at some ỹ > y̲, then every y ∈ (y̲, ỹ) must be a point of normal concavity. But then v*(·) = ρ(λ + μ l(·|a)) on (y̲, ỹ), which violates (NG) because ρ(λ + μ l(·|a)) is convex on that region by assumption. Similarly, if ŷ < y_I, then every y ∈ (ŷ, y_I) must be a point of normal concavity, which again violates (NG). So v*(·) has a single linear segment [y̲, ŷ], where ŷ ≥ y_I. Since v*(·) is GHM and v*(y̲) > u, (2) holds with equality on this linear segment, and so ∫_y̲^ŷ n(y) f(y|a) dy = 0. Finally, any y ∈ (ŷ, ȳ) is again a point of normal concavity, and so v*(·) = ρ(λ + μ l(·|a)) at all such points. This proves the result.
B.4 Proof of Proposition 5
Let v(·) be an optimal incentive scheme and suppose (IR) does not bind. Toward a contradiction, suppose that v(·) is strictly concave at some y < y_0. Consider the alternative contract ṽ(·) that agrees with v(·) on [y_0, ȳ] and is linear on [y̲, y_0], connecting (y̲, v(y̲)) to (y_0, v(y_0)). Note that ṽ(·) is concave, ṽ(y) ≤ v(y) for all y ∈ Y, ṽ(y) ≥ u, and there exists an interval in [y̲, y_0] on which ṽ(y) < v(y). Therefore, ṽ(·) is strictly less expensive than v(·) to the principal. Since (IR) does not bind, there exists some α ∈ [0, 1) such that v_α(·) ≡ αv(·) + (1 − α)ṽ(·) still satisfies (IR); moreover,
∫ v_α(y) f_a(y|a) dy ≥ ∫ v(y) f_a(y|a) dy = c′(a),
where the strict version of the inequality holds for α < 1 because f_a(y|a) is negative on y ∈ [y̲, y_0] and v_α ≤ v there. Hence, v_α(·) satisfies (IC-FOC) and implements a at strictly lower cost, contradicting that v(·) is optimal.
Appendix C: Proofs for Section 6
C.1 Proof of Proposition 6
Given the definitions of ṽ(·), c̃, and π̃, the optimal a and ṽ(·) solve the analogue of the principal's program from Section 3, with π̃, ṽ, and c̃ in place of π, v, and c. As in Proposition 1, following any intermediate output x, the agent optimally chooses G_x so that E_{G_x}[ṽ(y)] = ṽ^c(x), where ṽ^c(·) is the concave closure of ṽ(·). Therefore, the principal's payoff following x equals E_{G_x}[π̃(y) − ṽ(y)] ≤ π̃(x) − ṽ^c(x). Since π̃(·) is strictly concave, this inequality holds with equality only if G_x is degenerate. Consequently, we can restrict attention to contracts for which ṽ(·) is concave, and, hence, for every x, the agent will optimally choose G_x(y) = I_{{y ≥ x}}.
C.2 Proof of Proposition 7
Since s(·) ≥ −M, V_s(x) = ∫ s(y) f(y|x) dy ≥ −M, and so V^c_s(·) ≥ −M. Consider relaxing (6) so that the principal can choose any V_s(·) that is concave and satisfies V_s(·) ≥ −M. In this relaxed problem, the principal chooses a and V_s(·) directly, subject to these two constraints and the agent's incentive and participation constraints. Suppose (a*, V_s(·)) is optimal in this relaxed program. Note that s^L_{a*}(·) is feasible in this relaxed problem, so V_s(a*) ≤ s^L_{a*}(a*). Suppose s^L_{a*}(·) is not optimal, so V_s(a*) < s^L_{a*}(a*). Then s^L_{a*}(a*) − c(a*) > u_0, and so s^L_{a*}(y̲) = −M. Define s^L(·) as the linear function that intersects V_s(·) at y̲ and a*, so
s^L(y) = V_s(y̲) + [(V_s(a*) − V_s(y̲))/(a* − y̲)](y − y̲).
Since V_s(·) is concave, s^L(y) ≤ V_s(y) for all y ∈ [y̲, a*].
For the agent to be willing to choose a* under V_s(·), it must be that ∂^−V_s(a*) ≥ c′(a*), where ∂^−V_s(y) is the left derivative of V_s(·) at y. Since V_s is concave, the slope of s^L(·) is at least ∂^−V_s(a*) ≥ c′(a*), which is the slope of s^L_{a*}(·). Since V_s(y̲) ≥ −M, we conclude that s^L(y) ≥ s^L_{a*}(y) for all y ∈ Y. But then V_s(a*) = s^L(a*) ≥ s^L_{a*}(a*), which gives a contradiction. So (a*, s^L_{a*}(·)) is also optimal. Note that for any a* > a^FB, (a*, s^L_{a*}(·)) is strictly dominated by (a^FB, s^L_{a^FB}(·)), which generates higher total surplus and gives a (weakly) lower payment to the agent. So a* ≤ a^FB and s^L_{a*}(·) is optimal in this relaxed problem.
Finally, note that for any a ≥ 0, a linear contract s(·) induces V_s(x) = s(x) because E_{F(·|x)}[y] = x, and so the optimal linear V_s(·) in the relaxed problem can be implemented in the full problem by s^L_{a*}(·).
C.3 Proof of Proposition 8
It suffices to prove that for any total output x, the agent secures exactly s^c(x), the concave closure of s(·) evaluated at x. Pick w ≤ x ≤ z and α ∈ [0, 1] with αw + (1 − α)z = x and αs(w) + (1 − α)s(z) = s^c(x). For t ≤ α, set y_x(t) = w, with y_x(t) = z for t > α. This function y_x guarantees that the agent earns s^c(x). Conversely, s(y_x(t)) ≤ s^c(y_x(t)) for all y_x(t). Since s^c is weakly concave and ∫_0^1 y_x(t) dt = x, we conclude that ∫_0^1 s(y_x(t)) dt ≤ ∫_0^1 s^c(y_x(t)) dt ≤ s^c(x). So the agent earns (and the principal pays) s^c(x) following intermediate output x, which proves the claim.
Appendix D: Additional results
The first part of this section proves existence and some properties of the optimal contract for the case of a finite limited liability constraint. The second part gives sufficient conditions on ρ and l for Proposition 4. The final part proves a result about how the optimal contract varies in u that we use in Appendix B.
D.1 Proof of existence, uniqueness, and continuity for u finite
Proposition 9. Let U and Π be the sets of increasing concave utility functions for the agent and the principal satisfying our assumptions, and let V be the set of concave (but not necessarily increasing) functions from [y̲, ȳ] to R, where each of U, Π, and V has the topology of almost-everywhere pointwise convergence. Fix a. Then (i) for each z = (M, u_0, π, u), there exists an optimal contract v that implements a given z, and (ii) at any point z where at least one of π or u is strictly concave, the optimal contract implementing a is unique and continuous in z.
Proof. The proof relies on Berge's theorem. Fix a. For any given z = (M, u_0, u, π), let v^L(·|z) be given by v^L(y|z) = c′(a)(y − y̲) + β, where β = min{u(−M), c(a) + u_0 − c′(a)(a − y̲)}; this is the maximum-profit linear (in utils) contract that implements a. In particular, v^L(·|z) satisfies (IC) since, under our assumptions, the agent's utility from income given v^L(·|z) is linear in effort while −c(·) is concave, and so the first-order condition implies (IC).
Let B : R × R × Π × U ⇉ V be the correspondence that, for each M ∈ R, u_0 ∈ R, π ∈ Π, and u ∈ U, gives the set of contracts v satisfying the constraint list (11), where the second through fifth constraints are simply the translations of (IC)-(NG) when z is a parameter, and the first constraint restricts attention to contracts that come within 1 util for the principal of v^L(·|z). Since v^L(·|z) ∈ B(z), this constraint is innocuous, and it also follows that B is non-empty-valued. For any given v ∈ V, define v_max = max_{y∈[y̲,ȳ]} v(y). We begin by proving the following statement.
(*) For each compact subset Z ⊆ R × R × Π × U, there is ū such that v_max ≤ ū for all z ∈ Z and v ∈ B(z).
To see (*), begin by noting that v^L(·|·) is continuous on the compact set [y̲, ȳ] × Z, and so −∞ < m ≡ min_{[y̲,ȳ]×Z} π(y − u^{−1}(v^L(y|z))). Using that Z is compact, let u* < ∞ satisfy, for all z ∈ Z, π(y − u^{−1}(u*)) ≤ m − 2, so that any time the principal gives the agent utility u* or above, the principal is at least 2 utils worse off than under v^L(·|z).
Fix z ∈ Z and v ∈ B(z). Choose y_max so that v(y_max) = v_max. Let u_min = min_{z∈Z} u(−M), and define v̄ as the function that equals u_min at y̲ and ȳ, equals v_max at y_max, and is linear to the left and right of y_max. Note that
E_{F(·|a)}[π(y − u^{−1}(v̄(y)))] ≥ E_{F(·|a)}[π(y − u^{−1}(v^L(y|z)))] − 1, (12)
using that the concave function v is everywhere at or above v̄ and the first constraint in (11). We show that (12) implies a uniform bound on v_max. Intuitively, when v_max is large, the piecewise linear function v̄(y) is above u* for nearly all of [y̲, ȳ], implying losses compared to v^L(·|z) that contradict (12).
A uniform bound on v_max is, of course, trivial for v such that v_max ≤ u*. So assume v_max > u*. Let y_L ∈ [y̲, y_max) solve v̄(y_L) = u*, where if y_max = y̲, we let y_L = y̲, and, similarly, define y_H ∈ (y_max, ȳ] by v̄(y_H) = u*, where if y_max = ȳ, y_H = ȳ.
Since v̄(·) is concave, v̄(y) ≥ u* for all y ∈ [y_L, y_H] and, hence,
π(y − u^{−1}(v̄(y))) − π(y − u^{−1}(v^L(y|z))) ≤ −2 on [y_L, y_H],
while for any y,
π(y − u^{−1}(v̄(y))) − π(y − u^{−1}(v^L(y|z))) ≤ b, where b ≡ π(ȳ + max_{z∈Z} M) − m.
So from (12) we must have
(F(y_H|a) − F(y_L|a))(−2) + (1 − F(y_H|a) + F(y_L|a)) b ≥ −1
or, equivalently,
F(y_H|a) − F(y_L|a) ≤ (b + 1)/(b + 2), (13)
where the right-hand side is strictly less than 1 because ∞ > b > 0. But if y_L ≠ y̲, then
y_L = y̲ + [(u* − u_min)/(v_max − u_min)](y_max − y̲) ≤ y̲ + [(u* − u_min)/(v_max − u_min)](ȳ − y̲),
and so as v_max → ∞, y_L → y̲. Similarly, if y_H ≠ ȳ, then y_H → ȳ as v_max → ∞. But then by (13), v_max is bounded, establishing (*). From (*) and the dominated convergence theorem, each expectation in (11) is continuous in z, and, hence, noting that each of (IC) and (NG) can be expressed as a collection of weak inequalities, B(·) is upper hemicontinuous.
Fix z and let {v_k} be a sequence in B(z). Since each v_k is concave and thus has variation at most 2(ū − u(−M)), it follows from Helly's selection theorem that {v_k} has a convergent subsequence. Thus, B is compact-valued and, from Berge's theorem, the set of maximizers of E_{F(·|a)}[π(y − u^{−1}(v(y)))] on B(·) is non-empty and upper hemicontinuous.
Finally, consider any z where at least one of π and u is strictly concave. Then if v_1, v_2 ∈ B(z) differ, it is direct that (v_1 + v_2)/2 ∈ B(z) is strictly more profitable than either v_1 or v_2. Thus, the maximum is unique and, hence, continuous in z.
D.2 Mild sufficient conditions for Proposition 4
This appendix gives sufficient conditions under which ρ(λ + μ l(·|a)) is first convex and then concave. We show that this case obtains if con(ρ′) + con(l_y) > −1, where, for an interval X ⊆ R and analytic function h : X → R_+, con(h) = inf_X {1 − (h h″)/(h′)²}. For any analytic function q with domain a subset of the reals, let q^(k) be the kth derivative of q.
Using this lemma, we can prove the following claim, from which our sufficient condition is immediate.
Claim 2. Let g and h be strictly positive analytic functions with con(g′) + con(h′) > −1, and with g′ and h′ everywhere strictly positive. Then g(h(·)) is never first strictly concave and then weakly convex.
Proof. Let θ(·) ≡ (g(h(·)))″ = g″(h)(h′)² + g′(h)h″. (14)
If both g and h are linear, then θ ≡ 0 and we are done. Assume g and h are not both linear, and consider any point ŷ at which θ = 0. We show that immediately to the right of ŷ, θ < 0. This rules out that θ is ever first strictly negative and then weakly positive over any interval of nonzero length. To see this, note that
θ′ = g‴(h)(h′)³ + 3g″(h)h′h″ + g′(h)h‴.
Consider any point ŷ at which θ = 0. Consider first the case that g″(h(ŷ))h″(ŷ) ≠ 0. Then, since g′ > 0, it follows by (14) that g″(h(ŷ)) and h″(ŷ) have opposite signs. Hence, g″(h(ŷ))h′(ŷ)h″(ŷ) < 0 and so, evaluated at ŷ, θ′ < 0, where one substitutes for (h′)² in the first term using (14) and that θ(ŷ) = 0, and similarly for g′ in the third term; the hypothesis con(g′) + con(h′) > −1 then delivers the strict inequality. Hence, θ is negative on an interval to the right of ŷ. Assume instead that g″(h(ŷ))h″(ŷ) = 0, where, since θ(ŷ) = 0, it follows that g″(h(ŷ)) = h″(ŷ) = 0. Thus, since con(g′) > −∞, it follows from Lemma 5 applied to q = g′ that the first nonzero derivative of g″ at this point is strictly negative, and similarly for h″. But then the first nonzero derivative of θ will be of the form g^(k)(h′)^k + g′h^(k), with k ≥ 3, and at least one term strictly negative, and so, taking a Taylor expansion, θ is strictly negative on an interval to the right of ŷ, and we are done.
D.3 Stability of optimal contract as u decreases
This appendix shows that if v_u(·) is an optimal contract for some limited liability constraint u and v_u(y̲) > u, then v_u(·) remains optimal in the problem with any less binding limited liability constraint u′ < u, including u′ = −∞.
Lemma 6. Assume that for some u > −∞, v_u(y̲) > u. Let u′ < u. Then v_{u′} = v_u.
Proof. Assume v_u satisfies v_u(y̲) > u, but that when the limited liability constraint is some u′ < u, there exists a superior concave contract v̂ that implements a. We show that this leads to a contradiction.
Assume first that v̂(y̲) > −∞ (as is automatic if u′ is finite). Then, for small enough ε, the contract (1 − ε)v_u(·) + εv̂(·) is both strictly cheaper than v_u (since u is strictly concave) and implements a subject to the limited liability constraint u, yielding the desired contradiction.
Assume instead that v̂(y̲) = −∞. Begin by picking any point x > y̲ where x ∈ C_v̂ (since v̂(y̲) = −∞, such points exist) and construct ṽ by applying a sufficiently small positive amount of t_{x ȳ} such that ṽ remains strictly cheaper than v_u. Since this adds a positive increasing function to v̂, both (IC-FOC) and (IR) are strictly slack at ṽ.
For each y ∈ [y̲, ȳ], let h_y(·) be a supporting plane to ṽ at y. Let the concave contract v_y(·) be given by v_y(x) = ṽ(x) for x > y and by v_y(x) = h_y(x) for x ≤ y. For each x, v_y(x) is weakly decreasing in y, with lim_{y→y̲} v_y(x) = ṽ(x). Thus, by the monotone convergence theorem, as y → y̲, ∫ v_y(x) f(x|a) dx → ∫ ṽ(x) f(x|a) dx, ∫ v_y(x) f_a(x|a) dx → ∫ ṽ(x) f_a(x|a) dx, and ∫ u^{−1}(v_y(x)) f(x|a) dx → ∫ u^{−1}(ṽ(x)) f(x|a) dx. Hence, for y close enough to y̲, v_y implements a and is cheaper than v_u. For any such y, v_y(y̲) is finite and we are back to the previous case.
"year": 2020,
"sha1": "6ace61280f4ffeaa4e93ebdbcb2cc8d46d9beb6a",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3982/te3660",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8f769f29ede75298afaff3fa30c4d0b1adc318f4",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Light acclimation and pH perturbations affect photosynthetic performance in Chlorella mass culture
Chlorella spp. are robust chlorophyte microalgal species frequently used in mass culture. The pH optimum for growth is close to neutrality; at this pH, theoretically little energy is required to maintain homeostasis. In the present study, we grew Chlorella fusca cells in an open, outdoor, thin-layer cascade photobioreactor (TLC), under ambient photon flux at the theoretically preferred pH (7.2), and let the culture pass the exponential growth phase. Using pH drift experiments, we show that an alkalization to pH 9 supported photosynthesis in the TLC. The increased photosynthetic activity under alkaline conditions was a pH-dependent effect, and not a dissolved inorganic carbon (DIC) concentration- or light intensity-dependent effect. Re-acidification (in one step or in increments) lowered gross oxygen production and increased non-photochemical quenching in short-term experiments. Gross oxygen production and electron transport rates in PSII were uncoupled during the pH perturbation experiments. Electron transport rates were only marginally affected by pH, whereas oxygen production rates decreased with acidification. Alternative electron pathways, electron donation at the plastid terminal oxidase, and state transitions are discussed as potential explanations. Because cell material from the TLC was not operating at maximal capacity, we propose that alkalization can support photosynthesis in challenged TLC systems.
Senescence and mass culture
Microalgae mass culture generally aims for high biomass yields, often associated with intended nutrient starvation to increase lipid contents (Jacobsen et al. 2010, Liu et al. 2013). Nitrogen starvation lowers protein contents and RuBisCO levels and activates biochemical responses that lead to an accumulation of lipid bodies by a process that is not entirely understood. Algae mass-culture conditions are frequently less controlled: the exponential growth phase might have passed by the time of cell harvest, and cells may have experienced senescence. Under these conditions, cells can exhibit acclimation properties that are comparable to those under nutrient starvation. Lower photosynthetic performance is one result of senescence (Humby et al. 2013). When cells enter the stationary growth phase, chloroplast structures become disorganized; LHCII, D1, PsaA, Cyt f, and RuBisCO levels decline; and photosynthesis is down-regulated while non-photochemical quenching (NPQ) remains stable compared to the exponential growth phase (Humby et al. 2013). These substantial changes within the chloroplast might affect the cell's response to external factors such as pH and light intensity. Gardner et al. (2011) showed that a Chlorella strain had elevated lipid contents and displayed morphological changes at high pH, which indicates that the elevated pH stressed the cells.
Homeostasis: effect of external pH manipulation
The external pH under which the cell is capable of operating at maximal capacity is species-specific but usually ranges between pH 6 and 8.5 (Taraldsvik & Mykleastad 2000, Hinga 2002, Lundholm et al. 2004, Liu et al. 2007, Middelboe & Hansen 2007). Although Chlorella spp. prefer a pH between 6 and 6.5 (Myers 1953), Chlorella vulgaris can grow effectively at an alkaline pH of 10.5 (Goldman et al. 1982). Chlorella saccharophila grows optimally at pH 7.0 but can maintain homeostasis when the external pH is as low as 5.0 (internal pH 7.3) (Gehl & Colman 1985). This strain can successfully grow at pH 2.5 (Beardall 1981), which shows its immense internal pH regulation capacity. Internal pH is generally maintained within narrow limits above ~pH 7 in Chlorella (Smith & Raven 1979, Beardall & Raven 1981, Sianoudis et al. 1987) and other taxa (Dixon et al. 1989, Kurkdjian & Guern 1989). Homeostasis is maintained by 2 main processes: proton binding, and H+ translocation between cell organelles and the external medium by active, energy-demanding, or passive ion exchange mechanisms (Smith & Raven 1979). The energy demand for active proton translocation is estimated to be 1 ATP per H+ transported (Briskin & Hanson 1992). The abundance of H+-ATPase complexes can be controlled by the cell via pH sensing (Weiss & Pick 1996). However, Nielsen et al. (2007) showed that light intensity (and hence energy supply) did not correlate with cell growth under extreme pH conditions, which indicates that elevated light input does not necessarily enhance the cell's capacity to regulate internal pH. In short-term pH manipulations, H+ pumping is important (Bethmann & Schönknecht 2009); however, under longer-term (growth) conditions, anion exchange might be the predominant proton translocation process (Lew 2010). It is not clear how internal pH is regulated in cells that have passed their exponential growth phase.
Carbon acquisition and pH
Changes in the medium's pH or supplementation of CO2 affect the bicarbonate equilibrium. Introducing CO2 into the medium lowers the pH and shifts the bicarbonate equilibrium towards higher concentrations of CO2 and bicarbonate at the expense of carbonate. If the pH is increased at a constant pCO2, the dissolved inorganic carbon (DIC) concentration increases, while the CO2(aq):HCO3− ratio decreases. The form of DIC is relevant for the cellular DIC acquisition system, while the concentration of DIC matters when substantial rates of photosynthesis must be sustained. Most aquatic primary producers possess means of elevating the CO2 concentration at RuBisCO, which suppresses RuBisCO's oxygenase function and provides sufficient CO2 even when a high photon flux (PF) allows high CO2 fixation rates in the Calvin-Benson-Bassham cycle. Most algae can regulate these CO2-concentrating mechanisms (CCMs) effectively (Giordano et al. 2005). Chlorella spp. possess effective CCMs, acquire both CO2 and/or HCO3− actively (Shelp & Canvin 1980, Beardall 1981, Beardall & Raven 1981), and are able to raise the pH in a closed vessel to pH 11 (Myers 1953), which shows that the cells are capable of carrying out effective photosynthesis. The interlinked behavior of pH and CO2 concentration makes it difficult to separate the two effects.
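To illustrate how pH shifts the DIC speciation described above, here is a minimal sketch of ours (the equilibrium constants are rounded textbook freshwater values at 25°C, assumed for illustration and not taken from this study):

```python
import numpy as np

# Rounded freshwater dissociation constants at 25 degrees C (assumed values)
K1 = 10**-6.35   # CO2(aq) + H2O <-> H+ + HCO3-
K2 = 10**-10.33  # HCO3-        <-> H+ + CO3 2-

def dic_speciation(pH, dic=2.0e-3):
    """Concentrations of CO2(aq), HCO3- and CO3 2- for a total DIC (mol/l)."""
    h = 10.0 ** (-pH)
    denom = h * h + h * K1 + K1 * K2
    return (dic * h * h / denom,       # CO2(aq)
            dic * h * K1 / denom,      # HCO3-
            dic * K1 * K2 / denom)     # CO3 2-

for pH in (6.5, 7.2, 9.0):
    co2, hco3, co3 = dic_speciation(pH)
    print(f"pH {pH}: CO2 {co2:.2e}, HCO3- {hco3:.2e}, CO3 2- {co3:.2e} mol/l")
```

At pH 9, almost all DIC is bicarbonate, so cells relying on CO2(aq) alone would face a far lower substrate concentration at the same total DIC.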
In the present study, we tested Chlorella fusca cells grown in an open, outdoor, thin-layer cascade photobioreactor (TLC) under post-exponential growth phase conditions. We investigated whether cells are susceptible to high PF, are limited by external DIC concentrations, or are affected by changes in the medium pH. We perturbed pH in short-term (i.e. minutes) and long-term (i.e. hours: pH drift) experiments to test the cells' capacity to regulate pH-related photosynthesis. The results show that cells performed better under alkaline conditions and were stressed by acidification of the medium.
Organism, growth conditions and growth history
The chlorophyte microalga Chlorella fusca (Culture Collection of Marine Microalgae, ICMAN-CSIC, Cádiz, Spain) was grown in an outdoor thin-layer cascade photobioreactor (TLC3; the TLC number is given to enable comparison with other studies using the same experimental setup) in September 2012 in southern Spain (Málaga), using Bold's Basal Medium modified with 3-fold nitrate content plus the addition of vitamins (3N-BBM-V) (Andersen et al. 2005). The cell suspension was held in an open tank (~120 l), pumped to the top of a flat panel slide (4 m², 2% inclination), and allowed to flow back to the tank by gravity (surface to volume ratio 27 m⁻¹, 145 l working volume). The cells received full sunlight on the cascade (for a duration of ~11 s) and were exposed to low light conditions in the tank. Completion of a single cycle took approximately 70 s. Macronutrients (NaNO3, MgSO4 and KH2PO4) were added once a day using common farming fertilisers (Welgro Hydroponic, Comercial Química Massó); nutrient concentrations were ≥540 mg l⁻¹ (PO4⁻ and SO4) and ≥650 mg l⁻¹ (NO3⁻; R. Abdala pers. obs.). The TLC was aerated; pure CO2 gas was injected into the air stream (final 1%) and the pH was maintained between 7.2 and 7.6. pH drift experiments showed very limited capacity of the cells to increase the pH by means of photosynthesis (see Fig. 1A), even when DIC was added in the form of NaHCO3 (see Fig. A1A in the Appendix) or when the culture was diluted with fresh medium (Fig. A1C). We therefore tested the photophysiological fitness in a light acclimation experiment (see Figs. 2 & 3), which showed that photosynthetic capacity was not severely impaired. However, increasing the pH by substantial addition of NaHCO3 (20 mM) in the TLC increased pH drift (see Fig. 1). pH perturbation experiments were carried out after pH adjustments had been performed in the TLC. Cell numbers decreased for 2 d prior to the experiments (see Figs. 4-7) (from ~1 × 10⁶ to ~4 × 10⁵ cells ml⁻¹), but remained stable thereafter (counted using a Neubauer chamber; R. Abdala pers. obs.). At that point, the TLC had been maintained for 12 d at a temperature of ~25°C. The TLC used in the present study was part of a nutrient-concentration effect study. A TLC with replete nutrient concentrations (the TLC presented in the present study) was maintained parallel to a TLC with low nitrogen (TLC-N) and another TLC with lowered sulphate concentrations (TLC-S). We expected the TLC with replete nutrients to show the highest cell numbers. However, maximal cell concentrations were lower than expected (TLC of the present study, i.e. replete nutrients, = 2 × 10⁷ cells ml⁻¹, TLC-S = 2.3 × 10⁷ cells ml⁻¹, and TLC-N = 1 × 10⁶ cells ml⁻¹; R. Abdala pers. comm.). The experiments in the present study were carried out after the maximal TLC capacity had passed, and cell suspensions experienced grazing, grazer treatment and were contaminated with Nannochloris sp. and Chlamydomonas sp. cells in variable concentrations.
More information about the hydrodynamics of the TLC used and the effects of light exposure in the TLC on photosynthetic performance is given in Jerez (2014, this Theme Section). Further information on photoacclimation in the TLC will be published elsewhere (J.C. Kromkamp et al. unpubl.).
pH drift experiments
Samples were withdrawn from the TLC in the morning, transferred to 20 ml glass scintillation vials, and either continuously exposed to ambient sunlight or shaded by neutral density filters. pH was measured several times a day (Basic 20, Crison Instruments), with the meter calibrated against standard laboratory solutions. Measurement duration was standardized to 1 min vial⁻¹ to avoid erratic gas exchange among replicates. Temperatures were in the range of 25 to 28°C, maintained by placing vials in a non-controlled tub water bath and regularly exchanging the cooling fluid.
Light acclimation experiment
Cell suspensions for continuous light exposure experiments were withdrawn from the TLC in the evening. 250 ml were filled into 500 ml conical glass flasks, which were kept in the dark overnight and then exposed to ambient light conditions (HL, high light) the following day while aerated with ambient air (~1 l min⁻¹). Shading was achieved with neutral density filters to 70% (ML, medium light) or 30% (LL, low light) of the ambient PF. Temperature was maintained at approximately 25 ± 3°C by placing flasks in a styrofoam tub filled to one-third with water (which was replaced when needed). At given times, 2 ml samples were withdrawn, immediately placed in an AquaPen fluorometer (AquaPen, P.S.I.), and fast fluorescence induction curves were measured. Maximum and effective quantum yields of PSII (Fv/Fm and Fv'/Fm') and Vj (accumulation of the 'primary' acceptor of PSII, QA−) were computed by the software provided with the instrument. Initial samples were dark-incubated overnight. Other samples were measured in the absence of actinic light, which allows QA− to oxidize. However, because samples were neither dark-acclimated prior to measurements nor exposed to ambient actinic light, residual NPQ can lower the fluorescence signal and decrease variable fluorescence. This results in F0' during the O phase of the fast fluorescence induction curve, rather than F' or F0 (i.e. quantum yields are Fv'/Fm' rather than ΔF/Fm') (van Kooten & Snel 1990, Kromkamp & Forster 2003).
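For reference, the standard fluorescence ratios used here follow the textbook definitions (these are general conventions, not formulas specific to this study):

```latex
F_v/F_m = \frac{F_m - F_0}{F_m}, \qquad
F_v'/F_m' = \frac{F_m' - F_0'}{F_m'}, \qquad
\Delta F/F_m' = \frac{F_m' - F'}{F_m'},
```

where F0 and Fm are the minimum and maximum fluorescence of a dark-acclimated sample, and the primed quantities are the light-acclimated (or residual-NPQ) analogues.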
Fast fluorescence induction curves
Fast fluorescence induction curves represent the consecutive reduction of electron carriers in PSII and the plastoquinone (PQ) pool within about 1 s of illumination with saturating light. After the O phase (first data point at t = 50 µs), QA is reduced until the J phase is reached (100% QA−, 0% oxidized QA; 2 ms), while the consecutive reduction of QB and formation of plastoquinol (PQ pool reduction) are associated with the inflection (I phase) and with the maximal fluorescence signal (P phase at t = 1 s), at which the electron transport chain from the PSII reaction centre to the PQ pool is fully reduced (Tomek et al. 2001, Zhu et al. 2005). This concept is commonly accepted; however, alternative interpretations of fast fluorescence induction curves have frequently been presented because of the high number of fluorescence quenchers in PSII and the photosynthetic unit (Strasser et al. 1995, Bukhov et al. 2004, Stirbet & Govindjee 2012). The reduction state of the PQ pool, for instance, was not a determinant factor for maximal fluorescence signals during P, but the reduced PQ pool has been shown to affect the J phase (Tóth et al. 2005, 2007). Nevertheless, we will interpret the data of this study following Zhu et al. (2005), where the signals show the accumulation of the following electron carriers: at J, QA is fully reduced; at P, the PQ pool is fully reduced. Vj was calculated as (J − O) / (P − O), with J phase = F2ms (where F = fluorescence), O = F50µs, and P = F1s. 1 − Vj is indicative of the electron transport capacity past QA; low values represent a lower probability that electrons are efficiently transported. Note that P = Fm (or Fm') in multiple-turnover saturation pulse fluorometers (e.g. pulse-amplitude modulation, PAM), while J is equivalent to maximal fluorescence in a single-turnover fluorometer (e.g. fast repetition rate fluorometer, FRRf).
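A minimal sketch of ours (variable names and the synthetic trace are illustrative, not instrument output) of how Vj and 1 − Vj would be computed from a logged OJIP trace:

```python
import numpy as np

def vj_from_ojip(t_us, f):
    """Vj = (F_2ms - F_50us) / (F_1s - F_50us) from an OJIP fluorescence trace.

    t_us: sample times in microseconds; f: fluorescence signal.
    """
    O = np.interp(50, t_us, f)           # O phase, 50 us
    J = np.interp(2_000, t_us, f)        # J phase, 2 ms
    P = np.interp(1_000_000, t_us, f)    # P phase, 1 s
    vj = (J - O) / (P - O)
    return vj, 1.0 - vj                  # 1 - Vj: transport capacity past QA

# Synthetic OJIP-like rise for illustration only
t = np.logspace(1.7, 6, 200)             # ~50 us .. 1 s
f = 500 + 1500 * (1 - np.exp(-t / 3e3))
print(vj_from_ojip(t, f))
```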
pH perturbations: simultaneous PAM, O2 and pH measurements
Cell suspensions were withdrawn from the TLC and placed in a 20 ml glass scintillation vial. The lid was modified to allow submersion of a standard pH electrode (Basic 20, Crison Instruments) and an oxygen optode (Microx TX3, PreSens) into the cell suspension, while minimizing the contact surface of the cell suspension with ambient air. The optical fibre of a Mini-PAM (Walz) was placed against the top one-third of the vial and orientated perpendicular to the actinic (white LED) light source, which provided a constant PF of 140 µmol photons m⁻² s⁻¹. Higher actinic PF was achieved by using the internal light source of the fluorometer, which exposed cells to 400 µmol photons m⁻² s⁻¹ when switched on. However, cells moved into and out of the additional light field, since only part of the scintillation vial was exposed to the additional PF. This can limit comparison with the FRRf experiment, where the entire volume of the cell suspension was exposed to the PF conditions. Cell suspensions were stirred continuously. After a ~4 min acclimation phase, additions of known amounts of HCl perturbed the pH, which was recorded manually every 30 s. To test for DIC limitation, 300 µl of NaHCO3 (1 mM final concentration) were injected at the times indicated. Experiments were conducted from morning until noon, before high ambient light could cause inhibition.
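For context, PAM yields are commonly converted to a relative electron transport rate; a minimal sketch of ours (the 0.5 PSII partitioning factor is the conventional assumption, not a value measured in this setup):

```python
def relative_etr(delta_f_over_fm_prime, par, fraction_psii=0.5):
    """rETR = effective PSII quantum yield x incident PAR x fraction to PSII.

    par in umol photons m-2 s-1; fraction_psii assumes half the absorbed
    quanta drive PSII (standard convention, assumed here).
    """
    return delta_f_over_fm_prime * par * fraction_psii

print(relative_etr(0.45, 140))  # e.g. 31.5 (relative units)
```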
pH perturbations: fast repetition rate fluorescence measurements
FRRf fluorometers use a different excitation and measurement protocol than PAM instruments. While PAM measures Fm or Fm' during a multiple-turnover (of PSII) saturation pulse (approx. 800 ms duration), FRRf uses a sequence of subsaturating flashlets to consecutively reduce PSII within a single turnover. The FRR fluorometer used (Fast-Tracka Mark II, Chelsea Technology Group) was equipped with a bench-top illumination extension (FastAct, Chelsea Technology Group) and was set to apply a sequence of 100 flashlets, each 1 µs long and spaced 50 µs apart. Data from 12 such sequences were averaged by the standard FRRf software (Fast-Pro), producing quantum yields (Fv/Fm, ΔF/Fm') of a single turnover in PSII, i.e. all QA reduced and the PQ pool unaffected. An iterative curve fit of a single excitation flashlet sequence provides measurements of the functional absorption cross-section of PSII (in nm²). A sequence of single measuring flashlets after the single-turnover excitation protocol shows QA− re-oxidation kinetics (τPSII), i.e. the electron transport capacity from a fully reduced QA. Further information about FRRf fluorometry can be found in Kolber & Falkowski (1992, 1993) and Kolber et al. (1998).
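A minimal sketch of ours for extracting a re-oxidation time constant τ from the post-flash fluorescence relaxation (synthetic data; a single-exponential model is only one common choice, and the instrument software may fit differently):

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, f_inf, amp, tau):
    """Single-exponential fluorescence decay after a single-turnover flash."""
    return f_inf + amp * np.exp(-t / tau)

# Synthetic relaxation trace (times in microseconds)
t = np.linspace(0, 5_000, 120)
noisy = relaxation(t, 1.0, 0.8, 600.0) + np.random.normal(0, 0.01, t.size)

popt, _ = curve_fit(relaxation, t, noisy, p0=(1.0, 0.5, 300.0))
print(f"fitted tau_QA ~ {popt[2]:.0f} us")  # near 600 us for this example
```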
A total of 3 ml of microalgal culture was withdrawn from the TLC on the same day as the PAM pH perturbation experiments were carried out and immediately placed in the FRR fluorometer. An internal LED provided actinic PF of 3, 140, 400, 140 and 0 µmol photons m⁻² s⁻¹, in this sequence, for the times indicated. Acidification 5 min after the start of the protocol was achieved by addition of 8.6 µl of 0.5 M HCl. After 7 min, the pH was brought back up by injection of 51 µl of 0.1 M NaOH. Samples were homogenized by aeration between each saturation flashlet application, which enables gas exchange between the cell suspension and the air and complicates data interpretation regarding the bicarbonate system and pH.
RESULTS

pH drift experiments
pH drift was monitored daily. Until the pH was increased in the TLC, incubations could elevate the pH by a maximum of 0.5 pH units (examples are given in Fig. A1 in the Appendix). To test if cells were DIC limited, we added small amounts of DIC (400 and 800 mM), which increased the initial value (pH 7.11) only marginally (by 0.06 and 0.23 pH units, respectively). Incubations with increased DIC concentrations showed similar pH rise kinetics compared to the control (Fig. A1A,B). pH drift experiments where fresh medium was added to cell suspensions from the TLC (1:1 by volume) did not stimulate pH drift compared to the control, and neither did CO2 bubbling performed before the experiment (Fig. A1C). This indicates that cells were not nutrient starved or depleted. When the pH in the TLC was adjusted (from pH 7 to 9), pH drift increased (initial pH 8.47, final pH 9.57; Fig. 1). No pH increase was recorded in suspensions collected before the DIC addition. A stronger pH drift was observed on the following day (initial pH 8.65, final pH 11.5 ± 0.06; Fig. 1B). A similar effect was previously observed in another TLC (TLC2) (S. Ihnken pers. obs., J. C. Kromkamp et al. unpubl.), where the pH was increased from pH 7 to 8.5 and drift experiments showed an increased pH rise (initial pH 8.5, final pH 10.5).
Fig. 1B shows that shielding the vials at 70% of ambient PF (i.e. ML) resulted in the highest final pH. Higher and lower PF levels showed a similar pH drift capacity, with some variation due to photoinhibition (HL) or light limitation (LL). Control flasks, which experienced the same treatment but remained closed for the entire period, showed similar final pH values compared to vials that were used for pH measurements, indicating that the measurements and the accompanying brief exposure to air did not influence the results. In samples which experienced full ambient PF (HL), however, a lower final pH was found in the control vials (pH 11.09 ± 0.25 and pH 10.02 ± 0.18 for opened and continuously closed vials, respectively).
Light acclimation status
Fig. 2 shows the light acclimation capacities in a separate experiment where cells were withdrawn from the TLC and continuously exposed to ambient PF. Fv'/Fm' decreased with increasing ambient PF in all treatments. The lowest quantum yields were found in fully light-exposed samples after the highest ambient PF, while ML cells had already started to recover at this point (t = 15:00 h). HL cells recovered from low midday yields in the evening. LL cells lowered values only marginally around noon. Fv'/Fm' depressions in HL and ML samples were partly caused by impairment of electron transport capacities past QA. 1 − Vj decreased by 1 unit at the highest PF, but recovered later during the day (Fig. 2B). Low re-oxidation turnover for electron carriers past QA, as shown by 1 − Vj, was also visible in low P phase values during the fast fluorescence induction in HL samples (Fig. 3). The lowest P phase values were found in the afternoon (15:00 h, HL), but samples recovered thereafter. J phase signals were only repressed in HL samples but remained low in the evening. In LL samples, the J phase was hardly affected; however, P phase values increased over the course of the day, with higher values in the evening than in the morning.
To test if cells suffered from DIC limitation, samples were withdrawn from the experiment (Fig. 3) at 3 times (morning, noon, afternoon), exposed to an actinic PF of approximately 140 µmol photons m−2 s−1 for 2 min, spiked with DIC to a final concentration of 300 µM, and ΔF/Fm' was followed for 4 min with a Mini-PAM. ΔF/Fm' was not enhanced by DIC additions (see Fig. A2 in the Appendix).
pH perturbation: decreasing pH
Perturbation experiments were carried out with subsamples of the TLC, 2 d after the pH was increased from ~7.2 to ~9. Acidification of the medium lowered the fluorescence signal rapidly (Fig. 4A). Both F' (i.e. steady-state fluorescence under actinic light conditions) and Fm' (maximal fluorescence measured during a saturation pulse by the fluorometer) decreased within approximately 1 s to a lower state when the pH was dropped. Continuously recorded fluorescence signals indicate a recovery from acidification, as can be seen by a slow increase in F' and Fm' until the pH was raised. The decrease in Fm' was caused by a rapid increase in NPQ. However, lower pH did not affect relative electron transport rates (rETR) appreciably; values remained stable after HCl additions.
Acidification decreased gross oxygen evolution from 11.2 ± 1.74 to 3.7 ± 2.61 µM l−1 min−1 (data averaged from Figs. 4 & 5). This is surprising, as fluorescence measurements suggest a steady continuation of rETR. Acidification therefore led to an uncoupling of oxygen-based and variable fluorescence-based measurements of photosynthesis.
Supplementing suspensions with DIC (1 mM) did not increase rETR. Photosynthetic O2 production, however, appeared to be stimulated in 1 sample when the DIC amendment was performed under acidic conditions, but not in another (Fig. 4B but not C). Cells of the different TLCs were tested regularly for DIC limitation, and the photosynthetic performance remained unaffected by DIC additions at low or high growth pH (data not shown). Effective quantum yields are indicative of the efficiency of photon usage for photosynthetic electron transport. 1 − Vj represents the probability that an electron in QA− moves further to electron carriers including the plastoquinone (PQ) pool. The higher the value (effective Fv'/Fm', or 1 − Vj), the higher the photosynthetic competence of the sample.
pH perturbation: increasing pH
Increasing the pH rapidly increased F' and Fm' to initial values or higher (Fig. 4). As a result, NPQ decreased to initial levels or lower. Negative NPQ values occurred when initial Fm' was lower than Fm' after the pH treatment. Because NPQ was calculated using the first Fm' value instead of Fm, negative NPQ values can occur and indicate a lower NPQ state compared to values at the start of the experimental protocol. Photosynthesis measured by rETR was not affected by the single-step pH increase. Photosynthetic oxygen production, however, was stimulated by addition of NaOH. Initial values were not quite restored, which could have been due to high oxygen concentrations in the vials at the end of the measurements caused by a buildup of O2 by photosynthesis.
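A short sketch of the NPQ convention used here, relative to Fm'* (the first Fm' of the protocol), illustrates why negative values can arise; the trace values below are hypothetical.

```python
def npq_series(fm_prime):
    """NPQ relative to the first F_m' of the protocol (F_m'*), as in this study.

    Negative values mean F_m' exceeded its initial level, i.e. a lower
    quenching state than at the start, since true F_m (dark) was not measured.
    """
    fm_star = fm_prime[0]
    return [(fm_star - fm) / fm for fm in fm_prime]

# Hypothetical F_m' trace: quenched by acidification, overshoot after NaOH
fm_trace = [1.00, 0.70, 0.75, 1.10]
print([round(q, 2) for q in npq_series(fm_trace)])
# [0.0, 0.43, 0.33, -0.09] -> negative once F_m' rises above F_m'*
```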
Photon flux effect at low and high pH
When the actinic PF was increased 4-fold at low pH, rETR slowly increased during the entire period of elevated PF (Fig. 5). Decreasing Fm' values caused NPQ to increase until the pH was raised after 5 min. pH elevation caused a similar response under high PF conditions compared to low PF conditions (Fig. 4). PF elevation after cells were exposed to pH shifts did not induce a different response compared to PF elevation at low pH (Figs. 4 & 5). No pattern was found for oxygen production rates with regard to PF.
Gradual pH amendments
When the pH was incrementally lowered, continuous acclimation of the recorded fluorescence and NPQ was visible (Fig. 6), in contrast to single-step pH perturbations, where very little acclimation was visible. Acclimation to pH increments was visible at pH > 7.0, where alkalization increased the fluorescence signal, decreased NPQ and elevated oxygen production, with highest values between pH 7 and 8. A gradual increase of pH mirrored the responses to decreasing pH. Clearly, F' decreased with acidification and increased with alkalization.
High time resolution measurements by FRRf
Fig. 7 shows variable fluorescence responses to pH perturbations at different PF levels. After a 3 min relaxation phase in very low light, the low actinic light (140 µmol photons m−2 s−1) was switched on and led to a temporary increase in F' and Fm', followed by a drop and a quasi-steady state just before HCl was added (Fig. 7, dashed line), which decreased F' and Fm' further. NPQ was activated when the actinic light was switched on and reached maximum values briefly after acidification. rETR, however, was only marginally disturbed, which corroborates the PAM measurements. The cells started to recover from acid addition within approximately 40 s, as shown by NPQ down-regulation. Increasing the actinic light intensity to 400 µmol photons m−2 s−1 (Fig. 7, light grey bars) perturbed the NPQ down-regulation only briefly (~1 min). rETR clearly increased under additional light, but was down-regulated upon addition of NaOH and the consequent rise of the pH. This is in contrast to PAM measurements, where rETR remained stable upon alkalization at high PF (Fig. 5). Acidification and alkalization appear to induce 2 phases, one immediately after the pH adjustment, for a duration of approximately 1 min. In the secondary phase, the parameters F', Fm' and NPQ developed in the opposite direction until the conditions were altered following the measurement protocol. When cells were transferred to lower actinic PF, the effective quantum yields increased, mainly due to a relaxation of NPQ, as F' increased only slightly. In a brief dark period, F' decreased to initial values and Fm' responded marginally, suggesting that photoprotective mechanisms were not active in the last light phase of the experimental protocol. Photoinhibition was absent, as shown by Fv/Fm values which were not repressed by the experimental treatment (0.37 ± 0.08 and 0.37 ± 0.07 for the first and last quantum yield, respectively). While acidification lowered QA− re-oxidation kinetics (τPSII) only slightly, addition of NaOH increased τPSII almost 2-fold (Fig. 7C), which suggests a QA− re-oxidation acceleration. The PSII functional cross-section (σPSII) decreased when the actinic PF of 140 µmol photons m−2 s−1 was switched on, due to an NPQ increase. Acidification caused a slow rise in σ'PSII until initial values were restored. The increase of the PF from 140 to 400 µmol photons m−2 s−1 only caused a minor dip in this 'recovery process'. Interestingly, addition of NaOH caused a slow decrease in σ'PSII. Note that the changes in σ'PSII are generally in the opposite direction to NPQ, as might be expected, but that the kinetics differ.
Light intensity stress
Chlorella spp. are known to grow well, even under challenging conditions. In the TLC used in the present study, the cells were exposed to full sunlight for some tens of seconds and then remained for some time in low light (Jerez et al. 2014). This light treatment challenges the photosynthetic apparatus of the cells and requires effective regulation of photosynthesis and photoprotection. Although diatoms perform better than green algae under fluctuating light (Wagner et al. 2006), chlorophyta convert photon energy efficiently to biomass (Wilhelm & Jakob 2011) and are successfully used for mass culture. To test the strain used for high-light resistance, we exposed cells continuously to full-strength sunlight for an entire day (≤1800 µmol photons m−2 s−1, daily irradiance dose ~150 kW m−2). Cells were able to cope with continuous full-strength ambient light conditions, as shown by fast fluorescence induction curves. Quantum efficiencies and QB reduction kinetics were lower in high light, but could recover in the afternoon. Frequently, P phase values were lower than J, which could be caused by a selective operation of the active fluorescence quenchers, or indicate photodamage in PSII. Photodamage impairs QA− reduction kinetics and might lower the degree of QB and PQ pool reduction capacities, potentially due to the proximity of QB to the susceptible D1 protein (Aro et al. 1993, Jansen et al. 1999). However, the full photosynthetic potential could be restored at lower PF in the late afternoon and early evening. Samples were dark-acclimated only very briefly (≤2 s), which will oxidize primary electron acceptors in PSII (QA−), but not secondary electron acceptors or the PQ pool (Strasser et al. 1995), nor relax photoprotective NPQ. Although the rapid phase of energy-dependent quenching (qE) might relax due to a dissipation of the ΔpH gradient within seconds, xanthophyll cycle-mediated qE requires minutes, as do state-transitions (Müller et al. 2001, Ihnken et al. 2011a). Residual NPQ in illuminated samples can explain the obvious difference between initial samples, which were dark-acclimated overnight, and samples taken at low PF. Unexpected, however, was that even LL samples showed similar Fv'/Fm' values compared to ML and HL samples in the morning, where the PF at the vessel surface in LL treatments was ≥200 µmol photons m−2 s−1. In situ rapid light curves showed light saturation at approximately 500 µmol photons m−2 s−1 (J. C. Kromkamp pers. comm.), which indicates that LL cells were light-limited except at the highest PF at noon. Interestingly, even LL samples appear to have been photoinhibited around noon and needed to repair in the afternoon, as shown by a lag in Fv'/Fm' up-regulation after the highest light values in the diurnal cycle. Theoretically, chlorophyta perform better under continuous PF (as provided in the PF experiment) compared to fluctuating light (in the TLC) (Wagner et al. 2006), but Chlorella cells can acclimate to various, and fluctuating, PF regimes without compromising growth (Kroon et al. 1992a,b), or even be stimulated by fluctuating light compared to continuous light exposure (Wijanarko et al. 2007).
However, it is possible that a change in the light exposure treatment affected the photosynthetic performance in the present study. Cells experienced fluctuating light under growth conditions in the TLC and were then exposed to continuous PF for 1 d in the PF experiment. Photoacclimation may take hours to days and is an energy-dependent process (Post et al. 1984, Wilhelm & Wild 1984, Havelkova-Dousova et al. 2004). It is possible that LL samples needed longer to adjust to the changed PF regime due to lower energy capture compared to ML and HL. However, LL conditions in pH drift experiments did not restrain pH drift, which shows that cells were still able to perform well, at least when the initial pH was high (Fig. 1B).
Generally, cells were resistant to severe photodamage due to effective repair, or efficient and progressive photoprotection. These results show that cells were able to cope with in situ, and partly very high, PF without major damage. The very restricted pH drift at pH ~7 was therefore not due to impairment of the photosynthetic machinery.
pH or DIC effects?
pH drift experiments showed very low activity at approximately pH 7.2. The low photosynthetic activity at this pH is surprising, as Chlorella sp. showed highest growth rates at pH 7 or lower (Myers 1953, Goldman et al. 1982) and can raise the pH to values well over pH 10 (Myers 1953) due to the presence of efficient CCMs and bicarbonate uptake capacity (Shelp & Canvin 1980, Beardall 1981, Beardall & Raven 1981). We were not able to measure the DIC concentration, but suspected a DIC limitation when the pH was low. At an acidic pH, the total DIC concentration is much lower than at an alkaline pH. At pH 7.0, for instance, [DIC] = 74 µM, while at pH 9.0 [DIC] = 6300 µM when solutions are in air equilibrium (temperature = 25°C, salinity = 0), calculated using R (R Development Core Team 2013) with the package seacarb 3.0 (Lavigne & Gattuso 2011). The in situ DIC concentrations would have been much higher because cell suspensions were maintained at pH ~7.2 by aeration with CO2 gas. DIC addition experiments did not stimulate pH drift, nor rETR. The DIC additions were sufficiently large to sustain photosynthesis for several hours had DIC been limiting (gross O2 production ~10 µM l−1 min−1 ≈ CO2 consumption; DIC additions 400 and 800 mM). We therefore concluded that the DIC concentration was not restricting photosynthesis in the TLC, or in pH drift experiments. After the pH was raised by bicarbonate addition in the TLC, pH drift was stimulated. Final pH values were high and showed substantial DIC acquisition and photosynthetic capacity compared to other studies and species (Merrett et al. 1996, Choo et al. 2002, Ihnken et al. 2011b). An increase in pH also elevated growth in a Chlorella mass culture (Castrillo et al. 2013). Similar conclusions were drawn using marine algae in situ: the H+ concentration, and not the DIC concentration, was the primary factor for cell regulation, growth and photosynthesis in moderate pH manipulation experiments (Lundholm et al. 2005, Hansen et al. 2007, Middelboe & Hansen 2007). Unfortunately, it is not clear what drives these effects, due to the complexity of pH and its inter-relationships with different cell functions (Smith & Raven 1979, Felle 2001). However, it can be concluded that the impaired pH drift was neither due to DIC limitation nor to PF effects in the present study.
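The seacarb calculation cited above can be approximated from first principles. The sketch below assumes freshwater equilibrium constants (pK1 = 6.35, pK2 = 10.33 at 25°C) and a Henry's law CO2 solubility of 3.4 × 10−2 mol l−1 atm−1 at pCO2 = 400 µatm; these constants are our assumptions, not values taken from the study, but they reproduce the quoted DIC figures closely.

```python
# Freshwater carbonate speciation at air equilibrium (assumed constants;
# the study itself used the R package seacarb for this calculation)
K1, K2 = 10**-6.35, 10**-10.33    # carbonic acid dissociation constants
co2_aq = 3.4e-2 * 400e-6          # dissolved CO2 fixed by Henry's law, mol/L

def dic_at_ph(ph):
    h = 10**-ph
    # DIC = [CO2] * (1 + K1/[H+] + K1*K2/[H+]^2)
    return co2_aq * (1 + K1 / h + K1 * K2 / h**2)

for ph in (7.0, 9.0):
    print(f"pH {ph}: DIC ~ {dic_at_ph(ph) * 1e6:.0f} uM")
# pH 7.0: DIC ~ 74 uM; pH 9.0: DIC ~ 6373 uM,
# close to the 74 and 6300 uM quoted in the text
```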
pH-dependent acclimation patterns
O2 evolution was repressed when the pH was decreased from alkaline conditions to pH 6.5. Cells grown at various pH levels show species-dependent pH effects (Schneider et al. 2013). Cyanobacterial growth was depressed by acidification (Wang et al. 2011), while seagrass photosynthesis decreased with increasing pH (Invers et al. 1997).
Unfavorable pH conditions increased dark respiration and repressed photosynthetic activity in Euglena (Danilov & Ekelund 2001) and cyanobacteria, while Fv/Fm was not affected by pH (Liao et al. 2006).
It was expected that a pH increase would be less well tolerated by Chlorella. Alkaline conditions increase the likelihood of cell damage due to increased concentrations of reactive oxygen species at high pH (Liu et al. 2007). At pH 7, internal pH values are similar to those of the external medium (Smith & Raven 1979, Beardall & Raven 1981, Gehl & Colman 1985, Sianoudis et al. 1987), which suggests an energetic advantage at this pH due to low costs for homeostasis. Proton regulation can be energetically expensive (Smith & Raven 1979, Briskin & Hanson 1992) and involve regulative ATPase activity (Weiss & Pick 1996). Active proton pumping might only be employed by the cell during short-term regulation (Bethmann & Schönknecht 2009), while steady-state homeostasis might be facilitated mainly by anion exchange (Lew 2010) and, in Chlorella fusca, by a K+/H+ antiport system (Tromballa 1987).
However, the substantial pH perturbations in the present study were carried out in a shock mode, which is likely to require active H+ pumping and entail ATP consumption (Bethmann & Schönknecht 2009). Because the intracellular pH could not be determined, it is not clear if (and how fast) homeostasis was reached. Photoautotrophic cells can regulate internal pH on time scales of minutes (Bethmann & Schönknecht 2009), but no information on rapid internal pH regulation (within seconds) was found in the literature for Chlorella. Limitation of internal pH maintenance requires extreme external pH (Gehl & Colman 1985) or osmotic stress (Goyal & Gimmler 1989), which induced internal pH changes of approximately 0.9 and 0.15 units, respectively. F' and Fm' values correlated positively with pH in the present study: the higher the pH, the higher the fluorescence signal. NPQ was activated by acidification. Similar results have been found for thylakoid fragments (Rees et al. 1992, Heinze & Dau 1996). In these studies, fluorescence responded within seconds, and a steady state was reached within 1 min. These results can be explained by the quenching of the NPQ component qE, which was directly triggered by pH, irrespective of photosynthetic ETR. A lumen pH of 6 clearly activates qE (Kramer et al. 1999). Similarly, Jajoo et al. (2012) showed that pH changes can induce a thylakoid membrane re-organisation in pea thylakoid membranes. However, in the present study intact cells were used, and cell compartment membranes as well as pH regulatory mechanisms are expected to regulate internal pH effectively. If external pH perturbation could affect qE directly, the cells' pH regulation would be severely impaired. The cells in the TLC had passed the exponential growth phase, which is very likely to have lowered their fitness. Cells in the stationary phase show distinctively different genetic activity and photophysiological characteristics (Humby et al. 2013) and are more susceptible to external stress factors (Randhawa et al. 2013). However, senescence effects are under-investigated (Humby et al. 2013). It appears likely that cells exploit alternative electron transport routes to increase energy dissipation, to compensate for decreased photosynthetic energy quenching capacity. During ionic stress in higher plants, electron acceptance at the plastid terminal oxidase site (PTOX) accounted for up to 30% of the electron transport in PSII (Stepien & Johnson 2009). Because PTOX activity consumes oxygen while electron transport is facilitated, this mechanism could explain the discrepancy between rETR and gross oxygen evolution in the present study. Electrons are transported and donated to molecular oxygen at the PTOX, with water as an end product (Kuntz 2004, Cardol et al. 2011). However, increasing, not decreasing, pH supported PTOX-mediated O2 consumption (Josse et al. 2003), i.e. the reaction kinetics operate in the opposite direction and are therefore not likely to explain the low oxygen evolution rates in acidified samples in the present study. It is still unknown if PTOX activity can generally act as a safety valve under stress conditions in higher plants, partly due to open questions regarding reduction kinetics in this pathway (Trouillard et al. 2012, Laureau et al. 2013). Nevertheless, it is possible that this mechanism is employed under pH shock conditions in Chlorella, as electron redox systems can participate in homeostasis (Houyoux et al. 2011).
A segregation between ETR and oxygenic photosynthesis can be explained by state-transitions (Ihnken et al. 2014). Further studies must evaluate if state-transitions occur in Chlorella under conditions of pH stress and test the hypothesis that qT is the reason for changes in variable fluorescence readings and the apparent segregation of rETR and O2 evolution. A deviation from ETR/O2 photosynthesis linearity can also be explained by other mechanisms, such as cyclic electron flow around PSII. Cyclic electron transport in PSII suggests photosynthetic electron transport, but electrons are cycled within PSII and do not contribute to photosynthesis. Alternatively, nitrate reduction, elevated oxygen-consuming processes such as the Mehler reaction (Asada 2000, Heber 2002), photorespiration/chlororespiration (Beardall et al. 2003, Peterhansel & Maurino 2011) or mitochondrial respiration can cause deviations between fluorescence- and oxygen-based measurements.
A pH effect on the inter-photosystem redox chain was clearly shown in the present study. QA− re-oxidation kinetics (τPSII; Fig. 7C) increased with alkalization. Faster QA− re-oxidation, most likely due to a higher oxidation state of the PQ pool, was not facilitated by elevated electron transport, as rETR decreased when the base was added. This clearly shows that alternative electron routes are employed by the cells when the pH of the external medium is perturbed. However, rapid alkalization causes cells to aggregate temporarily (Castrillo et al. 2013), which can cause self-shading similar to the package effect (Berner et al. 1989). It is therefore possible that high τPSII values are artifacts caused by cell aggregation after addition of the alkaline solution.
DIC acquisition
Costs for DIC acquisition are expected to be lower at pH 6.5 due to the higher CO2:HCO3− ratio compared to pH 9.5. Chlorella possesses effective CCMs and can actively acquire both CO2 and HCO3− (Shelp & Canvin 1980, Beardall 1981, Beardall & Raven 1981, Matsuda & Colman 1995a,b). Predominantly, bicarbonate CCMs are anion transporters which can operate as symport or antiport systems using Na+ (Badger et al. 2002) or Cl− (Young et al. 2001). Because anion regulation is also involved in homeostasis, HCO3− acquisition and consumption might be related to internal pH regulation. Cells acclimated to alkaline pH might combine bicarbonate acquisition and pH regulation, although uptake of HCO3− does not appear preferable, as bicarbonate must dissociate into CO2 (which is fixed in the Calvin-Benson-Bassham cycle) and OH− (which is neutralized to water by 'consumption' of H+). Nevertheless, cells exhibited higher photosynthetic rates at alkaline pH compared to measurements taken at pH 6.5. Compared to optimal growth conditions, RuBisCO contents are down-regulated when cells enter the stationary growth phase (Humby et al. 2013). A CCM up-regulation can compensate for low RuBisCO levels (Beardall et al. 1991). If CCM induction is controlled by pH in Chlorella, an alkalization might allow higher photosynthetic rates due to CCM up-regulation or activation.
CONCLUSIONS
The present study shows 2 surprising phenomena. Firstly, cells showed higher performance at elevated pH, a response independent of DIC concentration and light intensity. As this is in conflict with the majority of literature values (where Chlorella spp. preferred pH ≤ 7), the preconditioning of the cells (having passed the maximal growth phase) is likely to have influenced the outcome of the experiments. The second surprising finding relates to the deviation between oxygen- and fluorescence-based measurements of photosynthesis. Acidification lowered gross oxygen production, but not electron transport rates in PSII. The reason for this might be related to state-transitions, which could not be detected due to technical limitations in the present study. Nevertheless, alkalinisation clearly increased the physiological performance of cultures that were not operating at maximum efficiency.
Acknowledgements. The authors express their special gratitude to the local coordinators of the biotechnology workgroup, Irene Malpartida and Roberto Abdala. Special thanks to the Spanish GAP coordinators Félix L. Figueroa and Jesús Mercado, and the Spanish GAP committee María Segovia, Nathalie Korbee, Roberto Abdala, Rafael Conde, Francisca de la Coba, Andreas Reul and Irene Malpartida.
Fig. 1. pH drift experiments with Chlorella fusca cells collected (A) before and after the pH was increased by addition of NaHCO3− in the thin-layer cascade photobioreactor (TLC) (refer to 'Results' section for DIC effects versus pH effects). (B) Light intensity effect on pH-adjusted suspensions exposed to full-strength ambient photon flux (HL), medium light (ML: 70% of ambient photon flux), and low light (LL: 30%). Cells were collected from the TLC in the morning and exposed to natural sunlight throughout the day. Glass scintillation vials shielded cells from a high fraction of UV light. Vials in (A) were shaded to ML levels. Temperature was kept below 28°C. Data show means ± SD when the plot symbol is extended (n = 3). Times are given in 24 h format
Fig. 3. Fast fluorescence induction curves of Chlorella fusca cell suspensions continuously exposed to HL, ML and LL (see Fig. 1) at different times of the day. A 250 ml cell suspension was withdrawn from the TLC, filled into a 500 ml conical glass flask, bubbled overnight and exposed to measurement conditions on the following day. Samples were withdrawn from experimental conditions and dark-acclimated for ≤2 s before the measurement was performed. Data were normalized to O (initial fluorescence at t = 50 µs); vertical bars range over 0.5 units. Data show mean ± SD (n = 3; n = 2 for fluorescence induction curves shown in HL for t > 15:00 h, due to failure of aeration in one replicate)
Fig. 4. Chlorella fusca. Photosynthesis and photoprotection in response to rapid pH perturbations. From top to bottom in each panel, data show gross O2 evolution and correlation coefficient (in brackets), pH, continuous fluorescence recording, non-photochemical quenching (NPQ), relative electron transport rates (rETR) and actinic photon flux (grey shaded area; 140 µmol photons m−2 s−1 in the low photon flux, LL, phase; 450 µmol photons m−2 s−1 in the high PF, HL, phase) measured simultaneously. rETR and photoprotection (NPQ) were determined using variable fluorescence, with rETR = ΔF/Fm' × photon flux and NPQ = (Fm'* − Fm') / Fm', where Fm'* = the first Fm' value of the experimental protocol. Fm was not measured, as cells were not transferred to the dark prior to measurements. Data show responses to pH amendment at (A) constant photon flux, while in (B) and (C) the light intensity was increased at the end of the protocol. (C) shows a repeat measurement of (B) for error estimation purposes. pH perturbations were achieved by addition of small volumes (≤300 µl) of acid (HCl) or base (NaOH). DIC (NaHCO3− solution) was added as indicated by arrows to yield a final concentration of 1 mM. Note that the continuous fluorescence lines, pH and oxygen concentrations do not read on any y-axis. pH was measured every 30 s
Fig. 5. Combined effects of pH perturbation and photon flux on photosynthesis and photoprotection in Chlorella fusca. Photon flux was elevated when cells were exposed to acidic conditions and maintained while the pH was brought back up again (A) in a single step, or (B) by 2 consecutive base additions to initial values. The low-light relaxation phase was interrupted in (C). Relative electron transport rates (rETR) and photoprotection (NPQ) were determined using variable fluorescence, with rETR = ΔF/Fm' × photon flux and NPQ = (Fm'* − Fm') / Fm', where Fm'* = the first Fm' value of the experimental protocol. Fm was not measured, as cells were not transferred to the dark prior to measurements. For legends and protocol explanations refer to Fig. 4
Fig. 6. Chlorella fusca. Effects on photosynthesis and photoprotection of consecutive pH adjustments at constant photon flux. (A) and (B) show separate experiments performed on the same day. A DIC addition was performed at the end of (B) (arrow) to a final concentration of 0.7 µM
Fig. 7. Chlorella fusca. Fluorescence parameters measured by FRRf in response to manipulated pH and actinic light conditions. A high initial pH (9.3) was lowered by addition of HCl (dashed line) to a theoretical pH of 6.5. After 5 min, the cell solution was alkalized by injection of NaOH (dotted line) to a theoretical pH of 8.5. Grey-scale bars indicate actinic photon flux: 3, 140, 400 and 0 µmol photons m−2 s−1. Data in (A) show minimal fluorescence in darkness or actinic light (F'), maximal fluorescence during a single-turnover flashlet application (Fm') and effective (or maximal) quantum yields; (B) shows relative electron transport rates (rETR) and non-photochemical quenching (NPQ). The iterative curve fit during the single-turnover fluorescence emission allows determination of the functional absorption cross-section of PSII (σPSII); (C) re-oxidation kinetics of QA− after the single turnover are shown by τPSII. Data show mean ± SD for n = 3

Experiments were carried out with support from the Department of Ecology of the Universidad de Málaga. The Netherlands Institute for Sea Research (NIOZ) provided acknowledged travel funds for J.C.K. and S.I. Three anonymous reviewers contributed through constructive criticism.

LITERATURE CITED

Andersen RA, Berges JA, Harrison PF, Watanabe MM (2005) Recipes for freshwater and seawater media. In: Andersen RA (ed) Algal culturing techniques. Elsevier Academic Press, London, p 429−538
Aro EM, McCaffery S, Anderson JM (1993) Photoinhibition and D1 protein degradation in peas acclimated to different growth irradiances. Plant Physiol 103: 835−843
Asada K (2000) The water-water cycle as alternative photon and electron sinks. Philos Trans R Soc Lond B Biol Sci 355: 1419−1431
Badger MR, Hanson D, Price GD (2002) Evolution and diversity of CO2 concentrating mechanisms in cyanobacteria. Funct Plant Biol 29: 161−173
Beardall J (1981) CO2 accumulation by Chlorella saccharophila (Chlorophyceae) at low external pH: evidence for active transport of inorganic carbon at the chloroplast envelope. J Phycol 17: 371−373
Beardall J, Raven JA (1981) Transport of inorganic carbon and the 'CO2 concentrating mechanism' in Chlorella emersonii (Chlorophyceae). J Phycol 17: 134−141
Beardall J, Roberts S, Millhouse J (1991) Effects of nitrogen limitation on uptake of inorganic carbon and specific activity of ribulose-1,5-bisphosphate carboxylase oxygenase in green microalgae. Can J Bot 69: 1146−1150

Fig. A1. pH drift experiments. (A) Cells were supplemented with NaHCO3−; (B) shows a scatter plot of (A) for samples without DIC amendment and samples where an 800 mM bicarbonate supplement was performed. (C) shows additional pH drift experiments from TLC2 and TLC3, where cell suspensions were bubbled with CO2 prior to pH drift experiments, and samples that were diluted with fresh, nutrient-replete medium (50:50 by volume). Data are mean ± SD (n = 3) | 2017-10-27T11:46:36.670Z | 2014-11-20T00:00:00.000 | {
"year": 2014,
"sha1": "fa70a294800895a5c715d03c8e3eb0e6ec6cb7c7",
"oa_license": "CCBY",
"oa_url": "https://www.int-res.com/articles/ab2014/22/b022p095.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fa70a294800895a5c715d03c8e3eb0e6ec6cb7c7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
15632399 | pes2o/s2orc | v3-fos-license | Myeloperoxidase-Dependent LDL Modifications in Bloodstream Are Mainly Predicted by Angiotensin II, Adiponectin, and Myeloperoxidase Activity: A Cross-Sectional Study in Men
The present paradigm of atherogenesis proposes that low-density lipoproteins (LDLs) are trapped in the subendothelial space of the vascular wall, where they are oxidized. Previously, we showed that oxidation is not restricted to this subendothelial location. Myeloperoxidase (MPO), an enzyme secreted by neutrophils and macrophages, can modify LDL (Mox-LDL) at the surface of endothelial cells. In addition, we observed that the activation of endothelial cells by angiotensin II amplifies this process. We suggested that induction of the NADPH oxidase complex is a major step in the oxidative process. Based on these data, we asked whether there was an independent association, in 121 patients, between NADPH oxidase modulators, such as angiotensin II and adiponectin, and levels of circulating Mox-LDL. Our observations suggest that the combination of blood angiotensin II, MPO activity, and adiponectin explains, at least partially, serum Mox-LDL levels.
Introduction
Atherosclerosis is an inflammatory disease involving a crosstalk between vascular cells, monocytes, proinflammatory cytokines, chemokines, and growth factors [1][2][3]. The current paradigm of early atherosclerosis claims that low-density lipoprotein (LDL) particles are trapped in the subendothelial space of the vascular wall, where they can be oxidized. The precise physiological process of LDL oxidation in vivo is still largely unknown, and the occurrence of LDL oxidation outside the lesion sites has not definitively been ruled out yet.
Evidence accumulated during the last decade has suggested an implication of myeloperoxidase (MPO) in the inflammation leading to atherogenesis. MPO is produced by macrophages and neutrophils [4] and, via its chlorination activity, produces hypochlorous acid (HOCl) from hydrogen peroxide (H2O2) and chloride anion (Cl−). HOCl can oxidize protein-bound amino acid residues, among which the formation of 3-chlorotyrosine is considered specific to the activity of MPO, as the latter is the only human enzyme able to produce HOCl. In the context of atherogenesis, MPO, 3-chlorotyrosine, and MPO-dependent modified LDL (Mox-LDL) have all been detected in human atherosclerotic lesions and in the bloodstream [5][6][7][8].
We previously demonstrated that Mox-LDL generation could occur in vitro at the surface of endothelial cells, suggesting that it is not restricted to the subendothelial space in vivo [9]. The triad made up of endothelial cells, circulating LDL and MPO allows a synergic mechanism for producing Mox-LDL. The starting point of this reaction is the generation of superoxide anion (O2−) by the membrane-bound nicotinamide adenine dinucleotide phosphate (NADPH) oxidase. O2− is further dismutated into H2O2, a substrate for MPO to produce HOCl. We recently reported, in two different clinical situations, that this indeed enables MPO to rapidly modify LDL and serum proteins by oxidation [8,10]. Furthermore, NADPH oxidase is activated and upregulated by angiotensin II (ANGII) via the ANGII type I (AT1) receptor present at the surface of endothelial cells [11]. This enzymatic complex therefore plays a central role in Mox-LDL generation [9].
Based on these data, we wondered whether there was an independent association between NADPH oxidase modulators, such as ANGII and adiponectin [12], and levels of circulating Mox-LDL. To test this hypothesis, we report the data observed in a cohort of male patients (n = 121) consulting for lower urinary tract symptoms (LUTS). Indeed, LUTS is associated with erectile dysfunction, which is an early predictive sign of atherosclerotic cardiovascular events [13].
Patients

Subjects were 121 males with a mean age of 58.8 ± 10.8 years who consulted for lower urinary tract symptoms (LUTS) at the Erasme University Hospital. This study conforms with the Declaration of Helsinki and its protocol was approved by the Ethics Committee of the Erasme University Hospital. Finally, all subjects gave their written informed consent.
Standard Analyses

Blood samples were centrifuged for 10 minutes at 4000 g and the supernatant was collected and frozen. Blood tests were performed at the Laboratory of Experimental Medicine of the University Hospital of Charleroi, Site A. Vésale, Unit 222, ULB. The following parameters were measured: C-reactive protein (CRP), blood glucose, total cholesterol, triglycerides, and HDL-cholesterol (standard laboratory techniques, PLC), as well as adiponectin. LDL-cholesterol levels were calculated using the Friedewald formula: LDL-chol (mg/dL) = total cholesterol − HDL-chol − TG/5.
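As an illustration of the Friedewald calculation, a small sketch follows; the patient values are hypothetical, and the TG > 400 mg/dL guard reflects the conventional limit of validity of the formula rather than anything stated in the text.

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """LDL-cholesterol (mg/dL) by the Friedewald formula used in the text:
    LDL = total cholesterol - HDL - TG/5. All values in mg/dL; the formula
    is conventionally considered unreliable when TG > 400 mg/dL."""
    if triglycerides > 400:
        raise ValueError("Friedewald formula not valid for TG > 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# Hypothetical patient values
print(friedewald_ldl(total_chol=210, hdl=45, triglycerides=150))  # 135.0
```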
MPO and Mox-LDL Analyses
Mox-LDL was measured using a sandwich ELISA kit [9]. The specificity of the antibody was further assessed by analyzing LDL oxidized with peroxynitrite (0, 10, 100, and 1000 µM) and comparing it with LDL oxidized by MPO/hydrogen peroxide/chloride. Other oxidants produced by MPO, such as HOSCN/−OSCN, HOBr/−OBr, and HOI/−OI (from MPO/hydrogen peroxide/the corresponding halide), were also used to oxidize LDL and to test the specificity of the antibody. These tests showed that the antibody is highly specific for Mox-LDL.
The active and total MPO contents in plasma were measured using the licensed SIEFED and ELISA (ELIZEN MPO, Zentech SA, Belgium) methods [14]. By using these two techniques, we are able to distinguish active and total MPO contents in plasma and to determine the specific activity of MPO (MPO activity/MPO antigen ratio).
Statistics
Data were analyzed using the SigmaPlot 12.0 software (Systat, San Jose, CA). Results were considered statistically significant at a two-tailed P < 0.05. Two models of multiple linear regression analysis were tested using a backward stepwise selection of explanatory variables.
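The regressions were fit in SigmaPlot; as a rough illustration of backward stepwise selection, the following Python sketch (using statsmodels, with synthetic data standing in for the study variables) removes the least significant predictor until all remaining p-values fall below the threshold. All names and data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(y, X, alpha=0.05):
    """Backward elimination for multiple linear regression: fit, drop the
    least significant predictor (largest p-value above alpha), refit."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model, cols
        cols.remove(worst)
    return None, []

# Synthetic data standing in for the study variables (n = 121 subjects)
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(121, 3)),
                 columns=["ANGII", "MPO_activity", "adiponectin"])
y = (0.6 * X["ANGII"] + 0.4 * X["MPO_activity"] - 0.3 * X["adiponectin"]
     + rng.normal(scale=0.5, size=121))  # stands in for Mox-LDL
model, kept = backward_stepwise(y, X)
print(kept)
```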
Results and Discussion
The purpose of the present study was to explore whether there is an independent association between ANGII (an NADPH oxidase modulator) and adiponectin on the one hand, and levels of circulating Mox-LDL on the other. In this context we analyzed various parameters in 121 male subjects who consulted for the first time for LUTS. Table 1 shows the means and SD of the parameters measured or calculated in these patients. Table 2 describes two models of multivariate backward regression analysis in these subjects. The standardized regression coefficients are given for each model. As shown in Table 2, in the first model (Model 1) we set Mox-LDL as the dependent variable, while the independent variables included the parameters described above. Significant positive linear correlations were found between Mox-LDL levels and both ANGII and MPO activity, and a negative correlation with adiponectin content. In the second model (Model 2), the Mox-LDL/ApoB ratio (an estimation of the proportion of MPO-modified LDL in the bloodstream) was set as the dependent variable, with the same set of parameters as independent variables. The same variables as in Model 1 were found to predict the Mox-LDL/ApoB ratio.
Our observations suggest that the combination of blood ANGII, MPO activity, and adiponectin explains, at least partially, the serum Mox-LDL levels. They corroborate and extend our previous data showing that oxidation can also take place at the surface of endothelial cells [9,15] and that the plasma level of Mox-LDL follows the level of MPO in patients during hemodialysis [15,16]. They are underpinned by established physiopathological mechanisms, as endothelial cells express NADPH oxidase, the activity and expression of which are increased by ANGII binding to the AT1 receptor [12]. In support of our proposal, we previously observed that hypertensive COPD patients treated with angiotensin-converting enzyme inhibitors had reduced levels of circulating Mox-LDL (our unpublished data). This is an alternative and complementary explanation to the common model positing that the presence of modified LDL in the circulation is due to back-diffusion of modified LDL from the vessel to the circulation and is a marker of plaque instability in patients with coronary artery disease. Furthermore, it has recently emerged that human peroxidasin 1, also called vascular peroxidase 1 (VPO1), might be involved in the in vivo production of HOCl and thus potentially contribute to the oxidation of LDL [17]. Moreover, VPO1 has also been suggested as an inducer of vascular smooth muscle cell proliferation [18]. However, further experiments are needed, as the formation of HOCl by VPO1 is low at physiological pH. We also uncovered a negative linear correlation between oxidative stress and adiponectin in our multiple linear regression models (Table 2). This is in agreement with the observation that adiponectin reduced NADPH oxidase activity, and hence oxidative stress, in vitro and in vivo [12]. It is also in line with the general agreement that adiponectin, which is secreted by fat tissue, is antiatherogenic by modulating cytokine inflammatory cascades and inhibiting cholesterol incorporation.
In sum, our study suggests that the combined action of ANGII, MPO, and adiponectin might explain the serum Mox-LDL levels. A definitive validation or invalidation of the proposed role of ANGII in the generation of serum Mox-LDL will require a double-blind randomized crossover study comparing subjects receiving an angiotensin-converting enzyme inhibitor or an angiotensin II receptor antagonist with those receiving a placebo.
"year": 2013,
"sha1": "a3fa8370504a0eb6dc246bb5c064ab56fb53b776",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mi/2013/750742.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35c157c4927a6ecf7430e0e0e292f2d618d8d05f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233706965 | pes2o/s2orc | v3-fos-license
Metagenomics Analysis of Race and Age Influence on the Vaginal Microbiome in Pregnant and Non-pregnant Healthy Women
Various human body parts are host to many microbial species and have a mutualistic relationship with them. The presence of these microbial species in the reproductive tract plays an essential protective role against the proliferation of harmful organisms and is an important factor in reproductive health. The vaginal microbiota during pregnancy plays a vital role in the health of the mother and the infant. Microbiota imbalance during pregnancy is associated with many complications. As a result, the characterization of vaginal microbiota during pregnancy can reduce the risk of these problems. High-throughput culture-independent technologies allow the study of the vaginal microbiome on a large scale. This study aimed to compare the vaginal microbiota between pregnant and non-pregnant healthy women of different age or race using a meta-analysis method. The results from 7 articles containing 16S rRNA gene sequences were extracted and analyzed with CLC. Data from 898 pregnant and 702 non-pregnant women showed that Bacilli, Clostridia, Actinobacteria and Coriobacteriia were the dominant classes in pregnancy. The vaginal microbiota in normal non-pregnancy is also predominated by Bacilli, but beta diversity maps demonstrated that the non-pregnant vaginal microbiome is more variable than that in the pregnant state. This study reveals new insights into age and ethnic effects on the pregnant and non-pregnant vaginal microbiome and found that the microbiome of Chinese women was more distinct than that of the other races. It was also detected that the relative number of bacterial classes is dramatically lower in women above the age of 35 relative to younger ones.
Introduction
In the human body, various microbial species often coexist with the host. In this relationship, the host provides microbial species with nutrients to grow and reproduce, while in return the microbial population acts as a barrier against the growth of opportunistic microorganisms, which can cause infection and disease in humans. These microbial populations, called microbiota, play an essential role in the development, physiology, immunity, and nutrition of humans [1]. The entire microbiota, their genomes, and their surrounding biochemical environment are called the microbiome [2].
The microbiota of the reproductive tract plays an important role in women's reproductive health. In healthy women, the vaginal microbial population is predominated by Lactobacillus spp. These microbial species play an important role in vaginal protection against the growth and proliferation of pathogens through the secretion of antibacterial bacteriocins and the production of metabolites, such as lactic acid, which reduce vaginal pH [3]. The lack of Lactobacillus spp. in the vaginal microbiota during pregnancy seems to be associated with pregnancy complications, especially preterm birth [4,5]. Recent studies have shown that although Lactobacillus species are dominant in 60-70% of women, there are also healthy women in whom Gardnerella, Atopobium, Prevotella, Pseudomonas or Streptococcus are dominant. Therefore, the vagina has a very complex ecosystem with a heterogeneous microbial distribution [6]. Microbial populations in other body sites are not typically dominated by any single genus [7]. It is known that the vaginal microbiota is not always stable. Internal and external factors such as antibiotic use, vaginal drugs, systemic hormones, contraceptives, douches, sexual intercourse, vaginal sprays, stress levels, and economic conditions can lead to increased or decreased strains in the vaginal microbiota and periodically change the normal vaginal flora [8][9][10]. This is to some extent because of the anatomical location and function of the vagina [11]. The vaginal microbiota also undergoes compositional changes during a woman's lifespan, from birth through puberty and pregnancy to menopause. Sex steroid hormones appear to play an essential role in the composition and stability of the vaginal microbiota [12]. A microbiome imbalance can lead to many diseases, including bacterial vaginosis (BV). This imbalance during pregnancy is associated with an increased risk of early and late abortion, postpartum infection, postpartum endometritis, premature delivery, etc. Since preterm birth causes many problems, including an increased risk of cardiovascular defects, respiratory syndromes, and chronic diseases in adulthood, characterizing the vaginal microbial communities during pregnancy can reduce the risk of births with these problems [13,14].
The development of culture-independent techniques, such as high-throughput sequencing of 16S rRNA genes, has facilitated the comparison of the composition and role of vaginal microbial populations at different times and has led to the identification of uncultivated microbial species [8]. 16S rRNA, a ubiquitous gene found in all bacteria, is well suited for this purpose: it has conserved sequence regions that can be amplified by universal or specific primers, as well as heterogeneous regions that can be used to identify bacteria or to find phylogenetic relationships [15]. These studies show that pregnancy has a significant impact on the vaginal microbiome. Pregnancy is associated with many physiological changes that may lead to changes in the structure and composition of the microbial population in pregnant women, which differs from that of non-pregnant women [13].
This study aimed to perform a metagenomic analysis of the vaginal microbiome in pregnant and non-pregnant women. Understanding the vaginal microbiome during pregnancy is an essential step in the diagnosis, prevention, and treatment of adverse pregnancy complications.
Materials and Methods
Data Collection: We found 165 case-control studies by searching keywords in DDBJ, PubMed, and the references of relevant meta-analyses and case-control studies. We selected studies containing 16S-related data (fastq or fasta) together with the metadata required to establish whether a sample was a case or a control. Some data were downloaded from the Sequence Read Archive (SRA) repository and some were obtained through communication with the authors. In studies where multiple body sites were examined, or where multiple samples per patient were used, we needed the respective metadata to complete the main metadata. We only looked at vaginal samples sequenced for 16S; thus, studies targeting other genes, like CPN60, were excluded from our research. In studies with multiple control groups (e.g., non-infectious infertility, female sex workers), only the pregnant and non-pregnant participants were used [16][17][18][19][20][21][22]. The study identification and selection process is presented as a PRISMA flowchart (Fig. 1).
16S Processing: Raw data (fasta and fastq) were downloaded and processed using the CLC Genomics Workbench 20.0.4. If required, we de-multiplexed sequences by finding specific matches to the given barcodes and trimmed primers allowing a maximum of two mismatches. The paired-end reads were assembled, and sequence and quality score data were extracted from the fastq and fasta files. The reverse complement of the reverse read was produced. Finally, the paired-end reads were assembled into a single contig file. Generally, sequences were quality filtered by trimming at the first base with a Q score lower than 8. Nevertheless, some datasets did not meet such a quality threshold (for instance, the resulting OTU table lacked original samples, or the read depth was significantly lower than in the original article). We aligned the remaining sequences using a customized Greengenes bacterial reference database and removed unaligned sequences. To classify the sequences, we used the CLC Microbial Genomics Module 20.1.1 with a cutoff of 80 and removed non-bacterial sequences. For each dataset, we eliminated samples with fewer than 100 reads and OTUs with fewer than 50 reads. Further analysis was done on a random subset of 2000 reads per sample, either using operational taxonomic units (OTUs) clustered at a similarity threshold of 97% or based on taxonomic assignment.
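The filtering and rarefaction steps were performed in CLC; a minimal Python sketch of the same logic (sample and OTU read thresholds, then subsampling without replacement to a fixed depth) might look as follows. The function name and the rule of skipping samples shallower than the target depth are our assumptions.

```python
import numpy as np
import pandas as pd

def filter_and_rarefy(otu, min_sample_reads=100, min_otu_reads=50,
                      depth=2000, seed=1):
    """Apply the read filters described in the text to an OTU table
    (rows = samples, columns = OTUs), then rarefy each remaining sample
    to a fixed depth by subsampling reads without replacement."""
    otu = otu.loc[otu.sum(axis=1) >= min_sample_reads,
                  otu.columns[otu.sum(axis=0) >= min_otu_reads]]
    rng = np.random.default_rng(seed)
    rarefied = {}
    for sample, counts in otu.iterrows():
        counts = counts.to_numpy().astype(int)
        if counts.sum() < depth:   # cannot rarefy below the requested depth
            continue
        reads = np.repeat(np.arange(len(counts)), counts)  # expand to reads
        picked = rng.choice(reads, size=depth, replace=False)
        rarefied[sample] = np.bincount(picked, minlength=len(counts))
    return pd.DataFrame.from_dict(rarefied, orient="index",
                                  columns=otu.columns)
```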
Statistical Analyses: The CLC Microbial Genomics Module 20.1.1 and PERMANOVA analysis were used to evaluate statistical differences in the vaginal microbiome by age and race during pregnancy and non-pregnancy.
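A hedged sketch of an equivalent PERMANOVA outside CLC, using scikit-bio on a Euclidean distance matrix with synthetic data, is shown below; the grouping labels and table dimensions are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio.stats.distance import DistanceMatrix, permanova

def permanova_test(abundances, groups, ids, permutations=999):
    """PERMANOVA on a Euclidean distance matrix, analogous to the
    group comparisons run in the CLC Microbial Genomics Module."""
    dm = DistanceMatrix(squareform(pdist(abundances, metric="euclidean")), ids)
    return permanova(dm, grouping=list(groups), permutations=permutations)

# Hypothetical class-level abundance table: 6 samples x 4 classes
rng = np.random.default_rng(2)
abund = rng.random((6, 4))
result = permanova_test(abund, ["pregnant"] * 3 + ["non-pregnant"] * 3,
                        ids=[f"S{i}" for i in range(6)])
print(result["test statistic"], result["p-value"])
```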
Characteristics of the study population
The present study characterized the vaginal microbial communities in pregnant and non-pregnant women. To this end, 898 pregnant and 702 non-pregnant subjects were analyzed. The age range of the samples was between 15 and 50 years in both the pregnant and non-pregnant groups, and subjects were classified into the age groups 15-19, 20-25, 26-30, 31-36, and above 36. The data were also categorized by race. The pregnant group was divided into Black, Asian, White, Chinese, African American, and American Indian or Alaska Native. The non-pregnant group was divided into Black, Asian, White, Chinese, Hispanic, and Puerto Rican. The frequency of bacterial phyla and classes in comparable groups in terms of age or race was compared between pregnant and non-pregnant women, with the following results.
OTU analysis related to race

Clustering reads at the phylum and class levels revealed large fluctuations in the microbiome composition within the pregnant and non-pregnant groups depending on race (figures 2a, 2b, 2d, and 2e), as well as distinct differences between these two groups (figure 2c). Firmicutes (81%) was the dominant phylum among all ethnic groups. The vaginal microbiome in pregnant women of the Black, Asian, White and African American races was composed mainly or entirely of Firmicutes. In this group, however, the vaginal microbiome of Chinese women had a higher diversity of bacteria at the phylum level and was composed of more than 10% Proteobacteria. This pattern is reversed in the non-pregnant group, in which the vaginal microbiome of Chinese women had a lower diversity of bacteria at the phylum level and was composed of only Firmicutes and Proteobacteria. Actinobacteria, Bacteroidetes, Fusobacteria and Tenericutes are present at different percentages in all other races, but are absent in non-pregnant Chinese women.
At the class level, the most represented class was Bacilli, which accounted for about 80% and 70% of reads in the pregnant and non-pregnant groups, respectively. In the pregnant group, the abundance of Bacilli in the Black, Asian and White races was near 100%. In the American Indian or Alaska Native group, however, this percentage dropped to about 50%, and in the Chinese group to about 70%. In the non-pregnant group, this difference was less noticeable: the abundance of Bacilli was near 85% in the Asian and White races and about 60% in the Black, Hispanic, and Puerto Rican groups. What is notable in this diagram is the low diversity of bacterial classes in Chinese women, in whom most of the bacteria were Bacilli and Gammaproteobacteria. Gammaproteobacteria is the only bacterial class seen exclusively in the Chinese pregnant and non-pregnant groups. Clostridia, which is the most abundant class after Bacilli in all other races, was not seen in Chinese women; this also holds in the pregnant group, in which Chinese women did not have Clostridia. Generally, statistical analysis revealed that the difference between the microbiomes of different races, regardless of pregnancy status, is significant (P-value = 3×10−3).
In non-pregnant women, race affects the microbiome composition, and the differences between races are statistically significant (P-value = 10−5). In this group, however, the microbiome difference among the Hispanic, Asian, and Black races is not statistically significant. In the population of pregnant women, although race is generally effective in shaping the microbiome (P-value = 10−5), pairwise comparison of groups shows that the microbiomes of some races are similar, and the differences between them are not statistically significant. Across all samples, a total of 118 classes were detected, but the seven most abundant classes accounted for ~95% of the total relative abundance: Bacilli (73%), Clostridia (7%), Actinobacteria (4.5%), Coriobacteriia (4.5%), Bacteroidia (4%), Fusobacteria (3%), and Mollicutes (2%).
OTU analysis related to age
In general, the number of bacterial species decreases dramatically over the age of 36, and the bulk of the microbiome then includes only the class Bacilli (more than 95% in pregnant women and about 90% in non-pregnant women).
After the class Bacilli, Clostridia is more common than other bacteria in all age groups in pregnant and non-pregnant women. The frequency of this bacterial class is, on average, lower in pregnant than in non-pregnant women of the same age group (figure 3).
In non-pregnant women, age has no effect on the vaginal microbial population; the differences are not statistically significant (p-value = 0.28). In contrast, statistical analysis showed that the age of the pregnant mother affects the vaginal microbial population, and different age groups show significant differences in microbial populations (p-value = 10−5). This difference is most evident between the 36-50 year age group and the other groups.
Heatmap analysis
Heatmaps were also constructed; a heatmap visualizes the values of a data matrix using a color gradient, which gives a good overview of the largest and smallest values in the matrix (figures 4 and 5).
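A minimal base-R sketch of how such a heatmap could be drawn from a relative-abundance matrix; the matrix name abund is a placeholder, not the authors' actual object.

```r
# abund: samples x taxa relative-abundance matrix (hypothetical name)
# Rows/columns are clustered automatically; scale = "row" rescales each
# sample so that both large and small values remain visible in the gradient.
heatmap(as.matrix(abund),
        scale = "row",
        col   = hcl.colors(50, "YlOrRd", rev = TRUE),
        xlab  = "Taxa", ylab = "Samples")
```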
Beta diversity analysis
To elucidate possible similarities in vaginal microbiota community structure between groups of participants, we calculated Euclidean-based distances across the entire population [4] and performed PCoA (Fig. 6). In the PCoA, the first two principal coordinates explained 50% and 18% of the variance along the first and second axes, respectively, with the pregnant samples visually separated from the non-pregnant ones. Results consistently showed that the non-pregnant vaginal microbiome is notably different from the pregnant one, and non-pregnant samples had higher variation within the group.
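Because PCoA on Euclidean distances is equivalent to classical metric multidimensional scaling, the analysis can be sketched with base R alone; abund and group are placeholder names for the abundance matrix and the pregnancy labels.

```r
# abund: samples x taxa abundance matrix; group: "pregnant" / "non-pregnant"
d  <- dist(abund, method = "euclidean")  # Euclidean-based distances
pc <- cmdscale(d, k = 2, eig = TRUE)     # classical MDS = PCoA here

# Percent variance explained by the first two axes
round(100 * pc$eig[1:2] / sum(pc$eig[pc$eig > 0]), 1)

g <- factor(group)
plot(pc$points, col = as.integer(g), pch = 19, xlab = "PCo1", ylab = "PCo2")
legend("topright", legend = levels(g), col = seq_along(levels(g)), pch = 19)
```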
Discussion
The human vaginal microbiome is affected by factors such as diet, environment, genetic background, and ethnicity [23].
Although the underlying reason behind the microbial changes during pregnancy is still uncertain, a relationship has been reported between sex steroid hormone levels and vaginal microbial composition. Forsum et al. showed a relationship between estrogen levels during menstruation and changes in vaginal Lactobacillus bacteria [24]. Increasing estrogen concentration during pregnancy can increase vaginal mucus thickness and glycogen deposition. Glycogen, the main carbohydrate used by Lactobacillus strains, is hydrolyzed into maltodextrins, maltobiose, and maltose in vaginal fluid by host-encoded α-amylase [25,26], resulting in the production of lactic acid via anaerobic glycolysis and thereby playing a protective role by reducing vaginal pH [27]. Hormone-induced glycogen production may create a rich medium for bacterial growth in the vagina during pregnancy. Pregnancy also changes the amount and stability of the mucus, which becomes richer and thicker, so that sampling swabs from pregnant women carry more material than those from non-pregnant women, which may influence judgments about the vaginal microbiome [13,28]. Comparison of the vaginal microbiota of pregnant versus non-pregnant women showed that normal pregnancy is characterized by a Lactobacillus-dominated microbiota [5,29].
After delivery, maternal estrogen levels fall and the vaginal microbiota becomes more diverse, a state that can persist in some women for up to one year postpartum [4,30].
Ghartey et al. in 2014 found that the microbiome has less diversity and richness at 18-32 weeks' gestation and returns to non-pregnant status in late gestation [20].
A previous study demonstrated that an L. crispatus-dominated vaginal microbiome is related to inhibition of E. coli growth; E. coli growth in the vagina can cause neonatal sepsis and chorioamnionitis [20].
Most researchers have studied the vaginal microbiome during a particular period of life or in a specific population and concluded that ethnic diversity and geographical area can affect the vaginal microbiome [31]. Even different ethnicities in one geographical region can show significant differences in vaginal bacteria. Genetic and environmental factors, such as diet, contribute to these differences [32]. Using the meta-analysis method, the present study characterizes the vaginal microbial communities during pregnancy and non-pregnancy in women of different races. Ravel et al. showed that the vaginal microbiome differs among ethnicities; for example, in Asian and White women, Lactobacillus was higher than in Black and Hispanic women [22]. Differences in the microbiome content of different races were also observed in this study. Interestingly, the microbiome of the Chinese women was more distinct than that of the other races, and these findings are consistent with previous studies in the literature.
In Chinese women, the variation of the vaginal microbiome among women during pregnancy is more significant than that of non-pregnant women [18], and this is completely reversed in other races [12,13].
Xu et al. stated that maternal age and the level of FSH hormone were negatively correlated with the relative abundance of Paraprevotella, suggesting that this bacterium is more commonly found in the vagina of younger women or women with normal ovarian function. Meanwhile, the relative abundances of the genera Varibaculum, Streptococcus, and Veillonella were positively related to age, indicating that the colonization of these bacteria in the vagina may increase with female age [33]. In another study, Nasioudis et al. mentioned that there are no statistically significant associations between the relative abundance of any bacterial taxa and maternal age, gestational age at birth, or neonatal gender [34]. However, in this study, it was found that the relative number of bacterial classes is dramatically lower in women above the age of 35 relative to younger ones. As mentioned in the Materials and Methods section, the microbiome of pregnant women over 36 years old is significantly different from that of other age groups. Since the rate of fetal abnormalities increases with maternal age, and increased maternal age is one of the risk factors for pregnancy-related complications such as preeclampsia [35], the authors suggest that the relationship between maternal age, the microbiome, and factors like fetal abnormalities and pregnancy complications be examined more broadly.
It is also well established that there is a decline in female fertility as a function of age. M.H. Razi et al. confirmed that women's age strongly influences outcomes of assisted reproductive technology (ART) treatment [36]. We think this may also be related to the microbiome and should be examined.
Regardless of age and race, the difference in the vaginal microbiome between pregnant and non-pregnant women is statistically significant (p-value = 10^-5).
Overall, since the vaginal microbiome during pregnancy can affect the neonatal gut microbiota, microbial infections can be controlled by identifying the vaginal microbiome and its abnormalities [37]. Also, it seems that a healthy human fetus grows in a bacteria-free environment but is exposed to a wide variety of bacteria through the delivery canal at birth. However, many babies today are born by C-section and thus are not exposed to the vaginal microbiota. This also changes the diversity of the intestinal microbiota, and since the intestinal microbiota affects the balance of energy, metabolism, and resistance to pathogens, the vaginal microbiota can indirectly affect the intestinal microbiota [38].
Extending these studies can lead to discovering biomarkers of reproductive disorders or problems that may occur during pregnancy. We reported the microbial status of the vagina during pregnancy and non-pregnancy. Considering that the women studied were healthy, this model could be presented as a healthy vaginal model against which the effect of omitting or adding disease-associated bacterial strains can be examined. A microbiome core can be considered a sign of health; however, a single microbiome core does not indicate health properly, and it is better to consider several cores [22]. In general, the vaginal microbiome is essential, and microorganisms excreted through vaginal secretions should be replenished [39].
Declarations
Funding
Figure 1 PRISMA study selection flowchart for meta-analyses of the pregnant and non-pregnant vaginal microbiome.
Figure 2
Taxonomic profile of samples. Taxa were clustered at class and phylum level within pregnant (a and d) and non-pregnant women (b and e) and between these two groups (c). Euclidean principal coordinates analysis (PCoA) plot comparing sample distribution for the different groups | 2021-05-05T00:08:19.543Z | 2021-03-24T00:00:00.000 | {
"year": 2021,
"sha1": "2d5cda374de54fe8026d057cd21705acf23a909a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-291962/v1",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b848afab4f8ddcc1093c160ce59449512d248255",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46279994 | pes2o/s2orc | v3-fos-license | Water Quality and Planktonic Communities in Al-Khadoud Spring, Al-Hassa, Saudi Arabia
Problem statement: Al-Khadoud spring is one of the most important water resources in Al-Hassa Governorate, Saudi Arabia. However, much of its biotic information is still unknown. This study presents preliminary ecological information on this aquatic body. The aim of this research was to study the water characteristics and the planktonic organisms inhabiting Al-Khadoud spring and its irrigation channels over a period of 1 year. Approach: The spring was monitored through regular visits over a period of 1 year (June 2007 to May 2008). Physico-chemical characteristics of the spring water were determined. Quantitative and qualitative analyses of plankton (phytoplankton and zooplankton) were also carried out. Results: All the water quality variables measured showed considerable seasonal variation. The data of this study showed that there were marked seasonal differences in the quantitative and qualitative composition of the phytoplankton communities in Al-Khadoud spring and its irrigation canal. The changes in total algal counts throughout the investigation coincided closely with changes in Chlorophyceae abundance. Thirty-six species were identified over the period of the investigation. Of these, 9 species belong to Chlorophyceae, 17 to Bacillariophyceae, 7 to Cyanophyceae and 3 to Euglenophyceae. Cyclotella meneghiniana Kützing, Nitzschia closterium Ehernberg, Fragilaria capucina Desmazieres, Surirella ovalis Breb, Actinastrum sp., Chlorella vulgaris Beyerinck, Scenedesmus quadriquda Breb, Oscillatoria sp. and Oscillatoria subbrevis Schmidle were observed with a high rank of occurrence. The phytoplankton crop showed a remarkable increase compared with previous records. The data showed that the zooplanktonic fauna identified in this aquatic body is typical of permanent freshwater and brackish water. Eleven species were recorded: 5 belonged to Cladocera, 4 to Rotifera and 2 to Chironomidae. Zooplankton species like Thermocyclops hyalinus, Mesocyclops sp., Moina micrura, Brachionus caudatus, B. falcatus and Filina longiseta were recorded at all sites investigated throughout the study period. The scarcity of zooplankton species in Al-Khadoud spring and its irrigation canal could be due to the nature of these reservoirs, as both receive re-use drainage and treated sewage water. Conclusion: These results indicated that after receiving water from the outlets, either treated sewage water or re-use drainage water, the spring water had an obvious increase in electrical conductivity, COD, total alkalinity, nitrates, phosphorus, chloride and potassium. These features indicated pollution with organic wastes, increased salinity and a deteriorated oxygenated state. Based on this, we can say that all these factors can affect both the soil and the plants cultivated in the area of Al-Hassa.
INTRODUCTION
Plankton dynamics, or the time-dependent changes in plankton biomass, are the result of a complex interplay of physical, chemical and biological processes. The seasonal cycles of biological parameters are usually driven by factors referred to as physical-biological [1]. Therefore, plankton diversity in relation to water quality is a well-practiced protocol, accepted all over the world, which helps to describe an ecological system and is a measure of community pattern [2,3]. Plankton diversity is controlled by seasonal changes as well as by the rate at which plant nutrients are supplied. Primary production is performed by chlorophyll-bearing plants, ranging from the tiny phytoplankton to the giant kelps, through the process of photosynthesis. Zooplankton plays an important role as secondary producers and, together with phytoplankton, supports the vast assemblages of the marine food chain with all their diversity and complexity. Data on chlorophyll pigments, phytoplankton and zooplankton have been regarded as a sound basis for environmental appraisal of ecosystems [4].
Saudi Arabia is a hot, dry country with a high level of development of all kinds. Demand for water increases continually, while resources remain limited. The municipal water supply is mainly desalinated water and partly wadi groundwater [5]. Al-Hassa Province is one of the largest oases in the world and is located in the southern part of the eastern region of Saudi Arabia. The agricultural area of Al-Hassa receives one of the highest solar energy loads in the world (1200 W m^-2) [6], thus providing favorable arid ecosystems for plankton and wild plants to grow [7].
Whitton et al. [8] studied the water chemistry and algal vegetation of streams in the Asir Mountains, Saudi Arabia. Okla studied the algal microfacies in the Upper Tuwaiq Mountain limestone (Upper Jurassic) near Riyadh [9]. Hussain and Sadiq studied the metal chemistry of the irrigation and drainage waters of the Al-Hassa Oasis of Saudi Arabia and its effects on soil properties [10]. Hussain and Khoja studied the intertidal and subtidal blue-green algal mats of open and mangrove areas in the Farasan Archipelago (Saudi Arabia), Red Sea [11]. Al-Homaidan described planktonic algae and water chemistry of various water bodies [12,13]. Hussain et al. [14] and Al-Homaidan and Arif [15] studied the seasonal succession of bloom-forming algae over a period of 3 consecutive years (1992-1995) in relation to the trophic changes taking place in a semi-permanent rain-fed pool at Al-Kharj, Saudi Arabia [14,15]. On the other hand, quantitative surveys of the intertidal macrobiota were conducted between 1991 and 1995 in the Saudi Arabian Gulf along Permanent Transect Lines (PTLs) by Jones et al. [16]. Al-Aidaroos et al. [17] studied the occurrence and abundance of zooplankton in the sewage-polluted coastal areas of Jeddah in the Al-Arbaeen and Al-Shabab lagoons (Saudi Arabia). Shaikh et al. [18] studied phytoplankton ecology and production in the Red Sea off Jiddah. Baker and Hosny studied zooplankton diversity and abundance in Half Moon Bay, Saudi coastal waters, Arabian Gulf [19]. Recently, Al-Fredan and Fathi investigated the edaphic algae of Al-Hasa, Eastern region, Saudi Arabia [20].
Al-Khadoud spring is one of the most important water resources in Al-Hassa; however, much of its biotic information is still unknown. The aim of this research is to study the water characteristics and the planktonic organisms inhabiting Al-Khadoud spring and its irrigation channels over a period of 1 year.
MATERIALS AND METHODS
Site description: Al-Hassa lies in the south of the Kingdom's Eastern region and is bounded by the Al-Dahna and the Al-Daman deserts. It is situated between 25°05' and 25°40' northern latitude and 49°55' eastern longitude. The Al-Hassa oasis is the largest oasis in the Kingdom of Saudi Arabia, and the municipality of Al-Hassa constitutes the largest administrative area in the Kingdom. Al-Hassa has a dry, tropical climate, with a five-month summer and a relatively cold winter. It enjoys the benefit of copious reserves of underground water, which has allowed the area to develop its agricultural potential. Al-Hassa's water mainly originates from an underground source through a number of artesian springs. Al-Khadoud spring is one of the most important water resources in the Al-Hassa Region and plays an important role in agricultural activities in the area. It is located nearly 5.0 km northwest of the King Faisal University main campus.
Physico-chemical characteristics: Temperature, pH, conductivity, total dissolved salts and dissolved oxygen were measured at each location. pH was measured using a pH meter (370 pH meter, Jenway, UK), and conductivity and total dissolved salts using a calibrated conductivity meter (470 conductivity meter, Jenway, UK). Dissolved oxygen was measured according to the Winkler method [21]. Total alkalinity, chloride, nitrate-N, phosphate-P, sulfate, major cations and Chemical Oxygen Demand (COD) were determined according to the water and wastewater examination manual [22]. Sodium and potassium concentrations were determined photometrically by flame emission [23]. Results were calculated as mean values of triplicate measurements made on each water sample from each of the four sampling stations. The calculated values are the mean of three replicates; the standard deviation was less than 5% of the mean value.
Quantitative and qualitative analysis of Plankton:
The chlorophyll content of the water was determined according to the method described by Strickland and Parsons [21]. For phytoplankton analysis, 1.5 L water samples were fixed in the field with acid Lugol's solution (1 mL L^-1 of sample). Samples were then allowed to settle for at least 36 h, after which the supernatant was siphoned off and the remaining volume was adjusted to 100 mL. This 100 mL sample was kept at 4°C until analysis. Phytoplankton counts were done using a Wild inverted microscope following the Utermöhl technique [24]. For counting, the simplified methods described by Willen and Hobro-Willen were followed [25,26]. The counts of phytoplanktonic algae (unicellular, colonial or filamentous) were expressed as cells per mL. The algal taxa were identified according to standard references [27][28][29][30]. The appropriate statistic, Brillouin's index [31], was used for quantitative analysis of the species diversity of the phytoplankton. Zooplankton samples were collected at each site with a net of mesh size 80-100 µm and preserved in isopropyl alcohol. Estimation of zooplankton density was made by counting a 1 mL sub-sample of the well-mixed standard sample in a Sedgwick-Rafter counting chamber. The counts were converted to numbers of organisms per cubic meter of water. Zooplankton species were identified according to standard references [32,33].
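Brillouin's index [31] for a sample with species counts n_i summing to N is H = (ln N! − Σ ln n_i!)/N; a small base-R implementation is sketched below with made-up counts. Note, as an assumption, that some authors compute the index with log base 2 or 10, which only rescales the values.

```r
# counts: cells counted per species in one sample (hypothetical numbers)
brillouin <- function(counts) {
  counts <- counts[counts > 0]
  N <- sum(counts)
  (lfactorial(N) - sum(lfactorial(counts))) / N  # natural-log form; lfactorial avoids overflow
}

brillouin(c(120, 45, 30, 8, 2))  # diversity of a five-species count
```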
RESULTS AND DISCUSSION
It is well known that the physical and chemical characteristics controlling life in aquatic habitats, whether saline or brackish water, lead to the appearance of special types of biota [34][35][36].
The annual values of the measured parameters of Al-Khadoud spring and its irrigation canal, shown in Table 1, varied among the different sites except for temperature, which was nearly the same at all sites throughout the period of study. The water of Al-Khadoud spring (sites I and II), before receiving either treated sewage water or re-use drainage water, was characterized by low conductivity, low total dissolved salts and low chemical oxygen demand, while the mean values of the other parameters were always the lowest among the sites.
The data in Table 1 show that the average water temperature of Al-Khadoud spring and its irrigation canal was subject to seasonal variations. The water temperature reached its minimum in winter (16°C), while the maximum (26.9°C) was recorded in summer samples. The water temperature of Al-Khadoud spring and its irrigation canal generally followed that of the air, due to the shallow depth [37][38][39]. In the present investigation, the spring and its irrigation canal did not show proper thermal stratification, as they are extremely shallow (maximum depth 0.5 m). Allott reported that thermal stratification is weak in the shallowest aquatic systems [40]. Generally, it can be said that any increase or decrease in the standing crop of phytoplankton in Al-Khadoud spring and its irrigation canal seemed to be strongly correlated with fluctuations in water temperature. This is in accordance with results obtained by other authors [10,34,36,38,41].
Change in pH value was always toward the alkaline side. It fluctuated between 7.74 in winter at site III and 8.42 in summer at site IV. This general tendency toward the alkaline side may be due to the increased photosynthetic activity of planktonic algae, or to the chemical nature of the water [34,35,42]. The lowest pH and alkalinity values recorded in this investigation may be due to the greater amount of discharged wastewater and also to the decomposition of plankton and organic matter [43][44][45][46]. The conductivity and the Total Dissolved Salts (TDS) of the water at the investigated sites were higher in summer (5.55 mS and 3.42 g L^-1, respectively) but dropped to minimum levels through winter and spring (2.16 mS and 1.12 g L^-1, respectively). On the other hand, sites III and IV were characterized by high electrical conductivity and TDS. The highest electrical conductivity values could be attributed mainly to the high pollution levels in the water, resulting from the high nutrient loads of wastewater [10,35,39,42]. Similarly, the fluctuations of salinity in the North Egyptian lakes from time to time have been explained by differences in the input amount of drainage water [36].
Dissolved oxygen is an important parameter for the identification of different water masses. The oxygen content of the investigated water tended to be higher in summer (17.00 mg L^-1) at site I and lower in winter (4.00 mg L^-1) at site III. The relatively high concentrations of dissolved oxygen recorded in this study (summer) could be mainly attributed to the high light intensity and the increased photosynthetic activity of phytoplankton populations [41]. In this respect, some authors have noted that oxygen supersaturation due to photosynthetic activity is often encountered in regions with abundant phytoplankton [47].
Total alkalinity of Al-Khadoud spring and its irrigation canal was found to fluctuate within a narrow range. However, site IV was characterized by higher concentrations of alkalinity compared with the other investigated sites; this increase may be due to the bacterial decomposition of organic substrates arriving with the re-use drainage water received by this site [38,39,48,49].
Chloride attained its maximum in spring at site IV (1240 mg L^-1) and dropped to its minimum in summer at site II (412 mg L^-1). The high concentrations of chloride recorded in this study could be mainly attributed to wastewater discharge. It seems probable that ions play a significant role in biomass and standing crop. Some authors have stated that chlorides appear to limit algal production directly in nature, but in the form of NaCl [35].
The maximum value of nitrate was found in summer at site IV (4.20 mg L^-1) and the minimum value in autumn at site I (1.7 mg L^-1). The highest values of nitrate-N reflect the direct effect of agricultural runoff [50], while the lowest values of nitrate-N are indicative of phytoplankton uptake. On the other hand, phosphate-P content tended to be high throughout the investigation period. In general, sites I and II were characterized by low phosphate-P content, and sites III and IV by high phosphate-P content. The recorded high phosphate-P values are probably due to the release of great amounts of adsorbed phosphate from the re-use drainage water [39,50,51]. On the other hand, the lowest phosphate concentrations could be attributed to vigorous uptake by the plankton [34,51].
Monovalent and divalent cations play a very important role in the productivity of inland waters. Calcium and magnesium are reported to be of importance for plankton production [52]. In the present study, the values of divalent cations (calcium and magnesium) and monovalent cations (sodium and potassium) were relatively high in all samples, irrespective of some minor seasonal fluctuations. Levels of calcium and magnesium were found to fluctuate within the ranges of 143-172 and 62.5-71.4 mg L^-1, respectively. On the other hand, the concentrations of sodium were found to be higher throughout the study period, exceeding those of calcium, magnesium and potassium in the spring water. Sodium fluctuated from 455 mg L^-1 (in summer at site IV) to 229 mg L^-1 (in winter at site I). Generally, Al-Khadoud spring water showed rather high values of sodium content. Despite its major role in algal growth and photosynthesis, there are only a few instances of either magnesium deficiency or toxicity in lakes [53]. Magnesium is usually present in aquatic systems in large amounts relative to plant needs. Both sodium and potassium play an important role in the productivity of water [54,53]. However, some authors have suggested that the amounts of sodium, calcium and chloride determine the species present rather than the quantitative development of phytoplankton [55].
The chemical oxygen demand was taken in the present study as a measure of the oxygenated state and, additionally, of the amount of organic matter in the water. The data of this study show that COD tended to be higher at site IV throughout the investigated period in comparison with the other studied sites, especially in summer (40.4 mg L^-1). The increase in COD could be attributed to the high organic matter content, which produces a poor oxygenated state of the water resulting from the discharge of untreated wastewater [34,39,56,57].
Generally, after receiving water from the outlets, either treated sewage water or re-use drainage water, the spring water at sites III and IV showed an obvious increase in electrical conductivity, COD, total alkalinity, nitrates, phosphorus, chloride and potassium in comparison with sites I and II (i.e., before receiving). These features indicate pollution with organic wastes, increased salinity and a deteriorated oxygenated state. Based on this, we can say that all these factors can affect both the soil and the plants cultivated in the area of Al-Hassa.
Phytoplankton: It is well known that changes in the physico-chemical characteristics of any water mass lead to concomitant qualitative and quantitative changes in phytoplanktonic organisms [35,36].
On the other hand, the chlorophyll content in spring exceeded that recorded in the other samples (Fig. 4), which could be attributed to vigorous phytoplankton growth [58].
The data of this study show that there were marked seasonal differences in the quantitative and qualitative composition of the phytoplankton communities in Al-Khadoud spring and its irrigation canal (Table 3 and Fig. 1 and 2). In terms of total cell number, the maximum count (28.45×10^5 cells L^-1) was recorded in spring at site IV, whereas the lowest densities occurred in winter (2.5×10^5 cells L^-1) at site I (Fig. 1). The changes in total algal counts throughout the investigation coincided closely with changes in Chlorophyceae abundance.
Four algal groups were recorded throughout the investigation period: Bacillariophyceae, Chlorophyceae, Cyanophyceae and Euglenophyceae (Tables 2 and 3). The data in Fig. 2 show that, at sites I, II and III, the total percentage composition of the four main phytoplankton groups indicates that Chlorophyceae dominated the phytoplankton of Al-Khadoud spring and its irrigation canal throughout the study period. Bacillariophyceae ranked second in dominance. Ranking third were the Cyanophyceae, which were least abundant in winter and spring. Euglenophyceae ranked fourth in order of dominance. On the other hand, site IV was characterized by the dominance of Cyanophyceae in summer and autumn, while Chlorophyceae dominated in winter and spring, with the other algal groups in the same order as at the other investigated sites. It is worth mentioning that in the summer and autumn samples some euglenoids were recorded in high abundance at site IV. The data included in Table 3 (means±SD, n = 3) further revealed that a total of 36 species were identified over the period of the investigation. Of these, 9 species belong to Chlorophyta, 17 to Bacillariophyta, 7 to Cyanophyta and 3 to Euglenophyta. The maximum number of phytoplankton taxa recorded in any one sampling period was 30 species; some recorded species were only rarely recovered. We also found that some sites were marked by the presence of certain algal species, which appeared linked to these sites, especially in summer and spring. Site I was characterized by Mougeotia sp., which can be attributed to the lack of water movement at this site. On the other hand, site IV was characterized by the three recorded Euglena species. This could be due to this site receiving huge amounts of wastewater (re-use drainage water), which contains high concentrations of organic materials conducive to the growth of these algae [34,35]. Generally, the phytoplankton crop showed a remarkable increase compared with previous records [7].
CONCLUSION
The data of Table 3 also show that the maximum diversity index (4.0) was estimated in autumn at site II and in winter at sites I and II, while the minimum (1.7) was in summer at site IV. It should be noted that biological indices of species diversity, based mainly on the composition of phytoplankton, such as those proposed by Pielou and Nygaard, may indicate the pollutional state of water [31,59]. There have been several numerical attempts [60] to express degrees of oligotrophy and eutrophy from a consideration of species complements rather than from nutrient levels [61]. Some workers believe that the biological estimation of the degree of eutrophication and pollution of aquatic ecosystems is probably more informative than chemical determinations [36,62]. According to the scales of Staub [63], sites I and II were slightly polluted in all seasons; site III was slightly polluted in autumn and summer but lightly polluted in summer and spring; and site IV was moderately polluted throughout the study period.
Zooplankton:
The zooplankton recorded at the four investigated sites of Al-Khadoud spring and its irrigation canal over the study period are shown in Table 4. The data show that the zooplanktonic fauna identified in this aquatic body is typical of permanent freshwater and brackish water. Eleven species were recorded: 5 belonged to Cladocera, 4 to Rotifera and 2 to Chironomidae. Zooplankton species like Thermocyclops hyalinus, Mesocyclops sp., Moina micrura, Brachionus caudatus, B. falcatus and Filina longiseta were recorded at all sites investigated throughout the study period. Other species were found to fluctuate from site to site. Chironomidae species are poorly represented in the analyzed samples. This poverty is possibly the result of the different modes of sampling; indeed, the Chironomidae are often benthic forms and are rarely found in surface water. Generally, site IV showed the highest variety of zooplankton compared with the other studied sites. The scarcity of zooplankton species in Al-Khadoud spring and its irrigation canal could be due to the nature of these reservoirs, as both receive re-use drainage and treated water [64]. | 2019-03-30T13:09:13.637Z | 2009-06-30T00:00:00.000 | {
"year": 2009,
"sha1": "68055bdb1b823ed5a2c35ece0407ad3cb7c1c649",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajessp.2009.434.443",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "790b1b470bba76b97caa02fd9cff11f33c0cce41",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
248250765 | pes2o/s2orc | v3-fos-license | A practical approach with drones, smartphones, and tracking tags for potential real-time animal tracking
Abstract Drones are increasingly used for fauna monitoring and wildlife tracking; however, their application to wildlife tracking is restricted by the difficulty of developing such systems. Here we explore the potential of drones for wildlife tracking using an off-the-shelf system that is easy to use by non-specialists, consisting of a multirotor drone, smartphones, and commercial tracking devices using Bluetooth and Ultra-Wide Band (UWB). We present the system configuration, explore the operational parameters that can affect detection capabilities, and test the effectiveness of the system for locating targets by simulating target animals in savanna and forest environments. The self-contained tracking system was built without hardware or software customization. In 40 tracking flights carried out in the Brazilian Cerrado, we obtained a detection rate of 90% in savanna and 40% in forest areas. In tests with targets in movement (N = 20), the detection rates were 90% in the savanna and 30% in the forest areas. The mean spatial accuracy obtained by the system was 14.61 m, being significantly better in savanna (x̄ = 10.53 m) than in forest areas (x̄ = 13.06 m). This approach to wildlife tracking facilitates the use of drones by non-specialists at an affordable cost for conservation projects with limited resources. The reduced size of the tags, the long battery life, and the lower cost compared to GPS-tags open up a range of opportunities for animal tracking.
During the last half century, wildlife tracking has made a major impact in ecology and conservation biology (Kays et al. 2015). Aimed at investigating animals' movement, wildlife tracking is one of the main tools to explore species' behavior and ecology in diverse habitats (Lahoz-Monfort and Magrath 2021). Over the years, new technologies have been used for wildlife tracking: conventional radio telemetry (very high frequency, VHF); Argos Doppler tags (aka platform transmitter terminals, PTTs) based on the satellite network ARGOS System (https://www.argos-system.org); and Global Navigation Satellite Systems (GNSS) tracking tags. Although GNSS tracking provides the best spatial and temporal resolutions, the small size of many animals limits the use of this technology, as tags are often too large or heavy to be fitted to subject animals (Cooke et al. 2004). The smallest GNSS-tracking device with data download via Bluetooth technology weighs 15 g (Thomas et al. 2011), and considering that tracking devices should not weigh more than 3-5% of the animal's body mass (Kenward 2001), the use of currently available GNSS-tracking devices is limited to animals heavier than 500 g. In addition, the high cost of these devices, which can reach approximately $1,500 with manual download or $4,000 with remote download services (Thomas et al. 2011), is another challenge to be overcome by researchers and currently limits the use of this technology in ecology and conservation studies.
In recent years, the use of drones (Unmanned Aerial Systems, UAS) has gained popularity in wildlife studies (Schiffman 2014; Jiménez and Mulero-Pázmány 2019). In both terrestrial and aquatic ecosystems, drones are increasingly used for fauna monitoring (Linchant et al. 2015; Lyons et al. 2019), to study species' spatial distribution (Mulero-Pázmány et al. 2015; Baxter and Hamilton 2018), and for wildlife tracking (Cliff et al. 2018; Nguyen et al. 2019). The main benefits of UAV-based Radio Tracking Systems (also known as UAVRTS) compared with conventional methods are the reduction of logistical and labor-intensive challenges in the field and the increase in fieldwork operational safety (Linchant et al. 2015; Cliff et al. 2018). In addition, UAVRTS studies have shown that these systems present a significantly stronger signal than ground-based ones, which helps in detecting species such as small forest birds (Tremblay et al. 2017), and may provide localization estimates with 53% less error than those obtained by experienced radiotelemetry users (Shafer et al. 2019).
Currently available UAVRTS use the principle of conventional radio telemetry for wildlife localization in 2 ways: 1) range-based or 2) bearing-based (Hui et al. 2021). Range-based systems, such as those developed by Santos et al. (2014) and Nguyen et al. (2019), are less difficult to build than bearing-based systems because the antenna configuration is simpler (Cliff et al. 2018; Dressel and Kochenderfer 2018). However, for both systems, considerable technical knowledge is still needed both for the development and customization of the hardware and for data analysis, generally based on estimation approaches such as particle, grid, and Kalman filters (Dressel and Kochenderfer 2018; Nguyen et al. 2019). Thus, the application of drones for tracking wildlife is restricted to those users with the technical capacity to develop such systems.
Here, we explore a practical approach to potential wildlife tracking using an off-the-shelf system consisting of a multirotor drone, smartphones, and tracking tags which is easy to use by non-specialists. Specifically, we describe the setup of the system, explore operational parameters that can affect detection capability, and test the system's effectiveness in locating targets that simulate tagged animals in open and forest-covered environments. To our knowledge, this is the first experiment where drones are associated with off-the-shelf Bluetooth and Ultra-Wide Band (UWB) technologies for wildlife tracking.
Materials and Methods
Off-the-shelf tracking system overview The off-the-shelf tracking system we developed comprises a DJI Mavic Pro multirotor drone (https://www.dji.com/br/mavic), 2 smartphones (iPhone 8 and iPhone 11, Apple Inc.), and tracking tags known as AirTags from Apple Inc. (Figure 1). To assemble the system, we created a structure to attach the iPhone 8 to the Mavic Pro drone (Figure 1B) using pre-existing models from 3D-printing webpages (https://www.thingiverse.com/). AirTags (https://www.apple.com/airtag/) are Apple tracking tags (diameter = 31.9 mm; thickness = 8.00 mm; weight = 11 g) with IP67 water resistance (IEC 60529), a built-in speaker, Bluetooth technology with a transmission range of up to 100 m, UWB support, an accelerometer sensor, and an estimated battery life of 1 year (Figure 1C). UWB is similar to Wi-Fi and Bluetooth technology but has a significantly higher bandwidth than most narrowband signals used in communications, with low-power signals, less interference, and low energy consumption.
In this system, we set up the AirTag to act as a transmitter of Bluetooth and UWB signals. The iPhone 8 is physically attached to the drone and works as 1) a receiver of the AirTag's Bluetooth signals and 2) a transmitter of the tag coordinates to the cloud. The iPhone 11 works as 1) a receiver retrieving the coordinates from the cloud and 2) a receiver of the AirTag's Bluetooth and UWB signals (Figure 2). The AirTags do not obtain locations using GPS technology but work through the network of other anonymous iOS and iPadOS devices nearby. Therefore, the AirTag needs to find the nearest Bluetooth-enabled device and take that device's location data in order to work. The iPhone 8 needs Global System for Mobile (GSM) coverage in order to be able to send the location to the cloud. We chose the iPhone 8 because the Bluetooth version 5 incorporated in this model offers data transmission speeds of up to 50 Mb/s. To set up the system, it is necessary to link the AirTag to an iPhone handled by the researcher. To use the UWB technology (Figure 2; step 5), the iPhone model must have the same U1 chip present in the AirTag, so we recommend the use of the iPhone 11 or newer. Once the AirTag is linked to the iPhone, the "Lost Mode" function must be activated within the "Find" application of the iPhone. After this configuration is set up, the iPhone 11 becomes the device that receives the coordinates of the AirTag from the cloud. The iPhone 8 attached to the drone receives the Bluetooth signal transmitted by the AirTag and transmits the coordinates to the cloud (Figure 2; Steps 1-3), which are then retrieved by the iPhone 11 linked to the AirTag.
Parameter control flights
Before starting the tracking flights, we carried out 20 test flights to define the maximum flight altitude that allows receiving the tag's Bluetooth signal. The first step is checking that both smartphones are within GSM coverage, which can be done by sending and receiving data between them. We performed the drone take-off with the iPhone 8 attached to it at a minimum distance of 200 m from the tag in an open, non-urban area, with no physical barrier between the drone and the tag. We flew the drone up to an altitude of 120 m AGL (above ground level), which is the maximum allowed by local legislation (ANAC, 2017), and then flew horizontally toward the tag until the drone was positioned over it. We made the drone descend vertically at a maximum speed of 1 m/s until the Bluetooth signal sent by the tag was detected by the iPhone 8 and the coordinate information was received by the iPhone 11 from the cloud. We performed the above procedure five times and considered the maximum detection altitude to be the average value obtained (x̄ = 52.8 m). At this average altitude, we performed five horizontal approach flights at a speed of 5 m/s and again obtained the average value (x̄ = 50.4 m). Considering the average values obtained, we performed the subsequent flight tests in open environments at an altitude of 50 m AGL. We repeated the same procedure in forest environments and obtained average altitudes of x̄ = 32.6 m in vertical flights and x̄ = 30.4 m in horizontal flights, and therefore chose to perform the subsequent drone flight tests at an altitude of 30 m AGL.
Drone tracking flights
We tested the off-the-shelf tracking system design in 2 habitat types, savanna and forest areas, both within the Cerrado biome. The tests were carried out in August 2021, in 2 areas adjacent to Chapada das Mesas National Park, Maranhão, Brazil (Figure 3). Flights were carried out in the savanna area within the "cerrado stricto sensu", a typical savanna physiognomy with forest cover below 30%, and in the forest area within the "Cerradão", a physiognomy that has dense vegetation cover and a predominant arboreal stratum (Sawyer et al. 2017).
In both areas we carried out two types of experiments: stationary and in motion. For stationary experiments, we placed the tags randomly on the ground in the study area. For the tests in motion, a researcher walked randomly in the study area holding a tag 1 m above the ground. In all tests, take-off was done 200 m away from the perimeter of the study area, with the pilot unaware of the tags' location. Lawnmower-pattern flights covering the 10-hectare area were performed using the DroneDeploy free-version software (https://www.dronedeploy.com/). In the savanna, we performed flights at 50 m AGL, with 60% front and side overlap, 5 m/s flight speed, and the "terrain awareness" app function activated. In the forest, we performed flights at 30 m AGL, with 50% front and side overlap, 5 m/s flight speed, and the "terrain awareness" app function activated. On each of the tracking flights, the tags were placed at different locations inside the study area. We carried out flights between 08:00-09:30 h and 16:00-17:30 h local time and under the same environmental conditions as the parameter control flights. During execution of the lawnmower-pattern flight, once the Bluetooth signal was identified by the smartphone coupled to the drone and we confirmed it was sending the coordinates to the smartphone with the researcher, the pilot disabled the automatic flight mode and enabled manual flight mode to try to keep the captured Bluetooth signal. At that moment, the researcher, without knowledge of the location of the tag and holding the iPhone 11 previously linked to the tag with the "Lost Mode" activated, started the process of terrestrial tracking of the tag as instructed by the Maps application on the smartphone (Apple Inc.). During the search process, when entering the Bluetooth coverage radius, ±50 m in open areas and ±30 m in forest areas, the smartphone starts to use the origin of the Bluetooth signal rather than the location received from the cloud. Once inside the coverage radius of the UWB technology, ±10 m for both types of environment, the smartphone automatically changes the tracking mode to directional search with centimeter accuracy (Figure 4).
Data analysis
Considering that this off-the-shelf system indirectly involves the use of GNSS, we measured the system's effectiveness based on the two main steps in the overall operation of satellite telemetry units: fix acquisition and data transfer (Hofman et al. 2019). Adapting this framework, we consider fix acquisition to be steps 1-3 (Figure 2) and data transfer to be step 4. Acknowledging that there may be a failure or delay between steps 3 and 4 due to the GSM signal of both smartphones, we considered a detection effective when the sending of coordinates in step 4 was performed while the drone was still in flight. Considering the average fix acquisition rate of 66% found by Matthews et al. (2013), we tested for detection probabilities above 70% using the binomial test on the proportion of detections, overall and by type of environment and type of experiment.
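This benchmark test maps directly onto base R's binom.test(); with the 20 savanna flights and the 90% detection reported in the Results (18 of 20), it reproduces a one-sided P ≈ 0.035 against the 70% reference. The counts below are taken from the reported figures, not raw data.

```r
# 18 detections in 20 savanna flights: is detection probability > 70%?
binom.test(x = 18, n = 20, p = 0.70, alternative = "greater")
# one-sided P ~= 0.035, matching the value reported in the Results
```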
To find out whether there is any significant association between the environment and the type of experiment that may influence the system's detection capacity, we performed a GLM (Generalized Linear Model) using a binomial distribution and a logit link function with the interaction between the 2 factors. The model selection process was done using the R "drop1()" command, which drops one explanatory variable at a time and applies an analysis of deviance test each time. The significance of the factors was assessed using the command "Anova()". The heterogeneity of residuals was assessed by visual examination of the figures. GLM models with no random factors were fitted using the "glm()" function. In all stationary tests, we recorded the coordinates of the tags using a Garmin eTrex 30× GPS. To calculate the static accuracy, that is, the distance between the GPS coordinates and the coordinates obtained by the off-the-shelf tracking system, we used the formula based on the Spherical Law of Cosines: acos(sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(long2 − long1)) * 6371, with latitudes and longitudes expressed in radians and 6371 being the Earth's mean radius in km.
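A sketch of this workflow in R is given below; the data frame det and its column names are placeholders, and Anova() (with a capital A) is assumed to come from the car package, since base R only provides anova(). The distance helper is a direct translation of the spherical-law-of-cosines formula above, with a degrees-to-radians conversion added; the example coordinates are arbitrary illustrative points.

```r
library(car)  # Anova() with a capital A is provided by the car package

# det: one row per flight (hypothetical layout)
#   detected = 0/1, habitat = savanna/forest, experiment = static/motion
m <- glm(detected ~ habitat * experiment,
         family = binomial(link = "logit"), data = det)
drop1(m, test = "Chisq")  # drop one term at a time (analysis of deviance)
Anova(m)                  # significance of each factor

# Great-circle distance (km) via the Spherical Law of Cosines;
# decimal-degree inputs are converted to radians first
sphere_km <- function(lat1, long1, lat2, long2) {
  r <- pi / 180
  acos(sin(lat1 * r) * sin(lat2 * r) +
       cos(lat1 * r) * cos(lat2 * r) * cos((long2 - long1) * r)) * 6371
}
sphere_km(-7.0000, -47.0000, -7.0001, -47.0001) * 1000  # metres (~16 m)
```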
We used the t-test to compare the mean accuracy values obtained in the savanna and forest areas. For model validation, we tested for normality (Shapiro-Wilk) and set the significance level at 0.05. All statistical analyses were performed using RStudio version 1.4.1 (R Core Team, 2019).
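This comparison reduces to two base-R calls; acc is a placeholder data frame with one row per detected stationary tag, not the authors' actual object.

```r
# acc: error_m = GPS-vs-system distance (m), habitat = savanna / forest
shapiro.test(acc$error_m)              # Shapiro-Wilk normality check
t.test(error_m ~ habitat, data = acc)  # compare mean accuracy by habitat
```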
Results
We performed 40 tracking flights with the off-the-shelf tracking system, 20 in savanna and 20 in forest, totaling 9.23 flight hours (Table 1). Tracking flight times varied between 5 and 22 min (x̄ = 13.85 ± 6.07), from take-off until obtaining the first tag coordinate. Due to the lower altitude and lower detection rate, the total flight time in the forest area was 6.45 h, while in the savanna area it was 2.78 h.
After conducting all the steps illustrated in Figure 2, we obtained an overall detection rate of 65% (90% in the savanna area and 40% in the forest area). The probability of detection above 70% was only significant in the savanna (binomial test, P = 0.035). The interaction between the environment and type-of-experiment factors did not significantly influence the system's detection rate (χ²₁ = 0.23, P = 0.63). However, the detection rate of the system was higher in the savanna (90% detection) than in the forest (30% detection; χ²₁ = 12.0411, P < 0.01), while no significant differences were observed between tests in motion (60% detection) and static tests (70% detection; χ²₁ = 0.6099, P = 0.43). In the stationary tests where there was detection, we calculated a mean spatial accuracy of 14.61 ± 0.53 m (N = 14) based on the R95 parameter (Figure 5). In the savanna area, the average spatial accuracy was 10.53 ± 1.53 m (N = 9), and in the forest area 13.06 ± 1.73 m (N = 5); there was a significant difference in the spatial accuracy obtained between the two environments (t₁₂ = 2.818, P = 0.015; Figure 5).
Discussion
Finding ways to make wildlife tracking easier and less expensive is a constant challenge for researchers. In this study, we propose a user-friendly system combining drones, smartphones, and tags using Bluetooth and UWB signals that could be potentially applied for animal tracking. To our knowledge, this is the first attempt to use an off-the-shelf tracking system with drones, Bluetooth, and UWB technology.
We found the off-the-shelf tracking system's tag detection rate in savanna areas (90%) was higher than the average rate of 66% found by Matthews et al. (2013) for several Australian mammal species and similar to the 85% rate obtained by Hoffman et al. (2019), who analyzed the performance of satellite telemetry units in terrestrial wildlife research across the globe. On the other hand, the detection rate in environments with forest cover was low, at 40%. This is likely due to the vegetation biomass of the trees, which blocks the transmission of the Bluetooth signal. In step 5 of all tests, after receiving the tag coordinates via the cloud, the researchers, in addition to using Bluetooth and UWB technology, used the tag's sound-emission function, demonstrating that this technology can offer an advantage in the precision-search stage of wildlife tracking, mainly for small animals with cryptic behavior and in forest areas where the animal can be hidden beneath vegetation. However, sound emission by a tag attached to an animal would be likely to cause disturbances in animal behavior that have not yet been analyzed.
The off-the-shelf tracking system's accuracy of around 12 m is better than that of lightweight GPS collars, whose accuracy averages 30 m (e.g., as used for research on the common brushtail possum in a suburban environment, Adams et al. 2013). When compared with the few studies that developed tracking systems involving UAVRTS, such as Nguyen et al. (2019), with an average precision of 22.7 m, Cliff et al. (2018) with 51.4 m, and Hui et al. (2021) with 25.9 m, we note that the system assembled in this study was more accurate (14.61 m). Also, as opposed to the UAVRTS of Cliff et al. (2018), Nguyen et al. (2019), and Hui et al. (2021), the system we propose requires no development or customization of hardware or algorithms, since all parts of the system can be purchased commercially and ready to use. However, we emphasize that the comparison of the accuracy of this system with other tracking systems based on radio frequency (UHF/VHF) is only valid within the context of the experiment, since position calculation in radio frequency is done through a quadratic regression estimate based on the number of "pings" and the shape of the UHF/VHF signal (Desrochers et al. 2018), while positioning via GNSS works through satellite triangulation (Hofman et al. 2019). This off-the-shelf tracking system, although using Bluetooth and UWB as its distinguishing technology, has application characteristics similar to radio frequency and GNSS telemetry systems. As with radio frequency tracking systems, there is a need for a field search for the tagged animals. And just as in GNSS telemetry, this system depends on the satellite triangulation system, but with the limiting factor of needing GSM coverage. On the other hand, the field effort needed for this system compared with the traditional radio frequency technique may be lower, as it reduces the need for the researcher to travel by land and can also enable increased search coverage depending on the flight capacity of the drone used. The difference in cost between GPS-tags of similar size and the tags used in this system is another aspect to be taken into account. While an AirTag can be found for $29, GPS-tags of similar sizes can cost up to $2,000 (Lahoz-Monfort et al. 2021). In addition, its 1-year battery life and its 10 g weight would allow the tracking of any animal with a minimum weight of 350 g, considering the recommended limit of not exceeding 3-5% of the animal's weight (Kenward, 2001).
The reduced size and weight of AirTags allow attaching them to different types of animals: they can be attached as a necklace on mammals or fixed as a backpack on some species of birds and reptiles. Considering that AirTags have IP67 water resistance (IEC 60529), it may not be necessary to include protective structures, although these are recommended for animals with aquatic habits, since the time that the tag can tolerate water is limited to 30 min at a maximum depth of 1 m. In cases where the tags need to be fixed with protective structures, such structures should not significantly affect the emitted Bluetooth and UWB signals, since these technologies have a higher bandwidth than most narrowband signals and are usually only affected by other electromagnetic sources within the same communication channel.
Although we used a specific drone model in this off-the-shelf tracking system, the lack of hardware customization allows the use of the different parts of the system (smartphones and tracking devices) on different drone platforms, paying attention to the prior parameterization of speed and altitude that will allow the Bluetooth signal connection. Different multirotor platforms, or even fixed-wing platforms such as the Asa-Branca I model (Mesquita et al. 2021), developed for use in the study of biodiversity conservation over large areas, could be incorporated into this system, thus increasing the tracking coverage area. Another potential modification of the system that does not affect its functioning core is changing the types of smartphones and tags, paying attention to the latest Bluetooth classes and versions. In this study, we used Apple-branded smartphones and tags due to the prior availability of the devices to the researchers. However, other brands such as Samsung have smartphones and tags with the same type of operation and capacity. Considering that a single tag of this system can be tracked by different smartphones, since the system works as a kind of network, we envisage the possibility of using more than a single drone, or even a drone network with attached smartphones, in order to locate different targets in an area, making the tracking process possibly more efficient.
Although we demonstrated the feasibility of this off-the-shelf tracking system on controlled targets in savanna areas, we acknowledge that tests on animals may present variable results, whether due to the complexity of the behavior of different species or to the different ways of fixing and positioning tags on animals. Therefore, carrying out new experiments with this system on real animals will help to understand the actual possibilities of use. In addition, further research is still needed to assess the effects of other operational parameters (flight speed, altitude, flight types, and tag displacement speed) as well as environmental influences (vegetation types, relative air humidity, and arboreal stratum height). Determining which factors may influence the detection capability of this system could make it more useful not only in savanna areas but possibly also in other areas with higher forest cover. | 2022-04-20T15:21:34.637Z | 2022-04-18T00:00:00.000 | {
"year": 2022,
"sha1": "8814f83d0f6c93dcb3d1da6db256d5b8728bebad",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/cz/advance-article-pdf/doi/10.1093/cz/zoac029/43518603/zoac029.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35af061937baf4c036f321301e992ecffac16c48",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
231598634 | pes2o/s2orc | v3-fos-license | The pineapple MADS-box gene family and the evolution of early monocot flower
Unlike the flower of the model monocot rice, which has diverged greatly from the ancestral monocot flower, the pineapple (Ananas comosus) flower is more typical of monocot flowers. Here, we identified 43 pineapple genes containing MADS-box domains, including 11 type I and 32 type II genes. RNA-seq expression data generated from five pineapple floral organs (sepals, petals, stamens, pistils, and ovules) and quantitative real-time PCR revealed tissue-specific expression patterns for some genes. We found that AcAGL6 and AcFUL1 were mainly expressed in sepals and petals, suggesting their involvement in the regulation of these floral organs. A pineapple ‘ABCDE’ model was proposed based on the phylogenetic analysis and expression patterns of MADS-box genes. Unlike rice and orchid with frequent species-specific gene duplication and subsequent expression divergence, the composition and expression of the ABCDE genes were conserved in pineapple. We also found that AcSEP1/3, AcAG, AcAGL11a/b/c, and AcFUL1 were highly expressed at different stages of fruit development and have similar expression profiles, implicating these genes’ role in fruit development and ripening processes. We propose that the pineapple flower can be used as a model for studying the ancestral form of monocot flowers to investigate their development and evolutionary history.
lemma-like organs and carpels in the second and third whorls 10,11. OsMADS3 and OsMADS58 are homologous to the Arabidopsis AG gene, and these two copies show partial sub- and neo-functionalization 8,12. OsMADS3 is primarily involved in stamen and ovule development, and a deletion mutant was found to have normal carpel development 13. A knockdown mutation in the OsMADS58 gene led to disrupted floral meristem determinacy rather than floral organ defects. In addition to floral determinacy defects, the silencing of both OsMADS3 and OsMADS58 resulted in homeotic conversion of female and male reproductive organs into palea/lemma-like organs and lodicules, respectively, similar to the phenotypes of the ag mutant in Arabidopsis 13. Two D-function genes, OsMADS13 and OsMADS21, underwent functional diversification. While OsMADS13 is primarily involved in ovule formation, OsMADS21 lost its role in this process 14,15. The rice genome contains five SEP-like subfamily homologs: OsMADS1, OsMADS5, OsMADS7, OsMADS8, and OsMADS34. Among them, OsMADS1 and OsMADS34 contribute to the specification of the four whorls of floral organs and control the determination of spikelet meristems 16. The expression and function of the ABCDE MADS-box genes are known to have helped establish the unique floral architecture of rice and orchid 17,18. In contrast, the genetic basis of pineapple flower development and the evolutionary history of the molecular mechanisms in ancestral monocot flowers remain poorly understood. Therefore, a comprehensive study of this gene family, especially the phylogeny and roles of MADS-box genes in flower development, is urgently needed for pineapple 19. To classify the MADS-box genes in pineapple and elucidate their evolutionary relationships, we utilized the pineapple genome as a reference for a systematic phylogenetic analysis. We also studied the expression patterns of pineapple MADS-box genes at different stages of flower and fruit development. We identified candidate floral ABCDE genes in pineapple, and the gene numbers and expression patterns could explain the conserved floral architecture of the pineapple flower to a large extent.
Results
Pineapple flower and the ancestral monocot flower. Pineapple, orchid, and rice vary vastly in terms of flower morphology. Pineapple, a perennial monocot of the Bromeliaceae family, is indigenous to Central and South America 20. Each pineapple flower, from the outside inwards, is composed of three broadly ovate and fleshy sepals in whorl 1, three long elliptic petals in whorl 2, six stamens in whorl 3, and a pistil in whorl 4 with three fused carpels (Fig. 1A). Orchids have a unique labellum and gynostemium. Grass species, including rice, have a unique floral organization and morphology of florets, which comprise grass-specific peripheral organs, including a pair of bract-like organs (lemma and palea), two lodicules, and conserved sexual organs (six stamens and a pistil with a single carpel) (Fig. 1B). Conversely, individual pineapple flowers resemble the proposed ancestral monocot flower (Fig. 1B).

Figure 1. (A) Inflorescence of pineapple. Dense pineapple flowers are arranged in a spiral at the periphery of the spadix rachis. Each flower comprises sepals, petals, stamens, and pistils and is protected by one thick bract. (B) Floral diagrams of pineapple, rice, and the proposed ancestral monocot. Individual pineapple flowers are trimerous with three sepals (green) and petals (purple) in the two outer whorls, six stamens arranged in two whorls with three organs in each, and one pistil with three fused carpels in the center. In rice, there are two outer perianth organs (green, adaxial palea and abaxial lemma); two lodicules (red) internal to the lemma, corresponding to petals in non-grasses; six stamens in one whorl; and a single carpel. The floral structure of the ancestral monocot is similar to that of pineapple, except for the two outer whorls of perianth organs. se sepal, pe petal, sta stamen, pa palea, le lemma, lo lodicule, ca carpel.
The identification of pineapple MADS-box genes.
To identify the MADS-box genes in pineapple, we employed a profile hidden Markov model (HMMER) search against the pineapple genome protein database using the SRF-TF domain (PF00319) as a query. Redundant and extremely short sequences were removed, and conserved MADS domains were detected by verifying the sequences against the Pfam database. Forty-three candidate pineapple MADS-box proteins with a complete M-domain were obtained and designated AcMADS1 to AcMADS43 (Table 1, Supplementary Table S1). To clarify the evolutionary relationships of MADS-box genes across plant species, MADS-box genes from ten other plant species were detected using the same method described above for pineapple (Fig. 2). We found that the 43-member MADS-box gene family of pineapple was relatively small compared to those of other species. The pineapple MADS-box genes were subsequently classified as type I (11) and type II (32) MADS-box genes, according to their phylogenetic relationships with well-characterized MADS-box gene members in Arabidopsis.
Phylogenetic analysis of pineapple MADS-box genes. MADS-box genes can be assigned to two phylogenetically distinct groups: type I and type II 28. To further assess the phylogenetic relationship of pineapple MADS-box genes with those in other species and to assign them to specific subfamilies, two phylogenetic trees were constructed, one for type I and one for type II (Fig. 3, Supplementary Fig. S1). Tree construction was based on multiple sequence alignments of MADS-box protein sequences from pineapple and five additional species, including basal angiosperm, core eudicot, and monocot species. For the type I group, the 11 pineapple genes were divided into the Mα, Mβ, and Mγ subgroups, with six, two, and three members, respectively, which is similar to other species 6,29. The type II group is classified into the MIKCC and MIKC* subclades, based on structural divergence at the I domain 30. The pineapple MIKCC proteins were further divided into 13 subfamilies (AGL6, SEP, AP1/FUL, OsMADS32-like, SOC1, B-sister, AP3/PI, SVP, AGL12, AGL15, ANR1/AGL17, AG/AGL11, and FLC) (Fig. 3, Supplementary Fig. S2). Apart from the AGL15 clade, pineapple MIKCC genes were present in 12 of the 13 clades alongside their counterparts in Arabidopsis and rice, indicating that the genes of the AGL15 clade were lost in pineapple. SOC1 and ANR1/AGL17 were the largest clades, both with five members, whereas AGL6 and OsMADS32-like each had only one member. Compared with the MIKCC gene numbers in rice and orchid, pineapple had fewer members in the AP1, B-PI, Bs, AG, AGL6, and SEP clades. In contrast, significant expansion was observed in the SOC1, ANR1, AGL11, and SVP clades (Fig. 3).
Conserved ABCDE genes in pineapple.
We conducted more detailed phylogenetic analyses to identify pineapple genes homologous to Arabidopsis and rice genes involved in the ABCDE model of floral development. For A-class genes, the monocot AP1/FUL-like group can be subdivided into two main subclades: FUL-like I and FUL-like II 7. The two AP1/FUL-like members from pineapple were evenly assigned to these two subclades, with AcFUL1 closely related to the rice FUL-like I genes OsMADS15 and OsMADS14, and AcFUL2 similar to the rice FUL-like II genes OsMADS18 and OsMADS20 (Fig. 4A). The three B-class candidates were designated AcAP3, AcPI, and AcBS. Phylogenetic analysis showed that AcAP3 belonged to the B-AP3 clade, and only one member, AcPI, corresponded to the B-PI clade (Fig. 4B). Most monocots have only one B-AP3 member, the exception being orchid with four copies 17,31 (Fig. 4B). AcBS, the B-sister homolog in pineapple, formed a sister group with the genes of the AP3/PI clade. In pineapple, the AG-like subfamily had four members, one of which (AcAG) consistently grouped into the C lineage (AG), while the others (AcAGL11a-c) were associated with D function (STK/AGL11) (Fig. 4C). According to the sequence and phylogenetic analyses, AcAG formed a monophyletic lineage along with two rice MADS-box genes (OsMADS3 and OsMADS58), whereas AcAGL11a, AcAGL11b, and AcAGL11c formed another monophyletic lineage along with the D-lineage MADS-box genes from other monocot species (e.g., OsMADS13 and OsMADS21). The SEP subfamily in pineapple had only two members, AcSEP1 and AcSEP3, compared to five in rice and six in orchid (Fig. 4D). The topology of the phylogenetic trees grouped AcSEP3 into a subclade (AGL9 or SEP3) with two rice genes, OsMADS7 and OsMADS8. AcSEP1 was grouped into another subclade (AGL2/3/4 or SEP1) and formed a separate monocot subgroup with three rice MADS-box genes, OsMADS1, OsMADS5, and OsMADS34. There was only one member from pineapple in the AGL6 lineage, compared to two homologous genes in rice, OsMADS6 and OsMADS17. Overall, the phylogenetic analysis revealed that pineapple has fewer ABCE genes compared to rice and orchid (Fig. 5).
Expression patterns of MADS-box genes during vegetative and reproductive development.
Using publicly available transcriptome data and the expression profiles of MADS-box genes in specific developmental processes in pineapple 19,32,33, we analyzed expression levels in four vegetative and reproductive organs (root, leaf, flower, and fruit) at different developmental stages (Fig. 6; Supplementary Table S2). Three major clusters (A1, A2, and A3) of expression patterns were distinguished according to the expression specificity in different tissues. The A1 cluster included two expression groups. The first group comprised AcSEP1, AcSEP3, AcAGL6, and AcAG, which were expressed primarily in leaves and flowers and accumulated at very high levels during fruit development. In the second group, all three members classified as D genes (AcAGL11a-c) and one AP1 member (AcFUL1) were strictly restricted to fruit. The A2 cluster contained almost all type I genes (except for AcMADS40) and several members from MIKC subclades (e.g., AcBS, AcSOC1d/c, and AcSVP1/3). These genes exhibited relatively low transcript accumulation in almost all tissues, except for the B-class gene AcAP3, which was moderately expressed in flowers and leaves, and AcANR1a, which was expressed in roots. The genes in the A3 cluster segregated into three major expression groups. The first group contained three genes, AcFUL2, AcMADS23, and AcMADS40. AcFUL2 was mainly detected in stage S4 fruit, and AcMADS23 and AcMADS40 were expressed in flowers, leaves, and roots. The second group comprised AcSOC1c, AcSVP1, and AcAGL12a/b, which had root-specific expression. The third group comprised AcPI, AcSOC1a, and AcFLC1. AcSOC1a and AcFLC1 were evenly expressed in almost all organs, whereas AcPI exhibited tissue-specific expression, with FPKM values more than 20-fold higher in flowers and leaves compared to roots.
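As an aside for readers reproducing this kind of analysis, the A1-A3 grouping is a standard gene-wise-normalize-then-cluster computation. The sketch below is illustrative only; the input file name, linkage settings, and three-cluster cut are assumptions, not the authors' exact pipeline.

```python
# Illustrative expression clustering; the input file, linkage settings, and
# three-cluster cut are assumptions, not the authors' exact pipeline.
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage

# Rows: genes (AcMADS1..AcMADS43); columns: tissue/stage FPKM values.
fpkm = pd.read_csv("acmads_fpkm.csv", index_col=0)

# Gene-wise z-score so clustering reflects the expression pattern,
# not the absolute magnitude.
z = fpkm.sub(fpkm.mean(axis=1), axis=0).div(fpkm.std(axis=1), axis=0)

# Hierarchically cluster genes and cut the tree into three groups (cf. A1-A3).
tree = linkage(z.values, method="average", metric="correlation")
for gene, c in zip(z.index, fcluster(tree, t=3, criterion="maxclust")):
    print(gene, f"cluster A{c}")
```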
Expression patterns of floral MADS-box genes in pineapple.
To determine whether certain genes were associated with specific pineapple floral organs, we analyzed RNA-seq transcriptome data for MADS-box genes in five floral organs (sepals, petals, stamens, pistils, and ovules) at different developmental stages 32,33. Four major clusters (B1, B2, B3, and B4) of expression patterns were distinguished (Fig. 7; Supplementary Table S3). The B1 cluster included six genes, which were subdivided into two expression groups. The first group contained three genes (AcAGL6, AcAG, and AcPI) that were highly expressed in three floral organs. AcSEP3, AcSOC1a, and AcFLC1 in the second group all had abundant transcript levels in all whorls of floral organs, which indicated extensive involvement in the physiological processes of flower development and organogenesis. The B2 cluster contained three genes, a BS subfamily gene (AcBS) and two AGL11 subfamily genes (AcAGL11a and AcAGL11c). These were highly expressed throughout all developmental stages of ovules only. The B3 cluster had three expression groups, and the genes exhibited diverse expression profiles. The first expression group contained three genes, AcAGL11b, AcFUL1, and AcSEP1, which belonged to distinct MIKC subfamilies. AcAGL11b was mainly expressed in reproductive organs (stamens and pistils). AcFUL1 was mostly detected in whorls 1 and 4; AcSEP1 in whorls 1, 2, and 4. In the second group, AcMADS23 and AcMADS40 were detected at low levels in all four whorls. The third group included three genes with moderate or low expression in certain stages of a specific whorl. The remaining MADS-box genes composed the B4 cluster and included most of the type I genes (except for AcMADS40) and some members of the SVP, ANR, and SOC1 subfamilies. Consistent with the expression analysis results in floral tissues, these genes had low transcript levels or no significant expression in each of the floral organs, suggesting that they are not important for floral organ development. AcMADS43 expression was not detected at any developmental stage. The putative ABCDE model genes had typical temporal and spatial expression profiles in the five analyzed floral organs. The expression of AcFUL1, an A-class gene, was high in sepals but was also detected at lower levels in pistils and stamens. The B-PI lineage gene AcPI was highly expressed in petals, stamens, and pistil tissues, whereas the AP3 lineage gene, AcAP3, showed no appreciable expression in these organs. The B-sister class gene AcBS was only expressed in ovules, and its transcripts accumulated during the process of ovule development. The C-class gene AcAG retained the ancestral function in female and male reproductive organs and was highly expressed in stamens, pistils, and ovules. Interestingly, two D-class genes, AcAGL11a and AcAGL11c, showed characteristic expression in ovules suggestive of redundant gene function. However, AcAGL11b, also within the D lineage, was expressed in pistils as well as stamens and ovules. Thus, AcAGL11b expression was more similar to that of the C-class gene AcAG than to that of the D-class genes AcAGL11a and AcAGL11c. The E-class genes AcSEP1 and AcSEP3 were detected in all floral organs during development. The transcript level of AcSEP3 was greater than that of AcSEP1, indicating that this gene pair might have undergone sub-functionalization after duplication. AcAGL6 from the AGL6 clade had higher expression levels in sepals, petals, and ovules than in pistils and stamens.
Although RNA-seq data have been proved reliable by RT-qPCR and in situ hybridization, we selected eight A-, B-, C-, and E-class orthologous genes (AcFUL1, AcFUL2, AcAP3, AcPI, AcAG, AcAGL6, AcSEP1, AcSEP3) and tested their expression in the four whorls of floral organs (sepal, petal, stamen, pistil) by RT-qPCR in pineapple. The RT-qPCR results were highly consistent with our RNA-seq results (Fig. 8A) and with another published pineapple tissue-specific transcriptome dataset 34 (Fig. 8B), providing solid support for the subsequently proposed ABCDE gene model (Fig. 8C).
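RT-qPCR relative expression is most commonly quantified with the 2^-ΔΔCt method; the paper does not state its quantification scheme, so the sketch below simply illustrates that standard formula with invented Ct values.

```python
# Relative expression via the 2^-(ddCt) method; all Ct values are invented.
def relative_expression(ct_gene, ct_ref, ct_gene_cal, ct_ref_cal):
    """Fold change of a target gene in a sample vs. a calibrator sample,
    each normalized to a reference (housekeeping) gene."""
    d_ct_sample = ct_gene - ct_ref          # normalize the sample
    d_ct_cal = ct_gene_cal - ct_ref_cal     # normalize the calibrator
    dd_ct = d_ct_sample - d_ct_cal
    return 2 ** (-dd_ct)

# Hypothetical example: a target gene in sepals vs. petals.
print(relative_expression(ct_gene=22.1, ct_ref=18.0,
                          ct_gene_cal=26.4, ct_ref_cal=18.2))  # ~17-fold
```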
Discussion
Our bioinformatics analysis identified 43 MADS-box genes in the pineapple genome. The phylogenetic analysis divided the type II MADS-box genes into 13 subfamilies, among which the OsMADS32-like clade appeared to be a novel monocot-specific lineage 29,35,36. Two members of the FLC clade were identified in the A. comosus genome, but their relationship to genes from other species, such as Arabidopsis, was not well resolved. The low support value might be due to the highly divergent sequences and extremely short length of monocot FLC genes 37. The gene numbers in most MIKC subfamilies were lower than those identified in rice and sorghum, consistent with the idea that grasses underwent a recent whole genome duplication (WGD) after the divergence from the Bromeliaceae 19. The reduced number of MADS-box gene copies in pineapple has important implications for elucidating the evolutionary history of the MADS-box gene family prior to the divergence of grasses and pineapple.
Different evolutionary patterns of MADS-box genes in pineapple and rice. The phylogenetic analysis revealed two A-class, three B-class, one C-class, three D-class, and two E-class homologs in pineapple (Table 1). Although pineapple and rice shared a WGD event near the base of the monocots, only one pair of duplicated genes, AcFUL1 and AcFUL2, was retained. Another gene pair, AcAGL11a and AcAGL11c, probably arose from species-specific duplication in pineapple. Unsurprisingly, no species-specific gene duplication was found in pineapple within the AP3/PI lineage. Indeed, in most monocots, only one B-AP3 and one B-PI copy has been reported 17. Therefore, gene duplication in this subclade is rare, except in orchids, where gene duplication events occurred frequently, followed by expression divergence that led to the innovation of the specialized labellum 38. Rice had at least seven pairs of genes in the AP1, PI, AG, AGL, AGL2/3/4, AGL9, and AGL6 lineages that derived from a more recent WGD event that occurred prior to the origin of the Poaceae 66-70 MYA 17,39. The significant expansion of genes in the B-AP3 and E classes in orchid may have also resulted from WGD 38. In contrast, there were fewer instances of recent gene duplication in pineapple, showing that the composition of ABCDE genes in pineapple is conserved and that the reduced gene numbers in these model classes are representative of an ancestral state. The five members in the SOC1 clade, five members in the ANR1 clade, and three members in the SVP clade suggest that these subfamilies expanded frequently (Fig. 5). Thus, contraction of the gap between the number of type II genes in pineapple (32) and rice (43) may have been a consequence of the expansion of these clades. Among them, the expression profiles of SOC1 subfamily genes were diverse, with each member having a distinct expression pattern. Because SOC1 genes are primarily involved in the flowering phase transition and stress tolerance 40,41, the expansion and diversification of the SOC1 subfamily in pineapple may have contributed to its adaptation to extreme tropical environments.
Pineapple MADS-box genes involved in fruit development. In most cases, MADS-box genes within the same phylogenetic subgroup exhibited analogous expression patterns, indicating that these genes have similar biological functions in shaping the regulatory networks that influence specific developmental processes. The genes clustered in the AP3/PI, AG, and SEP/AGL6 clades were highly expressed in pineapple reproductive organs, consistent with their expected roles in floral organogenesis and development. Notably, most of the genes belonging to the AP1/FUL (AcFUL1), AG/AGL11 (AcAG, AcAGL11a/b/c), SEP (AcSEP1/3), and AGL6 (AcAGL6) subfamilies in pineapple were highly expressed throughout all fruit developmental stages. In tomato, a fleshy fruit model plant, several genes, including SlMADS-RIN from the SEP subfamily, TAGL1 from the AG-like subfamily, and FUL1 and FUL2 from the AP1-like subfamily, regulate both early fruit expansion and later ripening 42-45. In banana, two SEP-like subfamily genes, MaMADS1 and MaMADS2, were functionally characterized, and repression of either gene led to slow-ripening and prolonged shelf-life phenotypes 46. The present finding that AcFUL1 transcript levels were relatively high in fruits is consistent with the hypothesis that the AP1/FUL clade in monocots includes only FUL-like rather than AP1-like sequences, and that these genes may control fruit development analogously to the Arabidopsis gene FRUITFULL 47. In summary, MADS-box genes from these subfamilies may play a conserved and essential role during fruit development and the ripening of the non-climacteric fleshy fruit of pineapple and therefore warrant further study in the context of pineapple fruit yield and storage.
Figure 7.
Expression heat map of MADS-box genes in five representative floral organs (sepals, petals, stamens, pistils, and ovules) at different developmental stages of pineapple. The different developmental stage samples comprised four sepal stages (S1-S4), three petal stages (S1-S3), five stamen stages (S1-S5), and seven stages of pistils (S1-S7) and ovules (S1-S7). The expression value was quantified as fragments per kilobase per million reads (FPKM), and relative gene expression data were gene-wise normalized. AcMADS43 was not detected at any developmental stage. Four major expression groups were marked as B1, B2, B3, and B4. The color scale bar is at the top-right corner; blue, white, and red indicate low, medium, and high expression levels, respectively.

Unlike the ABCDE genes of Arabidopsis and Antirrhinum, whose expression is limited to flowers, many B-, C-, and E-class pineapple genes were also expressed in leaves. For example, AcSEP1 and AcPI were up-regulated in flowers and highly expressed in leaves, suggesting that these genes may also be critical for vegetative development. AcAGL12a and AcAGL12b had preferentially high expression levels in roots, with sparse expression in leaves and fruits, indicating that these two genes are root-specific and may be important for root development (Fig. 6). Almost all type I MADS-box genes had relatively low transcript levels or no significant expression based on their FPKM values from the RNA-seq data for different tissues. This finding is consistent with earlier reports that type I genes have relatively limited functions in plants compared to type II genes 2,48.
Functional conservation of ABCDE model genes in pineapple.
The expression of the A-class gene AcFUL1 in pineapple was notably higher in sepals than in other floral tissues, in line with its expected function in sepal identity specification. However, its expression was extremely low in petals, indicating that other A-function genes might be recruited to specify petal identity in pineapple. AcFUL1 was also detected at moderate levels in pistils, a pattern also reported in orchid, in which two AP1/FUL-like members, PhaMADS1 and PhaMADS2, promote carpel and ovary development before and after pollination 31,49. There is evidence from studies of water lily and Nigella damascena that the role of AGL6-like subfamily genes is similar to that of AP1 50. In orchid, these genes have a specialized function in labellum formation 49. Although AcAGL6 was expressed in all tissues in our study, its expression was highest in sepals and petals, indicating a possible role in the A-class functions of whorl 1 and 2 specification via interaction with AP1 and SEP. Robust expression of AcAP3 and AcPI in petals and stamens indicated that these two genes perform the B function, consistent with the conserved expression patterns of B-class genes in angiosperms 51. The Bs subgroup, a phylogenetic sister group of the B-class floral homeotic genes, is specifically expressed in female reproductive organs and developing seeds 52,53. Both the sequence and the ovule-specific expression profile of pineapple AcBS were highly similar to those of the rice ortholog OsMADS29, whose expression is restricted to developing seeds 52. In line with the ancestral C function in specifying both male and female reproductive organs, high expression levels of AcAG were detected in stamens, pistils, and ovules (Fig. 8A). Regarding D function, three subfamily members (AcAGL11a, AcAGL11b, and AcAGL11c) were detected and found to be homologous to orchid MADS2 and rice OsMADS13/21 54,55. AcAGL11a/c were identified as D-function candidate genes because their expression patterns in our study were similar to those of other D-lineage genes, which are preferentially expressed in ovules. The expression of AcAGL11b in the inner whorls of flowers and in ovules overlapped with the expression domain of AcAG. In rice, OsMADS21 exhibits the same expression pattern but lost its function in determining ovule identity, presumably because of its redundancy with OsMADS13, whose expression is also restricted to the female reproductive organs 15. We therefore hypothesized that, like OsMADS21, AcAGL11b might have lost its role in D function. AcSEP3 is more likely to have a major role in E function because it had higher transcript levels than AcSEP1 in all organs except sepals. Additionally, AcSEP3 orthologs, including AtAGL9 in Arabidopsis and OsMADS7/8 in rice, are more critical for E function than any other SEP-like family members in these two species 56,57. By comparing the expression patterns of pineapple MADS-box genes with those of previously characterized orthologs, we inferred the functions of candidate pineapple genes involved in the ABCDE model. The numbers and evolutionary history of the putative ABCDE genes strongly indicate that the pineapple flower is similar to the ancestral state of monocot flowers.
Materials and methods
Data sources and sequence retrieval. The whole pineapple genome sequences that were used to identify MADS-box genes were downloaded from the Pineapple Genomics Database 58. Additionally, the water lily (Nymphaea colorata) genome was generated from our own genome project 59. The MADS-box protein sequences of Arabidopsis and rice were retrieved from the TAIR (http://www.arabidopsis.org/) and RGAP (http://rice.plantbiology.msu.edu/) databases, respectively. Non-redundant protein sequences of Amborella trichopoda, Vitis vinifera, Sorghum bicolor, Musa acuminata, and Spirodela polyrhiza were collected from Phytozome (http://www.phytozome.net/). The latest proteome release of Phalaenopsis equestris was from a recent study 38. The proteome of Elaeis guineensis was downloaded from the Genomsawit website (http://genomsawit.mpob.gov.my/index.php?track=30), and the Phoenix dactylifera proteome was obtained from the Date Palm Research Program (http://qatar-weill.cornell.edu/research/research-highlights/date-palm-research-program).
Genome-wide identification of MADS-box genes.
To identify MADS-box gene family members in pineapple, the hidden Markov model (HMM) profile of the SRF-TF domain (Pfam accession: PF00319) was obtained from the Pfam database (http://pfam.xfam.org/) 60 and used as a query to search against pineapple proteins with HMMER. The gene IDs and sequence information are provided in Table 1 and Supplementary Table S1. In addition to pineapple, MADS-box protein sequences from the following species were also collected and screened to investigate their evolutionary relationships: two basal angiosperm species (Amborella and water lily); two core eudicots (Arabidopsis and Vitis vinifera); and seven monocot species (rice, sorghum, Phalaenopsis equestris (Epidendroideae), Musa acuminata (Musaceae), Elaeis guineensis (Arecaceae), Phoenix dactylifera (Arecaceae), and Spirodela polyrhiza (Lemnoideae)).
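For readers reproducing this screen, the following is a minimal sketch of an HMM-based search wrapping HMMER's hmmsearch; the file names and E-value cutoff are illustrative assumptions, not values stated in the paper.

```python
# Illustrative HMMER screen for MADS-box (SRF-TF, PF00319) candidates.
# File names and the E-value cutoff are assumptions, not the paper's values.
import subprocess

# Requires HMMER 3 on PATH; PF00319.hmm downloaded from Pfam.
subprocess.run(
    ["hmmsearch", "--domtblout", "pf00319.domtbl",
     "PF00319.hmm", "pineapple_proteins.fasta"],
    check=True,
)

candidates = set()
with open("pf00319.domtbl") as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        target, full_seq_evalue = fields[0], float(fields[6])
        if full_seq_evalue < 1e-5:   # assumed significance cutoff
            candidates.add(target)

print(f"{len(candidates)} candidate MADS-box proteins")
```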
Classification of MADS-box genes in pineapple.
To assign putative pineapple MADS-box genes to specific gene subfamilies, multiple sequence alignments were performed on the amino acid sequences using the alignment tool MAFFT with default parameter settings 61. MADS-box proteins from six plant species, A. trichopoda, N. colorata, A. thaliana, O. sativa, S. bicolor, and A. comosus, were used. Maximum-likelihood phylogenetic trees were constructed using FastTree with the JTT+CAT model 62. Furthermore, a more detailed MIKC-protein phylogenetic tree was constructed using the same strategy with six additional species: V. vinifera (Vitaceae), P. equestris (Epidendroideae), M. acuminata (Musaceae), E. guineensis and P. dactylifera (Arecaceae), and S. polyrhiza (Lemnoideae). In the phylogenetic trees, bootstrap support values below 50 were regarded as unreliable and are not shown. For convenience, pineapple MIKCC genes were renamed according to the phylogenetic relationships deduced by sequence comparison with proteins from the whole genomes of 11 flowering plants and their corresponding clade/subclade names. A phylogenetic tree of pineapple and ten other species (Fig. 2) was also constructed in the context of angiosperms. This phylogeny was inferred using RAxML v7.1.0 with the PROTGAMMAJTT model, 100 bootstrap replicates, 756 single-copy genes, and the methods described above 63.
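A minimal sketch of the described align-then-tree workflow, assuming the mafft and fasttree executables are installed; the file names are illustrative.

```python
# Illustrative align-and-tree pipeline mirroring the MAFFT -> FastTree steps;
# file names are assumptions. Requires mafft and the FastTree binary on PATH.
import subprocess

# 1) Multiple sequence alignment with MAFFT default settings.
with open("mads_proteins.aln", "w") as aln:
    subprocess.run(["mafft", "mads_proteins.fasta"], stdout=aln, check=True)

# 2) Approximate maximum-likelihood tree with FastTree (JTT+CAT is FastTree's
#    default model for protein alignments).
with open("mads_proteins.nwk", "w") as tree:
    subprocess.run(["fasttree", "mads_proteins.aln"], stdout=tree, check=True)

print("Newick tree written to mads_proteins.nwk")
```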
Expression profiles of MADS-box genes in different pineapple tissues. Two transcriptome datasets were extracted from previous pineapple studies 19,32. The first dataset included floral, leaf, and root developmental tissues and six stages of fruit development (F1-F6, from young to mature) 19. The other dataset comprised different floral organ samples, including four sepal stages (S1-S4), three petal stages (S1-S3), five stamen stages (S1-S5), seven pistil stages (S1-S7), and seven ovule stages (S1-S7). The criteria for the different stages were previously described in detail 32,33,64. The expression level of each gene was quantified as fragments per kilobase of exon model per million reads mapped (FPKM) using featureCounts 65. A third dataset, from a previously published study, provided an expression matrix covering different floral organs in transcripts per kilobase million (TPM), computed with StringTie 34,66. Two expression heat maps were generated with the pheatmap package in R.
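For reference, FPKM and TPM follow standard formulas; the sketch below applies them to invented counts and is not the authors' code (they used featureCounts and StringTie).

```python
# Standard FPKM and TPM formulas applied to invented example counts.
counts = {"AcFUL1": 900, "AcAG": 450, "AcSEP3": 1500}      # mapped read counts
lengths_kb = {"AcFUL1": 1.2, "AcAG": 0.9, "AcSEP3": 1.5}   # exon length, kb

total_reads_millions = sum(counts.values()) / 1e6

# FPKM: reads per kilobase of transcript per million mapped reads.
fpkm = {g: counts[g] / lengths_kb[g] / total_reads_millions for g in counts}

# TPM: length-normalize first, then scale so all genes sum to one million.
rpk = {g: counts[g] / lengths_kb[g] for g in counts}
scale = sum(rpk.values()) / 1e6
tpm = {g: rpk[g] / scale for g in rpk}

for g in counts:
    print(f"{g}: FPKM={fpkm[g]:.1f} TPM={tpm[g]:.1f}")
```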
"year": 2021,
"sha1": "ea768c4903f89f656e687b25009b3b2c9cb4c149",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7806820",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "654d36779317d03a54b0beddc6c802e5804e9670",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Student Medical Summit - Online 2022
Targeting mutant p53 for the treatment of triple negative breast cancer: a pre-clinical study
Anna Lawless1, Shane O'Grady2, Minhong Tang2, Michael J. Duffy2,3
1UCD School of Medicine, University College Dublin, Belfield, Dublin, Ireland; 2UCD School of Medicine, Conway Institute of Biomedical and Biomolecular Research, University College Dublin, Belfield, Dublin, Ireland; 3UCD Clinical Research Centre, St. Vincent's University Hospital, Dublin, Ireland
Correspondence: Anna Lawless (anna.lawless@ucdconnect.ie)
Triple negative breast cancer (TNBC) refers to an invasive subset of breast cancer that lacks oestrogen receptors (ER) and progesterone receptors (PR) and lacks amplification of HER2 [1]. Thus, these patients cannot be treated with a targeted therapy and have poorer outcomes compared to patients with other subforms of breast cancer. p53 is the most frequently mutated gene in human cancer. Approximately 80% of patients with TNBC carry a p53 mutation. Recently, arsenic trioxide (ATO) was found to reactivate mutant p53 and convert it back to its normal wild-type form [2]. The aim of this research was to test whether ATO might be a new treatment for TNBC. The ability of ATO to inhibit cell proliferation was determined using MTT assays, while induction of apoptosis was measured using flow cytometry. IC50 values for growth inhibition across 10 breast cancer cell lines ranged from 0.297 to 2.99 μM. Inhibition of proliferation was found to be independent of the cell line molecular subtype. No significant differences were found between IC50 values for TN vs non-TN cell lines (p=0.597) or between contact vs structural p53 mutants (p=0.481). For all cell lines investigated, ATO induced significant levels of apoptosis at a concentration of 5 μM. Although our data are preliminary, we conclude that ATO is a potential new therapy for the treatment of p53-mutated cancer, including triple negative breast cancer. Since ATO is already approved for the treatment of acute promyelocytic leukaemia (APL), it should be straightforward to repurpose it for TNBC.

Acknowledgments
I would like to express my gratitude to my supervisor Professor Joe Duffy and all in the breast cancer research group in St. Vincent's University Hospital. Thank you for all your encouragement and for giving me the opportunity to work with and learn from you.

Background
SARS-CoV-2 has affected children and adolescents worldwide, with the pandemic taking a toll on their health and well-being. COVID-19 measures have disrupted the education of 1.6 billion students worldwide [1]. Children and adolescents are reported to have lower susceptibility to COVID-19 than adults, possibly attributed to their limited social interaction with mainly household members [2]. However, the full impact of community transmission among asymptomatic children requires further investigation.

Methods
A retrospective cohort study was used to examine the transmissibility of COVID-19 from primary cases to close contacts using data from Cork and Kerry, notified between 1/3/21 and 15/6/21. Cases were aged 0-4 (n=100), 5-11 (n=100), and 12-17 years (n=100), and these sub-groups were compared against unvaccinated adults. This study used data extracted from the National Case Tracker Customer Relationship Management (CRM) system. The effective R number and relative risk were calculated and compared between the different subgroups.
Results
Overall, there was no difference in transmission between cases aged 12-17 and unvaccinated adults.
Conclusion
Our study found that cases aged 0-11 resulted in fewer secondary cases than unvaccinated adults. Transmission from cases aged 12-17 was similar to that from unvaccinated adults, except that they led to fewer secondary cases outside their household.
Katie Ryan1, Emma-Louise Rogers2, John Cronin2, Conor Prendergast2
1School of Medicine, University College Dublin, Dublin, Ireland; 2Department of Emergency Medicine, St. Vincent's University Hospital, Dublin 4, Ireland

Background
Emergency Medicine (EM) clinicians are required to make critical decisions, often with limited information, resources, and time.

Methods
Three high-risk patient cohorts were audited:
1. Patients making an unscheduled return to the ED with the same complaint within 72 h of discharge
2. Abdominal pain in patients >70 years
3. Atraumatic chest pain in patients >30 years
To reflect the ST4+ level of clinician used in the UK, senior sign-off in this audit was defined as review by, or a case discussion with, an EM consultant, SpR, or senior registrar with 4+ years of EM experience. Only patients who were discharged from the ED were included. After cycle 1, a teaching session was delivered to all ED clinicians, there was a move toward electronic note-keeping, and a designated doctor was introduced to document decisions made at clinical handovers with senior staff present. After cycle 2, a further teaching session was delivered and posters were placed around the ED to highlight the importance of documenting senior reviews and discussions.

Results
2,687 patients were included across the 3 cycles. The overall rate of documented senior sign-off across the three patient cohorts was 35% in cycle 1 (n=454), 62% in cycle 2 (n=1,139), and 64% in cycle 3 (n=1,194) (Table 1).

Conclusion
Significant improvements were made in the rate of documented senior sign-off of these high-risk patient cohorts across the three audit cycles. To ensure all ED patients, particularly those deemed "high-risk", are seen by a senior doctor, an increase in ED consultant staffing is required.
Microfluidic-microwave platforms for real-time, non-invasive and sensitive monitoring of bacteria and antibiotic susceptibility testing
Rakesh Narang1,2,3, Sevda Mohammadhi4, Mehdi Mohammadi Ashani1,2, Mohammad Zarifi4, Amir Sanati-Nezhad1,2,3
1BioMEMS and Bioinspired Microfluidic Laboratory, Department of Mechanical and Manufacturing Engineering, University of Calgary, Calgary, Alberta, Canada; 2Center for BioEngineering Research and Education, University of Calgary, Calgary, Alberta, Canada; 3Biomedical Engineering Graduate Program, University of Calgary, Calgary, Alberta, Canada; 4Okanagan Microelectronics and Gigahertz Applications (OMEGA) Lab, Faculty of Applied Science, Kelowna, BC, Canada
Correspondence: Rakesh Narang (rakesh.narang@ucdconnect.ie)

In 2019, the CDC reported over 2.8 million antibiotic-resistant bacterial infections in the United States [1]. Current clinical practices often require up to two weeks to accurately diagnose these infections and determine appropriate antibiotic courses for patients [2]. Because contemporary methods of infection diagnosis and antibiotic susceptibility testing (AST) are time-consuming, expensive to operate, and lack point-of-care (POC) testing potential in remote areas, physicians are forced to over-prescribe broad-spectrum antibiotics, as diagnosis and AST are neglected [3]. Along with patients' noncompliance in antibiotic therapy, pathogens subsequently develop resistance to multiple drugs [4]. Therefore, developing methods that improve the POC and high-throughput potential of infection diagnosis and AST practices is critically needed to treat patients more accurately, efficiently, and cost-effectively in hospitals and remote areas. Microfluidics, lab-on-a-chip, and microelectromechanical systems (MEMS) are areas of study highly valued for their user-friendly design, POC potential, and high-throughput results, particularly in healthcare and biomedical sensing applications [5,6]. The current use of capillary forces to autonomously deliver fluids within microfluidic chips (capillary fluidics) has allowed microfluidic sensors to be integrated into POC and high-throughput sensors. Furthermore, the cheap and seamless integration of capillary fluidics with multiple sensing methods, such as optical, electrochemical, and microwave sensing, makes it an attractive option to remedy the critical need for innovation in infection diagnosis and AST practices [7].
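As a purely illustrative aside before the device description below, the dielectric-shift principle behind microwave resonant sensing can be sketched with a lumped LC model; every value here is invented and does not describe the authors' device.

```python
# Toy LC model of a dielectric-loaded microwave resonator; all values are
# invented and do not describe the authors' device.
import math

L = 2.0e-9        # H, assumed resonator inductance
C0 = 0.5e-12      # F, assumed bare capacitance of the sensing gap

def resonant_freq_ghz(eps_r):
    """Bacterial growth changes the medium's permittivity, shifting f_r."""
    c = C0 * eps_r                      # gap capacitance scales with eps_r
    return 1 / (2 * math.pi * math.sqrt(L * c)) / 1e9

for eps_r in (1.0, 60.0, 55.0):         # air, fresh broth, grown culture (toy)
    print(f"eps_r={eps_r:5.1f} -> f_r = {resonant_freq_ghz(eps_r):.3f} GHz")
```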
Here, microwave sensing was selected for its high sensitivity and inexpensive implementation with capillary fluidics to monitor pathogens and perform AST [8]. The microwave resonator generated electrical fields to detect dielectric shifts within the capillary microfluidic channels, allowing for the characterization of growing E. coli assays in a real-time, sensitive, and non-invasive manner for over 8 hours. Furthermore, two resonators were coupled in an array to enhance the sensor's sensitivity and selectivity for AST applications. The hybridized microwave-microfluidic sensor autonomously handled liquid samples to monitor the growth of E. coli over 24 hours and deduce the susceptibility of the pathogen against multiple antibiotics within 4 hours, with a signal-to-noise ratio of 1107.5. This inexpensive and easily operated system shows high potential to be implemented in POC environments while delivering high-throughput, accurate, and reliable results, indicating an enormous improvement compared to contemporary gold-standard AST practices.

Figure: Schematic of the microwave-microfluidic sensor. The resonant profile is shifted due to the sample's dielectric properties when the assay is introduced on the sensing region, analyzed through the vector network analyzer (VNA) [5].

Millions of people every year are affected by cancer globally. Through earlier diagnosis, cancer patient outcomes can be enhanced. Therefore, early detection of cancer is important for improving and saving lives. Artificial intelligence (AI) applications and cancer imaging are often utilized, such as increasing triage expediency in underserved regions worldwide, while aiding medical professionals [1,2].
Materials and methods
In this research, an AI model was developed using cancer imaging and machine learning on patient data to improve cancer diagnosis and diagnostic accuracy [3]. Clinical patient data were used to build, train, and test the model: 50% of the patient data was randomly selected for training, while the other 50% was used for testing its diagnostic performance.
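A minimal sketch of the 50/50 random split-and-test procedure described above, assuming tabular features and scikit-learn; the data, model choice, and feature layout are placeholders, not the authors' pipeline.

```python
# Illustrative 50/50 split-and-evaluate sketch (not the authors' pipeline).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # placeholder imaging-derived features
y = rng.integers(0, 2, size=500)        # placeholder diagnosis labels

# 50% of patients for training, 50% held out for testing, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"diagnostic accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```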
Results
In cancer diagnosis, the AI model achieved an overall diagnostic accuracy of 82%, compared to previously published methods ranging from 50% to 74% accuracy.

Conclusions
Therefore, AI algorithms and cancer imaging can be applied to aid healthcare professionals in diagnosing cancer in patients, such as in triage cases in underserved regions worldwide, improving outcomes and saving lives.

A6.
Background
Protein HuR, an RNA-binding protein, has been identified as a key regulator of the proteins TDP-43 and FUS involved in the pathogenesis of ALS [1]. By understanding the role of HuR in a cell under stress, we hope to unravel the protective or pathogenic mechanisms that exist in a neuronal cell. The objective of this study was to identify whether Protein HuR has a protective or detrimental effect on a neuronal glioblastoma cell under heat shock, toxic, and oxidative stress.

Methods
shHuR (Protein HuR-silenced) cells and shGFP cells (with Protein HuR marked with green fluorescent protein) were used in this experiment to compare cells with normal Protein HuR and cells with a significantly smaller quantity of Protein HuR (Figure 1, Western blot analysis). After 24 hours of growth, both groups of cells were treated with either hydrogen peroxide or sodium arsenite added to the growth medium, or the plate was incubated at 47°C for two hours to generate heat shock. The cells were then counted by hand using a hemocytometer, and the cell survival rates of the two groups were compared.
Results
Protein HuR has varying effects on cells under differing conditions of stress. In cells treated with hydrogen peroxide, shGFP cells had a statistically significantly greater survival rate than shHuR cells (98% versus 28% survival), indicating that Protein HuR may have a protective effect on cells under oxidative stress. There was no observed difference in cell viability under 47°C heat shock: cells from both groups changed their morphology and detached compared to the control, but there was no statistically significant difference between the shGFP and shHuR series. The study with sodium arsenite likewise indicated that Protein HuR had a protective effect on neuronal cells, as shGFP cells had a greater survival rate than shHuR cells.
Conclusion
These studies point to a protective effect of HuR on the survival of neuronal cells exposed to oxidative and toxic stress. No protective role for HuR was observed under heat shock.
Introduction
The MyotonPRO (Myoton AS, Tallinn, Estonia) is a handheld, digital palpation device which has been used to measure the mechanical properties of muscles and other soft tissues. Using such a device, characterization of the biomechanical properties of the musculoskeletal system has the potential to help identify and diagnose abnormalities in skeletal tissues on-site and in the field, without the need for other highly specialized equipment. For example, areas of increased muscle tone, or the response of hypertonic muscle to therapeutic interventions, have been reported. The MyotonPRO works by imparting a small mechanical impact to the tissue of interest, perpendicular to the surface of the skin. The tip of the probe is subjected to a constant preload to maintain contact and co-oscillation as the tissue vibrates underneath the skin. An accelerometer linked to the probe generates an acceleration vs. time relation from which various biomechanical characteristics, such as tissue stiffness, can be calculated. The repeatability and reliability of the device have been tested on various muscles in several intra- and inter-session studies, but, to our knowledge, never on tendon tissue. This study aims to conduct an independent assessment, by a publicly funded academic research group, of the MyotonPRO's stiffness measurement by (1) testing on phantom material with experimentally verified viscoelastic properties, (2) determining whether (and to what extent) the device's measurements are influenced by the layer of skin over the tissue being measured, (3) examining the test-retest reliability of the MyotonPRO applied to human Achilles and patellar tendons, and (4) examining the performance of the device during a field study of tendon stiffness in endurance runners.

Methods
The MyotonPRO was used to measure the stiffness and related properties of ballistics gel in comparison with an external materials testing system (PCB electronics). The device was then used to measure the same properties of avian Achilles tendons before and after the removal of the overlying skin and subcutaneous tissue. Next, the test-retest reliability on the Achilles and patellar tendons was determined in humans. Finally, the stiffness of the Achilles tendon was measured before and after competitive running races of varying distances (10, 21, and 42 km; total number of athletes analyzed = 66).

Results
The MyotonPRO demonstrated a high degree of consistency when testing ballistics gel with known viscoelastic properties. The presence of skin overlying the avian Achilles tendon had a statistically significant impact on stiffness (p<0.01), although this impact was of very small absolute magnitude (with skin: 728 ± 17 N/m; without skin: 704 ± 7 N/m). In healthy adults of normal body mass index (BMI), the reliability of stiffness values was excellent for both the patellar tendon (ICC = 0.96) and the Achilles tendon (ICC = 0.96). In the field study, men had stiffer tendons than women (p<0.05), and the stiffness of the Achilles tendon tended to increase following running (p = 0.052).

Conclusions
The MyotonPRO can reliably determine the transverse mechanical properties of tendon tissue. The measured values are influenced by the presence of overlying skin; however, this does not appear to compromise the ability of the device to record physiologically and clinically relevant measurements.
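As a heavily hedged illustration of how stiffness can be derived from the probe's acceleration trace described above: one commonly cited formulation computes dynamic stiffness as S = m·a_max/Δl, with the displacement Δl obtained by double integration of the acceleration. The sketch below is a toy reconstruction under that assumption, not Myoton's proprietary algorithm; the probe mass, sampling rate, and signal are invented.

```python
# Toy reconstruction of dynamic stiffness from a probe acceleration trace.
# Not Myoton's proprietary algorithm; probe mass and signal are illustrative.
import numpy as np

m_probe = 0.018          # kg, assumed co-oscillating probe mass
fs = 3200.0              # Hz, assumed sampling rate
t = np.arange(0, 0.05, 1 / fs)

# Synthetic damped oscillation standing in for the recorded acceleration.
a = 9.0 * np.exp(-30 * t) * np.sin(2 * np.pi * 40 * t)   # m/s^2

# Double integration: acceleration -> velocity -> displacement.
v = np.cumsum(a) / fs
d = np.cumsum(v) / fs

# Dynamic stiffness: peak restoring force over peak tissue deformation.
S = m_probe * np.max(np.abs(a)) / np.max(np.abs(d))
print(f"dynamic stiffness ~ {S:.0f} N/m")
```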
Background
It is HSE policy that all patients should have a fit-for-purpose, laser-printed, legible wrist- (or ankle-) band with a scannable barcode before receiving any care, to avoid issues with misidentification ranging from the minor to the catastrophic. This re-audit aimed to show an improvement in the standard of compliance with wearing wristbands in the interest of patient safety.

Methods
498 patients were surveyed over a 7-day period to determine whether or not they had an identifying band on their person. In the instances where identification was missing, follow-up questions were asked as to who removed the band and why. Data were presented using Microsoft Excel.

Results
90.0% of patients had scannable ID bands on their person with the correct demographics (448/498). Patients themselves were primarily responsible for removing their own armbands, with removal by a healthcare worker in the minority.
Conclusions
Compliance fell short of the ideal of 100%. The rate of compliance was similar to that found in previous audits of ID band wearing in SVUH, showing a lack of improvement and apathy for change. A further re-audit is not imminent due to the lack of institutional will.
Background
Mechanical chest compression devices (MCCDs) were developed to improve low survival rates from out-of-hospital cardiac arrest (OHCA) [1]. MCCDs have been shown to prevent practitioner fatigue, improve organisation, and increase circulation and end-tidal CO2 pressures [2,3,4,5,6,7]. These benefits have not yielded an increase in survival outcomes. Previous systematic reviews show insufficient evidence to suggest superiority of MCCDs [8]. This review aimed to include new studies in order to answer the three-part question: "In [adults with OHCA], is the use of [MCCDs better than manual CPR] at [achieving return of spontaneous circulation (ROSC) and/or improving survival rates]?"

Materials and methods
The keywords prehospital, out-of-hospital, paramedic, EMS, EMT, mechanical compression device/chest compressor, mechanical CPR, mCPR, Lucas, AutoPulse, survival and ROSC were searched in Embase, PubMed, Ovid, Cochrane and ScienceDirect.
Outcomes of interest included rates of ROSC, survival rates to admission, at 1 month/discharge, and at 6 months, and long-term neurological function. Odds ratios (ORs) and 95% confidence intervals (CIs) were utilised. Unreported CIs were obtained using the method outlined by Altman and Bland [9]. The Levels of Evidence (LOE) assessment tool introduced by the International Liaison Committee on Resuscitation (ILCOR) in 2010 was utilised in the evaluation and selection of literature. As this was a systematic review only, regression analyses and statistical tests for heterogeneity were not performed.
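To make the statistics concrete: an OR and its 95% CI follow from a 2x2 table in the standard way, and Altman and Bland's method recovers a CI from a reported p value. The sketch below illustrates both with hypothetical numbers; the z-from-p formula is quoted from memory of that method and should be checked against the cited paper.

```python
# Sketch: OR with 95% CI from a 2x2 table, plus recovering a CI from a
# reported p value in the spirit of Altman & Bland (illustrative only).
import math

def odds_ratio_ci(a, b, c, d):
    """a/b = events/non-events (MCCD arm); c/d = events/non-events (manual CPR)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

def ci_from_p(estimate_log, p):
    """Altman-Bland: z from a two-sided p value, then SE = estimate/z."""
    z = -0.862 + math.sqrt(0.743 - 2.404 * math.log(p))
    se = abs(estimate_log) / z
    return math.exp(estimate_log - 1.96 * se), math.exp(estimate_log + 1.96 * se)

print(odds_ratio_ci(30, 70, 25, 75))      # hypothetical counts
print(ci_from_p(math.log(1.2), 0.04))     # hypothetical OR = 1.2, p = 0.04
```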
Conclusions
MCCDs appear not to affect survival rates in OHCA. They may, however, be associated with unfavourable neurological outcomes. The current recommendation, that MCCDs can be used on a case-by-case basis when the practitioner predicts a benefit, remains. Many studies identified certain sub-groups within OHCAs that have improved survival outcomes with MCCDs. As such, further large-scale, randomised trials focusing on specific OHCA situations are required.
Acknowledgments
This is an abstract of the full-text study, which was conducted as part of the assessment of final year medical students in the UHL School of Medicine. A special thank you to Dr. Alan Watts, whose guidance was paramount throughout this whole process and who was always on hand with helpful and friendly advice.
Background
The primary aim of this research project is to employ analytics on big data to investigate the levels of amblyopia and anisometropia in Ireland, as well as the distribution of these conditions across the country. The secondary aims are to compare the prevalence of anisometropia and amblyopia in the Republic of Ireland (ROI) with that of other countries, and to examine prevalence according to year, refractive error, and age.
Methods
Data analytics were retrospectively carried out on 143,234 unique patients across 296,797 visits. The gender distribution of the patients was 51.5% female and 34.3% male, with gender not recorded in 14.2% of cases. All 26 counties of the ROI were represented in the data, although not evenly distributed across the country. The data were cleaned and analysed using the R statistical programming language and the SQLite database language. Statistical analysis and graphical representation were subsequently undertaken to produce and illustrate the results.
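Purely as an illustration of the kind of SQLite query involved, here is a hypothetical per-patient prevalence computation; the database file, table, and column names are invented, not the project's actual schema.

```python
# Hypothetical prevalence query; table/column names are invented for illustration.
import sqlite3

con = sqlite3.connect("optometry.db")      # assumed database file
row = con.execute(
    """
    SELECT
        COUNT(*)               AS n_patients,
        SUM(has_anisometropia) AS n_aniso,
        SUM(has_amblyopia)     AS n_ambly
    FROM (SELECT patient_id,
                 MAX(anisometropia) AS has_anisometropia,
                 MAX(amblyopia)     AS has_amblyopia
          FROM visits GROUP BY patient_id)
    """
).fetchone()

n, aniso, ambly = row
print(f"anisometropia prevalence: {100 * aniso / n:.1f}%")
print(f"amblyopia prevalence:     {100 * ambly / n:.1f}%")
```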
Results
The patients were aged between 0 and 100 years, with an average age of 47.5 ± 20.9 years. The estimated prevalences of anisometropia and amblyopia were 15.8% and 4.2%, respectively. County of residence and year of examination did not affect prevalence levels for either disorder. Age and male gender were both weakly correlated with anisometropia, while increasing refractive error (hyperopic or myopic) had the strongest relationship with prevalence levels for both conditions. However, no single variable correlated strongly enough to solely explain anisometropia development. In terms of amblyopia development, refractive error was the most influential variable, with myopia being protective while anisometropia and hyperopia were strong predictive factors. However, hyperopia and anisometropia cannot exclusively account for the development of amblyopia.
Conclusions
This is the first study to provide population-based data on the prevalence of anisometropia and amblyopia in the ROI. Although further research is required in this area, these results can be applied to improve the detection and management of these common and potentially vision-limiting abnormalities. This research was facilitated and supported by the TUD Optometry Department. The author would also like to acknowledge the participation of the optical practices that provided data. The author is also grateful for the ongoing contribution and guidance of the project supervisor, without whom this research project would not have been possible.
A12.
Investigating the necessity of pediatric emergency medicine in resource-limited settings
Avanti Baronia

In Tanzania, the healthcare system is overburdened and lacking qualified healthcare workers. This is more pronounced in pediatrics.
This study attempted to determine if a dedicated pediatric emergency care (PEC) program is possible in resource-limited settings such as Tanzania.
The study was done through a literature review of 13 papers and global recommendations on existing PEC standards and practices. The information gathered was then compared to field observations made in the Arumeru District Hospital of Arusha, Tanzania. Though one third of patients seen in the Arumeru emergency department are children, staff are not trained to care specifically for pediatric cases. Additionally, the department lacks general emergency medicine resources, as well as equipment needed specifically for children. The recommended staffing is 4.5 healthcare workers trained for pediatrics per 1,000 patients; currently there are about 5 pediatricians per 100,000 patients in Tanzania [1]. Thus, the focus of resources and training should be on increasing the number of healthcare workers and facilities able to handle pediatric cases in general, rather than on increasing the capacity for PEC specifically. Per World Health Organization (WHO) recommendations, rather than increasing specialized PEC practitioners (a long-term goal), resource-limited settings can improve outcomes for pediatric emergencies in the short term by ensuring adherence to global emergency standards and training existing personnel in pediatric-specific needs [2]. It is imperative that these standards be met at Arumeru Hospital before further consideration can be given to the necessity of developing designated pediatric emergency care programs.

(Table 1). Mean follow-up was 2.5 years. In terms of our primary outcome, 87% had clinical and radiologic improvement. Diagnostic investigation for possible recurrent/persistent obstruction, based on symptoms and/or imaging results, was required in 17% of cases, but only 3% required reintervention for recurrent UPJO. Accordingly, the overall treatment success was 97%. The most common post-operative complication was UTI (18%), and urine leak was seen in only 2% of patients.
Conclusion
The results of our retrospective review compare favourably with currently reported outcomes in the literature and demonstrate the safety and high level of success of RAP at a high-volume Canadian centre.

Table 1 fragment: Mean LOS, nights (SD): 2 (1).

clinical signs, presumptive diagnosis, culture and susceptibility results, ease of administration, cytology results, financial constraints and client expectations. Amoxicillin-clavulanate was the most routinely prescribed antimicrobial, in 99/274 (36%) of all prescriptions. It was underdosed in 17/86 (19.8%) prescriptions. Under general comments, 5/64 (7.8%) respondents described client expectations and pressures influencing prescribing practices, 5/64 (7.8%) indicated that expense influenced the ability to perform culture and susceptibility testing, and 5/64 (7.8%) described the use of empirical prescribing. This study demonstrated that antimicrobial use is not influenced by the number of years in practice but is influenced by several clinical and owner-dependent factors. Amoxicillin-clavulanate, a European Medicines Agency Class C antimicrobial, was widely used, and frequently at doses less than commonly accepted guidelines.
A15.
Analysis of stakeholder perception of comparative oncology in the study of melanoma

Comparative oncology examines naturally occurring cancers seen in both animals and humans to compare findings between species. It is a growing field which has the potential to benefit both veterinary and human patients by giving insights into cancer progression and treatment responses. Importantly, comparative oncology requires collaboration between many groups, including Veterinary Professionals, Human Healthcare Professionals, Biomedical Researchers, Pet Owners, and People with Lived Experience of Cancer. Our study aimed to qualitatively assess the different perceptions and knowledge of comparative oncology between these various stakeholders.
Interviews and a survey were conducted by senior researchers analysing the perceptions of the various stakeholder groups involved in comparative oncology: Veterinarians, Patients, Healthcare Professionals and Biomedical Researchers. Information was included on Respondent Subgroups and Pet Ownership status, as well as opinions on Communication of Findings, Consent, Knowledge, Opinions, and Values/Concerns. These interviews and surveys were analysed in NVivo using matrix coding and standardised in Excel using a mentions-per-person ratio to assess perceptions across subgroups. 176 individuals responded to the anonymous survey, and a further 12 individuals were interviewed to assess their knowledge and perceptions regarding comparative oncology. The stakeholder groups presented with various levels of knowledge and concerns regarding comparative oncology. As expected, the Biomedical Researcher cohort had the greatest number of knowledge mentions per person (1.9), followed closely by Veterinary Professionals (1.4), with the lowest being Human Healthcare Professionals (0.55). The concerns held by respondents were classified as "Animal Welfare", "Convenience", "Cooperation with Veterinary Professionals", "Data Management/Storage", "Information Availability", "Scientific Rigour", and "No Concerns". In our study, the stakeholder groups held different perceptions of comparative oncology. Researchers were most concerned with scientific rigour, whilst Veterinarians were most often concerned with animal welfare.
Results
Of 113 samples from the cross-sectional study, Enterococcus species were isolated from 31 (27.4%) and E. coli from 9 (7.9%). Four of 51 (7.8%) hand samples were contaminated with these pathogens. Twenty-one isolates (28.8%) were MDR (Figure 1). There was no change in cleanliness or microbial burden over 3 weeks. Enterococci and E. coli isolates with the same resistance patterns were recovered from the environment in the large and small animal hospitals and from a small number of patients (Figures 1, 2).
Conclusions
These results suggest that movement between the small and large animal hospital areas may be responsible for cross-contamination and possible hospital-acquired infections. These data will inform an imminent review of infection control protocols and hygiene procedures by the UCDVH Infection Control Committee.
A17.
Why is it so hard to get adolescent feedback? A report on a short-term feedback project in a paediatric hospital
Background
Children are capable of reflecting critically upon the services they receive, making them competent consumers of mental health services [1]. Examining children's feedback gives direct insight into what is working and where further improvement is needed in the way care is delivered [2]. This project aimed to provide feedback to the mental health team about the experiences of patients at Children's University Hospital (CUH), considering their impressions of the environment and of meeting the mental health team for the first time.
Methods
Ethical exemption was granted by the hospital ethics committee. An initial literature search on child and carer feedback for paediatric consultation-liaison psychiatry services was conducted using PubMed. Forty-two articles were identified based on title and/or abstract, and 25 were selected based on relevance to the topic. Articles were analysed qualitatively using thematic analysis. To collect feedback, a questionnaire with opt-in Likert-scale and free-text questions was given to 20 children and their carers to be completed anonymously. Four questionnaire sets were returned to the team over a four-week period. Questionnaire data were analysed using Excel and qualitative analysis.
Results
Two main themes were identified in the literature: 'Impact of child and adolescent voices in psychiatric services' and 'Addressing the concerns and expectations of children and adolescents'. Evidence suggests that patient feedback tools can improve treatment engagement and may improve patient outcomes in child and adolescent mental health services. Literature supports addressing the expectations and concerns of children and adolescents to increase patient satisfaction and improve the overall quality of mental health services. Participating patients were female adolescents (n=4) and their carers (n=4). Adolescents were least satisfied with the physical environment of the hospital and most satisfied with the recreational activities offered and the extent to which the team listened to them. Adolescents shared both positive and negative experiences and provided tangible feedback on the environment and care. Carers were satisfied with their own experience and their child's experience.
Conclusions
Literature review findings reinforce the importance of patient feedback for child and adolescent mental health services. Child and carer feedback should continue to be collected to help improve the experience of patients at CUH. However, it was difficult to collect feedback from adolescents. To increase response rates, open-ended questions should be optional, and a digital questionnaire format should be made available [3]. Questionnaire clinical utility and consumer appeal should also be taken into consideration [4].
A18.
Biosensors for cardiovascular mechanical circulatory support devices: a literature review
Background
Heart failure continues to be a leading cause of mortality worldwide [1]. Different markers and changes within the body can point to different cardiac pathologies, and these changes can be detected by specific biosensors [2]. Coupling or implanting these biosensors with mechanical circulatory support devices (MCSDs), such as left ventricular assist devices (LVADs), can be extremely important in detecting changes in biochemical and physiological parameters following MCSD implantation [3]. The aim of this review is to explore the available biosensors that may be coupled or implanted alongside LVADs to monitor biomarkers, such as interleukin 10 (IL-10), and changes in physiological parameters, such as ventricular pressure. This review will also explore the potential for feedback control mechanisms to be integrated with LVADs in response to the detected parameters. An exploration of the different materials used to fabricate biosensors is also presented.
Methods
We searched the PubMed and Web of Science databases. Keywords included: biosensors, LVADs, MCSDs, sensors, and heart failure. Studies were included if they mentioned the testing of a biosensor detecting any biochemical or physiological parameter and the device was implanted on or alongside an MCSD.
Results
Of the 488 results obtained, a total of six studies met the inclusion criteria. A range of in-vivo and in-vitro studies were selected for this review. Two studies aimed to detect biochemical parameters in-vitro and successfully detected interleukin-10 and tumour necrosis factor-α when implanted alongside LVADs [4,5]. A total of four studies, two tested in-vitro and two in-vivo, aimed to detect physiological parameters and successfully detected changes in blood pressure [6,7,8,9]. Two studies also offered mechanisms for feedback control of the MCSD based on pressure input [7,9]. Regarding fabrication, the materials most used for these biosensors are synthetic polymers, metals, carbon-based materials, and glass/silicon; of these, synthetic polymers are the most common [10].
Conclusions
Implanting biosensors alongside MCSDs has the potential to improve patient outcomes and identify pathologies before they arise. The existing research offers promising results: as MCSDs become increasingly popular, there is potential to develop and integrate these biosensors to better meet our needs for rapid diagnostic and prognostic real-time information. Future research should be aimed at testing these devices in-vivo as well as developing feedback control mechanisms.

A19.

Although myocarditis is now a well-known but rare complication of mRNA SARS-CoV-2 vaccines, multisystem inflammatory syndrome (MIS) is an extremely rare complication. This case details a likely MIS in a 21-year-old male who presented 20 days after a second dose of the mRNA Pfizer-BioNTech COVID-19 vaccine. He had no prior history of SARS-CoV-2 infection, but his history was significant for anaphylaxis to egg and chicken. He presented with a two-day history of progressive retrosternal odynophagia, mild dull chest pain exacerbated by lying flat and deep inspiration, and lethargy.
Laboratory investigations revealed a markedly elevated troponin level at 1061 ng/L (normal <34) and a raised CRP at 20.8 mg/L (normal <5.0). ECG showed normal sinus rhythm, ST elevation in leads V4 and V5 and T wave inversion in V5. On admission, symptoms worsened and he was unable to lie flat or eat food or drink liquids. He was commenced on diclofenac and a proton pump inhibitor. Echocardiogram and cardiac MRI were normal, and oesophagogastroduodenoscopy (OGD) showed severe ulcerative oesophagitis with sloughing from the oesophagogastric junction to 32 cm ab oral. Following the OGD, diclofenac was discontinued and colchicine introduced. A PICC line was inserted on day 6 and total parenteral nutrition was commenced. Further tests showed raised LFTs, and telemetry showed several short bursts of ventricular tachycardia during the two-week hospital stay, so a beta-blocker was introduced.
The patient slowly recovered, and repeat OGD four months later showed complete recovery of the ulceration, although histopathology showed likely eosinophilic oesophagitis. Of note, the patient had no previous symptoms of this disorder, remains well in this regard, and has fully recovered. A repeat OGD is planned. This interesting case of likely MIS raises the possibility that this vaccine response may be more likely to occur in those with an allergic history.
A20.
Online reflective practice groups for interdisciplinary trainees in paediatric hospitals during the COVID-19 pandemic

In 2017, the report on the National Wellbeing of Doctors proposed that burnout rates are ever-increasing [1]. Reflective practice groups are used to explore a deep level of understanding of doctor-patient relationships in order to combat burnout and increase satisfaction at work [2]. This study aims to assess online reflective practice groups for interdisciplinary trainees in paediatric hospitals during the COVID-19 pandemic. The Balint group methodology was adapted for an online format. Trainees from psychiatry, emergency and paediatric specialties answered two online questionnaires before and after six sessions of Balint group meetings. There were nine responses to the pre-Balint questionnaire and eight responses to the post-Balint questionnaire. The data were analysed using Microsoft Excel. 75% of participants were from Crumlin Children's Hospital. Most were women aged 26-30 years with 3-11 years' experience. Six participants preferred online groups, while four preferred face-to-face groups after the sessions were completed. Trainees indicated that they thought about patient cases afterward and that their teams were disrupted, which may cause mild burnout due to the struggles faced. There was a positive relationship between Balint sessions and burnout reduction. Additionally, the sessions were positively reviewed by the trainees and no sessions were cancelled, which may indicate the trainees' appreciation for the group. Reflective practice programmes should be implemented for trainees in all institutions, since there is a positive link between reflective practice groups and a reduced risk of burnout. They should be available for all specialties, not only psychiatry and general practice.
Background
Elite athlete mental health is becoming a topic of increasing interest. Following the publication of the International Olympic Committee consensus statement [1] in 2019, emphasis has been placed on encouraging help-seeking and treatment for elite athletes with mental health issues. However, there remains a paucity of research into the diagnostic practices and screening tools that exist to aid in identifying mental ill health in elite athletes. This study aims to examine the question "What identification practices exist to aid in identifying mental ill health in elite athletes?"

Methods

A scoping review design was undertaken following the six-stage process developed by Arksey and O'Malley [2] with revisions by Levac et al. [3]. The PubMed, SPORTDiscus and PsycINFO databases were searched for relevant papers. This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).
Results
Forty-three studies were included in the review. Emerging themes concerned the importance of timely identification, the need to make identification pathways/resources accessible, the delivery of interventions, and using the right tools for identification. A range of mental health outcome measures were identified, few of which were athlete specific.
Conclusion
Practices for identifying mental ill-health in elite athletes are numerous and varied. Many are of questionable use in elite athlete populations, and few screening tools are specific to elite athletes. Many countries and sport organisations lack consensus-based guidelines for identifying mental health problems in elite athletes. Further research to develop high-quality athlete-specific screening tools should be a priority.
Background
Microsurgery is a highly skilful component of plastic and reconstructive surgery with a steep learning curve. Due to COVID-19, reduced access to technical courses and hands-on theatre time has created significant challenges in microsurgical education. Trainees must therefore engage in self-education and be adept at accurate self-assessment to overcome this. The aim of this study was to assess the ability of trainees to self-assess technical performance while performing a simulated microvascular anastomosis.
Materials and Methods
Novice and experienced plastic surgery trainees were recruited. All participants performed a simulated microvascular anastomosis by placing 8 interrupted, evenly spaced sutures around a high-fidelity chicken femoral vessel model. A stopwatch timed the procedure from start to finish. Each participant objectively rated their anastomosis using the Anastomosis Lapse Index (ALI) [1]. Each anastomosis was then blindly rated by two expert microsurgeons. Self-scores and expert scores were compared using a Wilcoxon signed-rank test.
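For readers unfamiliar with this paired comparison, below is a minimal sketch of a Wilcoxon signed-rank test on per-trainee self vs expert ALI scores. The 13 score pairs are invented to be roughly consistent with the means and ranges reported in the Results, not the actual study data.

```python
# Minimal sketch: Wilcoxon signed-rank test on paired self vs expert ALI scores.
from scipy.stats import wilcoxon

self_scores   = [3, 4, 3, 5, 4, 3, 4, 3, 5, 4, 3, 4, 4]      # mean ~3.8, range 3-5
expert_scores = [5, 6, 5, 6, 5, 4.5, 5, 5, 6, 5.5, 5, 5, 6]  # mean ~5.3, range 4.5-6

stat, p = wilcoxon(self_scores, expert_scores)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
```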
Results
Thirteen surgical trainees completed the simulated procedure. Mean time to completion was 22.2 minutes (range 14.2-31.9 minutes). Mean ALI self-score was 3.8 (range 3-5) while mean ALI expert score was 5.27 (range 4.5-6). There was a significant difference between ALI self-score and expert score (p=0.001) with expert assessors consistently assigning a higher ALI score to the same anastomosis (Figure 1). There was no significant difference between male and female trainees or between novice and experienced trainees in relation to time to completion, ALI self-score or ALI expert score.
Conclusions
These findings suggest that while the ALI is an excellent training tool, surgical trainees tend to overestimate their technical performance. This emphasises the importance of expert feedback to accurately self-assess progress in the early stages of surgical training.
Background
Novel acute kidney injury (AKI) biomarkers have been shown to improve diagnostic accuracy, but reports of use in standard clinical practice are rare. The objective of this audit is to evaluate the clinical utility of urine Neutrophil gelatinase-associated lipocalin (uNGAL) in newly diagnosed AKI episodes in hospitalized patients.
Materials and Methods
We report the implementation of uNGAL measurement in the routine AKI diagnostic workup of patients receiving nephrology consultation in an academic centre, focusing on discrimination of AKI aetiology (functional/pre-renal vs intra-renal), using retrospective data collection. The diagnostic accuracy of uNGAL was compared with the final adjudication by two independent nephrologists, using descriptive statistics.
Background
Ireland's healthcare system is currently focused on delivering an integrated care system where emphasis is placed on universal healthcare which is primary care focused and patient-centred [1]. The GP-Hospital interface has been identified as a key problem area and a need to account for the various professional perspectives when guiding reform is required [2]. The aim of this study is to identify structures, processes and outcomes from GPs which may be important to enhance integrated care at the GP-Hospital interface using a Delphi consensus method.
Methods
A pilot e-Delphi consensus study was conducted over two rounds. In Round 1, 15 participants were asked to score 32 statements according to how much they agreed with their importance in enhancing integrated care at the GP-Hospital interface. Participants were also allowed to suggest their own statements. In Round 2, the 13 participants who completed Round 1 were shown the distribution of scores from Round 1 and were asked to rescore if they wished. Eleven participants completed Round 2.
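As an illustration of the consensus rule applied in the Results below, the following sketch flags statements where at least 70% of participants rate them as important. The statement names, scores and the 4-or-5 "agree" convention are assumptions for illustration, not the study's instrument.

```python
# Minimal sketch (hypothetical scores): a statement reaches consensus when
# >= 70% of participants rate it as important (here, 4 or 5 on a 5-point scale).
def meets_consensus(scores, threshold=0.70, agree_min=4):
    agree = sum(1 for s in scores if s >= agree_min)
    return agree / len(scores) >= threshold

round1 = {
    "Rapid access diagnostics": [5, 5, 4, 5, 4, 5, 4, 3, 5, 4, 5, 4, 5],
    "Shared electronic record": [3, 4, 2, 3, 4, 3, 5, 2, 3, 4, 3, 2, 3],
}
for statement, scores in round1.items():
    verdict = "consensus" if meets_consensus(scores) else "no consensus"
    print(f"{statement}: {verdict}")
```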
Results
Based on the Round 1 ranking, 15 of the 32 statements met the 70% threshold for consensus. Five additional statements suggested by participants in Round 1 were added, of which two reached the consensus threshold. The largest consensus was observed in areas such as rapid access diagnostics, direct access to specific hospital departments and improved communication between GPs and hospitals.
Conclusions
These study findings highlight important elements for enhancing integrated care at the GP-Hospital interface and can inform integrated healthcare policy in Ireland and elsewhere.
Strength was less than expected for participants' BMI and gender. No correlation was found between these metrics and traditional indicators of fitness. Interestingly, performance on physical tests did correlate with haemoglobin levels on the day of assessment in males.
Conclusions:
As the population ages, the number of older adults who may benefit from HSCT will increase. Older HSCT patients have significant risk factors for the development of frailty and are more vulnerable than community-dwelling individuals. Identifying frailty in older patients and incorporating strategies to boost resilience (e.g. through targeted physiotherapy regimes) prior to transplant will hopefully improve survivorship.
A26.

Background

COVID-19 has had a profound effect on our mental health services. In a short period of time, mental health services have had to reconfigure to reduce the spread of SARS-CoV-2. This has resulted in the closure of day services, reduced in-person psychiatric support and social isolation, leaving some of society's most vulnerable in crisis.
The purpose of this study is to identify any differences in the number and severity of emergency presentations to the Emergency Department (ED) before and during the COVID-19 pandemic.
Methods
The study is a retrospective review of the log of patients referred to the liaison psychiatry team at an inner-city Dublin hospital from the ED or inpatient wards where self-harm was the reason for admission. Three time frames were chosen between January and June 2020: a baseline group (T1), lockdown (T2) and the re-opening of society (T3). Severity of presentation was measured using the Threshold Assessment Grid (TAG) (n=306) [1]. Data were analysed using SPSS.
Results
There was a significant increase in self-harm presentations in T2 and T3 (T2: 55.1%, n=27; T3: 38.1%, n=16), with the highest incidence during the first lockdown (T2); this was statistically significant (p=0.029). Psychiatric admissions rose during the pandemic, highest in T3 with an admission rate of 26.8% (n=11) compared to baseline (T1: 19.9%, n=39), although this difference was not statistically significant (p=0.733). Substance misuse levels were high among this population: the baseline level of substance misuse was 57.7% (n=113), rising to 71.4% (n=35) and 80% (n=32) in T2 and T3, respectively (p=0.008). The study found that homeless people represented 37% (n=107) of the population seen in the ED by psychiatry (0.92% of the local population). This number rose during lockdown and the reopening of society to 46.9% (n=23, T2) and 47.5% (n=19, T3), respectively.
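A worked sketch of one of the comparisons above: a chi-square test on the change in the proportion of presentations involving substance misuse across T1-T3. The group totals are back-calculated from the reported percentages and should be read as illustrative, not as the study's raw table.

```python
# Sketch of a chi-square test for the substance-misuse proportions across T1-T3.
# Group totals are back-calculated from the reported percentages (illustrative).
from scipy.stats import chi2_contingency

#          misuse  no misuse
counts = [[113, 196 - 113],   # T1 baseline: 57.7% of ~196
          [35,   49 - 35],    # T2 lockdown: 71.4% of ~49
          [32,   40 - 32]]    # T3 reopening: 80.0% of ~40

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```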
Conclusions
These preliminary data suggest that further research is warranted to fully understand and address the impact on this population; however, it is clear that there is a need to strengthen and expand current mental health systems to address the ongoing mental health crisis. Through this research we demonstrated the feasibility of conducting a larger and more conclusive study with the proposed methodology.
Background
Since the early Shuttle missions, astronauts have reported visual acuity (VA) changes that have led to anecdotal reports of diminished focus and difficulty reading checklists [1]. Further investigation has led to the discovery of Spaceflight Associated Neuro-Ocular Syndrome (SANS), a distinct set of neuro-ophthalmic findings following long-duration spaceflight (LDSF), including globe flattening and hyperopic shift. Astronauts have also demonstrated reduced dynamic VA in post-flight assessments owing to vestibulo-ocular adaptations during G-transitions [2]. Future planetary missions will likely involve major G-transition events, as well as exposure to microgravity longer than current LDSF. To uphold astronaut health and mission performance, consistent extraterrestrial assessment of static and dynamic VA will provide close monitoring of various microgravity-induced VA changes. A compact virtual reality (VR)-based system is being developed to provide comprehensive assessment for monitoring SANS and other vision issues, including diminished dynamic visual acuity in G-transitions. In this terrestrial pilot study, we test VR-based dynamic/static VA assessments to validate the VA component of a multi-modal VR-based visual function system intended to detect subtle visual changes during LDSF.

Materials and Methods

VA will be assessed in healthy terrestrial subjects with best-corrected VA of 20/20. Subjects will be tested with monocular VA assessments on both traditional laptop-based and VR-based systems. In addition to VA data, VR-based head-orientation, eye-tracking and cyclopean eye direction data will be collected.
Results
Validation studies with VR Dynamic VA are currently underway. Mean dynamic/static VA with traditional assessment, mean dynamic/static VA with VR-based assessment, mean head-orientation and mean cyclopean eye direction will be reported and assessed statistically.
Conclusion
This pilot study with VR-based visual assessment aims to demonstrate the reliability of dynamic and static VA assessments in a single, compact VR assessment system being developed for spaceflight. Future studies will be conducted with other visual function assessments to map a multi-modal assessment of visual function during spaceflight. These assessments should also be conducted with terrestrial analogs for SANS, such as strict head-down-tilt bed rest. Training with VR-based dynamic visual acuity may serve as a countermeasure for planetary travel that can be conducted terrestrially and during spaceflight.
A28.
eHealth uptake and use by older adults: barriers and facilitators

Older adults have unique and sometimes intensive healthcare needs as well as barriers to access, making extension of eHealth care for older adults a relevant consideration. Our research goal was to perform a systematic review of the literature describing barriers and facilitators of eHealth uptake and use by older adults. Nine articles published in peer-reviewed journals were included in the review, which revealed three major thematic groups of barriers and facilitators: (1) personal factors, including attitudes and physical and functional abilities; (2) technological factors, including technology literacy and technology design; and (3) structural-societal-socioeconomic factors. Among these themes, the most important barriers included attitudes of a lack of perceived need for eHealth with a preference for traditional healthcare [1,2,3,4,5]; perceived problems with privacy, safety and reliability [1,2,3,5,6,8,9]; physical and functional issues relating to sight and hearing [1,2,3,4,5,7]; lack of basic competency in technology literacy [1,2,3,7]; poor hardware and software design [1,2,3,4,9]; and socioeconomic and financial concerns [1,2,3,4,5]. In contrast, the most common and important facilitators were belief that technology can improve life and health [2,5,6,9]; improved communication and support [1,5,6,8]; convenience, efficiency and usefulness [1,2,3,4,6,9]; training sessions focussed on technology skills and literacy [2,4,5]; access to support from younger family members [2,9]; and affordability, cost-effectiveness and financial benefit [3,4,5]. Successful implementation of a national eHealth strategy requires acknowledgement and consideration of these barriers and facilitators toward older adults' uptake and use of eHealth.
"year": 2022,
"sha1": "eebeec440f1ce99c869982512136f1b2df678a05",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "eebeec440f1ce99c869982512136f1b2df678a05",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Injectable hyaluronic acid hydrogel loaded with BMSC and NGF for traumatic brain injury treatment
Injectable hydrogels have the advantage of filling a defect area and thereby show promise as therapeutic implants or cell/drug delivery vehicles for tissue repair. In this study, an injectable hyaluronic acid hydrogel, dual-enzymatically cross-linked in situ by galactose oxidase (GalOx) and horseradish peroxidase (HRP), was synthesized and optimized, and the therapeutic effect of this hydrogel encapsulating bone mesenchymal stem cells (BMSC) and nerve growth factor (NGF) was investigated in traumatic brain injury (TBI) mice. In vitro experiments showed that both tyramine-modified hyaluronic acid hydrogels (HT) and NGF-loaded HT hydrogels (HT/NGF) possessed good biocompatibility. More importantly, HT hydrogels loaded with BMSC and NGF facilitated the survival and proliferation of endogenous neural cells, probably through neurotrophic factor release and neuroinflammation regulation, and consequently improved neurological function recovery and accelerated the repair process in a C57BL/6 TBI mouse model. These findings highlight that this injectable, BMSC- and NGF-laden HT hydrogel has enormous potential for TBI and other tissue repair therapies.
Introduction
Traumatic brain injury (TBI) is a common neurotrauma and a major cause of death and disability worldwide. The external force can directly disrupt brain structure and function, leading to physical, cognitive and behavioral symptoms such as loss of consciousness and impairment of cognition, memory and other neurological functions. However, to date there is no effective clinical therapy [1]. Primary injury and secondary injury are the two stages of the pathophysiological process of TBI [2]. Secondary injury, in particular, causes cellular, chemical, tissue or blood-vessel changes that contribute to further destruction of brain tissue after the initial impact: it leads to massive neuronal necrosis at the injured site, disrupts the blood-brain barrier, releases large numbers of inflammatory factors, and produces brain edema and neurological dysfunction [3,4]. The repair and rehabilitation of brain injury is a long-standing challenge. Recently, many studies have shown that stem cell transplantation holds great promise for TBI treatment [5,6].
The development of stem cell therapy opens a new avenue for brain function plasticity. When stem cells migrate to the damaged site of the brain, they survive and grow in a supportive microenvironment and crosstalk with local cells and signals in many ways, including enhancing the secretion of neurotrophic factors, inhibiting neuroinflammation, promoting synapse formation by neurons at the injured site, and releasing neurotransmitters to promote recovery of the damaged nervous system [7]. Mesenchymal stem cells can also differentiate into neural cells at the lesion to replace damaged or lost neurons. Meanwhile, they can provide nutritional support, promote neurogenesis, protect brain function, and support repair of the injured brain structure and functional reconstruction [8][9][10]. Among them, bone marrow mesenchymal stem cells (BMSC) have attracted particular attention due to their wide availability, low immunogenicity and fewer ethical concerns [11]. However, stem cell retention, survival and differentiation in the lesion are far from satisfactory and hamper brain functional recovery. Thus, effective delivery of BMSC to the brain lesion and optimization of stem cell fate remain technical challenges. Building a suitable neural scaffold is therefore a pivotal strategy in favor of stem cell and drug delivery to the target area in cell-based therapy for TBI.
With progress in the tissue engineering field, the development of new techniques provides innovative solutions to the existing problems of stem cell transplantation. Improvements in the therapeutic effect of stem cells have been reported using tissue engineering approaches in the treatment of a variety of diseases [12,13]. In TBI models, a series of hydrogels have been developed as neural scaffolds encapsulating stem cells and bioactive factors to repair cerebral function, owing to their three-dimensional network structure, which is similar to neural tissue [14][15][16]. Among them, hyaluronic acid (HA) is a natural non-sulphated glycosaminoglycan, a major component of the extracellular matrix, involved in the inflammatory response, angiogenesis and tissue regeneration [17,18]. HA has superior biocompatibility and biodegradability, is easy to modify chemically, and plays an important role in the process of wound healing [19,20]. Phenol-rich hyaluronic acid polymers have been of great interest for the development of in situ forming, injectable hydrogels enzymatically cross-linked by horseradish peroxidase (HRP) and galactose oxidase (GalOx), due to the controllable gelation rate, high specificity and sensitivity to external conditions.
Nerve growth factor (NGF) is a member of the neurotrophin family that supports neuronal survival, stimulates axonal growth and maintains synaptic plasticity, participates in the physiological processes of neurotransmitter synthesis and release, and promotes sensorimotor function recovery and axon regeneration [21][22][23]. However, the short half-life of exogenously administered NGF limits its bioactivity and therapeutic effect in vivo. Therefore, efficient delivery of NGF with a hydrogel scaffold may preserve its bioactivity and provide controlled release, which is beneficial for repair and regeneration of neural injury [24][25][26]. Through injection or spray-based minimally invasive approaches, hydrogels enable remodeling in the lesion, encapsulate cells and/or biomolecules, and accurately fit any irregular tissue defect [27,28].
Herein, we established a series of HT hydrogels dual-enzymatically cross-linked by GalOx and HRP. HT polymers served as the natural neural scaffold material, BMSC as the seed cells, and NGF as the bioactive factor. The characterization and biocompatibility of HT hydrogels and HT/NGF hydrogels were investigated systematically, and the therapeutic effect of NGF and BMSC loaded in HT hydrogel was evaluated in TBI mice. All data suggested that this injectable HT hydrogel can successfully load NGF and BMSC and has great potential for TBI treatment.
Synthesis of HT and HT/NGF hydrogels
HT hydrogels were prepared according to our previous method with some modifications [29]. The brief experimental steps are as follows: 0.5, 1, and 1.5 wt% HT polymers were dissolved in 100 mM D-galactose solution to obtain the pre-hydrogel solution, followed by the addition of 1 U/mL HRP and 1 U/mL GalOx to synthesize the HT hydrogel. For the preparation of HT/NGF hydrogels, 50, 100, 150, and 200 ng/mL NGF were added to the 0.5% pre-hydrogel solution, respectively, followed by the addition of 1 U/mL HRP and 1 U/mL GalOx to induce gelation.
Characterization of HT hydrogels
An inverted tube test was used to determine the gelation time of the 0.5, 1, and 1.5% HT hydrogels; timing started when GalOx and HRP were added to the HT polymer solutions.
The injectability of HT hydrogels was first characterized by measuring the linear viscosity (η) under a frequency sweep mode (25 °C, 1–100 s⁻¹). Afterward, 500 μL of HT pre-hydrogel solution was transferred to a syringe to observe whether it could be injected through a pinhole (25 G).
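As a hedged illustration of how shear-thinning can be quantified from such a sweep (the paper itself reports only the qualitative viscosity decrease), the sketch below fits the Ostwald-de Waele power-law model η = K·γ̇^(n−1) to synthetic viscosity data; a flow index n < 1 indicates shear-thinning. The data points are invented, not the measured values.

```python
# Minimal sketch (synthetic data): fit the Ostwald-de Waele power-law model
# eta = K * gamma_dot**(n - 1) to viscosity vs shear-rate data.
import numpy as np

shear_rate = np.array([1.0, 3.0, 10.0, 30.0, 100.0])  # s^-1
viscosity = np.array([12.0, 6.5, 3.1, 1.7, 0.8])      # Pa.s, illustrative values

# ln(eta) = ln(K) + (n - 1) * ln(gamma_dot): a straight line on log-log axes
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n, K = slope + 1.0, np.exp(intercept)
print(f"flow index n = {n:.2f}, consistency K = {K:.1f}")
print("shear-thinning" if n < 1 else "not shear-thinning")
```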
The water content of the HT hydrogels was calculated as D (%) = [(W_w − W_d)/W_w] × 100, where D denotes the water content of the hydrogels, W_w the wet weight of the hydrogels, and W_d the dried weight after freeze-drying.
The degradation performance of the hydrogels was investigated using the formula L (%) = (W_t/W_i) × 100, where L denotes the mass residual rate after the hydrogels were immersed in PBS solution for 1, 3, 7, 14, 21, 28 and 35 days, W_i the initial mass of the hydrogels before immersion in PBS, and W_t the mass of the hydrogels after immersion for 1, 3, 7, 14, 21, 28 and 35 days.
The enzymatic degradation performance of the hydrogels was determined by the same formula, L (%) = (W_t/W_i) × 100, where L denotes the mass residual rate after the hydrogels were immersed in 15 U/mL hyaluronidase solution, measured hourly; W_i is the initial mass of the hydrogels before immersion in the hyaluronidase solution, and W_t the mass of the hydrogels after each hour of immersion.
The swelling ratio (%) was calculated as swelling ratio (%) = (W_s/W_i) × 100, where W_s is the weight of hydrogels immersed in PBS solution from day 1 to day 7 and W_i the initial weight of the hydrogels. The measurement was performed in triplicate.
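The three gravimetric formulas above reduce to simple ratios. A minimal sketch follows, with illustrative weights chosen only to echo the reported magnitudes (~98% water content, ~211% swelling for 0.5%HT); they are not measured values.

```python
# Minimal sketch of the gravimetric formulas above; weights (mg) are illustrative.
def water_content(w_wet, w_dry):
    """D (%) = [(W_w - W_d) / W_w] * 100"""
    return (w_wet - w_dry) / w_wet * 100

def mass_residual(w_t, w_i):
    """L (%) = (W_t / W_i) * 100, for degradation in PBS or hyaluronidase"""
    return w_t / w_i * 100

def swelling_ratio(w_s, w_i):
    """swelling ratio (%) = (W_s / W_i) * 100"""
    return w_s / w_i * 100

print(f"D  = {water_content(500.0, 9.0):.1f} %")     # ~98% water, as reported
print(f"L  = {mass_residual(410.0, 500.0):.1f} %")   # mass remaining after immersion
print(f"SR = {swelling_ratio(1057.0, 500.0):.1f} %") # ~211% for 0.5%HT
```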
The internal morphology of the HT hydrogel was characterized by scanning electron microscopy (SEM, FEI Quanta200, The Netherlands) after lyophilization, breakage and gold spraying.
The rheological behavior of the HT hydrogels was evaluated on a rheometer platform (TA DHR2, USA). The dynamic oscillation scanning angular frequency ranged from 0.1 to 100 rad/s, and the temperature and strain were set to 37 °C and 1%, respectively.
Cytocompatibility of HT and HT/NGF hydrogels
A CCK-8 assay was used to assess the cytocompatibility of HT and HT/NGF hydrogels. Extracts of 0.5%, 1% and 1.5% HT hydrogels and of 0.5% HT hydrogels with different concentrations of NGF (50, 100, 150 and 200 ng/mL) were prepared in DMEM/F12 complete medium. The effect of the different HT and HT/NGF hydrogel extracts on the survival and proliferation of BMSC was assessed on days 1 and 2.
To evaluate the influence of the hydrogels on cellular activity, 3D culture was carried out as a test model. Briefly, BMSC were re-suspended in HT and HT/NGF pre-hydrogel solutions at a density of 1 × 10⁶ cells/mL, then HRP (1 U/mL) and GalOx (1 U/mL) were added to induce gelation. Each hydrogel (100 μL) was transferred to a 24-well plate, 1 mL DMEM/F12 complete medium was added to each well, and the plates were cultured at 37 °C in a cell incubator containing 95% air and 5% CO₂. After culturing for 3 and 5 days, the BMSC-loaded hydrogels were stained with Calcein-AM/PI working solution (Live/Dead kit) at 37 °C for 20 min and then observed under fluorescence microscopy (Leica DFC7000T, Germany). In addition, Ki67 immunofluorescence was performed to analyze the proliferation of BMSC cultivated in the hydrogels for 3 and 5 days; fluorescence was observed and photographed under an inverted fluorescence microscope.
Ethics statement
All animal procedures were performed in accordance with the Guidelines for Care and Use of Laboratory Animals of Zhengzhou University and approved by the Animal Ethics Committee of Zhengzhou University.
Blood compatibility and histocompatibility of HT hydrogel in vivo
The hemolysis rate was used to test the blood compatibility of HT hydrogels. First, HT hydrogels were prepared and soaked in normal saline for 30 min, then fresh mouse blood was collected and added. After further incubation for 1 h, the hydrogels were removed and the samples centrifuged at 2000 rpm for 5 min. Photographs were taken and the absorbance of the supernatant was measured at 545 nm. Mouse blood in deionized water served as the positive control group, and normal saline instead of water as the negative control group. The hemolysis ratio was calculated as hemolysis ratio (%) = [(OD_T − OD_N)/(OD_P − OD_N)] × 100, where OD_T, OD_N and OD_P denote the absorbance values of the test (hydrogel), negative and positive groups, respectively. In addition, the morphology of red blood cells (RBCs) in the NS, 0.5%HT, 1%HT and 1.5%HT hydrogel groups was observed and captured with an inverted microscope.
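A minimal sketch of the hemolysis-ratio arithmetic follows; the OD values are chosen only to illustrate the calculation (here reproducing a ratio near the 0.14% later reported for 0.5%HT) and are not measured data.

```python
# Minimal sketch of the hemolysis calculation; OD values at 545 nm are illustrative.
def hemolysis_ratio(od_test, od_neg, od_pos):
    """hemolysis (%) = [(OD_T - OD_N) / (OD_P - OD_N)] * 100"""
    return (od_test - od_neg) / (od_pos - od_neg) * 100

# saline negative control, deionized-water positive control, hydrogel test group
print(f"{hemolysis_ratio(od_test=0.021, od_neg=0.019, od_pos=1.450):.2f} %")
```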
Hematoxylin and eosin (HE) staining was performed to investigate the immune response to the implanted hydrogel scaffold. 100 μL of 0.5% HT pre-hydrogel solution was subcutaneously injected at the dorsum of mice (3 injection points per mouse). The hydrogels and surrounding tissues were separated on days 3, 7 and 14 and sectioned for HE staining to assess the biocompatibility of HT hydrogel in vivo. Moreover, the heart, liver, spleen, lung, kidney and blood samples were harvested on day 14 for HE staining and biochemical analysis. ALP, GOT and GPT were analyzed with the corresponding assay kits, with normal mice set as the control group.
2.5.3. Experimental groups and establishment of a moderate TBI model

C57BL/6 male mice (22-25 g) obtained from the Experimental Animal Center of Zhengzhou University were used in this study. BMSC were used as the seeded stem cells. TBI mice were randomly divided into four groups (6 mice per group per batch): TBI mice treated with normal saline (NS) as the control group, and three other groups of TBI mice treated with NGF-loaded HT hydrogel scaffold (HT + NGF), BMSC-loaded HT hydrogel scaffold (HT + BMSC), and BMSC and NGF-loaded HT hydrogel scaffold (HT + NGF + BMSC), respectively. The TBI mouse model was established by the typical Feeney's weight-drop method [30]. In brief, mice underwent standard preoperative hair removal under anesthesia. The scalp was then incised longitudinally along the median sagittal line, and the fascia was bluntly dissected to expose the right skull. Next, a hole about 3 mm in diameter was opened midway between the bregma and the lambda with its medial edge 1.5 mm lateral to the midline, and a craniocerebral percussion device (Shenzhen Ruiwode Lift Technology Co. Ltd, China) was adjusted so that the striker hit precisely at the center of the opening. Subsequently, the striker was lowered to make contact with the dura; after a decline of 2 mm, a 20 g impact hammer free-falling from a height of 20 cm rapidly produced moderate brain damage. Finally, routine cleaning of the wound and hemostasis were performed. After confirming that there was no active bleeding, the scalp was sutured. The mice's respiration and heartbeat were monitored, and the mice were kept warm until they were completely awake and their vital signs were stable.
HT hydrogel scaffold, NGF and BMSC injection
Seven days after TBI model establishment, C57BL/6 mice were anesthetized, and the primary bone hole was exposed for in situ injection into the center of the lesion. Briefly, the tip of a microsyringe was placed 1.0 mm below the dura, and 20 μL of pre-gel solution was slowly injected into the site of injury over approximately 2 min to reduce leakage of cells along the needle tract. After injection, the needle was kept in the lesion for an additional 5 min before being slowly withdrawn. Finally, the scalp was sutured, and the mice were kept warm until they became active. Routine prophylactic antibiotics were applied, and behavioral measures such as limb movements, learning and memory ability, and wound healing of the TBI mice were observed at predetermined time points.
Neurological motor function assessment
On days 1, 3, 7, 14, 21 and 28 after treatment, the neurological motor function of the C57BL/6 mice (6 mice per group) was evaluated. Scores were determined in a double-blind manner according to the modified neurological severity score (mNSS). The mNSS indexes, including motor and sensory function, balance and reflexes, were scored from 0 (healthy) to 18 (most severe) points. The experimental protocols were described in our previous study [30].
Morris water maze
The learning and memory abilities of the TBI C57BL/6 mice were investigated from day 23 to day 28 after treatment using a Morris water maze system (diameter 1.2 m, depth 0.6 m) with a platform (diameter 10 cm) located 2 cm underwater. The platform was marked with "○" so that mice could orient themselves and search for it. Before entering the water, mice were placed on the platform for 10 s to acclimate to the surrounding environment. They were then released far from the platform and required to search for the marked platform. If a mouse did not find the marked platform within 60 s, it was placed on it again for an additional 10 s to re-familiarize itself with the surroundings and underwent the test once more. The water temperature ranged from 19 °C to 21 °C, and the entire test was filmed with a camera. The escape latency and the time on the marked platform were recorded. On the last day, the platform was removed, mice were released from the same position mentioned above, and the swimming trails, the number of platform crossings and the time spent in the platform quadrant within 60 s were recorded.
Western blot
After treatment for 28 days, 6 mice in each group were sacrificed under anesthesia. Brain tissues around the damaged areas were isolated and western blotting was performed. In brief, tissues were lysed in lysis buffer, then equal amounts of protein were loaded, separated on 10% sodium dodecyl sulfate-polyacrylamide gels, and transferred to a polyvinylidene difluoride membrane (EMD Millipore, Billerica, MA, USA). After that, primary antibodies against neuronal differentiation-related proteins (NSE, NeuN and NFL; Proteintech), a neurotrophic factor (BDNF; Proteintech), an inflammation-associated protein (IL-6; Proteintech) and apoptosis-related proteins (Bax, Bcl-2; Proteintech) were incubated respectively, followed by horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG secondary antibody. β-actin was used as an internal control. Protein bands were visualized and analyzed using Quantity One software (Azure Biosystems, Azure c300, USA).
Immunofluorescence staining
To evaluate neural remodeling by the injected hydrogels, the proliferation and activity of neural cells in the hippocampus were examined using Ki67 and NeuN immunofluorescence staining. In addition, the inflammatory response in the lesion area was further examined by Arg1 and iNOS immunofluorescence staining.
Damaged area analysis
After 28 days of implantation, the brain tissues were carefully fixed with 4% paraformaldehyde. Frozen sections of the specimens were prepared to measure the volume of brain injury. Serial coronal sections were made 2.0 mm before and after the lesion site, with a thickness of 20 μm for each brain slice. One section was randomly selected from each run of 10 consecutive brain slices for HE staining. ImageJ software was used to analyze the lesion area in each group. The brain injury volume was calculated as: brain injury volume (mm³) = average injury area × number of brain slices (n) × 10 × 0.02 mm. The brain tissues were then photographed with a camera.
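A worked example of this volume estimate (the lesion areas are illustrative, not study data): since one section is sampled from every 10 consecutive 20-μm slices, each measured area represents 10 × 0.02 mm of tissue depth.

```python
# Worked example of the lesion volume estimate; lesion areas (mm^2) are illustrative.
areas_mm2 = [1.8, 2.4, 2.9, 2.6, 2.1, 1.5]  # one sampled HE section per 10 slices

n = len(areas_mm2)                    # number of sampled sections
avg_area = sum(areas_mm2) / n         # average injury area (mm^2)
# each sampled section stands for 10 slices x 0.02 mm (20 um) of depth
volume_mm3 = avg_area * n * 10 * 0.02
print(f"brain injury volume = {volume_mm3:.2f} mm^3")
```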
Statistical analysis
Data are given as means ± standard deviation (SD). Statistical analyses were plotted using GraphPad Prism 8.0 software. One-way ANOVA was performed to determine significance in statistical comparisons; p < 0.05 was considered statistically significant.
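For concreteness, a minimal sketch of the one-way ANOVA across the four treatment groups follows; the measurements are invented placeholder values (n = 6 per group), not study data.

```python
# Minimal sketch: one-way ANOVA across the four groups (placeholder values, n = 6).
from scipy.stats import f_oneway

ns_group    = [12, 11, 13, 12, 14, 12]   # NS control
ht_ngf      = [10,  9, 11, 10,  9, 10]   # HT + NGF
ht_bmsc     = [ 8,  9,  8,  7,  9,  8]   # HT + BMSC
ht_ngf_bmsc = [ 7,  6,  8,  7,  6,  7]   # HT + NGF + BMSC

f_stat, p = f_oneway(ns_group, ht_ngf, ht_bmsc, ht_ngf_bmsc)
print(f"F = {f_stat:.2f}, p = {p:.4g}")  # p < 0.05 taken as significant
```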
Physical characterization of HT hydrogels
In our previous study, we developed a dual-enzymatically cross-linked HT hydrogel and investigated the effect of enzyme activity on the physical and biological characteristics of HT hydrogels [29]. In this study, the influence of HT content on the physical and biological characteristics of HT hydrogels was further analyzed. HT polymers at 0.5, 1 and 1.5 wt% were used to prepare hydrogels, and both HRP and GalOx were set at 1 U/mL according to our previous study [29]. For convenience, these hydrogels are named 0.5%HT, 1%HT and 1.5%HT, respectively. The gelation times were 5.5 ± 0.7, 4.4 ± 0.8 and 7.2 ± 1.0 min, respectively, as shown in Fig. 1a; there was no linear correlation between gelation time and HT content. Injectable hydrogels have been widely studied due to their painless and minimally invasive merits. Hydrogels with shear-thinning ability can be injected directly into the desired position of injury, filling the wound well and making full contact with the wound site. Herein, a rheometer was applied to measure the relationship between the HT hydrogel's viscosity and shear rate. As presented in Fig. S1, increasing the shear rate reduced the viscosity, demonstrating the good shear-thinning capacity of HT hydrogel. Besides, the insets in Fig. S1 show that HT hydrogel could be easily injected and maintained its shape after injection without breaking, clogging or dissolving, all confirming the excellent injectability of HT hydrogel. From Fig. 1b, all hydrogels possessed a water content of about 98% (98.3 ± 0.4%, 98.4 ± 0.2% and 98.1 ± 0.1% for 0.5%HT, 1%HT and 1.5%HT hydrogels). Rheological results in Fig. 1c showed that the storage moduli of 0.5%HT, 1%HT and 1.5%HT hydrogels were all less than 100 Pa, similar to that of brain tissue and expected to be beneficial for neural differentiation [28]. The stability data in Fig. 1d show that all hydrogels were maintained in PBS solution for more than 28 days, with 0.5%HT hydrogel showing the highest stability. In addition, HT hydrogels could be biodegraded by hyaluronidase within 12 h: the complete degradation times of 0.5%HT, 1%HT and 1.5%HT hydrogels were 7, 9 and 12 h, respectively (Fig. 1e). Hyaluronic acid is a super-absorbent, swelling material. The swelling ratio increased with HT content, and the maximum swelling ratios of 0.5%HT, 1%HT and 1.5%HT hydrogels were 211.4 ± 19.2%, 574.9 ± 18.2% and 857.3 ± 41.2%, respectively (Fig. 1f). Pictures of HT hydrogels after soaking in PBS solution for 24 h are presented in Fig. 1g, consistent with the swelling results. Additionally, the microstructure of HT hydrogels was analyzed by SEM. Results in Fig. 1h show that all hydrogels had a loose and porous structure, similar to natural extracellular matrix. In summary, the main difference among these hydrogels is their swelling ability. In order to minimize the risk of secondary damage during implantation, 0.5%HT hydrogel, with its low swelling ratio, was selected as the scaffold for in vivo implantation.
Cytocompatibility of HT and HT/NGF hydrogels
The cytocompatibility of HT hydrogels and NGF-loaded hydrogels (HT/NGF) was measured by CCK-8 assay. As shown in Fig. 2a, HT hydrogels decreased the viability of BMSC slightly on the first and second days compared with the control group (without hydrogel treatment), but cell viability in all hydrogel groups remained above 95%, well beyond the 70% requirement of the international criteria [31], and there was no significant difference among the 0.5%HT, 1%HT and 1.5%HT hydrogel groups. Moreover, compared with the pure 0.5%HT hydrogel, NGF-loaded 0.5%HT hydrogels presented better cell viability: the supplement of NGF promoted cell survival and growth, and cell viability was obviously improved compared with the control group. Among the HT/NGF hydrogel groups, there was also no significant difference in cell viability. After the CCK-8 assay, cells were labeled with a Live/Dead kit (Calcein-AM/PI); almost all cells in each group were marked in bright green fluorescence (living cells), and cell morphology in the hydrogel groups presented no obvious change compared with the control group, confirming the good cytocompatibility of HT and HT/NGF hydrogels.
Afterward, the cytocompatibility of these hydrogels was further analyzed by the three-dimensional cultivation method, as shown in Fig. 2b. BMSC encapsulated in all hydrogels displayed good survival status (green fluorescence). Cell numbers in the 1.5%HT and 1%HT hydrogels were lower than in the other hydrogel groups, which might be due to their higher swelling ratio and the resulting diluted cell density. Besides, there was nearly no difference among the three hydrogel groups on day 3 and day 5. Interestingly, the encapsulated cells continuously leaked from the hydrogels during cultivation and still maintained high viability, as labeled in bright green (Fig. S2); this might partially explain why the cell number within the hydrogels did not increase over time. Additionally, Ki67 immunofluorescence showed that cells cultured in the hydrogels displayed good proliferation behavior, indicating that these hydrogels had little negative effect on cell growth. All these data from the CCK-8 assay, Live/Dead fluorescence staining and Ki67 immunofluorescence labeling demonstrate that these HT and HT/NGF hydrogels possess good cytocompatibility in vitro.
Previous studies have indicated that the shear force exerted during injection has a negative effect on cell viability [32]. Here, Live/Dead fluorescence staining of BMSC before and after injection was carried out to observe the influence of shear force. Results in Fig. S3 demonstrated that the shear force during injection did not obviously affect cell viability in our study. A possible reason is that the gelation time of 0.5%HT hydrogel is 5.5 min, so the injection process can finish before gelation; during injection, the shear force in the pre-hydrogel solution is far less than that in the hydrogel. Therefore, the shear force produced during injection of the HT pre-hydrogel is very mild, and its negative effect on cell viability is negligible.
In vitro hemolysis test for blood compatibility
As shown in Fig. 3a, the hemolysis test showed nearly no hemolysis reaction in the HT hydrogel groups. The hemolysis rates of 0.5%HT, 1%HT and 1.5%HT hydrogels were 0.14 ± 0.07%, 0.31 ± 0.20% and 0.41 ± 0.20%, respectively (Fig. 3b). These values fully meet the international standard for biomaterials (hemolysis rate < 5%), indicating that the hydrogels have good blood compatibility [33]. Besides, the morphology of RBCs (red blood cells) after treatment with hydrogels was observed using a microscope: all treated RBCs maintained a biconcave disc shape, similar to that of healthy RBCs in the NS group (Fig. 3c).
Histocompatibility assessment
100 μL of 0.5%HT hydrogel was subcutaneously injected at the dorsum of C57BL/6 mice to evaluate histocompatibility. The experimental mice showed normal eating and activity, and there was no bleeding or swelling after subcutaneous implantation. The weight and volume of the hydrogels gradually decreased, indicating that these hydrogels are biodegradable in vivo (Fig. 4a). As shown in Fig. 4b, many cells infiltrated into the hydrogels, including inflammatory cells (white arrow); however, most cell infiltration occurred in the marginal area of the hydrogels, and there was almost no inflammatory response in the tissues around the subcutaneous injection site, suggesting that HT hydrogel has good histocompatibility and can be further applied in vivo.
In order to further evaluate the systemic biosafety of 0.5%HT hydrogel, biochemical analysis of blood and HE staining of the main organs on day 14 were also performed. From Fig. 4c, the blood biochemical indexes ALP (alkaline phosphatase), GPT (glutamic-pyruvic transaminase) and GOT (glutamic-oxalacetic transaminase) showed no significant difference between the healthy mice and the 0.5%HT hydrogel-implanted mice. As presented in Fig. 4d, there was no obvious pathological change between the Normal group and the Hydrogel group, indicating that the implantation of 0.5%HT hydrogel would not cause noticeable impairment to vital organs and tissues. All these results highlight the excellent biocompatibility of 0.5%HT hydrogel in vivo.
In vivo testing
To further investigate the in vivo neural repair efficacy of HT hydrogel combined with BMSC and NGF, a moderate traumatic brain injury (TBI) contusion model in C57BL/6 mice was established according to our previous method [34]. Treatment was performed 7 days after establishment of the TBI model. The timeline of the animal experiment was as follows (Scheme 1): the day of hydrogel implantation was set as day 0; the modified neurological severity score (mNSS) was assessed on days 1, 3, 7, 14, 21 and 28 after implantation; the Morris water maze (MWM) test was performed from day 23 to day 28; and brain tissues were collected on day 28 for measurement of the damaged volume, western blot and immunofluorescence assays.
BMSC and NGF-loaded HT hydrogel implantation promotes the recovery of neuromotor function in TBI mice
From Fig. 5, the mNSS scores of TBI mice in the HT + NGF, HT + BMSC and HT + NGF + BMSC groups were significantly decreased compared with the NS group (p < 0.05) on the 14th day after implantation. With prolonged treatment, mNSS scores in the HT + BMSC and HT + NGF + BMSC groups continued to decrease significantly on the 21st and 28th days after implantation compared with not only the NS group (p < 0.05) but also the HT + NGF group (p < 0.05). These results indicated that, in the early stage after implantation, the HT + NGF, HT + BMSC and HT + NGF + BMSC treatments all had good therapeutic effects and promoted the recovery of neuromotor function. Meanwhile, the outcome of TBI mice treated with HT + BMSC and HT + NGF + BMSC hydrogel implantation was better than that with HT + NGF hydrogel implantation on days 21 and 28.
BMSC and NGF-loaded HT hydrogel implantation improves the recovery of learning and memory function in TBI mice
To evaluate the recovery of the learning and memory abilities of TBI mice in each group, the Morris water maze (MWM) behavior test was performed from day 23 to day 28 after treatment, with results presented in Fig. 6. As shown in Fig. 6b, the escape latency in the HT + BMSC and HT + NGF + BMSC groups was significantly lower than in the NS group (p < 0.05), indicating that mice in these groups could better adapt to the training environment. As shown in Fig. 6a, c and d, compared with the NS and HT + NGF groups, mice in the HT + BMSC and HT + NGF + BMSC groups crossed the platform more frequently, and mice in the HT + NGF + BMSC group stayed longer in the target quadrant (p < 0.05). Moreover, in comparison with the HT + BMSC group, the HT + NGF + BMSC group demonstrated a better outcome. These results indicated that HT/BMSC and HT/NGF/BMSC hydrogels had better therapeutic effects and improved the recovery of learning and memory function in TBI mice, with HT/NGF/BMSC hydrogel treatment performing best.

Fig. 5. The mNSS score of implanted TBI mice in each group, showing the recovery of motor ability from day 1 to day 28 (*p < 0.05 compared with NS, #p < 0.05 compared with HT + NGF hydrogel; mean ± SD, n = 6).
BMSC and NGF-loaded HT hydrogel implantation alleviates the inflammatory response and apoptosis in the injured site of TBI mice
After 28 days of transplantation, the expression of the inflammation-related protein IL-6 and the apoptosis-related proteins Bax and Bcl-2 was detected by western blot to elucidate the curative effect of BMSC and NGF-loaded HT hydrogel on brain injury repair. As shown in Fig. 7a and b, the expression of IL-6 in the HT + NGF + BMSC group was decreased significantly compared with the NS group (p < 0.05), indicating that the inflammatory response after transplantation was reduced in TBI mice. Compared with the NS group, the expression of the apoptotic factor Bax decreased in all treatment groups, but the decrease was significant only in the HT + BMSC and HT + NGF + BMSC groups (p < 0.05). Although the expression of the anti-apoptotic factor Bcl-2 increased in all treatment groups, only the HT + NGF + BMSC group showed a significant increase compared with the NS group (p < 0.05). The reduced expression of the proapoptotic protein Bax and the elevated expression of the apoptosis-inhibiting protein Bcl-2 together indicate that hydrogel implantation markedly inhibited neuronal apoptosis. As shown in Fig. 7c, compared with the NS group, stronger immunofluorescence intensity of Arg1 (marker of M2 macrophage/microglia) and weaker immunofluorescence intensity of iNOS (marker of M1 macrophage/microglia) were observed in the HT + NGF, HT + BMSC and HT + NGF + BMSC groups, suggesting that the hydrogel treatments promoted the polarization of macrophages/microglia from the M1 to the M2 type. In short, the western blot and immunofluorescence results showed that BMSC and NGF-loaded HT hydrogel implantation significantly mitigated TBI-induced neuroinflammation and apoptosis around the injured site of TBI mice.
BMSC and NGF-loaded HT hydrogel injection enhances the cell survival of neurons in the injured site and neurogenesis of TBI mice
The expression of brain-derived neurotrophic factor (BDNF) and the neuron-specific markers NFL, NSE and NeuN in the damaged tissues was detected by western blot to elucidate the curative effect of BMSC and NGF-loaded HT hydrogel on brain injury repair after 28 days of treatment. From Fig. 8a and b, in contrast with the NS group, the expression of BDNF was increased in all hydrogel treatment groups, and significantly increased in the HT + NGF + BMSC group (p < 0.05). Similarly, the expression of NFL was increased in all hydrogel-treated groups, and significantly increased in the HT + BMSC and HT + NGF + BMSC groups compared with the NS group (p < 0.05). The expression of NSE in the HT + BMSC and HT + NGF + BMSC groups was increased significantly compared with the NS and HT + NGF groups (p < 0.05). In addition, the expression of NeuN was promoted in all treatment groups compared with the NS group (p < 0.05). These results suggest that the HT + BMSC and HT + NGF + BMSC treatments produce more neurotrophic cytokines and neuron-related proteins, which effectively promote neural repair and functional recovery in TBI mice. Furthermore, the proliferation of neural cells in the hippocampus was examined by immunofluorescence staining (Fig. 8c). Compared with the NS group, the positive expression of both Ki67 and NeuN in the DG region was obviously increased in the HT + BMSC and HT + NGF + BMSC groups, and was most abundant in the HT + NGF + BMSC group, indicating more proliferating cells in the DG region and an enhanced therapeutic effect of the BMSC and NGF-loaded HT hydrogel treatment.
BMSC and NGF-loaded HT hydrogel accelerates the healing process of damaged tissue in TBI mice
After 28 days of in situ injection, the damaged tissue volume in each group was observed by HE staining (Fig. 9a) and gross morphology (Fig. 9c), and then quantitatively analyzed with ImageJ (Fig. 9b). Compared with the NS group, the damaged volumes in the treatment groups (HT + NGF, HT + BMSC and HT + NGF + BMSC) were all decreased significantly (p < 0.05). Compared with the HT + NGF group, the damaged areas of the HT + BMSC and HT + NGF + BMSC groups were smaller and the therapeutic effect was more pronounced (p < 0.05). In particular, the lesion volume in the HT + NGF + BMSC group was the lowest among all the treatment groups, highlighting the best recovery outcome of the combined treatment (HT + NGF + BMSC hydrogel).
Discussion
TBI is a serious neurotrauma disease with a high global incidence of death and disability; it causes sudden, severe disruption of brain structure, extensive neuronal death and long-lasting or irreversible neurological dysfunction [35,36]. Currently, the effect of clinical treatment on neurological function after TBI is far from satisfactory [37]. At present, a growing number of preclinical studies and clinical trials using mesenchymal stem cells (MSCs), induced pluripotent stem cells (iPSCs) and neural stem cells (NSCs) have been performed to evaluate the safety and efficacy of stem cell therapy for TBI [38]. Stem cell therapy has opened a new era for some intractable diseases, including TBI. Stem cell transplantation can promote neural regeneration and functional reconstruction through multiple mechanisms, including inhibition of neuroinflammation, direction of neural differentiation and secretion of neurotrophic factors [39,40]. However, stem cell-based therapy still faces many technical bottlenecks, such as low retention and survival of stem cells in the injured niche after transplantation and inefficient neural differentiation, which limit its therapeutic effects. To solve these problems, encapsulation of stem cells in a hydrogel material to promote cell retention and survival is a promising and effective alternative strategy.
A neural scaffold should possess a series of essential characteristics to support transplanted cell growth while matching the brain microenvironment. Hydrogels are among the most widely studied tissue engineering materials and are excellent carriers for stem cells, bioactive factors and drugs. An ideal hydrogel offers a controllable gelation process, high water content, porosity, appropriate rheological behavior, suitable degradation performance, injectability and, most importantly, good biocompatibility. Currently, a variety of hydrogel materials have been studied in nervous system diseases, such as hyaluronic acid, sodium alginate, gelatin, collagen and polypeptides [14,41–44]. As a major component of the extracellular matrix of neural cells, hyaluronic acid plays an important role in maintaining brain homeostasis by affecting cell migration, proliferation, differentiation and other cellular behaviors [18,45,46]. Hyaluronic acid hydrogels have good biocompatibility and have been widely used in tissue engineering and regenerative medicine. Nerve growth factor (NGF) regulates the production of neurotransmitters, improves the survival, growth and differentiation of neurons, and has shown a superior ability to repair nerve injuries in animal models [47,48].
In this study, we synthesized and optimized an injectable hyaluronic acid hydrogel (HT hydrogel) through an in situ enzymatic crosslinking technique using HRP and GalOx, and used it as a neural scaffold to deliver BMSC and NGF for TBI treatment. The 0.5%HT hydrogel possessed sufficient moisture (about 98%), a low swelling ratio and appropriate rheological behavior, which meet the physiological requirements for brain tissue repair and reduce frictional irritation to the surrounding tissue [49,50]. A gelation time of 5 min benefits the injection process, because a longer operation time would lead to loss of the loaded cells and NGF. The continuous and porous structure facilitates the permeation of nutrients, the exchange of oxygen and carbon dioxide, and the discharge of metabolites, providing a friendly environment for cell survival, extension and proliferation. The superior cytocompatibility of the 0.5%HT and 0.5%HT/NGF hydrogels, confirmed by a 3D culture model and Ki67 immunofluorescence staining, could ensure the survival and proliferation of the loaded BMSC. The good biocompatibility of this hydrogel, verified by hemolysis tests, blood biochemistry assays and HE staining, is a prerequisite for its further application in vivo.
For animal treatment, normal saline (NS), HT + NGF, HT + BMSC and HT + NGF + BMSC hydrogels were micro-injected into the core of the lesion on day 7 after the establishment of the TBI model. One week after TBI, the harsh microenvironment of the lesion had been dramatically alleviated and the severe inflammatory response had basically subsided, providing a relatively stable living microenvironment for the implanted cells [51]. Therefore, this timepoint (7 days after TBI modeling) was chosen for treatment. In addition, after 7 days a cavity had formed at the injury site, providing space for in situ hydrogel injection [52–54]. The mNSS scoring and MWM are commonly used tests to assess neurological function, including motor function, learning and memory ability [55,56]. The mNSS results indicated that all three treatment groups had improved neural function in TBI mice, and the HT + BMSC and HT + NGF + BMSC groups exhibited better therapeutic recovery of neuromotor function. The MWM results demonstrated that the number of platform crossings and the time spent in the target zone were significantly increased in the HT + BMSC and HT + NGF + BMSC groups compared with the others. Motor function and cognition showed a consistent rising trend. Therefore, both the mNSS and MWM test results confirmed that the HT + BMSC and HT + NGF + BMSC treatments indeed promoted neurological function recovery in TBI mice after 28 days of treatment, when compared with the NS group.

[Fig. 7 caption, continued] ... μm. β-actin was used as a protein loading control; *p < 0.05 compared with NS group, #p < 0.05 compared with the HT + NGF group; mean ± SD, n = 3.

Fig. 8. Expression of neuro-related proteins in the damaged brain tissue and DG region of each group after 28 days' treatment. (a) Western blot of BDNF, NFL, NSE and NeuN; (b) quantitative analysis of western blot results of BDNF, NFL, NSE and NeuN; (c) immunofluorescence staining of Ki67 and NeuN to detect neural cell proliferation in the DG region after 28 days' treatment; DAPI labels the nuclei, scale bar represents 100 μm. β-actin was used as internal control; *p < 0.05 compared with NS group, #p < 0.05 compared with HT + NGF group; mean ± SD, n = 3.

Fig. 9. Statistical results of brain tissue defect area in each group after 28 days' transplantation. (a) HE staining of tissue sections; (b) quantitative analysis of lesion volume; and (c) representative photographs of the damaged volume in each group after 28 days' treatment. *p < 0.05 compared with NS group, #p < 0.05 compared with HT + NGF group; mean ± SD, n = 3.
In addition, we further analyzed the expression of proteins related to cell growth, including IL-6, Bax, Bcl-2, BDNF, NFL, NSE and NeuN, by western blot. Combined with immunofluorescence staining of Arg1, iNOS, NeuN and Ki67, these data suggested that the HT + BMSC and HT + NGF + BMSC treatments markedly reduced neuroinflammation, inhibited cell apoptosis, promoted neurotrophic factor secretion, and enhanced endogenous neural cell survival and proliferation. Moreover, analysis of the damaged volume indicated that the composite hydrogels significantly facilitated the repair of brain damage. Therefore, it is clear that the BMSC and NGF-loaded HT hydrogel makes a striking contribution to neurological function recovery and tissue regeneration in TBI mice, and its working mechanism synergistically combines apoptosis inhibition, immunoregulation, neurotrophic factor secretion and neurogenesis.
Conclusions
In this work, hyaluronic acid hydrogels dual-enzymatically cross-linked by GalOx and HRP were developed as a novel neural scaffold to simultaneously load NGF and BMSC for TBI treatment. HT hydrogels have good injectability, stability, biodegradability, a low storage modulus (<100 Pa) and superior biocompatibility. The 0.5%HT hydrogel, with the lowest swelling ratio, was more suitable as the implanted scaffold compared with the 1%HT and 1.5%HT hydrogels. In situ injection of the NGF and BMSC-loaded HT hydrogel could significantly promote the functional recovery of motor, learning and memory ability, and accelerate the healing of damaged brain tissue. Mechanistically, hydrogel implantation not only provides a positive nutrient supply for cell survival and proliferation, but also suppresses neuroinflammation and apoptosis. These findings provide a solid basis for the application of the BMSC and NGF-loaded HT hydrogel in TBI treatment.
Declaration of competing interest
The authors declare no competing financial interest. | 2022-01-01T16:04:29.820Z | 2021-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "cf218af98aa86af839bd1999b0b080002a44a06d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.mtbio.2021.100201",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5645812f154bd88c07fd50b293dde1af5c92bb8d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267632217 | pes2o/s2orc | v3-fos-license | Plant-based diets and cardiovascular risk factors: a comparison of flexitarians, vegans and omnivores in a cross-sectional study
Background The growing trend towards conscious and sustainable dietary choices has led to increased adoption of flexitarian diets, characterised by plant-based eating habits with occasional consumption of meat and processed meat products. However, the cardiovascular disease (CVD) risk factors associated with flexitarian diets compared to both vegans and omnivores remain underexplored. Methods In this cross-sectional study, 94 healthy participants aged 25–45 years were included, categorized into long-term flexitarians (FXs, ≤50 g/day of meat and meat products, n = 32), vegans (Vs, no animal products, n = 33), and omnivores (OMNs, ≥170 g/day of meat and meat products, n = 29). Various CVD risk factors were measured, including fasting blood samples for metabolic biomarkers, body composition analysis via bioimpedance, blood pressure measurements, arterial stiffness evaluated through pulse wave velocity (PWV), and metabolic syndrome (MetS) severity determined using browser-based calculations (MetS-scores). Dietary intake was assessed using a Food Frequency Questionnaire (FFQ), diet quality was calculated with the Healthy Eating Index-flexible (HEI-Flex), and physical activity levels were recorded using the validated Freiburger questionnaire. Results The data showed that FXs and Vs had more beneficial levels of insulin, triglycerides, total cholesterol, and LDL cholesterol compared to OMNs. Notably, FXs revealed the most favorable MetS-score results based on both BMI and waistline, and better PWV values than Vs and OMNs. In addition, FXs and Vs reported higher intake rates of vegetables, fruit, nuts/seeds and plant-based milk alternatives. Conclusion The flexitarian diet appears to confer cardiovascular benefits. While Vs had the most favorable results overall, this study supports the view that reducing the intake of meat and processed meat products, as in flexitarianism, may confer advantages with respect to CVD risk factors. Supplementary Information The online version contains supplementary material available at 10.1186/s40795-024-00839-9.
Introduction
Plant-based diets have gained popularity in Germany and Western countries in general, a trend likely driven by increased awareness of sustainable lifestyles, animal welfare and health concerns [1,2]. In addition, the flexitarian diet, which is plant-based but includes small amounts of meat and processed meat products, is winning followers who cite health aspects as their primary motivation [3,4].
In recent years, cardiovascular disease (CVD) has remained the leading cause of death worldwide as well as in Germany, and more than half of all deaths are directly related to it [5,6]. Consequently, when assessing the health effects of different dietary patterns, CVD risk factors should be taken into account. However, the causes of CVD are diverse and can be divided into modifiable and non-modifiable risk factors. Non-modifiable risk factors include age, gender and genetic predisposition. In contrast, diet and lifestyle are important modifiable risk factors [1,6–11].
Typical omnivore diets rich in meat and especially processed meat products have been shown to be associated with a higher prevalence of CVD risk factors such as obesity [12,13], hypertension [14,15], insulin resistance [7,11], unfavourable blood lipid levels [10,12] and adverse vascular changes [5,7]. In Germany, the dietary habits of omnivores are characterised by a consumption of meat and processed meat products above the intake rates recommended by the German Nutrition Society (>600 g/week) [16,17]. Additionally, a physically active lifestyle (>2.5 h/week of moderate activity) reduces the risk of development and progression of atherosclerosis, making physical activity an important target for intervention and for preventing CVD risk factors [18–20]. However, physical activity levels are too low in Western industrialized countries (<2.5 h/week) [21], including Germany, where only 38% of people reach the recommended activity rates [22].
While the multiple cardiovascular health benefits of an exclusively plant-based vegan diet have been widely described [23–28], current studies focusing on a plant-based flexitarian diet are still rare [29–32]. Therefore, it is unclear whether a cardiovascular-healthy diet must exclude all animal products, or whether a reduction in meat and processed meat products is sufficient to obtain the health-promoting effects.
Although CVD usually occurs at an older age, dietary and lifestyle factors in younger years play a crucial role in the development of the disease [33,34]. Unfortunately, there is limited research on the CVD risk profile of FXs compared to Vs and OMNs in Germany. Thus, the aim of this cross-sectional study was to evaluate associations of a flexitarian diet, compared to vegan and omnivore diets, with CVD risk factors in a young to middle-aged, healthy German study population.
This study was part of the interdisciplinary research project 'NES' (Nachhaltige Ernährungsstile) between the Leibniz University of Hannover and the Georg August University of Göttingen, Germany.
Study design and participants
This cross-sectional study was conducted at the Institute of Food Science and Human Nutrition, Leibniz University of Hannover, Germany. Ethical approval was provided by the Ethics Commission of the Medical Chamber of Lower Saxony (Hannover, Germany) on 9 September 2019 under number 43/2019. The study was carried out between March and August 2020. However, the investigations took place only during non-lockdown periods, and only people who had not previously been infected with COVID were included in the study. Written informed consent was obtained from all participants in accordance with the guidelines of the Declaration of Helsinki. The study was registered in the German Clinical Trials Registry in January 2020 (DRKS 00019887).
The detailed study design has recently been published by Bruns et al., 2022 [35].
Interested subjects were included in the study if they had followed their diet for at least 1 year and were categorized as follows: (a) flexitarians (FXs): meat and processed meat products consumption ≤50 g/day (equivalent to ≤350 g/week); (b) vegans (Vs): complete exclusion of food of animal origin; (c) omnivores (OMNs): meat and processed meat products consumption ≥170 g/day (equivalent to ≥1190 g/week). Meat and processed meat products were defined as red and white meat (meat) and ham, sausage, cold cuts, meatballs and meat nuggets (meat products). Consumption limits for FXs were derived from national and international intake recommendations for meat and processed meat products [17,36,37], and those for OMNs from per capita consumption between 2011 and 2018 in Germany and Europe, respectively [38,39]. Notably, to ensure a clear distinction between FXs and OMNs, subjects with a daily consumption of meat and processed meat products ≥50 g and ≤170 g were excluded.
Participant eligibility was assessed through a multistep process. First, interested subjects were preselected via an online questionnaire, which mainly contained questions on the inclusion and exclusion criteria (e.g., age, sex, anthropometrics, health status, dietary habits) to check whether they were suitable for the FX, V or OMN group. Secondly, potentially eligible participants underwent a face-to-face interview, which focused on dietary habits (e.g., the quantity of meat and processed meat products consumed) as well as lifestyle factors. Thirdly, only subjects who reported no change in their behaviour due to the pandemic situation were included in the study. Finally, on this basis, the decision on study participation was made.
Moreover, the study aimed to ensure a homogeneous cohort in terms of a narrow age range, gender, BMI within the physiological range (20–28 kg/m²) and non-smoking status. The main exclusion criteria were: acute febrile infections, metabolic or malignant diseases, gastrointestinal disorders, pregnancy or lactation, endocrine and immunological diseases, food intolerances, and drug or alcohol dependence (Fig. 1). Finally, matched participants were invited to the Institute for an examination day.
Anthropometric data and body composition
On the examination day, the participants' height and weight were measured to calculate the BMI according to the standard formula [40]. Waist and hip circumferences were also determined using a tape measure. Body composition parameters were assessed by multi-frequency bioelectrical impedance analysis (BIA) according to the manufacturer's guidelines using Nutriguard M (Data Input Company, Darmstadt, Germany).

Fig. 1. Flow chart of the study.
Food frequency, diet quality calculation and physical activity questionnaires
Dietary habits were recorded using the validated Food Frequency Questionnaire (FFQ) of the Robert Koch Institute (RKI), Berlin, Germany [41]. It consists of 57 questions, with several sub-questions, on dietary habits/food group intake in the previous 4 weeks. In addition, 28 questions were included on plant-based alternative products, low or highly processed, respectively.
Diet quality was calculated using the HEI-Flex score, which is a modification of the validated Healthy Eating Index-2015 (HEI-2015) [42]. Based on the FFQ data, a single HEI-Flex score was calculated for each participant, and the median of each diet group is presented. Detailed information on the diet quality calculation based on the HEI-Flex can be found elsewhere [35].
Health-relevant activity as a confounding factor (Appendix 1) was recorded using the validated German Freiburger questionnaire to assess the activity level of each participant [43].
Arterial stiffness measurements
Arterial stiffness was determined by pulse wave velocity (PWV) analysis and blood pressure measurements. Both were performed according to the manufacturer's recommendations (boso ABI-system 100 PWV; BOSCH + SOHN, Jungingen, Germany, 2019).
All measurements and analyses were carried out by trained nutritionists from the Institute.
Biomarker analysis in blood and MetS-score calculation
After an overnight fast, blood samples were obtained by arm vein puncture and stored below 5 °C. Samples were transported to the accredited and certified Laboratory of Clinical Chemistry, Hannover Medical School, Germany, for analysis.
For the assessment of insulin resistance, the Homeostatic Model Assessment (HOMA) was used according to the following formula: HOMA index = fasting insulin (µU/ml) × fasting blood glucose (mg/dl) / 405 [44].
The systemic immune-inflammation index (SII) was estimated using the following formula: SII = (P × N) / L, where P, N and L are the peripheral platelet, neutrophil and lymphocyte counts, respectively [45].
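For illustration, the two biomarker formulas above can be computed directly. The following minimal Python sketch uses hypothetical input values (not study data) and assumes all three SII counts are expressed in the same unit:

```python
def homa_index(fasting_insulin_uU_ml, fasting_glucose_mg_dl):
    # HOMA index = fasting insulin (µU/ml) x fasting glucose (mg/dl) / 405 [44]
    return fasting_insulin_uU_ml * fasting_glucose_mg_dl / 405.0

def sii(platelets, neutrophils, lymphocytes):
    # SII = (P x N) / L [45]; any common count unit works, since it cancels in N/L
    return platelets * neutrophils / lymphocytes

# Hypothetical example values:
print(homa_index(8.0, 90.0))        # -> ~1.78
print(sii(250e9, 4.0e9, 2.0e9))     # counts per litre -> 500e9, i.e. 500 x 10^9/L
```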
The browser-based American Metabolic Syndrome (MetS) Severity Calculator was used to determine individual MetS severity using established calculations [46,47]. These calculations take into account several CVD risk parameters, such as systolic blood pressure, triglycerides, HDL cholesterol and fasting glucose, as well as information on sex, age, race/ethnicity and weight. As a result, a single value is calculated for each person, usually using body mass index (MetS-score based on BMI). In addition, the MetS-score can also be calculated on the basis of waist circumference (MetS-score based on waist circumference). As both BMI and waist circumference have advantages and disadvantages for calculating the MetS-score, both methods are presented, because (a) BMI correlates well with the percentage of total body fat, but only to a limited extent when an individual's muscle mass accounts for a high proportion of total mass, and (b) waist circumference is a better predictor of metabolic risk than BMI, although it is a less reliable measure of visceral fat in normal-weight and younger subjects.
Data analysis and statistical methods
Assuming an effect size ≥0.8, the sample size of n = 25 per group was based on a significance level (alpha) of 0.05 and a statistical power (1 − β) of 0.8 for detecting differences between the three diets. A minimum of 30 participants per group was enrolled, taking into account an expected drop-out rate of 15%. SPSS software (IBM SPSS Statistics Version 28.0.1.0; Chicago, IL, USA) was used for statistical analysis. Data are presented as median (x̃) and 25th–75th percentiles. The Kolmogorov–Smirnov test was used to test for normality. Normally distributed data were tested with univariate one-way analysis of variance (ANOVA) with Bonferroni correction for post hoc analysis to assess differences between the three dietary patterns. Non-normally distributed data were tested with the non-parametric Kruskal–Wallis test to detect statistically significant differences between the three groups. Regression calculations were performed stepwise. First, CVD risk factors were selected that differed significantly between the three study groups after adjustment for confounders (total cholesterol, LDL, PWV and both MetS-scores). Second, to assess the relationship between these CVD risk factors and the consumption amounts of food groups, Spearman's rho correlation coefficient (rho) was used at the p ≤ 0.05 level (Appendix 2) [48]. Third, linear regression models were applied to evaluate the associations between these CVD risk parameters and the intake of the identified food groups (Appendix 3). To approach normality, all dependent variables were log-transformed. The residuals were tested for uniform linear dispersion. If homoscedasticity was present, a bootstrap was performed using the BCa method. In the regression analysis, an adjusted association (age, sex, BMI and total activity) with the dependent variable (cholesterol, LDL, PWV) was calculated for each food group. Both MetS-score values were adjusted only for total activity, as age, sex and weight status were already taken into account in the score calculations. Heat-map colours are based on adjusted standardized regression coefficients β (Fig. 2).
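As a schematic illustration of this stepwise workflow (not the authors' actual SPSS analysis), the Spearman screening and adjusted-regression steps could look roughly as follows in Python; the data frame and all of its column names are hypothetical assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

df = pd.read_csv("nes_participants.csv")  # hypothetical file, one row per participant

# Step 2: Spearman screening of a CVD risk factor against a food group.
rho, p_rho = spearmanr(df["ldl"], df["vegetables_g_day"])

# Step 3: adjusted linear regression on the log-transformed outcome.
df["log_ldl"] = np.log(df["ldl"])
df["veg_z"] = (df["vegetables_g_day"] - df["vegetables_g_day"].mean()) / df["vegetables_g_day"].std()
model = smf.ols("log_ldl ~ veg_z + age + C(sex) + bmi + total_activity", data=df).fit()
beta = model.params["veg_z"]  # adjusted coefficient for the (z-scored) food group;
                              # the paper reports fully standardized β values
```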
The statistical significance level of p ≤ 0.05 was used for all analyses.
Anthropometric and body composition
Anthropometric and body composition measures indicate a healthy study population (Table 1). Sex-specific values are reported only where there were statistically significant differences between the sex-specific diet groups. In all groups, the median BMI was within the normal range. Lower BMI values were observed for the FX women compared to the OMN women (p = 0.05), but body weight did not differ significantly between the three diet groups. Significant differences were otherwise found only for body fat: FX women had significantly lower values for body fat both in kg and in percent compared to OMN women (p = 0.013 and p = 0.003, respectively), whereas the difference was not significant for men. In addition, the V women had a significantly lower percentage of body fat compared to the OMN women.
Food group intake and diet quality between the three study groups
There were no significant differences in median consumption of beverages (low/no calorie), softdrinks (sugared) and bread/rice/noodles/potatoes between the three diet groups (Table 2). In contrast, OMNs consumed the least vegetables, FXs twice as much and Vs three times as much (p < 0.001), with significant differences between FXs and Vs compared to OMNs. Similarly, median fruit consumption was about twice as high in FXs and Vs compared to OMNs (p = 0.018), with only Vs and OMNs differing significantly.
Although FXs consumed in median only half as much milk as OMNs, the difference was not significant. The intake of plant-based milk alternatives was about five times lower in FXs than in Vs (p = 0.001), as was the intake of plant-based dairy alternatives (p = 0.001). OMNs consumed in median neither plant-based milk nor plant-based dairy alternatives.
FXs and OMNs had significantly lower intake rates of legumes compared to Vs (p < 0.001). Regarding nuts/seeds, FXs reported an intake about four times higher than OMNs, but only about half as much as Vs (p < 0.001). The consumption of sweets and alcohol was not significantly different between FXs and Vs, but was significantly higher in OMNs than in Vs.
As expected, FXs had a significantly lower consumption of meat and processed meat products than OMNs. The consumption of plant-based meat alternatives was highest among the Vs, significantly lower among the FXs, and absent among the OMNs (p < 0.001). The reported median intake of fish/fish products and eggs did not differ significantly between FXs and OMNs. However, OMNs consumed twice as many eggs as FXs.
The HEI-Flex score results differed significantly between all diet groups (p < 0.001) with Vs showing the most favorable diet quality, followed by FXs and then by OMNs.
Comparison of CVD risk profile parameters between the three study groups
The median values of CVD risk markers were within the reference ranges in all diet groups (Table 3).
Examination of the blood glucose markers in the three study groups showed similar levels of fasting glucose, HbA1c and HOMA, while Vs had the lowest fasting insulin concentrations compared to FXs and OMNs (p = 0.016), with statistical significance between Vs and OMNs.
Regarding blood lipid markers, both FXs and Vs had significantly lower levels of total and LDL cholesterol than OMNs (p < 0.001). HDL cholesterol levels were not statistically different between the three groups. Moreover, FXs and Vs had significantly lower fasting triglycerides than OMNs (p = 0.008).
Concerning the inflammatory state, no significant difference was observed between the three diet groups in the SII.
In terms of metabolic syndrome (MetS) severity, FXs had lower (more favorable) score levels, closely followed by Vs, and were significantly better than OMNs based on both BMI (p = 0.012) and waist circumference (p = 0.027). However, all diet groups had MetS-score values associated with a low risk (MetS-score < 0) of CVD events [47].
Regarding vascular health parameters, there were no significant differences between the three diet groups for systolic and diastolic blood pressure. Notably, significantly lower (better) PWV values (p = 0.022) were observed in the flexitarian group compared to Vs and OMNs.
Further examination of whether the associations with CVD risk factors remained significant after adjustment for covariates showed that there were still significant differences between the three dietary groups for total cholesterol, LDL, both MetS-scores and PWV levels. However, the differences in triglyceride and insulin concentrations lost significance after correction for confounders (age, sex, BMI, total activity).
Associations between CVD risk factors and food groups intake
In relation to total cholesterol, significant positive associations were observed for dairy products, sweets and meat consumption (Fig. 2, Appendix 3), with dairy and meat intake showing the most pronounced associations (β ≥ 0.220). Conversely, significant inverse relationships were found for intake of fruit, plant-based dairy alternatives, legumes and HEI-Flex score points, with standardized regression coefficients of β ≤ −0.219. No significant associations were observed for median intakes of plant-based meat and milk alternatives.
For LDL cholesterol, significant positive associations were found with median consumption of softdrinks, sweets and meat (β ≥ 0.225). Conversely, statistically significant negative associations emerged for median intake of vegetables, fruit, dairy alternatives, legumes and HEI-Flex score points (β ≤ −0.199).
For the two MetS-scores (based on BMI and waistline), significant positive associations were found with processed meat consumption (β ≥ 0.286). In addition, the MetS-score based on BMI exhibited a significant association with median meat consumption (β = 0.237), while the MetS-score based on waistline indicated a significant relationship with median sweets intake (β = 0.223). Conversely, significant negative coefficients were found between both MetS-scores and median vegetable intake as well as HEI-Flex scores (β ≤ −0.263). Likewise, a negative relationship was evident between the MetS-score based on waistline and fruit intake (β ≤ −0.201). For PWV, significant positive associations were observed for median consumption of meat and processed meat, respectively (β ≥ 0.226).
Overall, the regression analyses showed positive β-coefficients (β > 0) for animal-based food groups, indicating adverse associations with CVD risk indicators. Conversely, higher median intakes of plant-based food groups often corresponded to negative β-coefficients (β < 0), indicating favorable associations with the CVD risk profile.
Discussion
Dietary choices play a crucial role in influencing CVD risk [1,49,50]. While recent studies have already described cardiovascular health benefits for Vs and vegetarians [51], data on flexitarianism are still insufficient. As the consumption of meat and processed meat products is associated with an unfavorable CVD risk profile [52–54], the aim of the present study was to evaluate whether a cardiovascular-healthy diet requires the complete elimination of all animal products, as in veganism, or whether a reduced consumption of meat and processed meat products towards a more plant-based diet, as in flexitarianism, already supports beneficial outcomes for CVD risk factors. Therefore, a healthy, adult German study cohort with clearly defined FXs, Vs and OMNs was included.

Fig. 2. Heat map of standardized regression coefficients (β) obtained from linear regression analyses between median food group intakes and CVD risk parameters.
The results of the present study were compared with data from different dietary patterns along the plant-based spectrum because, on the one hand, a precise and generally accepted definition of flexitarianism is still lacking [55,56] and, on the other hand, previous research with clearly defined flexitarian study groups is rare. In addition, other studies often include self-defined dietary groups, a higher proportion of women [31,32,57], a wider age range [9,29,32,57], or participants with pre-existing conditions [27,58,59]. In contrast, the present study not only records the median consumption of various food groups, but also differentiates between processed foods and plant-based alternative products. Importantly, this study included both the "classic" individual blood parameters of CVD risk and sum scores to assess the severity of MetS as an important CVD risk factor. Additionally, arterial stiffness was determined by measuring PWV as a CVD risk marker. Therefore, the present results have an exploratory pilot character, and the data were not adjusted for multiple testing [60].
As is already known from studies comparing the beneficial CVD impact of a vegetarian versus an OMN diet, the present results support these findings also for the flexitarian dietary pattern. The FXs (and Vs) showed a more favorable CVD risk profile in terms of blood lipid profile (total cholesterol, LDL), MetS-scores and PWV compared to OMNs. However, there were no statistically significant differences between the three study groups in blood glucose markers, blood pressure levels or inflammatory state. The reasons for this are unclear and may relate to the fact that the participants were young and healthy. In addition, there were also no significant differences in total energy intake between the groups, as recently published [61].
Blood lipid profile parameters
The results supported that a higher consumption of vegetables, fruit and legumes was associated with lower total cholesterol and LDL levels. Similarly, both FXs and Vs had significantly lower concentrations of total and LDL cholesterol compared to OMNs. These findings are consistent with a recent study (n = 258) which observed significantly lower levels of total and LDL cholesterol in several plant-based diets, including FXs, compared to OMNs [31]. Likewise, other studies have reported lower values of total and LDL cholesterol in non-OMN participants [10,62–64].
Triglyceride concentrations in the current study were also significantly higher in OMNs than in FXs and Vs, but the difference lost significance after adjustment for confounders (age, sex, BMI, physical activity). However, previous research has shown conflicting results regarding the effect of a plant-based diet compared to an omnivore diet on triglyceride levels. While one meta-analysis (2018) observed significantly lower triglyceride values in Vs compared to OMNs [65], another (2015) found no significant difference between a vegetarian and an omnivore diet [66]. Moreover, there were no significant differences in HDL cholesterol levels between the two plant-based diets (FXs and Vs) and OMNs in the present study. These results are consistent with other studies that have also found no differences in HDL levels between various plant-based diets and OMNs [31,67].
MetS-score calculations
In the present study, both FXs and Vs had lower (better) MetS-score levels compared to OMNs. In particular, FXs appeared to have the most beneficial values of the three diet groups. Notably, all groups achieved results associated with an intermediate (MetS-score = 0) or low (MetS-score < 0) risk level [46,47]. These findings are in line with a cross-sectional analysis of the Adventist Health Study 2 (n = 773), which also found a lower risk of MetS in semi-vegetarians compared to OMNs [68]. Also, a more recent review (2021) showed that a vegan diet appears useful in the prevention and treatment of MetS [69]. However, in the absence of comparable European or German results, the present values were compared with a U.S. population sample, which may limit direct comparability due to potential national differences, e.g., in dietary habits. Nonetheless, the use of the MetS-scores is promising, as it avoids relying on the fixed cut-off values traditionally used for estimating metabolic syndrome risk and enables the identification of individuals with scores below a threshold who would not typically be considered at risk for CVD.
Pulse wave velocity levels
Notably, FXs had significantly lower (more favorable) pulse wave velocity compared to OMNs in this study, even after adjusting for confounding factors. These findings are consistent with a study by Acosta-Navarro and colleagues (n = 88), who examined PWV in healthy male vegetarians and OMNs (age ≥ 35 years) and found significantly lower levels in vegetarians compared to OMNs [70]. Other studies have also reported improved vascular structure in participants following a plant-based diet compared to OMNs [10,71,72].
Associations between food group intake, diet quality and CVD risk parameters
The results of the present study suggest that higher median consumption rates of softdrinks, dairy products, sweets, meat and processed meat are associated with higher total and LDL cholesterol levels, MetS-scores and PWV values. Additionally, both FXs and Vs had significantly lower intake rates of sweets compared to OMNs. These findings are consistent with a cohort study (n = 17,824), which also reported higher consumption of sugary foods like softdrinks and sweets among OMNs compared to vegetarians [73]. Similarly, a recent review from 2020 highlighted that increased consumption of sweets, typically rich in free sugars and refined starches, is associated with a higher risk of obesity and elevated LDL levels [74].
As defined, OMNs had a significantly higher median consumption of meat and processed meat products compared to FXs and Vs in this study. Furthermore, positive associations were observed between meat, and particularly processed meat, consumption and total and LDL cholesterol levels, MetS-scores and PWV values. These findings are consistent with previous studies that have related meat and processed meat consumption to various CVD risk factors [9,75–77]. For example, a cohort study (n = 81,529) concluded that increased red meat consumption was associated with higher cholesterol levels, hypertension and higher body weight [54], and a systematic review and meta-analysis of cohort studies (2019) found evidence that a reduction in processed red meat intake is associated with a lower risk of CVD [78].
Furthermore, both FXs and Vs reported a high median consumption of plant-based milk, dairy and meat alternatives, while OMNs reported no consumption of these products. However, the associations between these food groups and CVD risk factors were inconclusive in the present study. The health effects of these products are still not well investigated, as they vary greatly in composition and many are highly processed, containing high levels of salt, sugar and/or saturated fat [79,80]. These ingredients have been associated with a higher CVD risk, but their overall effects are still debated [81–83].
In terms of diet quality, based on the HEI-Flex score values, both FXs and Vs had significantly higher, more favorable, levels compared to OMNs. Furthermore, the regression calculations in the present study supported that higher score points (= higher diet quality) were associated with a more favorable blood lipid profile as well as MetS-scores. These findings are in line with previous studies that found inverse associations between diet quality and blood lipid parameters [84,85]. Additionally, a cross-sectional Polish study (n = 535) reported an inverse relationship between HEI-2015 scores and the occurrence of the metabolic syndrome [86].
Strengths and limitations
The study has several limitations that should be considered. Firstly, the cross-sectional design limits the ability to establish causality between dietary patterns and CVD risk factors; future research employing longitudinal or intervention designs would provide more robust evidence. Secondly, the sample size of 94 participants limits the generalizability of the findings, so the results should be interpreted as those of an exploratory pilot study. Thirdly, the use of food frequency questionnaires to assess dietary intake may lead to recall bias and inaccuracies in reporting. Also, the dietary assessment method did not allow accurate capture of dietary fiber intake, different fatty acids (MUFAs, PUFAs) or phytochemicals.
Despite these limitations, the study has several strengths. The well-controlled study design ensured homogeneity of the three dietary groups in terms of age, sex, BMI, health and smoking status. Notably, in contrast to previous studies, the present study also included MetS-scores and PWV as additional CVD risk indicators.
Conclusion
In conclusion, both plant-based diets, FXs and Vs, were associated with improved blood lipid profiles and higher diet quality compared with OMNs in the present study. FXs were often intermediate between Vs and OMNs, with some CVD risk parameters approaching or exceeding those of Vs. Notably, FXs had the most favorable MetS-scores and PWV values of the three groups. Overall, the results support a beneficial impact of a flexitarian diet on CVD risk parameters in the present cohort. However, further research with larger, clearly defined flexitarian study populations is needed to better understand the influence of this dietary pattern on CVD risk factors.
study design, methodology, reviewing and editing, supervision. All authors have read and agreed to the submitted version of the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL.
FXs = flexitarians; Vs = vegans; OMNs = omnivores. Data are shown as median (x̃) with 25th, 75th percentiles. Differences between groups were analyzed with one-way ANOVA for normally distributed data and the Kruskal–Wallis test with post-hoc Bonferroni correction for non-normally distributed data. p < 0.05 was considered statistically significant; p-values in bold represent statistical significance. * statistically significant difference between FXs and OMNs; ** statistically significant difference between Vs and OMNs; *** statistically significant difference between FXs and Vs. SII: Systemic Immune-Inflammation Index
Table 1
Anthropometric and body composition
Table 2
Food group intake and diet quality between the three study groups [35]. p-values in bold represent statistical significance. * statistically significant difference between FXs and OMNs; ** statistically significant difference between Vs and OMNs; *** statistically significant difference between FXs and Vs. ¹ HEI-Flex score values: score points (SP) based on calculations with the Healthy Eating Index-flexible according to [35], with cut-off values (V) of Vmax = 100 SP and Vmin = 0 SP; higher SP indicate higher diet quality
Table 3
Comparison of CVD risk profile parameters between the three study groups | 2024-02-14T05:08:38.659Z | 2024-02-12T00:00:00.000 | {
"year": 2024,
"sha1": "1c502d3e4f82c59c9ebaf7fac436295ee1604e0e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1c502d3e4f82c59c9ebaf7fac436295ee1604e0e",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13706931 | pes2o/s2orc | v3-fos-license | Meclizine Preconditioning Protects the Kidney Against Ischemia–Reperfusion Injury
Global or local ischemia contributes to the pathogenesis of acute kidney injury (AKI). Currently there are no specific therapies to prevent AKI. Potentiation of glycolytic metabolism and attenuation of mitochondrial respiration may decrease cell injury and reduce reactive oxygen species generation from the mitochondria. Meclizine, an over-the-counter anti-nausea and -dizziness drug, was identified in a 'nutrient-sensitized' chemical screen. Pretreatment with 100 mg/kg of meclizine, 17 h prior to ischemia, protected mice from IRI. Serum creatinine levels at 24 h after IRI were 0.13 ± 0.06 mg/dl (sham, n = 3), 1.59 ± 0.10 mg/dl (vehicle, n = 8) and 0.89 ± 0.11 mg/dl (meclizine, n = 8). Kidney injury was significantly decreased in meclizine-treated mice compared with the vehicle group (p < 0.001). Protection was also seen when meclizine was administered 24 h prior to ischemia. Meclizine reduced inflammation, mitochondrial oxygen consumption, oxidative stress, mitochondrial fragmentation, and tubular injury. Meclizine-preconditioned kidney tubular epithelial cells, exposed to blockade of glycolytic and oxidative metabolism with 2-deoxyglucose and NaCN, had reduced LDH and cytochrome c release. Meclizine upregulated glycolysis in glucose-containing media and reduced cellular ATP levels in galactose-containing media. Meclizine inhibited the Kennedy pathway and caused rapid accumulation of phosphoethanolamine. Phosphoethanolamine recapitulated meclizine-induced protection both in vitro and in vivo.
Introduction
Acute kidney injury (AKI) is a common clinical problem associated with an increasing prevalence, high morbidity and mortality, and prolonged length of hospitalization (Hsu et al., 2007; Lameire et al., 2005; Xue et al., 2006). AKI is also a risk factor for progression to chronic kidney disease (CKD) (Siew and Deger, 2012; Lo et al., 2009; Wald et al., 2009; Ferenbach and Bonventre, 2015; Canaud and Bonventre, 2015). Global or local ischemia contributes to the pathogenesis of AKI, which complicates various clinical conditions. Ischemia-reperfusion injury (IRI) is also a risk factor for delayed graft function and chronic allograft nephropathy (Wirthensohn and Guder, 1986; Brezis et al., 1984; Fletcher et al., 2009). As the options to prevent AKI are few and its prognosis is poor, novel interventional strategies are needed. Episodes of nonlethal ischemia can precondition the kidney, protecting it against subsequent ischemia (Park et al., 2001; Joo et al., 2006; Ali et al., 2007; Bonventre, 2002). It would be highly desirable to mimic ischemic preconditioning with a pharmaceutical intervention. While there are studies reporting that agents such as high mobility group box 1 (HMGB1) (Wu et al., 2014), isoflurane (Su et al., 2014), inhibitors of hypoxia inducible transcription factor (HIF) or carbon monoxide (Bernhardt et al., 2006) are protective when given to animals prior to the ischemic event, in most studies the agents are administered close to the time of IRI. Furthermore, the effects are not sustained if the agents are given more than a few hours prior to IRI, nor are the mechanisms understood.
Ischemia plays a central role in the initiation and establishment of AKI because the nephron has a high energy demand and intrarenal oxygen tensions in the outer medulla are low and further reduced by hypoperfusion (Brezis and Rosen, 1995). The kidney proximal tubular epithelial cells are particularly sensitive to IRI because they have minimal glycolytic capacity and rely on mitochondrial metabolism for ATP synthesis (Wirthensohn and Guder, 1986; Klein et al., 1981; Uchida and Endou, 1988). In addition, in the setting of tubular cell injury, mitochondrial respiration can result in the generation of oxidants. Therefore, shifting energy metabolism from mitochondrial respiration to glycolysis could be a viable therapeutic strategy to minimize cell injury. Meclizine was identified by using a small molecule screening strategy to identify clinically useful drugs that are capable of shifting cellular energy metabolism from respiration to glycolysis (Gohil et al., 2010). Meclizine is an "over-the-counter" FDA approved histamine receptor blocker used for the treatment of nausea, vomiting, and dizziness associated with motion sickness and has been used for many decades. Meclizine shifts cellular energy metabolism from mitochondrial respiration to glycolysis (Wirthensohn and Guder, 1986), via direct targeting of cytosolic phosphoethanolamine metabolism (Gohil et al., 2013). We found that meclizine reduced inflammation, mitochondrial oxygen consumption, oxidative stress, mitochondrial fragmentation and tubular injury after kidney IRI. Meclizine pretreatment of HK-2 cells reduced NaCN-induced LDH release, cytochrome c release and mitochondria-dependent ATP production, and increased lactate production. In addition, meclizine inhibited the Kennedy pathway and led to rapid accumulation of phosphoethanolamine. Exogenous addition of ethanolamine in vivo and in vitro was protective, suggesting that meclizine-mediated protection occurs via cytosolic phosphoethanolamine metabolism. Thus, our study not only offers a clinically used drug as a potential therapeutic agent but also identifies a previously unidentified pathway that can be targeted for kidney IRI.
Animal Experiments
All mouse studies followed the fundamental guidelines for Animal Care and Use in Research and Education and were performed in accordance with the animal use protocol approved by the Institutional Animal Care and Use Committee of Harvard Medical School. Experiments were performed in 8-10 wk old male C57BL/6 mice purchased from Charles River Laboratories. Animals were anesthetized with sodium pentobarbital (60 mg/kg body weight intraperitoneally) prior to surgery. Kidneys were exposed through flank incisions and rendered ischemic by clamping the renal pedicles with nontraumatic clamps (Roboz, Rockville, MD). After 27 min at 36.5-37°C the clamps were removed. Male mice were used because they are more susceptible to ischemia and have a more consistent response to ischemia (Park et al., 2004). Successful renal ischemia and reperfusion was documented by visual inspection of the kidney. Blood pressure was not monitored in this study. Two hours after surgery, 1 ml of NaCl 0.9% was administered intraperitoneally. Some animals were subjected to sham surgery. In the toxin AKI models mice received a one-time intraperitoneal injection of cisplatin (25 mg/kg body weight, Sigma-Aldrich, St. Louis, MO, USA) or aristolochic acid (10 mg/kg body weight, Sigma-Aldrich) in NaCl 0.9%. The control group was administered NaCl 0.9% only. Meclizine or vehicle (10% Kolliphor® EL in PBS) was administered intraperitoneally at different doses and different time-points.
For the dose-response experiment, mice received 10, 30, 60 or 100 mg/kg body weight of meclizine 17 h and 3 h before IRI. For the time-course experiment, one injection of meclizine (100 mg/kg) was given 8, 17 or 24 h before IRI. For preconditioning experiments, 100 mg/kg of meclizine was injected 17 h before IRI. Some mice received two injections of meclizine (100 mg/kg) 0 (after removing clamps) and 8 h after IRI. Other animals were injected with ethanolamine (150 mg/kg body weight of ethanolamine, Sigma-Aldrich), pH 7.4 in phosphate buffered saline (PBS) or vehicle (PBS) administered intraperitoneally 2 h before, immediately after and 24 h after IRI. Mice were sacrificed 24 or 48 h after release of the pedicle clamps, 3 days after cisplatin injection or 5 days after aristolochic acid, and 48 h after ethanolamine injection for tissue analysis. Mice were randomly divided into experimental groups. Serum creatinine was measured in all mice. Before sacrifice, some animals were placed in metabolic cages for 3 h to collect urine for kidney injury molecule-1 (KIM-1) measurement.
Renal Function
Serum creatinine was measured by the picric-acid method using the Beckman Creatinine Analyzer II (Beckman, Brea, CA). Serum blood urea nitrogen (BUN) was measured using the Infinity Urea Kit (Thermo Scientific, West Sussex, UK).
KIM-1 Measurement
Urine KIM-1 concentration was measured using the Luminex xMAP technology (Vaidya et al., 2011). Briefly, 30 μl of urine sample was incubated with ~6000 anti-mouse KIM-1-coupled beads/well for 1 h, followed by 3 washes with PBS-Tween 20 (PBST). Beads were then incubated with biotinylated KIM-1 detection antibody for 45 min and washed again 3 times with PBST. Quantification was achieved by incubating samples with phycoerythrin-coupled streptavidin (Invitrogen) and exciting at 532 nm. The signal from this fluorochrome was detected using the Bio-Plex 200 system (BioRad) and is directly proportional to the amount of antigen bound to the microbead surface. Data were interpreted using a 13-point standard curve fitted with a five-parameter logistic regression model. All samples were analyzed in triplicate and the intra-assay variability was less than 5%.
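To make the curve-fitting step concrete, a minimal sketch of a five-parameter logistic (5PL) standard-curve fit is shown below; this is not the Bio-Plex vendor software, and the standard concentrations and signal values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, e):
    """Asymmetric five-parameter logistic: a = lower asymptote, d = upper
    asymptote, c = mid-range concentration, b = slope, e = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** e

# Hypothetical 13-point standard curve (concentration vs. assay signal).
conc = np.logspace(0, 4, 13)
signal = five_pl(conc, 50.0, 1.2, 300.0, 30000.0, 0.8)

popt, _ = curve_fit(five_pl, conc, signal,
                    p0=[signal.min(), 1.0, np.median(conc), signal.max(), 1.0],
                    maxfev=20000)

def back_calc(y, a, b, c, d, e):
    # Invert the 5PL to read an unknown sample's concentration from its signal.
    return c * (((a - d) / (y - d)) ** (1.0 / e) - 1.0) ** (1.0 / b)

print(back_calc(10000.0, *popt))  # -> ~211 (concentration units of the standards)
```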
Histology
Kidneys were fixed in 10% formalin overnight and then placed into 70% ethanol. Paraffin sections of embedded kidneys were stained with hematoxylin and eosin (H&E) and scored in a blinded fashion. The acute tubular necrosis score was determined by quantitating detachment of epithelial cells, loss of brush border, cast formation and inflammatory cell infiltrate, and scored from 0 to 4 based on the percentage of the area presenting these alterations: 0, no lesion; 1, <25% of parenchyma affected by the lesion; 2, 25-50% of parenchyma affected by the lesion; 3, 50-75% of parenchyma affected by the lesion; and 4, >75% of parenchyma affected by the lesion.
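As a small illustration of this scoring rubric (the handling of the exact boundaries at 25/50/75% is our assumption, since the stated ranges touch at their edges):

```python
def atn_score(pct_affected: float) -> int:
    """Map the % of parenchyma showing tubular injury to the 0-4 scale above."""
    if pct_affected <= 0:
        return 0   # no lesion
    if pct_affected < 25:
        return 1   # <25% affected
    if pct_affected < 50:
        return 2   # 25-50% affected
    if pct_affected < 75:
        return 3   # 50-75% affected
    return 4       # >75% affected

print([atn_score(p) for p in (0, 10, 30, 60, 90)])  # -> [0, 1, 2, 3, 4]
```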
Immunofluorescence Staining
Kidneys were fixed in 4% PLP (4% paraformaldehyde, 75 mM L-lysine, 10 mM sodium periodate) for 2 h at 4°C, and then placed in 30% sucrose overnight. Tissues were snap frozen in optimal cutting temperature compound (OCT, Sakura FineTek, Torrance, CA) and cryosections of 7 μm were mounted on microscope slides. Sections were incubated overnight with primary antibodies as indicated: anti-F4/80 (hybridoma supernatant) and anti-GR1+ (eBioscience), or anti-kidney-specific cadherin (Ksp-cadherin) (Morizane et al., 2013). Slides were then incubated with Cy3- or FITC-labeled secondary antibodies (Jackson ImmunoResearch). Sections were mounted in Vectashield containing DAPI to stain the nuclei (VectorLabs, Burlingame, CA). Neutrophils were expressed as the mean number of GR1+ cells per 400× magnification field and macrophages as the mean F4/80+ area per 400× magnification field using ImageJ. The Ksp-cadherin-positive area was likewise expressed as the mean positive area in 200× magnification fields. Ten randomly selected images per mouse were quantified using ImageJ software (http://rsbweb.nih.gov/ij/) (Schrimpf et al., 2012; Grgic et al., 2012). All images were obtained by standard or confocal microscopy (Eclipse 90i and C1 Eclipse, respectively; both from Nikon).
Assays of Mitochondrial Physiology
C57BL/6 mice were treated with two intraperitoneal injections of 100 mg/kg meclizine at 17 and 3 h before sacrifice. Mitochondria were isolated from kidneys by differential centrifugation and resuspended in experimental buffer containing glutamate and malate as respiratory substrates (125 μM) to a final concentration of 0.5 mg/ml (Gohil et al., 2010). Coupled and uncoupled respiration was measured following addition of 0.1 mM ADP and 5 μM carbonyl cyanide m-chlorophenyl hydrazone, respectively. O2 consumption was monitored with a Fiber Optic Oxygen Sensor Probe (Ocean Optics) at 25°C.
Electron Microscopy
Pieces of mouse kidney tissue were fixed in 4% paraformaldehyde, post-fixed in 1% osmium tetroxide, dehydrated in graded alcohols, and embedded in Epon. A tissue block of approximately 1 mm³ was collected from each kidney, including a portion of renal cortex and outer medulla, for standard processing. Semithin sections of each block were stained with toluidine blue and examined by light microscopy to select areas for ultrathin sectioning. Ultrathin sections were cut, placed on nickel grids, and examined using a digital electron microscope (JEOL USA JEM-1010). Mitochondrial area was measured using ImageJ software (Birk et al., 2013).
Cell Culture
The HK-2 (human kidney-2; human proximal tubular epithelial) and LLC-PK1 cell lines were purchased from the American Type Culture Collection. Cells were cultured in DMEM or DMEM/F12 containing 10% fetal bovine serum in a humidified atmosphere with 5% CO2 at 37°C.
Measurement of Lactate Production
Increased lactate production was used as a marker of upregulation of glycolysis. Briefly, HK-2 cells were subcultured 1:4 from a confluent culture plate into a 10 cm dish. Once confluence was reached, cells were treated with 25 μM meclizine or vehicle for 17 h. After incubation, cells were washed and incubated in 1 ml PBS at 37°C for 1 h, and then incubated in 1 ml of PBS containing 1 mM glucose at 37°C for 1 h. Samples were collected, and 50 μl of 1.6 M perchloric acid was added to the 1 ml of PBS containing 1 mM glucose to stop metabolism. Lactate was measured at a wavelength of 340 nm after incubation of 100 μl of each sample with 1 ml of reaction buffer (0.1 M Tris, 0.4 M hydrazine, 0.4 mM EDTA, 10 mM MgSO4, 80 mg/ml NAD, 5 mg/ml LDH, pH 8.5) for 1 h at room temperature. Results were normalized to the protein content of the sample.
LDH Assay
Cell viability after various treatments was evaluated by an LDH microplate titer assay as previously described (Chen et al., 1990). At the end of the various treatments, 100 μl of culture medium was collected to measure media LDH levels. Total LDH levels were then determined by addition of Triton X-100 (final concentration 0.1%) to the cells at 37°C for 30 min to release all LDH. The percentage of LDH release was calculated by dividing the media LDH after a treatment by the total LDH.
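The release calculation itself is simple arithmetic; a minimal sketch (with hypothetical activity readings, not study data) might read:

```python
def ldh_release_pct(media_ldh: float, total_ldh: float) -> float:
    """Percent LDH release = media LDH / total LDH x 100, where total LDH is
    the activity measured after Triton X-100 permeabilization (see above)."""
    return 100.0 * media_ldh / total_ldh

# Hypothetical activity readings (arbitrary units):
print(ldh_release_pct(media_ldh=0.32, total_ldh=1.45))  # -> ~22% release
```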
ATP Measurement
Cells were seeded into a 12-well plate (0.5 × 10^5 cells/well) and allowed to grow overnight. Growth medium (DMEM or DMEM/F12 containing 10% fetal bovine serum) was then replaced with 25 mM glucose or 10 mM galactose medium, and cells were cultured for 24 h under 21% or 5% O2. For the meclizine or ethanolamine preconditioning experiments, after incubation with 25 μM meclizine, 10 μM ethanolamine or DMSO for 17 h, growth medium was replaced with 10 mM galactose medium and cells were cultured for another 24 h. ATP content was assessed using the ATP Bioluminescent Assay Kit (Sigma-Aldrich, St. Louis, MO, USA) and normalized to total cellular protein.
Determination of the Intracellular Concentration of Phosphoethanolamine
The intracellular concentration of phosphoethanolamine in meclizine-treated HK-2 cells was determined as follows: HK-2 cells were seeded into a 6 cm plate (0.25 × 10^6 cells/dish). After 20 h of growth, cells were treated with 25 μM meclizine or DMSO for approximately 17 h. Cells were scraped, collected in methanol extraction solution (80% methanol, 20% H2O), and phosphoethanolamine levels were quantified by liquid chromatography-mass spectrometry (LC-MS) (Gohil et al., 2013).
Analysis of the Release of cytochrome c
To determine the cytochrome c released from mitochondria during chemical hypoxia with or without meclizine pretreatment, cells were permeabilized with 0.05% (wt/vol) digitonin in an isotonic sucrose buffer for 2-4 min (Brooks et al., 2009). The cytosolic fraction released by digitonin was collected for western blot analysis using specific antibodies to cytochrome c.
Western Blot Analysis
HIF1α stabilization and the release of cytochrome c were analyzed by western blot. Extracts (20 μg protein/lane) from cells pretreated with either 0.1% DMSO, 25 μM meclizine or 500 μM CoCl2 (Sigma), or digitonin lysates, were separated by SDS-PAGE, transferred onto a PVDF membrane, and subjected to western blotting using anti-HIF-1α (Novus Biologicals, Littleton, CO), anti-cytochrome c (BD Pharmingen, San Diego, CA), or anti-β-actin (Cell Signaling) antibodies. β-actin was used as a loading control. Proteins were visualized using HRP-conjugated secondary antibodies (Dako, Glostrup, Denmark) and ECL detection reagents (GE Healthcare, Milwaukee, WI). The ECL film was scanned using a commercial office scanner (Epson Expression 1680 Scanner) and evaluated in ImageJ.
Statistical Analysis
Statistical analysis was performed using Prism 6.0 (GraphPad Software Inc.). Evaluation of the data was carried out using the unpaired two-tailed t test when two groups were compared, or one-way Analysis of Variance (ANOVA) followed by Tukey's post-test when multiple groups were compared. A p value lower than 0.05 was considered significant. Statistical power analyses were performed to evaluate the sample sizes necessary for the main group comparisons reflected in Fig. 1B and D. The animal number in each group was chosen to achieve a statistical power higher than 0.80 (Cohen, 1992; Faul et al., 2007, 2009). When not specifically stated, results are presented as means of at least three independent experiments and error bars indicate ±SEM.
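A hedged sketch of the two comparisons described above (unpaired two-tailed t test for two groups; one-way ANOVA followed by Tukey's post-test for multiple groups), using standard scipy/statsmodels calls on hypothetical group values; Prism was the tool actually used in the study:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

vehicle = np.array([1.4, 1.2, 1.6, 1.5])      # hypothetical creatinine values
meclizine = np.array([0.9, 0.8, 1.0, 0.9])

# Two groups: unpaired two-tailed t test
t_stat, p_val = stats.ttest_ind(vehicle, meclizine)

# Multiple groups: one-way ANOVA followed by Tukey's post-test
sham = np.array([0.30, 0.20, 0.30, 0.25])
f_stat, p_anova = stats.f_oneway(sham, vehicle, meclizine)
values = np.concatenate([sham, vehicle, meclizine])
labels = ["sham"] * 4 + ["vehicle"] * 4 + ["meclizine"] * 4
print(pairwise_tukeyhsd(values, labels))      # pairwise post-hoc comparisons
```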
Pretreatment With Meclizine Protects the Kidney Against IRI
To evaluate whether meclizine pretreatment protects the kidney against IRI, we treated mice twice, 17 and 3 h before 27 min of ischemia, with different doses of meclizine and analyzed kidney function 24 h after surgery (Fig. 1A). A flow diagram for the meclizine dose-response experiments is presented in Supplementary Fig. 1. Serum creatinine levels were increased in mice subjected to IRI, reflecting kidney dysfunction (Fig. 1B). There was a dose-dependent protection afforded by meclizine, which was statistically significant at the 100 mg/kg dose level. At this dose of meclizine, serum creatinine levels were 0.90 ± 0.10 (mean ± SEM) mg/dl in the meclizine-treated group vs 1.40 ± 0.20 mg/dl in the vehicle group (p < 0.01) (Fig. 1B). In a second set of animals, meclizine (100 mg/kg) was administered at different time points before IRI (Fig. 1C). A flow diagram for the meclizine time course experiments is presented in Supplementary Fig. 2. Kidney function was analyzed 24 h after surgery; KIM-1 mRNA (Havcr1) levels and histology were evaluated 48 h after surgery. When mice were treated with only one dose of meclizine (100 mg/kg) 17 or 24 h before IRI, there was a significant decrease in creatinine levels 24 h after reperfusion compared to the respective vehicle-treated group (17 h pretreatment: 0.89 ± 0.11 vs 1.59 ± 0.10 mg/dl, p < 0.001; 24 h pretreatment: 0.60 ± 0.17 vs 1.50 ± 0.25 mg/dl, p < 0.05) (Fig. 1D). There was no difference in the amount of body weight loss between the vehicle- and meclizine-treated (100 mg/kg, 17 h before IRI) mice 24 h after IRI (vehicle: 24.35 ± 0.4 to 22.45 ± 1.03 g; meclizine: 24.90 ± 0.74 to 22.13 ± 0.69 g), or in weights 48 h after vehicle or meclizine administration at 17 and 3 h prior to IRI (23.60 ± 0.40 g, vehicle; 23.88 ± 0.38 g, meclizine). Pretreatment 8 h before ischemia resulted in decreased serum creatinine levels 24 h after reperfusion, but this difference failed to reach statistical significance when compared to the vehicle-treated group (Fig. 1D). Blood urea nitrogen (BUN) levels were also decreased in the group treated 17 h before injury with 100 mg/kg meclizine in comparison to the vehicle group (90 ± 10 vs 122 ± 7 mg/dl, p < 0.05) (Fig. 1E). In addition to serum creatinine and BUN, there was less upregulation of Havcr1 mRNA in the 48 h post-IRI kidney from mice pretreated 17 h prior to ischemia with meclizine (123 ± 14 fold increase) when compared to vehicle-treated mice (311 ± 22 fold increase; p < 0.001) (Fig. 1F). Pretreatment with meclizine (100 mg/kg) 17 h before IRI reduced the tubular necrosis score when compared to the vehicle-treated IRI group (2.5 ± 0.1 vs 3.8 ± 0.1, p < 0.001) (Fig. 1G, H).
Meclizine Inhibits Mitochondrial Respiration and Reduces Kidney Injury After IRI
We tested meclizine as a potential therapeutic agent for kidney IRI based on its previously reported activity as a mitochondrial respiration-attenuating agent. While earlier work (Gohil et al., 2010, 2013) clearly showed that meclizine attenuates mitochondrial respiration in an in vitro cell culture system, it was not known whether meclizine would attenuate respiration when administered to a whole organism. Therefore, to test the effect of meclizine on kidney respiration, mitochondria were isolated from the kidneys of mice pretreated with meclizine. Mice were treated with two doses of meclizine (100 mg/kg), 17 h and 3 h before sacrifice. Kidney mitochondria isolated from mice that received meclizine had decreased O2 consumption after ADP addition and a further decrease after exposure to the uncoupling agent carbonyl cyanide 3-chlorophenylhydrazone (CCCP), when compared with kidney mitochondria isolated from vehicle-treated mice (Fig. 3A).
Heme-oxygenase-1 (Hmox1) and inducible nitric oxide synthase (Nos2) are up-regulated after kidney injury, related to IR-induced oxidative stress (Aragno et al., 2003; Szeto et al., 2011). As expected, the kidney mRNA expression levels of both genes were up-regulated 24 h after IRI in both meclizine- and vehicle-treated animals. Meclizine-pretreated mice subjected to IRI showed lower fold-increases of Hmox1 (7.09 ± 2.01 fold) when compared with the vehicle-treated group (13.3 ± 1.87 fold) (p < 0.05) and lower increases in Nos2 (1.17 ± 0.35 in the meclizine group vs 2.94 ± 0.74 in the vehicle-treated control group, p < 0.05), indicating reduced oxidative stress (Fig. 3B and C). To assess the number of viable tubular cells after IRI, we evaluated Kidney-specific cadherin (Ksp-cadherin) expression (Thomson et al., 1995). When tubular cells are damaged, the number of Ksp-cadherin positive tubular cells decreases (Morizane et al., 2014). Ischemia led to a decrease in the normalized number of Ksp-cadherin positive tubular cells, but meclizine pretreatment partially mitigated this decrease (0.32 ± 0.05 in the vehicle group vs 0.74 ± 0.12 in the meclizine-treated group, when normalized to sham-treated mice, p < 0.05) (Fig. 3D and E). Electron microscopy revealed loss of brush borders of proximal tubule cells with extensive damage to the mitochondria, reflected by round and fragmented mitochondria, after IRI in vehicle-treated mice (Fig. 3F and G) (Brooks et al., 2009). In contrast, representative images from a meclizine-pretreated kidney (Fig. 3H and I) showed intact brush borders and many elongated mitochondria on the basal side of the tubular cells. Mean mitochondrial area was 2.73 ± 0.43 times greater in the vehicle- vs the meclizine-pretreated kidney (p < 0.01) (Fig. 3J). These data revealed that meclizine reduced tubular damage-induced oxidative stress and inhibited IRI-induced mitochondrial structural changes.
Meclizine Is Not Protective If Given After IRI or in Two Toxicity Models of AKI
To evaluate whether meclizine would also be protective when given after the injury had been already established, we treated mice with 100 mg/kg of meclizine twice, one injection right after reperfusion and a second injection 8 h after IRI. No significant differences were measured in serum creatinine levels, tubular necrosis score or kidney KIM-1 mRNA (Havcr1) expression between the vehicle and meclizine-treated group after IRI (Fig. 4A-D). There was also no difference in the tissue mRNA expression of inflammatory mediators (Fig. 4E). Thus protection by meclizine was limited to preconditioning.
We also tested whether preconditioning with meclizine was effective in two toxicity models of AKI. Mice were treated with meclizine 17 h or 1 h before the injection of aristolochic acid or cisplatin. Serum creatinine and urinary KIM-1 were measured at the peak of the toxic injury, 5 days after aristolochic acid injection or 3 days after cisplatin injection. Mice injected with aristolochic acid had increases in serum creatinine levels and urinary KIM-1 compared to sham control mice. In contrast to the protection observed with meclizine preconditioning in the IRI model, pretreatment with meclizine either 17 h or 1 h before aristolochic acid injection had no effect on creatinine and urinary KIM-1 levels (Fig. 4F, G). Mice injected with cisplatin had significant increases in serum creatinine and urine KIM-1 levels, neither of which was modified by meclizine pretreatment (Fig. 4H, I).
Meclizine Attenuates LDH and cytochrome c Release During 2-DG and NaCN Treatment of Tubular Epithelial Cells
To evaluate whether meclizine protected kidney tubular epithelial cells in vitro, cells were pretreated with meclizine 17 h prior to chemical anoxia induced by 1.5 mM NaCN and 10 mM 2-DG. A significant decrease in % LDH release was observed in LLC-PK1 and HK-2 cells pretreated with 25 μM meclizine when compared with cells pretreated with DMSO only (LLC-PK1 cells: 11.6 ± 3.2% vs 25.9 ± 1.5%, p < 0.01, Fig. 5A; HK-2 cells: 27.0 ± 5.9% vs 47.6 ± 6.3%, p < 0.05, Fig. 5B). Furthermore, meclizine pretreatment blocked the release of injury-associated cytochrome c from HK-2 cells exposed to chemical anoxia (Fig. 5C, D).
Meclizine Up-regulates Glycolysis in Glucose Containing Media and Reduces Cellular ATP Levels in Galactose Media
Culturing cells with galactose as the sugar source forces mammalian cells to rely on mitochondrial oxidative phosphorylation (OXPHOS) (Aguer et al., 2011; Gohil et al., 2010). LDH release was increased and cell viability was reduced in galactose media when oxygen was decreased from 21% to 5%. A significant increase in % LDH release and decrease in cell viability were observed in cells cultured in 5% O2 with 10 mM galactose when compared with cells cultured in 5% O2 with 25 mM glucose (% LDH release: 33.0 ± 3.89% vs 15.7 ± 0.24%, p < 0.05, Fig. 6A; cell viability: 12.0 ± 4.23% vs 65.1 ± 2.83%, p < 0.01, Fig. 6B). Cellular ATP levels also decreased when cells were cultured in 5% O2 with 10 mM galactose vs 5% O2 with 25 mM glucose (5.58 ± 0.46 vs 9.98 ± 0.31 nmol/mg protein, p < 0.05, Fig. 6C). Thus, culturing cells with galactose as an energy source forces kidney tubular epithelial cells to rely on mitochondrial oxidative respiration rather than glycolysis.
Discussion
Meclizine is an attractive potential therapeutic agent for IRI, since it is well established to be safe in humans and has an unusual mechanism of protection: it inhibits mitochondrial respiration and decreases post-ischemic serum creatinine levels when given at least 17 h before injury. The fact that prolonged pretreatment is necessary would make this safe drug appropriate for situations where there is a predictable increase in the probability of developing AKI, such as cardiac surgery, ICU stay or perhaps allograft preservation prior to transplantation.
The shift in cellular energy production from mitochondrial respiration, which consumes oxygen, to anaerobic glycolysis is a natural adaptation to reduced oxygen availability (Ramirez et al., 2007). Redirecting energy metabolism toward anaerobic glycolysis can reduce ischemia-induced ROS production and oxidative damage and suppress apoptosis (Vaughn and Deshmukh, 2008; Jeong et al., 2004). We have shown that meclizine attenuates mitochondrial respiration, likely through an increase in cellular phosphoethanolamine, and increases mRNA levels of glycolytic enzymes and lactate production. The significantly lowered expression of post-ischemic HO-1 and iNOS in meclizine-treated mice reflects decreased oxidative stress caused by IRI (Birk et al., 2013; Jeong et al., 2004). This protective effect is brought about by a HIF-independent mechanism (Gohil et al., 2010, 2013). Thus FDA-approved meclizine shifts energy metabolism (Gohil et al., 2010) and may be a useful candidate for chemical ischemic preconditioning in the kidney. While the doses used here are high, more effective agents may be developed which can be used at lower concentrations with potentially fewer off-target effects.
Reducing inflammation has been shown to be important for a better outcome in IRI-related organ damage (Mauriz et al., 2001; Meng et al., 2001). Meclizine pretreatment of mice attenuates IRI-induced kidney tubular damage and is associated with a reduction of inflammation, including a reduction in granulocytes and in the expression of a number of cytokine genes, all of which are well established to contribute to the postischemic inflammatory milieu (Szeto et al., 2011; Kielar et al., 2005; Kreisel et al., 2011).
Postischemic structural and functional changes in mitochondria are closely linked (Kaasik et al., 2007). IRI in the kidney induces fragmentation of mitochondria, which leads to sustained energetic deficits, release of cytochrome c, and activation of cell death pathways in proximal tubule epithelial cells (Brooks et al., 2009; Barsoum et al., 2006). We have shown that meclizine attenuates mitochondrial structural changes and release of cytochrome c. Meclizine does not protect the kidney when administered after the initial injury. Acute exposure to meclizine did not decrease O2 consumption in kidney mitochondria isolated from mice (Gohil et al., 2010). By contrast, when mice received meclizine 17 h before IRI, there was a decrease in isolated mitochondrial O2 consumption. Thus, meclizine must be present prior to an insult to be effective in reducing mitochondrial respiration. Meclizine inhibits phosphate cytidylyltransferase 2 (PCYT2) and causes an increase in cytosolic phosphoethanolamine, an ethanolamine derivative that is a central precursor in the biosynthesis of membrane phospholipids (Gohil et al., 2013). High levels of intracellular phosphoethanolamine inhibit mitochondrial respiration (Gohil et al., 2013; Modica-Napolitano and Renshaw, 2004). Ethanolamine, a precursor of phosphoethanolamine, also inhibits mitochondrial respiratory activity (Modica-Napolitano and Renshaw, 2004; Gohil et al., 2013). In this study we show that meclizine pretreatment increases intracellular phosphoethanolamine in HK-2 cells and that ethanolamine has protective effects both in vitro in renal epithelial cells and in vivo in the kidney. Thus, a renoprotective effect of meclizine may be mediated by accumulation of intracellular phosphoethanolamine (Fig. 7I).
While the dose used in these studies is higher than the dose used clinically in humans, there was no meclizine-induced weight loss or other evidence of toxicity in mice at these doses (Gohil et al., 2010). At the dose used in this study, mice are protected against acetaminophen-induced liver toxicity (Huang et al., 2004). Although meclizine is classified as a histamine (H1) antagonist and a muscarinic acetylcholine receptor antagonist, the other 64 annotated H1 receptor antagonists and 33 annotated muscarinic antagonists in the original chemical library screen had no effect on oxygen consumption (Gohil et al., 2010). It is therefore likely that the renal protective effect is independent of histaminergic or muscarinic signaling or HIF stabilization. This study justifies the further development of meclizine-like agents that can be selected for their mitochondrial effects and given to humans to affect mitochondrial respiration while minimizing anti-histaminergic and anti-cholinergic effects and maintaining efficacy in protecting against kidney injury.
We did not measure blood pressure. Although we closely monitored and tightly controlled the body temperature between 36.5 and 37°C and the technical success of ischemia-reperfusion by checking the kidney color after clamping and after removing the clips (Park et al., 2004;Wei and Dong, 2012), we cannot completely exclude the possibility that altered hemodynamics may contribute to a modification of kidney injury after ischemia-reperfusion.
In conclusion, we have shown that pretreatment with 100 mg/kg of meclizine 17 or 24 h prior to ischemia protected mice from IRI. Meclizine reduced mitochondrial oxygen consumption and attenuated oxidative stress and mitochondrial fragmentation after IRI. Meclizine induced intracellular phosphoethanolamine accumulation, which inhibits mitochondrial respiration. These findings suggest that pretreatment with meclizine, or a derivative, may reduce kidney injury induced by shock, sepsis, cardiovascular surgery and early allograft dysfunction. Further studies of efficacy are required to rigorously determine optimal dosing.
Role of the Funding Source
This work is supported by a grant from the National Institutes of Health/NIDDK to J.V.B. (R37 DK39773, RO1 DK072381). S.K. is the recipient of a Research Fellowship (Sumitomo Life Welfare and Culture Foundation, Japan and NOVARTIS Foundation for Gerontological Research, Japan) for the Promotion of Science.
Conflicts of Interest
J.V.B. and T.I. are co-inventors on KIM-1 patents, which have been assigned to Partners Healthcare and licensed to a number of companies. J.V.B. is a consultant to Astellas, Takeda and Pfizer. He is a consultant to, and holds equity in, MediBeacon, Sentien and Thrasos, and has grant support from Novo Nordisk and Roche.
Fig. 7 caption (displaced in extraction): (I) Summary of the mechanisms proposed for meclizine-induced protective effects against ischemic injury. Meclizine inhibits phosphate cytidylyltransferase 2 (PCYT2) and causes an increase in cytosolic phosphoethanolamine, a central precursor in the Kennedy pathway. High levels of intracellular phosphoethanolamine inhibit mitochondrial respiration. **p < 0.01 and *p < 0.05. Statistical significance was determined using t test (A, B, C, F, G) or one-way ANOVA followed by Tukey's post-hoc test (D, E). The columns and error bars are the mean ± SEM.
V.K.M. is an Investigator of the Howard Hughes Medical Institute. V.K.M. and V.G. are co-inventors on a pending patent application that has been submitted by Partners Healthcare on new clinical uses of meclizine and its derivatives. | 2018-04-03T05:25:41.987Z | 2015-07-29T00:00:00.000 | {
"year": 2015,
"sha1": "9ffd081f19a48ed20256f9ac664906f1d7043b2f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ebiom.2015.07.035",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1820e0c02e4c4b5e76f116edab5e7f590cab542a",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
88517385 | pes2o/s2orc | v3-fos-license | Kumaraswamy autoregressive moving average models for double bounded environmental data
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval $(a,b)$ following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
Introduction
The Kumaraswamy family of distributions was introduced by Kumaraswamy (1980) for modeling double bounded random processes with hydrological applications. It is very flexible, being able to approximate several types of distributions, and its density can present many shapes, such as unimodal, uniantimodal, increasing, decreasing or constant.
The Kumaraswamy distribution has been applied to a wide variety of problems, especially in hydrology (Nadarajah, 2008).
The flexibility of the beta distribution encourages its empirical use in a wide range of applications (Lemonte et al., 2013; Jones, 2009; Nadarajah, 2008). However, the beta distribution does not satisfactorily fit hydrological processes such as daily rainfall, daily stream flow, etc. (Kumaraswamy, 1976, 1980; Lemonte et al., 2013). On the other hand, in hydrology and related areas, the Kumaraswamy distribution is deemed a better alternative to the beta distribution (Nadarajah, 2008; Lemonte et al., 2013), so that several works applying the Kumaraswamy distribution can be found (Nadarajah, 2008). This is also true in the engineering literature, as, for instance, in Sundar and Subbiah (1989), Fletcher and Ponnambalam (1996), Seifi et al. (2000), Ponnambalam et al. (2001), Ganji et al. (2006), and Koutsoyiannis and Xanthopoulos (1989).
Despite its importance and wide range of applications in hydrology, the Kumaraswamy distribution is still a stranger to statisticians. In fact, the lack of tractable expressions for the mean and variance, given by (1) and (2), respectively, has hindered its utilization for modeling purposes (Lemonte et al., 2013; Mitnik and Baek, 2013). An alternative to circumvent this problem is to consider a median-based re-parameterization aiming to facilitate its use in regression-based models (Mitnik and Baek, 2013). For the Kumaraswamy distribution, the median has the following simple expression:
$$\mathrm{md}(\tilde Y) = a + (b-a)\big(1 - 0.5^{1/\delta}\big)^{1/\phi},$$
where $\mu = \big(1 - 0.5^{1/\delta}\big)^{1/\phi}$ is the median of the rescaled variable $Y = \frac{\tilde Y - a}{b-a} \in (0,1)$.
Most time series appearing in the natural sciences, including hydrology, climatology and environmental applications, consist of observations that are serially dependent over time (Salas et al., 1997; Machiwal and Jha, 2012; Lohani et al., 2012; Valipour et al., 2013). Most conventional time series models are based on Gaussianity assumptions (Chuang and Yu, 2007); one classical example is the class of autoregressive integrated moving average (ARIMA) models (Box et al., 2008; Brockwell and Davis, 1991). However, it has been recognized that the Gaussian assumption is too restrictive for many applications (Tiku et al., 2000), especially in hydrology. Indeed, as previously discussed, many double bounded hydrological data can be accurately modeled by the Kumaraswamy distribution. Despite this, to the best of our knowledge, a specific time series model for serially dependent Kumaraswamy variables has never been considered in the literature. Thus, in this work our goal is to introduce and study a dynamic time series model for Kumaraswamy distributed random variables. In order to define the proposed model, we follow a similar construction as the generalized autoregressive moving average (GARMA) model (Benjamin et al., 2003) and the beta autoregressive moving average (βARMA) model (Rocha and Cribari-Neto, 2009), but we employ a parametrization of the Kumaraswamy distribution in terms of its median.
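For concreteness, inverting the median expression gives the shape parameter in terms of the median; a short derivation (our addition, consistent with the quantity $\log 0.5/\log(1-\mu^{\phi})$ that reappears in the proof in Appendix C):
$$F(\mu) = 1 - (1-\mu^{\phi})^{\delta} = \tfrac{1}{2} \;\Longrightarrow\; (1-\mu^{\phi})^{\delta} = \tfrac{1}{2} \;\Longrightarrow\; \delta = \frac{\log(0.5)}{\log(1-\mu^{\phi})}.$$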
The paper is organized as follows. In Section 2 we introduce the proposed model. In Section 3 we present a complete conditional maximum likelihood theory for KARMA models, including closed forms for the conditional score vector and Fisher information matrix and the related asymptotic theory. The construction of confidence intervals and hypothesis testing is also discussed. In Section 4 we present several topics regarding model diagnostics and forecasting. In Section 5 we present a Monte Carlo simulation study to assess the finite sample performance of the conditional maximum likelihood approach developed. An application of KARMA models to relative humidity data is presented in Section 6. Section 7 closes the article. For ease of presentation, some technical results are deferred to the Appendix.
The proposed model
In order to define the proposed model, we shall introduce an autoregressive moving average (ARMA) time series structure to accommodate the presence of serial correlation in the conditional median of the Kumaraswamy distribution. For this reason, we shall call the proposed model the KARMA model. We employ a similar parameterization as Mitnik and Baek (2013) in terms of the median (see Figure 1). Now let $\{\tilde Y_t\}_{t \in \mathbb Z}$ be a stochastic process for which $\tilde Y_t \in (a,b)$ with probability 1, for all $t \in \mathbb Z$ and fixed $a, b \in \mathbb R$ with $a < b$, and let $\mathcal F_t = \sigma\{\tilde Y_t, \tilde Y_{t-1}, \ldots\}$ denote the sigma-field generated by the information observed up to time $t \in \mathbb Z$.
Assume that, conditionally on the previous information set $\mathcal F_{t-1}$, $\tilde Y_t$ is distributed according to $K(\tilde\mu_t, \phi, a, b)$, with conditional density
$$f(y_t \mid \mathcal F_{t-1}) = \phi\,\delta_t\, y_t^{\phi-1}\,(1 - y_t^{\phi})^{\delta_t - 1}, \qquad \delta_t = \frac{\log(0.5)}{\log(1-\mu_t^{\phi})}, \qquad (3)$$
for $0 < y_t < 1$, where $y_t = \frac{\tilde y_t - a}{b-a}$ and $\mu_t = \frac{\tilde\mu_t - a}{b-a}$. This particular form of the density is very appealing since it allows modeling without any transformation, as is commonly done in the literature (see, for instance, Rocha and Cribari-Neto, 2009), while dealing with the simpler distribution of $Y_t$.
The conditional cumulative distribution and quantile functions are given respectively by
$$F(y_t \mid \mathcal F_{t-1}) = 1 - (1 - y_t^{\phi})^{\delta_t}, \qquad Q(u \mid \mathcal F_{t-1}) = \big(1 - (1-u)^{1/\delta_t}\big)^{1/\phi}, \quad u \in (0,1).$$
The conditional mean and variance of $\tilde Y_t$, in terms of $\mu_t$ and $\phi$, are given respectively by
$$E(\tilde Y_t \mid \mathcal F_{t-1}) = a + (b-a)\,\delta_t B(1+1/\phi, \delta_t), \qquad \mathrm{Var}(\tilde Y_t \mid \mathcal F_{t-1}) = (b-a)^2\Big[\delta_t B(1+2/\phi, \delta_t) - \delta_t^2 B(1+1/\phi, \delta_t)^2\Big],$$
where $B(\cdot,\cdot)$ denotes the beta function. Let $g : (0,1) \to \mathbb R$ be a continuously twice differentiable monotone link function for which the inverse $g^{-1} : \mathbb R \to (0,1)$ exists and is twice continuously differentiable as well. We propose the following specification for the conditional median $\mu_t$:
$$g(\mu_t) = \eta_t = \alpha + \boldsymbol x_t^{\top}\boldsymbol\beta + \sum_{i=1}^{p}\varphi_i\big[g(y_{t-i}) - \boldsymbol x_{t-i}^{\top}\boldsymbol\beta\big] + \sum_{j=1}^{q}\theta_j r_{t-j}, \qquad (5)$$
where $\eta_t$ is the linear predictor, $r_t = g(y_t) - g(\mu_t)$ is the error term, $\boldsymbol x_t$ is the $r$-dimensional vector of covariates at time $t$, $\boldsymbol\beta = (\beta_1, \ldots, \beta_r)^{\top}$ is the $r$-dimensional vector of parameters related to the covariates, while $\varphi = (\varphi_1, \ldots, \varphi_p)^{\top}$ and $\theta = (\theta_1, \ldots, \theta_q)^{\top}$ are the AR and MA coefficients, respectively. As usual, we assume that the AR and MA characteristic polynomials do not have common roots and that the AR characteristic polynomial does not have unit roots. Invertibility and causality conditions for the ARMA component are not needed and are thus not required. For more details on ARMA modeling, we refer the reader to Brockwell and Davis (1991). Observe that, since $\mu_t \in (0,1)$, all the traditional link functions, such as the logit, probit, loglog, etc., can be applied to the model. Results related to $\tilde\mu_t$ and $\tilde y_t$, such as predicted values and confidence intervals, can easily be obtained through $\tilde\mu_t = \mu_t(b-a) + a$ and $\tilde y_t = y_t(b-a) + a$. The proposed KARMA(p, q) model is given by specifications (3) and (5). We observe that the dynamic part of the model (5) is the same as in Rocha and Cribari-Neto (2009); however, the random component (3) is completely different and is parametrized in terms of the median. Regression methods based on the median are known to be robust against atypical observations in the response (John, 2015; Lemonte and Bazán, 2016). It is also known that, compared to mean-based models, median-based ones present a better performance when the population distribution is asymmetric (Lemonte and Bazán, 2016). Since specification (5) is based on the median, the proposed KARMA model inherits these attractive properties.
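To make specification (5) concrete, a minimal Python sketch (ours, not the authors' R code; the logit link and all names are illustrative assumptions) that evaluates the linear predictor recursively over a sample:

```python
import numpy as np

def logit(u):
    return np.log(u / (1.0 - u))

def inv_logit(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def karma_medians(y, X, alpha, beta, var_phi, theta):
    """Evaluate mu_t = g^{-1}(eta_t), t = m+1, ..., n, under (5),
    with g the logit link and r_t = g(y_t) - g(mu_t)."""
    n = len(y)
    p, q = len(var_phi), len(theta)
    m = max(p, q)
    eta = np.zeros(n)
    r = np.zeros(n)            # errors for t <= m start at E(r_t) = 0
    mu = np.full(n, np.nan)
    for t in range(m, n):
        ar = sum(var_phi[i] * (logit(y[t-1-i]) - X[t-1-i] @ beta) for i in range(p))
        ma = sum(theta[j] * r[t-1-j] for j in range(q))
        eta[t] = alpha + X[t] @ beta + ar + ma
        mu[t] = inv_logit(eta[t])
        r[t] = logit(y[t]) - eta[t]   # g(y_t) - g(mu_t), since g(mu_t) = eta_t
    return mu, r
```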
Conditional likelihood inference
Parameter estimation can be carried out by conditional maximum likelihood. The boundary parameters $a$ and $b$ are assumed to be either known (as is often the case for rates and proportions) or previously consistently estimated. Let $\tilde y_1, \ldots, \tilde y_n$ be a sample from a KARMA(p, q) model under specifications (3) and (5), where $\boldsymbol x_t$ denotes the $r$-dimensional vector of covariates for $y_t$, assumed to be non-stochastic, and let $\gamma = (\alpha, \boldsymbol\beta^{\top}, \varphi^{\top}, \theta^{\top}, \phi)^{\top}$ be the $(p+q+r+2)$-dimensional parameter vector. The conditional maximum likelihood estimators (CMLE) are obtained upon maximizing the logarithm of the conditional likelihood function. The log-likelihood for $\gamma$, conditional on $\mathcal F_{t-1}$, is null for the first $m = \max(p, q)$ values of $t$, and hence we have
$$\ell(\gamma) = \sum_{t=m+1}^{n} \ell_t(\mu_t, \phi), \qquad (6)$$
where, from (3),
$$\ell_t(\mu_t, \phi) = \log\phi + \log\delta_t + (\phi - 1)\log y_t + (\delta_t - 1)\log(1 - y_t^{\phi}).$$
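A direct translation of the conditional log-likelihood (6) into code, under the density form reconstructed in (3) (a sketch; the fitted medians `mu` would come from a recursion such as the hypothetical `karma_medians` helper sketched earlier):

```python
import numpy as np

def karma_loglik(y, mu, phi, m):
    """Conditional log-likelihood (6); the first m terms are dropped."""
    y, mu = y[m:], mu[m:]
    delta = np.log(0.5) / np.log(1.0 - mu**phi)
    return np.sum(np.log(phi) + np.log(delta)
                  + (phi - 1.0) * np.log(y)
                  + (delta - 1.0) * np.log(1.0 - y**phi))
```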
Conditional score vector
Recall that $\eta_t = g(\mu_t)$. Differentiating the conditional log-likelihood (6) with respect to the $j$th element of the parameter vector $\gamma$, $\gamma_j \neq \phi$, for $j = 1, \ldots, (p+q+r+1)$, the chain rule yields
$$\frac{\partial \ell(\gamma)}{\partial \gamma_j} = \sum_{t=m+1}^{n} \frac{\partial \ell_t}{\partial \mu_t}\,\frac{d\mu_t}{d\eta_t}\,\frac{\partial \eta_t}{\partial \gamma_j},$$
with $d\mu_t/d\eta_t = 1/g'(\mu_t)$ and where, for simplicity, we write $c_t$ for the shorthand defined in (8). Observe that the task of computing the score vector greatly simplifies to obtaining $\partial\eta_t/\partial\gamma_j$ for each coordinate $\gamma_j$ of $\gamma$. Let $r_t = g(y_t) - g(\mu_t)$ be the error term, so that $\partial r_t/\partial\gamma_j = -\partial\eta_t/\partial\gamma_j$, and, from (5), the required derivatives satisfy the recursions
$$\frac{\partial \eta_t}{\partial \alpha} = 1 - \sum_{j=1}^{q}\theta_j\frac{\partial \eta_{t-j}}{\partial \alpha}, \qquad \frac{\partial \eta_t}{\partial \beta_l} = x_{tl} - \sum_{i=1}^{p}\varphi_i x_{(t-i)l} - \sum_{j=1}^{q}\theta_j\frac{\partial \eta_{t-j}}{\partial \beta_l}, \quad l = 1, \ldots, r,$$
$$\frac{\partial \eta_t}{\partial \varphi_i} = g(y_{t-i}) - \boldsymbol x_{t-i}^{\top}\boldsymbol\beta - \sum_{j=1}^{q}\theta_j\frac{\partial \eta_{t-j}}{\partial \varphi_i}, \quad i = 1, \ldots, p, \qquad \frac{\partial \eta_t}{\partial \theta_j} = r_{t-j} - \sum_{k=1}^{q}\theta_k\frac{\partial \eta_{t-k}}{\partial \theta_j}, \quad j = 1, \ldots, q,$$
where $x_{tl}$ is the $l$th element of $\boldsymbol x_t$. Finally, the derivative of $\ell(\gamma)$ with respect to $\phi$ is easier to compute by direct differentiation of (6).
In matrix form, the score vector can be written in terms of $c = (\phi c_{m+1}, \ldots, \phi c_n)^{\top}$ and suitably defined matrices $M$, $P$ and $R$ collecting the derivatives of $\eta_t$ with respect to $\boldsymbol\beta$, $\varphi$ and $\theta$, respectively. The conditional maximum likelihood estimator of $\gamma$, if it exists, is obtained as a solution of the system $U(\gamma) = 0$, where $0$ is the null vector in $\mathbb R^{p+q+r+2}$. There is no closed form for the solution of such a system; conditional maximum likelihood estimates are thus obtained by numerically maximizing the log-likelihood function using a Newton or quasi-Newton nonlinear optimization algorithm; see, e.g., Nocedal and Wright (1999). In what follows, we use the quasi-Newton algorithm known as the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method (Press et al., 1992).
The iterative optimization algorithm requires initialization. The starting values for the constant ($\alpha$), the regressor parameters ($\boldsymbol\beta$) and the autoregressive parameters ($\varphi$) are obtained from an ordinary least squares estimate of a linear regression, where $Y = (g(y_{m+1}), g(y_{m+2}), \ldots, g(y_n))^{\top}$ are the responses and the covariate matrix contains a column of ones, the covariates and the lagged values $g(y_{t-1}), \ldots, g(y_{t-p})$. For the parameter $\theta$, the starting values are set to zero.
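A sketch of this initialization (the exact covariate-matrix layout and the logit link are our assumptions):

```python
import numpy as np

def karma_start_values(y, X, p, q):
    """OLS starting values for (alpha, beta, varphi); theta starts at zero."""
    g = lambda u: np.log(u / (1.0 - u))      # logit link
    n, r = X.shape
    m = max(p, q)
    gy = g(y)
    resp = gy[m:]                             # g(y_{m+1}), ..., g(y_n)
    design = np.column_stack(
        [np.ones(n - m), X[m:]] + [gy[m - i:n - i] for i in range(1, p + 1)]
    )
    coef, *_ = np.linalg.lstsq(design, resp, rcond=None)
    alpha, beta, var_phi = coef[0], coef[1:1 + r], coef[1 + r:]
    theta = np.zeros(q)
    return alpha, beta, var_phi, theta
```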
Conditional information matrix
In this section we derive the conditional Fisher information matrix. In order to do so, we need to compute the expected values of all second-order derivatives. For $\gamma_i \neq \phi$ and $\gamma_j \neq \phi$, with $i, j \in \{1, \ldots, p+q+r+1\}$, the chain rule decomposes $\partial^2\ell(\gamma)/\partial\gamma_i\partial\gamma_j$ into terms involving $\partial\ell_t/\partial\mu_t$, $d\mu_t/d\eta_t$ and the derivatives of $\eta_t$, as in (9). Simple calculus, together with the multiplication rule, yields the second-order terms collected in (10). Finally, taking conditional expectation and substituting (A.1) from Lemma 2 in Appendix B into (10), we obtain from (9) the expression in (11). From (11), the task of obtaining the information matrix simplifies to obtaining the derivatives of $\eta_t$ with respect to the parameters, which were previously obtained in Section 3.1.
Derivatives with respect to $\phi$, however, are simpler to obtain directly. For the second derivative of $\ell_t$ with respect to $\phi$, recall that $c_t$ is given by (8); differentiating once more yields (12). Taking conditional expectation in (12) and substituting the results from Lemma 2 in Appendix B, it follows that the resulting expression involves $\psi : (0,\infty) \to \mathbb R$, the digamma function defined as $\psi(z) = \frac{d}{dz}\log\Gamma(z)$; the trigamma function $\psi'(z) = \frac{d}{dz}\psi(z)$; the Euler–Mascheroni constant $\kappa = 0.5772156649\ldots$ (Gradshteyn and Ryzhik, 2007); and $k_0 = \pi^2/6 + \kappa^2 - 2\kappa$. As for the mixed derivatives with respect to $\gamma_j \neq \phi$ and $\phi$, upon taking conditional expectation the resulting terms are collected in the diagonal matrices $W = \mathrm{diag}\{w_{m+1}, \ldots, w_n\}$ and $D = \mathrm{diag}\{d_{m+1}, \ldots, d_n\}$.
The joint conditional Fisher information matrix for $\gamma$, $K = K(\gamma)$, is then assembled from these blocks as in (13), where $\mathbf 1$ is an $(n-m) \times 1$ vector of ones and $\mathrm{tr}(\cdot)$ is the trace function. We note that the conditional Fisher information matrix is not block diagonal, and hence the parameters are not orthogonal (Cox and Reid, 1987).
The next theorem establishes the strong consistency and asymptotic normality of the CMLE for the KARMA(p, q) model. In order to guarantee that the asymptotic variance-covariance matrix is positive definite, we need some assumptions on the covariates in the model. Let $Z_t = \big(1, \boldsymbol x_{t-1}, h(t,1), \ldots, h(t,p), r_{t-1}, r_{t-2}, \ldots\big)$ denote the design (covariate) matrix related to (5), where $h(t,j) = g(y_{t-j}) - P_{g(y_j)}(\boldsymbol x_1, \ldots, \boldsymbol x_{t-j-1})$ and $P_{g(y_j)}(\boldsymbol x_1, \ldots, \boldsymbol x_{t-j-1})$ denotes the projection of $g(y_j)$ onto the space generated by $\boldsymbol x_1, \ldots, \boldsymbol x_{t-j-1}$. We assume that $Z_t$ belongs to a compact set $\Omega$ of the appropriate real space, that $\sum_{t=1}^{n} Z_t Z_t^{\top} > 0$ for sufficiently large $n$, and that, at the true value of $\gamma$, the matrix $K$ is positive definite for the given set of covariates. A detailed discussion can be found in Fokianos and Kedem (2004) and Andersen (1970).

Theorem 3.1. Under the assumptions above, the CMLE $\hat\gamma$ is strongly consistent for $\gamma$ and asymptotically normal, with $K(\gamma)^{1/2}(\hat\gamma - \gamma)$ converging in distribution to a standard multivariate normal.
The proof of Theorem 3.1 is given in the Appendix C.
Confidence intervals and hypothesis testing inference
The results in Theorem 3.1 allow the construction of asymptotic confidence intervals/regions and test statistics for hypothesis testing. Let $\tilde y_1, \ldots, \tilde y_n$ be a sample from a KARMA(p, q) model, $\gamma_i$ denote the $i$th component of the true parameter vector $\gamma$, and $K(\hat\gamma)^{ij}$ be the $(i,j)$th element of the inverse of the conditional information matrix (13) evaluated at the CMLE $\hat\gamma$, with $\hat\gamma_i$ the $i$th coordinate of $\hat\gamma$. From Theorem 3.1, we have
$$\frac{\hat\gamma_i - \gamma_i}{\sqrt{K(\hat\gamma)^{ii}}} \xrightarrow{d} N(0,1),$$
from which asymptotic confidence intervals for the individual model parameters can be constructed by standard methods. More specifically, let $z_\delta$ be the $\delta$ standard normal upper quantile. A $100(1-\alpha)\%$, $0 < \alpha < 1/2$, asymptotic confidence interval for $\gamma_i$, $i = 1, \ldots, (p+q+r+2)$, is
$$\Big[\hat\gamma_i - z_{\alpha/2}\sqrt{K(\hat\gamma)^{ii}},\; \hat\gamma_i + z_{\alpha/2}\sqrt{K(\hat\gamma)^{ii}}\Big].$$
We can also apply the results in Theorem 3.1 to derive asymptotic test statistics for hypothesis testing. Let $\gamma_i^0$ be a given hypothesized value for the true parameter $\gamma_i$. To test $H_0 : \gamma_i = \gamma_i^0$ against $H_1 : \gamma_i \neq \gamma_i^0$, we can apply an asymptotic version of the signed square root of Wald's statistic (Wald, 1943), given by (Pawitan, 2001)
$$Z = \frac{\hat\gamma_i - \gamma_i^0}{\sqrt{K(\hat\gamma)^{ii}}}.$$
Under $H_0$, the limiting distribution of $Z$ is standard normal. Thus, the test is performed by comparing the calculated $Z$ statistic with the usual quantiles of the standard normal distribution.
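A small sketch of these interval and test computations given the CMLE and the estimated information matrix (illustrative; `K_hat` is assumed to be the full conditional information matrix (13) evaluated at the estimates):

```python
import numpy as np
from scipy import stats

def karma_ci_and_wald(gamma_hat, K_hat, gamma0=None, level=0.95):
    """Asymptotic CIs and signed-root Wald Z statistics."""
    se = np.sqrt(np.diag(np.linalg.inv(K_hat)))   # sqrt of K^{-1} diagonal
    z = stats.norm.ppf(1.0 - (1.0 - level) / 2.0)  # z_{alpha/2} upper quantile
    ci = np.column_stack([gamma_hat - z * se, gamma_hat + z * se])
    if gamma0 is None:
        gamma0 = np.zeros_like(gamma_hat)          # H0: gamma_i = 0 by default
    Z = (gamma_hat - gamma0) / se
    pvals = 2.0 * stats.norm.sf(np.abs(Z))
    return ci, Z, pvals
```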
From the results in Theorem 3.1, it is also straightforward to derive versions for the likelihood ratio (Neyman and Pearson, 1928), Rao's score (Rao, 1948), Wald's (Wald, 1943) and the gradient (Terrell, 2002) statistics to perform more general hypothesis testing inference. In large samples and under the null hypothesis, such test statistics are (approximately) chi-squared distributed with the same degrees of freedom as their counterparts under independence.
Diagnostic analysis and forecasting
This section introduces some diagnostic measures and forecasting methods. Diagnostic analysis can be applied to a fitted model to determine whether it fully captures the data dynamics. A fitted model that passes all diagnostic checks can be used for out-of-sample forecasting.
Information criteria are important tools for automatic model comparison/selection. Information criterion such as Akaike's (AIC) (Akaike, 1974), Schwartz's (SIC) (Schwarz, 1978), and Hannan and Quinn's (HQ) (Hannan and Quinn, 1979) are obtained in the usual fashion from the maximized conditional log-likelihood function.
Residuals are an important measure for determining whether the fitted model provides a good fit to the data (Kedem and Fokianos, 2002). Various types of residuals are currently available in the literature for several classes of models (Mauricio, 2008). For the proposed KARMA(p, q) model, standardized (Pearson) or deviance residuals could be considered; however, we suggest the quantile residuals (Dunn and Smyth, 1996), which possess several advantages over other residuals. The quantile residuals are defined by
$$r_t^{(q)} = \Phi^{-1}\big(F(y_t \mid \mathcal F_{t-1})\big),$$
evaluated at the fitted parameter values, where $\Phi^{-1}$ denotes the standard normal quantile function and $F$ is the conditional cumulative distribution function of the model. Quantile residuals not only can detect lack of fit in regression models, but their distribution is also approximately standard normal (Dunn and Smyth, 1996; Pereira, 2017).
If the model provides a good fit to the data, the index plot of the quantile residuals should display no noticeable pattern.
When the model is correctly specified, the residuals should display white noise behavior, i.e., they should follow a zero-mean, constant-variance uncorrelated process (Kedem and Fokianos, 2002). A good alternative to test the adequacy of the fitted model is to deploy a Ljung-Box type test (Ljung and Box, 1978) based on the residuals. More details can be found in Greene (2011) and references therein.
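Putting these two diagnostic ideas together, a sketch computing quantile residuals via the conditional cdf and checking them with a Ljung-Box test (our illustration; `acorr_ljungbox` is statsmodels' implementation):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

def karma_quantile_residuals(y, mu, phi, m):
    """Quantile residuals Phi^{-1}(F(y_t | F_{t-1})) using the
    Kumaraswamy cdf reconstructed in Section 2."""
    y, mu = y[m:], mu[m:]
    delta = np.log(0.5) / np.log(1.0 - mu**phi)
    u = 1.0 - (1.0 - y**phi)**delta          # F(y_t | F_{t-1})
    return stats.norm.ppf(u)

# White-noise check on the residuals:
# res = karma_quantile_residuals(y, mu_hat, phi_hat, m)
# print(acorr_ljungbox(res, lags=[10, 20]))
```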
Forecasting the conditional median of a KARMA(p, q) model can be done using the theory of time series forecasting for ARMA models (Brockwell and Davis, 1991;Box et al., 2008). Let h 0 denote the forecast horizon. We shall assume that the covariate values x t , for t = n + 1, . . . , n + h 0 , are available or can be obtained. For instance, if the covariates are deterministic functions of t, as for instance, sines and cosines in harmonic analysis, dummy variables, polynomial trends, etc, they can be determined for values of t > n.
The first step is to obtain the in-sample estimates $\hat\mu_{m+1}, \ldots, \hat\mu_n$ of the conditional median $\mu_t$ based on the CMLE $\hat\gamma$. To do so we need to recompose the error term $\{r_t\}_{t=1}^{n}$, which we denote by $\hat r_t$. We start by setting $\hat r_t = E(r_t)$, which usually equals 0, for $t \in \{1, \ldots, m\}$. Starting at $t = m+1$, we sequentially set
$$\hat\eta_t = \hat\alpha + \boldsymbol x_t^{\top}\hat{\boldsymbol\beta} + \sum_{i=1}^{p}\hat\varphi_i\big[g(y_{t-i}) - \boldsymbol x_{t-i}^{\top}\hat{\boldsymbol\beta}\big] + \sum_{j=1}^{q}\hat\theta_j \hat r_{t-j} \quad\text{and}\quad \hat r_t = g(y_t) - g(\hat\mu_t),$$
for $t \in \{m+1, \ldots, n\}$. Now, for $h = 1, 2, \ldots, h_0$, the forecast values $\hat\mu_{n+h}$ are sequentially given by
$$\hat\mu_{n+h} = g^{-1}\Big(\hat\alpha + \boldsymbol x_{n+h}^{\top}\hat{\boldsymbol\beta} + \sum_{i=1}^{p}\hat\varphi_i\big[g([y_{n+h-i}]) - \boldsymbol x_{n+h-i}^{\top}\hat{\boldsymbol\beta}\big] + \sum_{j=1}^{q}\hat\theta_j \hat r_{n+h-j}\Big),$$
where $\hat r_t = 0$ for $t > n$, and $[y_t] = y_t$ if $t \leq n$ while $[y_t] = \hat\mu_t$ if $t > n$.
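A sketch of this forecasting recursion (ours; `fit` is a hypothetical container of the CML estimates and in-sample errors, and `g`, `g_inv` are the link function and its inverse):

```python
import numpy as np

def karma_forecast(y, X_past, X_future, fit, g, g_inv, h0):
    """h-step-ahead forecasts of the conditional median."""
    alpha, beta = fit["alpha"], fit["beta"]
    var_phi, theta = fit["var_phi"], fit["theta"]
    p, q = len(var_phi), len(theta)
    gy = list(g(y))                      # [y_t] = y_t for t <= n ...
    X = list(X_past) + list(X_future)    # covariate rows for t = 1, ..., n + h0
    r = list(fit["r_hat"])               # in-sample errors r_hat_1, ..., r_hat_n
    n = len(y)
    forecasts = []
    for h in range(1, h0 + 1):
        t = n + h - 1                    # 0-based index of time n + h
        ar = sum(var_phi[i] * (gy[t-1-i] - X[t-1-i] @ beta) for i in range(p))
        ma = sum(theta[j] * r[t-1-j] for j in range(q))
        mu = g_inv(alpha + X[t] @ beta + ar + ma)
        forecasts.append(mu)
        gy.append(g(mu))                 # ... and [y_t] = mu_hat_t for t > n
        r.append(0.0)                    # r_hat_t = 0 for t > n
    return np.array(forecasts)
```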
Numerical evaluation
In this section we present a Monte Carlo simulation study, based on 10,000 replications, to assess the finite-sample performance of the CMLE for KARMA models developed in Sections 3.1 and 3.2. To generate a size-$n$ sample from a KARMA(p, q) process, the following algorithm is useful. The first step is to set $r_t = 0$ and $\mu_t = g^{-1}(\alpha)$, for $t = 1, \ldots, m$. Second step: for $t = m+1$, we obtain $\eta_t$ through (5) and then set $\mu_t = g^{-1}(\eta_t)$. Finally, $\tilde y_t$ is generated from (3) using any adequate method. The so-called inversion method is very easy to apply in this context: we generate $u \sim U(0,1)$ and set
$$y_t = \big(1 - (1-u)^{1/\delta_t}\big)^{1/\phi}, \qquad \tilde y_t = a + (b-a)\,y_t.$$
We iterate the second step for $t = m+1, \ldots, n_0 + n$, where $n_0 > m$ denotes the size of a possible burn-in; we used $n_0 = 2m$ in the simulations. The desired sample is $\tilde y_{n_0+1}, \ldots, \tilde y_{n_0+n}$. All routines were written in the R language by the authors and are available upon request.
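The generation algorithm translates directly into code; a Python sketch of the inversion method (the authors' routines are in R; the names and the requirement that `X` have `2*m + n` rows are our assumptions):

```python
import numpy as np

def draw(mu, phi, rng):
    """Inversion sampling from the median-parameterized Kumaraswamy (3)."""
    delta = np.log(0.5) / np.log(1.0 - mu**phi)
    u = rng.uniform(size=np.shape(mu))
    return (1.0 - (1.0 - u)**(1.0 / delta))**(1.0 / phi)

def simulate_karma(n, X, alpha, beta, var_phi, theta, phi, g, g_inv, rng):
    """Simulate a KARMA(p, q) path on (0, 1), discarding a burn-in of 2m."""
    p, q = len(var_phi), len(theta)
    m = max(p, q)
    burn = 2 * m                          # n_0 = 2m as in the paper
    N = burn + n
    y = np.empty(N); r = np.zeros(N)
    mu0 = g_inv(alpha)
    y[:m] = draw(np.full(m, mu0), phi, rng)
    for t in range(m, N):
        ar = sum(var_phi[i] * (g(y[t-1-i]) - X[t-1-i] @ beta) for i in range(p))
        ma = sum(theta[j] * r[t-1-j] for j in range(q))
        eta = alpha + X[t] @ beta + ar + ma
        y[t] = draw(g_inv(eta), phi, rng)
        r[t] = g(y[t]) - eta              # r_t = g(y_t) - g(mu_t)
    return y[burn:]
```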
Tables 1 and 2 present the simulation results. The performance statistics reported are the mean, the percentage relative bias (RB%), and the mean square error (MSE). The percentage relative bias is defined as the ratio between the bias and the true parameter value times 100. We observe that the overall performance of the CMLE is very good, except, as expected, for the very small sample size n = 70; the estimates improve considerably as the sample size increases. Overall, ϕ is the parameter estimated with the smallest relative bias, while θ1 and θ2 show the greatest relative bias in all situations. In general, estimation is more accurate for the autoregressive part than for the moving average part. This fact was already discussed by Ansley and Newbold (1980) for traditional ARMA models, for example, and was also verified for the βARMA model by Palm and Bayer (2017). Thus, simulation studies show that inference on moving average parameters is usually poorer compared to the other parameters.
In all situations, the estimates present small MSE.
Application to relative humidity data
The relative air humidity (or simply relative humidity, abbreviated RH) is an important meteorological characteristic for public health, irrigation scheduling design, and hydrological studies. Low RH is known to cause health problems, such as allergies, asthma attacks, dehydration and nasal bleeding, among others (Falagas et al., 2008; Zhang et al., 2016). High RH, on the other hand, is also known to cause respiratory problems, besides being responsible for the increase in precipitation which, in excess, can have serious consequences, for instance, for urban drainage (Silveira, 2002). The vapor pressure, for example, is a function of the RH and is an important variable in evapotranspiration estimation methods such as Penman-Monteith (Allen et al., 1998), which is one of the most important and accurate methods in hydrology for estimating evapotranspiration (Shuttleworth, 1993; Allen et al., 1998). It is also widely applied in physically based hydrological simulations (Collishonn et al., 2007; Arnold et al., 2012). Given its relevance, understanding and modeling the behavior of RH is of utmost importance, and so is its accurate forecasting.
For instance, it helps the State to take preventive measures regarding public health and the management of water resources, as well as aiding climate predictions.
Relative humidity is an important climate quantity that influences the weather in several ways. The time series we analyze represents the monthly average RH registered at the aforementioned station from January 2000 to December 2016, yielding a sample size of n = 204; the last 12 observations were reserved for forecast comparison. The data are freely available at INMET's website (http://www.inmet.gov.br). Figure 2 presents the time series plot (Figure 2(a)), the seasonal component in the data (Figure 2(b)), and the sample autocorrelation (ACF) (Figure 2(c)) and sample partial autocorrelation (PACF) (Figure 2(d)) functions.
From Figures 2(a) and 2(b) we observe a clear seasonal component. There are several ways to account for this monthly seasonal component; we consider a simple harmonic regression approach (Bloomfield, 2013), introducing the covariates $\boldsymbol x_t = (\sin(2\pi t/12), \cos(2\pi t/12))^{\top}$, for $t \in \{1, \ldots, n\}$. With the logit as link function, and using the three-stage iterative Box-Jenkins methodology (Box et al., 2008) to select the fitted model, we successfully modeled the data using a KARMA(5, 4) model with the covariates given above. Table 3 reports the fitted KARMA model, while Figure 3 shows residual diagnostic plots. Figure 3(a) presents the residuals plotted against time; we observe no distinct pattern over time and the typical white noise behavior for the residuals. All plots and tests indicate that the fitted model can be safely used for out-of-sample forecasting.
The out-of-sample forecasts from the fitted KARMA model are presented in Figure 4. We observe that the forecasts capture the distinctive seasonal pattern present in the actual data. Figure 4 also shows the forecast values for the fitted βARMA(5, 4), with the same order as the best KARMA model, and the βARMA(2, 1), which was the best βARMA model. For a better comparison, we present some goodness-of-fit measures: the mean square error (MSE) and mean absolute percentage error (MAPE) between the actual data ($y_{n+h}$) and the out-of-sample predicted values ($\hat\mu_{n+h}$), for $h = 1, \ldots, 12$, are reported in Table 4 for the fitted models. We note that the proposed model outperforms the βARMA model in both measures.
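The two accuracy measures used in Table 4 can be computed as follows (a generic sketch with the usual definitions of MSE and MAPE):

```python
import numpy as np

def forecast_accuracy(actual, predicted):
    """MSE and MAPE between held-out observations and h-step forecasts."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    err = actual - predicted
    mse = np.mean(err**2)
    mape = 100.0 * np.mean(np.abs(err / actual))
    return mse, mape
```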
Conclusions
In this work we introduced a new class of dynamic regression models for double bounded time series. More specifically, in the proposed KARMA(p, q) models, the conditional median of the Kumaraswamy distributed variable is assumed to follow a dynamic structure involving covariates, an ARMA component, unknown parameters and a link function. Inference for the KARMA model parameters is discussed and a conditional maximum likelihood approach is fully developed. In particular, closed-form expressions for the score vector and the conditional Fisher information matrix are obtained. The conditional maximum likelihood approach is shown to produce consistent and asymptotically normal estimates. Based on the asymptotic results, the construction of confidence intervals and hypothesis tests is discussed.
Diagnostic analysis and forecasting tools are also discussed. To assess finite sample performance of the CMLE in the KARMA framework, a Monte Carlo simulation study is performed. The simulation study showed that the CMLE performs very well even for small sample sizes. To exemplify its usefulness, an application of the KARMA model to monthly relative humidity data from Brasilia, the Brazilian capital city, is presented and discussed.
An R implementation of the KARMA model
An implementation in the R language (R Development Core Team, 2017) to fit the KARMA model is available at http://www.ufsm.br/bayer/karma.zip.
Upon expanding $(1-y^{\phi})^{\delta_t - 2}$ into its binomial series and evaluating the resulting series term by term, we change the index to $i = k+2$ and rewrite the sum. The result then follows from Newton's series for the digamma function (formula 8.363.8 in Gradshteyn and Ryzhik, 2007, with $n = 0$) and the identity $\sum_{k=1}^{\infty} (-1)^k \binom{n}{k} = -1$. A similar technique yields the second statement, which follows by an analogous argument from Newton's series for the digamma and trigamma functions (formula 8.363.8 in Gradshteyn and Ryzhik, 2007, with $n = 0, 1$), where the trigamma function satisfies $\psi'(x) = \sum_{k=0}^{\infty} \frac{1}{(x+k)^2}$.
Appendix C. Proof of Theorem 3.1 Proof: In order to obtain the results, we only need to check that Assumptions 2.1-2.5 from Andersen (1970) are fulfilled. Assumption 2.1 follows from Section 3.1. Assumption 2.2 follows from standard results for ARMA models with covariates (Hannan, 1973). To show that Assumption 2.3 holds, observe that, for small $\delta$ in a neighborhood of 0, the argument for the variance can be written (recalling that, conditionally on the past, the $y_t$'s are independent) in terms of $J_x = \log\big(\log 0.5/\log(1-\mu_{x_t}^{\phi})\big)$ and (non-random) real constants $C_t$. The terms $\log(y_t)$, $\log(1-y_t^{\phi+\delta})$ and $\log(1-y_t^{\phi})$ can be shown to be continuous functions of their arguments, so that the result follows (see Andersen, 1970). Assumption 2.4 is satisfied by the definition of the KARMA model and the results in Section 3. Assumption 2.5 is a consequence of Lemma 1 in Appendix A, and the final condition follows from the assumptions on the design (covariate) matrix and Section 3.2.
"year": 2017,
"sha1": "c12696914872494cd4ee38a9dea4af36c3de99ac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1710.05069",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "31fffaaf1bf0901e209038ff5bb3a8bbbb96a5ad",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
67856212 | pes2o/s2orc | v3-fos-license | A novel dynamic asset allocation system using Feature Saliency Hidden Markov models for smart beta investing
The financial crisis of 2008 generated interest in more transparent, rules-based strategies for portfolio construction, with Smart beta strategies emerging as a trend among institutional investors. While they perform well in the long run, these strategies often suffer from severe short-term drawdown (peak-to-trough decline) with fluctuating performance across cycles. To address cyclicality and underperformance, we build a dynamic asset allocation system using Hidden Markov Models (HMMs). We test our system across multiple combinations of smart beta strategies and the resulting portfolios show an improvement in risk-adjusted returns, especially on more return oriented portfolios (up to 50$\%$ in excess of market annually). In addition, we propose a novel smart beta allocation system based on the Feature Saliency HMM (FSHMM) algorithm that performs feature selection simultaneously with the training of the HMM, to improve regime identification. We evaluate our systematic trading system with real life assets using MSCI indices; further, the results (up to 60$\%$ in excess of market annually) show model performance improvement with respect to portfolios built using full feature HMMs.
Introduction
Smart beta is a relatively new term that has become ubiquitous in asset management over the last few years. The financial theory underpinning smart beta, known as factor investing, has been around since the 1960s, when factors were first identified as being drivers of equity returns (Agather & Gunthorp, 2017). These factor returns can be a source of risk and/or improved return, and understanding whether any additional risk is adequately compensated with higher returns is important (Ang, 2014).
By selecting stocks based on their factor exposures, active managers can build portfolios with the desired factor tilts and so use factor investing to improve portfolio returns and/or lower risk, depending on their particular objectives. Smart beta aims to achieve these goals at a reduced cost by utilising a transparent, systematic, rules-based approach, bringing costs down significantly when compared to active management (Asness, 2016).
While smart beta strategies have shown strong performance in the long run, they often suffer from severe short-term drawdowns (peak-to-trough declines) with fluctuating performance across cycles (Arnott et al., 2016). These fluctuations can arise from extreme macroeconomic conditions, elevated volatility, heightened correlations across multiple markets and uncertain monetary and fiscal policy responses. In this paper we address this by building a regime switching model using Hidden Markov Models (HMMs). Hidden Markov models have become one of the mainstream techniques for modeling time series data (Baum et al., 1970; Rabiner, 1989), with applications across many areas such as speech recognition, text classification and medical applications. We first study whether a regime switching framework can be used to detect regimes across factors and, if so, add value to smart beta strategies. The prevalent approach in regime switching frameworks for asset allocation has been to specify in advance a static decision rule dependent on the predicted state (Nystrup et al., 2017a). An alternative approach is to dynamically optimise a portfolio using information from the inferred regime parameters. We follow this second approach and use the regime information to construct different types of portfolios (more return-oriented and more risk-focused). In a first step, we build a dynamic asset allocation (DAA) system to construct portfolios through a regime switching model and perform a systematic analysis using hundreds of combinations of factors, training the HMM with the same factors that will be used for the allocation in the portfolio. Our study shows that using the regime information from the HMM yields better performance than a single-regime allocation, and we find that more return-oriented portfolios achieve better risk-adjusted returns than their benchmarks, while the performance of more risk-focused portfolios shows some improvement.
Finally, the common factor in the majority of the research on regime-switching models in finance is that it considers either a single asset or a small set of assets to build the model, with the selection criteria for the assets usually coming from domain knowledge. The reason for this is that unsupervised feature selection for HMMs is very limited, with wrapper methods exhibiting high computational cost and with very few methods specific to HMMs (Adams & Beling, 2017). In most applications of HMMs, features are either pre-selected based on expert knowledge or feature selection is omitted entirely. One of the few feature selection algorithms developed for HMMs is the feature saliency hidden Markov model (FSHMM) proposed by Adams et al. (2016), where the feature selection process is embedded in the training of the HMM. We incorporate this FSHMM into our dynamic asset allocation system, with two benefits: (1) by selecting the features during training, we expect to improve regime identification by selecting features that are state dependent and rejecting features that are state independent; (2) it allows many features to be incorporated in a model, letting the algorithm decide which ones contribute to regime identification, thus avoiding the need for expert knowledge in the construction of financial cycles.
The main contributions of this paper are the following:
1. We build a dynamic asset allocation (DAA) system using an HMM for regime detection and perform a systematic study using multiple combinations of assets, comparing performance with their single-regime portfolio counterparts. We show that the DAA system consistently performs better than the benchmarks;
2. We extend our DAA system by incorporating a Feature Saliency HMM for feature selection, thus improving regime identification;
3. We test the DAA system with embedded feature selection on real-life investable indices using MSCI indices and show an improvement in risk-adjusted return for strategies built using the DAA system with FSHMM relative to strategies built using the DAA system without feature selection.
This paper is organized as follows: Section 2 gives an overview of previous work on HMMs in finance; Section 3 introduces hidden Markov models and feature saliency hidden Markov models; data and index construction are described in Section 4; Section 5 introduces the dynamic allocation system, the feature saliency algorithm and its incorporation into our dynamic asset allocation system; Section 6 shows the experimental results of the DAA system, the incorporation of embedded feature selection, and the tests of the DAA system with feature selection using investable assets; conclusions and further work are considered in Section 7.
Previous work
In finance, HMMs have been used extensively to build regime-based models since Hamilton proposed using a regime-switching model to identify economic cycles in the GNP series (Hamilton, 1989). As pointed out by Ang & Timmermann (2012), HMMs can simultaneously capture multiple characteristics of financial return series, such as time-varying correlations, skewness and kurtosis, while also providing good approximations even for processes in which the underlying model is unknown (Ang & Bekaert, 2003; Bulla et al., 2011; Bulla & Bulla, 2006; Nystrup et al., 2015, 2017b). In addition, HMMs allow for good interpretability of results, as thinking in terms of regimes is a natural approach in finance. Examples of dynamic asset allocation are Reus & Mulvey (2016), who use an HMM to build a dynamic portfolio using currency futures, and Bae et al. (2014), who use an HMM to identify market regimes using different asset classes, with regime information helping portfolios to avoid risk during left-tail events. Guidolin (2012) provides an extensive review of applications of Markov switching models in empirical finance, covering stock returns, the term structure of default-free interest rates, exchange rates and joint processes of stock and bond returns.
Outside of asset allocation, HMMs have been used to capture energy price dynamics (Dias & Ramos, 2014) and to build credit risk systems. For example, Petropoulos et al. (2016) build a credit rating system using a Student's-t HMM, addressing two problems in current systems: their heavy-tailed actual distribution and their time-series nature; Elliott et al. (2014) build a model using a double hidden Markov model to extract information about the true credit qualities of firms. Dabrowski et al. (2016) study HMMs and other Bayesian networks to build early warning systems to detect systemic banking crises and find that Bayesian methods provide superior early-warning performance compared with traditional signal extraction models, and Zhou & Mamon (2012) investigate three popular short-rate models and extend them to capture the switching of economic regimes using a finite-state Markov chain.
So far, little work has been done on applying regime-switching models to factor investing. Among the exceptions, Guidolin & Timmermann (2008) found evidence of four economic regimes in size and value factors that capture time variations in mean returns, volatilities and return correlations. Liu et al. (2011) and Ma et al. (2011) study time-varying risk premiums using a six-factor model to explain the returns of sector ETFs. Their work covers a short testing period (9 months) and does not consider transaction costs.
Theoretical background
In this section we present the hidden Markov model and the feature saliency hidden Markov model, which allows the model to be trained and feature selection to be performed simultaneously.
Hidden Markov Models (HMMs)
HMMs are sequential models that assume an underlying hidden process, modeled by a Markov chain, with the sequence of observed data being a noisy manifestation of this latent process (Murphy, 2012).
Let y = {y_1, ..., y_T} be the sequence of observed data, where each y_t ∈ R^L with L the dimension of the observations, and let x = {x_1, ..., x_T} be the latent sequence of states, where x_t ∈ {1, ..., K} with K the number of latent states. The HMM model parameters are Λ = (π, A, µ, σ), where π and A correspond to the initial probabilities and transition probabilities, and µ and σ are the means and variances of the state-dependent Gaussian feature distributions (generally called emission probabilities, symbolized here by b_{x_t}). The graphical model of the HMM can be seen in Figure 1, where blue squares represent latent variables, orange circles are observations and green circles represent model parameters. The complete likelihood can be written as:

p(y, x | Λ) = π_{x_1} b_{x_1}(y_1) \prod_{t=2}^{T} a_{x_{t-1} x_t} b_{x_t}(y_t).    (1)
In this work the sequence of noisy observations consists of factor index returns, and the underlying hidden process is the state of the market that generates them. We assume that the emission probabilities are Gaussian. While a single normal distribution is a poor fit to financial returns, a mixture of normal distributions provides a much better fit, capturing stylized behaviors including fat tails and skewness (Nystrup et al., 2015; Ang & Timmermann, 2012).
The training of HMMs is done with the Baum-Welch algorithm, a type of Expectation-Maximization (EM) algorithm (Rabiner, 1989). The E-step calculates the expected value of the log-likelihood with respect to the states, given the data and current model parameters, and the M-step maximizes the expectation computed in the previous step to update the model parameters. The algorithm iterates between these two steps until convergence. The expectation of the complete log-likelihood function is given by:

Q(Λ, Λ′) = E_{x | y, Λ′}[\log p(y, x | Λ)],    (2)

where Λ are the parameters for the current iteration and Λ′ is the set of parameters from the previous iteration. Following Adams et al. (2016), we place priors on the parameters and calculate the MAP estimate, so the Q function is modified by adding the log-prior on the model parameters, G(Λ):

Q_{MAP}(Λ, Λ′) = Q(Λ, Λ′) + \log G(Λ).    (3)

The EM algorithm is then as follows: the Q function in (2) is calculated in the E-step, and equation (3) is maximized in the M-step.
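As an illustration of this training procedure, the sketch below fits a two-state Gaussian HMM to synthetic return data using the hmmlearn library, whose GaussianHMM class implements Baum-Welch; the data and settings are ours, not the paper's.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic stand-in for T daily observations of L = 5 factor returns.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(3000, 5))

# Two-state HMM with full covariance matrices; fit() runs Baum-Welch (EM)
# until the log-likelihood gain falls below tol or n_iter is reached.
model = GaussianHMM(n_components=2, covariance_type="full",
                    n_iter=200, tol=1e-4, random_state=0)
model.fit(returns)

states = model.predict(returns)  # most probable (Viterbi) state sequence
print(model.transmat_)           # estimated transition matrix A
print(model.means_)              # state-dependent means mu
```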
FSHMM
The feature saliency HMM considers a feature relevant if its distribution is dependent on the underlying state and irrelevant if it is independent of it. Given a set of binary variables {z_1, ..., z_L} indicating the relevance of each feature, i.e. z_l = 1 if the l-th feature is relevant and z_l = 0 if it is irrelevant, the feature saliency ρ_l is defined as the probability that the l-th feature is relevant. Assuming the features are conditionally independent given the state enables the multivariate Gaussian to be written as a product of univariate Gaussians, and the conditional distribution of y_t given z and x can be written as follows:

p(y_t | x_t = i, z) = \prod_{l=1}^{L} r(y_{lt} | µ_{il}, σ²_{il})^{z_l} q(y_{lt} | ε_l, τ²_l)^{1−z_l},    (4)

where r(y_{lt} | µ_{il}, σ²_{il}) is the Gaussian conditional feature distribution for the l-th feature and q(y_{lt} | ε_l, τ²_l) is the state-independent feature distribution. The FSHMM model parameters are Λ = (π, A, µ, σ, ρ, ε, τ), where the first four parameters correspond to the regular HMM, ρ is the feature saliency, and ε and τ are the means and variances of the state-independent Gaussian feature distributions. Figure 2 shows the feature saliency hidden Markov model. The marginal probability of z is:

p(z) = \prod_{l=1}^{L} ρ_l^{z_l} (1 − ρ_l)^{1−z_l}.    (5)

The joint probability distribution of y_t and z given x is:

p(y_t, z | x_t = i) = \prod_{l=1}^{L} [ρ_l r(y_{lt} | µ_{il}, σ²_{il})]^{z_l} [(1 − ρ_l) q(y_{lt} | ε_l, τ²_l)]^{1−z_l}.    (6)

The complete likelihood for the FSHMM is given by:

p(y, x, z | Λ) = π_{x_1} p(y_1, z | x_1) \prod_{t=2}^{T} a_{x_{t−1} x_t} p(y_t, z | x_t).    (7)

The MAP estimation of the FSHMM is similar to that of the HMM using EM, but the Q function incorporates the hidden variables associated with feature saliency and can be written as:

Q(Λ, Λ′) = E_{x, z | y, Λ′}[\log p(y, x, z | Λ)] + \log G(Λ).    (8)

The update steps of the EM algorithm are shown in Appendix A and the pseudocode for the MAP FSHMM formulation is given in Algorithm 1. A detailed description of the equation derivations and the steps of the algorithm can be found in Adams (2015).
Algorithm 1 MAP FSHMM Algorithm
1: Select initial values for π_i, a_{ij}, µ_{il}, σ_{il}, ε_l, τ_l and ρ_l for i = 1...I, j = 1...I, and l = 1...L
2: Select initial values for p̄_i, ā_{ij}, m_{il}, s_{il}, ζ_{il}, η_{il}, b_l, c_l, ν_l, ψ_l and k_l for i = 1...I, j = 1...I, and l = 1...L
3: Select stopping threshold δ and maximum number of iterations M
4: Set the absolute percent change in the posterior probability between the current and previous iteration, ∆L, to ∞ and the number of iterations it to 1
5: while ∆L > δ and it < M do
6:   E-step: calculate the probabilities γ_t(i), ξ(i, j), e_{ilt}, h_{ilt}, g_{ilt}, u_{ilt}, v_{ilt} following A.1 to A.7
7:   M-step: update the parameters π_i, a_{ij}, µ_{il}, σ²_{il}, ε_l, τ²_l, ρ_l following A.8 to A.14
8:   Update ∆L
9:   it = it + 1
10: end while
11: Perform feature selection based on ρ_l and construct reduced models

As well as the parameters estimated through EM, the model has several hyperparameters to set in advance. The most relevant is the weight parameter k_l, which acts as an informative exponential prior on ρ. Setting a higher value of k_l for a feature translates into a higher cost in the algorithm, so in order for the algorithm to select that feature, it needs more evidence that the feature is relevant. This can be used either to reduce the number of selected features or as a proxy for the cost of selecting a feature in the optimization process. A heuristic for selecting a reasonable value of k_l is to scale it with the number of observations as T/4, with T the number of observations.
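A minimal skeleton of the stopping logic in Algorithm 1 might look as follows; e_step and m_step stand in for the update equations A.1-A.14 and must be supplied by the caller, so this is a structural sketch rather than a full implementation.

```python
import numpy as np

def fit_map_fshmm(y, params, k_l, e_step, m_step, delta=1e-4, max_iter=500):
    """Schematic MAP FSHMM loop: alternate E- and M-steps until the
    absolute percent change in the posterior falls below delta
    (mirrors Algorithm 1); e_step/m_step are caller-supplied stand-ins
    for updates A.1-A.7 and A.8-A.14 respectively."""
    prev_post, change, it = None, np.inf, 0
    while change > delta and it < max_iter:
        stats = e_step(y, params)             # placeholder for A.1-A.7
        params, post = m_step(y, stats, k_l)  # placeholder for A.8-A.14
        if prev_post is not None:
            change = abs((post - prev_post) / prev_post)
        prev_post, it = post, it + 1
    return params

# Heuristic prior weight on the saliencies: scale with sample size T.
T = 3800
k_l = T / 4
```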
Smart Beta investing
As mentioned, smart beta is a systematic, low-cost implementation of factor investing, where securities are selected based on their exposure to an attribute that has been associated with persistently higher returns in the past, called a factor. Factors can be fundamental characteristics of the economy (macroeconomic factors) or of companies (style factors). Macroeconomic factors can be thought of as capturing the broad risks and returns across asset classes, while style factors can be thought of as aiming to explain returns and risks for securities within asset classes.
This paper looks at style factors in the equity market. Within style factors, dozens of indicators have been identified. The majority can be grouped into families, with style factors within a family measuring similar characteristics and often being highly correlated. An example of this is momentum, which includes factors measuring returns over different periods (3-month, 6-month, 12-month, etc.). While there is no universal definition of these families or of the factors that belong in each family, there are common themes. Typically, families comprise value, growth, momentum, quality, size and some sort of volatility/risk/beta measure. There may be variations on this; for example, Dividend Yield is sometimes viewed as a factor family in its own right and sometimes as a member of the Value family, and the Value family can sometimes be split into Value and Deep Value.
Data
Below is a description of the two datasets used; Table 2 summarises their main characteristics.
Daily factor data from the S&P 500 index
The first dataset is a set of style factors constructed from the S&P 500 universe of US stocks. The style factor score for each individual stock is determined, the universe is ranked, and a portfolio is constructed with long positions in the top 20% of stocks and short positions (negative weights) in the bottom 20% of stocks. This is repeated each month. The resulting style factor portfolio has a strong exposure to the factor and no exposure to the overall market, because the negative holdings offset the positive weights (Table 1 lists these factors). The data is supplied by a broker and consists of 25 style factors covering the period from 1988 to 2016. This dataset is used throughout the analysis.
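This monthly construction rule can be sketched in pandas as a long-short quintile sort; the DataFrame layout and names below are illustrative assumptions, not the broker's actual methodology.

```python
import pandas as pd

def factor_portfolio_weights(scores: pd.Series) -> pd.Series:
    """Long the top 20% and short the bottom 20% of stocks by factor
    score, equally weighted on each side, so the negative holdings
    offset the positive ones (zero net market exposure)."""
    ranks = scores.rank(pct=True)
    longs = ranks >= 0.8
    shorts = ranks <= 0.2
    w = pd.Series(0.0, index=scores.index)
    w[longs] = 1.0 / longs.sum()
    w[shorts] = -1.0 / shorts.sum()
    return w

# scores: one row per month, one column per stock; rebalanced monthly:
# weights = scores.apply(factor_portfolio_weights, axis=1)
```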
Daily MSCI USA enhanced indices
The second dataset is supplied by MSCI and consists of a range of indices which they publish. As with the first dataset, the individual style factors are calculated using the underlying stocks and their style factor exposures. These individual style factor indices are then grouped into six style factor families, and it is these family indices that are used in this paper. We use the six MSCI USA enhanced style indices: value, low size, momentum, quality, low volatility and dividend yield (Bender et al., 2013). These have different inception dates, with the most recent beginning in 1999, which limits the period for which we can use this dataset to 1999-2016.
The advantage of using a published set of indices (such as the MSCI indices) is that they can be packaged into an easy-to-purchase product, such as an Exchange Traded Fund (ETF), by a separate investment company. As an example, an investor who wants to buy US value stocks can buy an MSCI US enhanced Value ETF, which involves buying one security (the ETF) rather than the underlying stocks. By removing the need to analyse and purchase the underlying companies, the complexity and cost of implementing a smart beta strategy can be reduced. This allows us to test our novel DAA system with real-world assets.
Dynamic asset allocation system
Investment in single-factor strategies has been shown to deliver significant returns over the long term, but how to build multi-factor strategies and rotate factors according to market conditions is not straightforward. Factor indices are time series data, hence we take advantage of the capacity of hidden Markov models to identify underlying regimes in sequences of observations and build a dynamic asset allocation system. We first determine the optimal number of hidden states to model market regimes and then, in order to avoid excessive transaction costs through frequent rebalancing, we optimize the rebalancing signal.
DAA system
We design a dynamic trading framework with daily evaluations and monthly readjustments, as shown in Figure 4. Each day a new vector of returns is added to the training set with an expanding window, and the state is predicted. Returns are lagged by one day in order to avoid look-ahead bias. Because this prediction is noisy, we determine an optimal window of consecutive days in the new state before the portfolio is rebalanced. Once a change of state has been accepted, the vector of means and the covariance matrix of the new state are retrieved and the portfolio weights optimized, with transaction costs applied after the rebalance. After a full month has passed, we add this new batch of data to the training set with an expanding window and retrain the model. Figure 5 shows how data is added daily with an expanding window. While this does not produce immediate changes in the model parameters (transition matrix and emission distributions), over time they should change slightly to accommodate the new information. We can therefore capture changes in the dynamics of the system over time.
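A minimal sketch of one day of this loop is shown below, assuming an hmmlearn-style model object; the optimizer and the exact data structures are placeholders rather than the paper's implementation.

```python
import numpy as np

def daily_step(model, history, new_returns, held_state, state_buffer,
               d, optimize_weights):
    """One day of the DAA loop: grow the expanding window with the
    (lagged) return vector, predict today's state, and rebalance only
    after d consecutive days in a state different from the one held."""
    history.append(new_returns)
    state_buffer.append(int(model.predict(np.vstack(history))[-1]))
    recent = state_buffer[-d:]
    if len(recent) == d and len(set(recent)) == 1 and recent[0] != held_state:
        i = recent[0]
        weights = optimize_weights(model.means_[i], model.covars_[i])
        return i, weights               # accept the new regime
    return held_state, None             # keep the current weights
```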
Model selection
The number of latent states in an HMM has to be set in advance, before training. One option is to use the Bayesian information criterion (BIC), a penalized log-likelihood function that can be used for model selection (Schwarz, 1978). BIC is defined by

BIC = −2 \log L̂ + d \log N,

where L̂ is the maximized likelihood, d is the number of free parameters in the model and N is the number of samples. Calculating this score over a range of K states, we can select the model with the lowest value. Another option is to follow a greedy approach, calculating the performance of the portfolios built with different numbers of regimes and selecting the model with the highest performance.
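Under the assumption of full covariance matrices, the model-selection loop can be written directly, with the free-parameter count d derived from the Gaussian HMM parameterization; this is a sketch using hmmlearn, not the authors' code.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def bic_for_states(X, K):
    """Fit a K-state full-covariance Gaussian HMM and return its BIC."""
    N, L = X.shape
    model = GaussianHMM(n_components=K, covariance_type="full",
                        n_iter=200, random_state=0).fit(X)
    log_l = model.score(X)  # total log-likelihood of the data
    # Free parameters: initial probs, transition rows, means, covariances.
    d = (K - 1) + K * (K - 1) + K * L + K * L * (L + 1) // 2
    return -2.0 * log_l + d * np.log(N)

# best_K = min(range(2, 7), key=lambda K: bic_for_states(X, K))
```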
In the financial HMM literature (Guidolin & Timmermann, 2008), regime-switching models normally have between two and four states. Keeping the number of states low also allows better interpretability, so we selected 200 random combinations of 5 assets each and used these combinations to train HMMs with 2, 3, 4, 5 and 6 hidden states respectively. From each trained HMM we built different types of portfolios, as will be explained in Section 6.1. The performance of each portfolio was calculated using the IR ratio (the ratio between annualized return and annualized volatility); plots of BIC and performance as a function of the number of states are shown in Figure 6. The BIC score is quite similar for three to six states (four being the lowest) and is slightly higher for two states. While this would suggest using a four-regime model, the performance of portfolios with three and four states is significantly lower than with two states, so we selected a two-state model. Two-state models can be interpreted as expansion-contraction.
System calibration
The dynamic asset allocation system requires a trained HMM to model regime changes and the selection of an optimal time window to decide when a change of state has taken place and the portfolio has to be rebalanced.
Figure 6: The top plot shows boxplots of the BIC for different numbers of states: a two-state model has a higher BIC, and there is little distinction between three, four and five states. The bottom plot shows the performance of portfolios as a function of the number of hidden states; the two-state model yields better performance for the majority of portfolios.

For the first part of the work, where we want to test whether the proposed DAA system adds value to multi-factor strategies,
we test it using multiple combinations of factors, and calibrate the system for each combination. From a pool of 25 factor indices we select n assets at random and use their returns to train an HMM. As the factors can be grouped into five families (following Table 1), we randomly select one factor from each group so that all families are represented. This yields a total of 1260 combinations. We then use the same factors to build the portfolios.
We divide the data set into three parts: training (15 years), validation (9 years) and test (4 years). In order to avoid getting stuck in a local maximum, we use random initialization with initial parameters calculated from the training data and select the model with the highest score. Figure 7 shows the process of training, validation and testing using the DAA system.
The regime prediction is done by passing the whole series of returns up to the previous day to decode the most probable sequence of hidden states, keeping the last value as the state prediction. This daily prediction is noisier than it would be if a whole month of returns were passed together, and we cannot rebalance the portfolio each time a change of state is flagged, as quite often this would mean a daily rebalance. Instead, in the validation set, we look for a window of d consecutive days in the same new state before we flag a change of regime and rebalance the portfolio accordingly. Figure 8 shows the performance of a selection of portfolios as a function of the time window d; the colormap corresponds to performance measured by IR (adjusted for transaction costs) as a function of window size. While certain combinations of assets perform consistently better than others with larger windows, smaller windows have the worst performance in all cases. The main reason is that portfolio performance is adjusted for transaction costs, so smaller windows mean higher portfolio turnover and therefore higher costs. We use the validation set to identify the optimal window for each combination of assets. In the majority of cases performance is low for small windows due to frequent rebalancing; performance tends to improve with window size (up to around 15 days). However, if the window is too large, performance may decrease again as it fails to take advantage of more frequent regime changes.
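The window search itself is a one-dimensional sweep over d; in the sketch below, backtest_ir is a placeholder for a full cost-adjusted backtest of the strategy and is not implemented here.

```python
def select_window(valid_returns, model, backtest_ir, candidates=range(2, 31)):
    """Pick the consecutive-day window d that maximizes the
    cost-adjusted information ratio on the validation period."""
    scores = {d: backtest_ir(valid_returns, model, d) for d in candidates}
    return max(scores, key=scores.get)
```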
DAA system with Feature Saliency: FS-DAA
So far, we have proposed a DAA system in which the time series used to train the HMM are known in advance, which can be a limitation. We therefore propose a novel DAA system that incorporates an embedded feature selection method during training, by using a feature saliency hidden Markov model (FSHMM) as described in Section 3.2. This method selects features that contribute to regime identification (called regime dependent) and rejects features that do not depend on the regimes. Figure 9 shows the different stages of training, validation and testing using this new DAA system, which we call FS-DAA. FS-DAA takes multiple time series and fits an FSHMM, which assigns a saliency to each series; features with higher saliency are selected. Because the FSHMM assumes that features are conditionally independent, the fitted model has diagonal covariance matrices. We therefore take the selected relevant features and use them to train an HMM with full covariance matrices.

Figure 9: Full schematic of calibration and usage of the DAA system with embedded feature selection for smart beta investing.
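Schematically, the FS-DAA training stage chains the two models as below; fit_fshmm is a placeholder for the MAP FSHMM estimation of Section 3.2, and the 0.5 saliency threshold is an illustrative choice rather than the paper's rule.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_fs_daa(X, k_l, fit_fshmm, threshold=0.5):
    """Fit the FSHMM (diagonal covariances by construction), keep the
    features whose saliency rho_l exceeds the threshold, then refit a
    full-covariance HMM on the selected columns only."""
    saliencies = fit_fshmm(X, k_l)  # placeholder; returns rho_l per feature
    selected = [l for l, rho in enumerate(saliencies) if rho > threshold]
    hmm = GaussianHMM(n_components=2, covariance_type="full",
                      n_iter=200, random_state=0).fit(X[:, selected])
    return selected, hmm
```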
As a first step to assess whether the FSHMM can distinguish between relevant features and noise, we generated irrelevant features of random noise and added them to our daily factor dataset. We tested this using different numbers of features, numbers of observations and values of k_l. In each case, k_l was the same for all features, both relevant and noise. Results are summarized in Tables A.5 and A.6. In all cases, the algorithm assigned low saliency values to the irrelevant features and high values to the relevant ones.
Secondly, we train a DAA system using all 25 features from the factor dataset, and an FS-DAA system that takes the 25 features, selects the relevant ones and then trains an HMM only on those factors, and we compare the regimes obtained. Finally, using these two systems, we build strategies on the MSCI USA enhanced family of factor indices. Both models are trained using 16 years of data (from 1990 to 2006) and then retrained every month until 2016. We use 7.5 years of trading data, from January 1999 to June 2006, to estimate the mean and covariance of the MSCI indices for each regime, in order to obtain a robust estimate of the covariance matrix for both regimes. We then use a validation set of 6 years to select the optimal time window for flagging a change of state, and a test set of 4 years.
One advantage of the proposed DAA system is that it decouples the data used to train the HMM for regime detection from the data used for allocation. This is useful for factor investing because we can build factors with a long history (such as the factor dataset) and then use real-life, investable assets with a shorter history (the MSCI enhanced data) to build the portfolios.
Results and analysis
Firstly, the performance of the DAA system is compared with baseline strategies on the large factor dataset. Then, the implementation of the FSHMM algorithm is discussed. Lastly, we test the proposed FS-DAA system with real-life assets using the MSCI indices dataset.
Trading strategies and benchmarks
Instead of constructing only one kind of portfolio we build several: Risk Parity, Maximum Diversification, Minimum Variance, Max Return, Max Sharpe and a modified max return (for a short description of each portfolio, see Appendix B). Risk Parity (RP), Maximum Diversification (MD) and Minimum Variance (MV) are constructed taking into account only the covariance matrix, so they can be considered more risk-aware. Max Return (MR), Max Sharpe (Sharpe) and the modified max return (Dyn) all consider the mean return during construction, so they tend to be more aggressive.
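As one concrete example from this menu, a long-only minimum-variance portfolio (see Appendix B) can be computed with a generic solver; SciPy's SLSQP is used here purely as an illustration, not as the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(V):
    """Long-only minimum-variance portfolio: minimize w'Vw subject to
    sum(w) = 1 and w >= 0."""
    n = V.shape[0]
    res = minimize(lambda w: w @ V @ w,
                   x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    return res.x
```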
For comparison we built an equally weighted portfolio and a benchmark for each asset combination. Each benchmark is constructed using the same optimization method as its DAA system counterpart, but is rebalanced monthly and its covariance matrix is estimated using "single regime" past returns; the DAA system instead has two covariance matrices, one for each regime. All portfolios and their benchmarks are constructed taking transaction costs into account. Costs are calculated by multiplying portfolio turnover (how much the portfolio is rebalanced) by a transaction cost of 50 bps (0.5%), applied to both selling and buying.
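This cost model reduces to a turnover computation, sketched below with illustrative names.

```python
import numpy as np

def transaction_cost(w_old, w_new, cost_bps=50):
    """Cost of moving from w_old to w_new: turnover (sum of absolute
    weight changes, covering both buys and sells) times 50 bps."""
    turnover = np.abs(np.asarray(w_new) - np.asarray(w_old)).sum()
    return turnover * cost_bps / 10_000
```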
DAA system compared to baseline
We first evaluated our DAA system by using 1260 combinations of randomly selected assets to train the HMM and perform the allocation, and compared it with the benchmarks. Figure 10 shows the performance, measured by the Sortino ratio, of all portfolios calculated using the DAA system and their benchmarks. We can see that all portfolios constructed using regime information perform better than their counterparts. Portfolios that are more return-oriented, because the mean returns enter the optimization process, improve greatly with respect to their benchmarks, while more risk-focused portfolios show an improvement with respect to their single-regime counterparts but perform similarly to equally weighted portfolios.
The highest performing portfolio is Sharpe, which takes into account both the mean and the covariance in the construction process. Figure 11-Top shows annualized return as a function of annualized volatility for the Sharpe portfolios and their benchmarks. Portfolios built using HMMs show higher return and lower volatility than their unconditional counterparts, and higher return and volatility than the EQ portfolios. Figure 11-Bottom shows a risk-adjusted return metric (Sortino) for the same portfolios; the HMM portfolios yield better performance than their benchmarks. Table 3 shows different performance metrics averaged for each type of portfolio. In most cases, HMM portfolios show better performance than their unconditional benchmarks on all metrics, and more return-oriented portfolios perform better than equally weighted ones. The performance improvement comes both from higher returns and from risk reduction in return-oriented portfolios. Additionally, skewness and kurtosis are lower than for the benchmark returns, and the maximum drawdown is lower (and lasts for a shorter period of time) in most cases.

Figure 10: Boxplots corresponding to the Sortino ratio for all portfolios calculated using an HMM (blue), their benchmarks (orange) and an equally weighted portfolio (green).

Figure 11: The left plot shows annualized return as a function of annualized volatility for Sharpe portfolios built using HMM information (blue), Sharpe portfolios rebalanced monthly (orange) and EQ portfolios (green). The right plot corresponds to the Sortino distribution of the same portfolios. All plots correspond to the test set (out of sample).

Table 3: Average performance of portfolios built using HMMs and their benchmarks. Top: portfolios that are more aggressive have a higher risk-adjusted return (measured through the IR and Sortino ratios) than their unconditional counterparts and the equally weighted portfolio. Bottom: portfolios that are more defensive (only the covariance matrix is taken into account in the construction process) perform worse than their benchmark counterparts and the EQ portfolio.
DAA system with FSHMM
We then used the algorithm to detect relevant features in our dataset of 25 factor indices. Figure 12 shows the feature saliencies of all factor return series for different values of k. As the training set has about 3800 observations, we chose values of k close to a quarter of that number, following the heuristic proposed in Adams et al. (2016). The selected features are: Book Value Yield, 1 Yr Fwd Earnings Yield, Sales Yield, 6 Month Price Momentum, 12 Month Price Momentum, EPSCV and Beta. This is of interest as the selected factors represent four of the six or seven factor families mentioned in Section 3.3. For comparison, we trained an HMM using all 25 features alongside a model trained with only the selected assets. Figure 13 shows the predicted states and estimated probabilities for both models after training. We can identify state 1 as a "good" state and state 0 as a "bad" state. The plots clearly identify the 2008 economic crisis: the first signs developed in August and September of 2007, with some episodes between January and May 2008, before the big crash in September 2008. Both models identify spikes of state 0 in the second half of 2007 and transition fully to state 0 during 2008. The model trained with relevant features tends to be more sensitive to the distress state: it spends 24% of the time in this state versus 20% for the model trained with the full set of features, and the average duration of state 0 is 3.8 days versus 3.2 days for the full model. No smoothing was applied to the predicted probabilities when calculating these values.
FS-DAA system with MSCI indices
In this section we evaluate the performance of the FS-DAA system, using a subset of factors from the daily factor dataset after feature selection to detect regimes and the MSCI enhanced factors for allocation, and compare it with the DAA system without feature selection, which trains the HMM on all 25 factors from the dataset.
For simplicity we calculated only the Sharpe, MR and Dyn portfolios, as they showed significantly better performance than the risk-focused portfolios and their benchmarks when a regime-switching model was used in their construction. Figure 14 shows the cumulative return of these three portfolios built with the full-feature HMM, with the FSHMM, and of the benchmarks constructed without regime information. Both HMM-based sets of portfolios perform better than their benchmarks (top plot), and portfolios constructed using the HMM with feature selection perform slightly better than portfolios built with the full-feature HMM (bottom plot).
Performance metrics for all portfolios and for the MSCI enhanced indices net of market are shown in Table 4. All metrics are annualized and out-of-sample, covering the period January 2012 to February 2016. The results obtained using DAA and FS-DAA show a robust improvement with respect to their benchmarks. Only three MSCI indices have a positive IR in the period, and two of the three FSHMM portfolios show the highest IR in all cases. A reduction in downside risk with respect to the benchmarks and the MSCI indices is achieved in most cases using either the full-feature HMM or the FSHMM.
Conclusions and future work
The main focus of this paper is to improve smart beta strategies through the use of regime-switching models. The main contributions of this work are:
1. We have shown that constructing a portfolio using information from an HMM with two latent states, trained with the same assets that will be used for allocation, improves performance with respect to the same portfolio built with a single-regime approach. We have tested this by calculating different types of portfolios, ranging from more risk-focused to more aggressive. The improvement is more significant for return-oriented and balanced portfolios, where return or risk-adjusted return is optimized, achieving on average an information ratio of 50% annually in excess of market, and is less evident in risk-focused portfolios (Risk Parity, Minimum Variance and Maximum Diversification), with an improvement in IR of 25% on average annually.
2. We have developed a systematic framework for asset allocation using an embedded feature selection algorithm to identify features of relevance to the model. This improves the model's accuracy and allows for a more objective approach to portfolio construction, in the sense that it should help to prevent biases in the feature selection process, which is normally done by a financial expert. We used the FSHMM algorithm to select relevant features from a pool of well-known factor indices and compared it with an HMM trained with the whole set of assets. Both models showed agreement on regime identification, with the model trained using only relevant features being more sensitive to periods of economic distress.
3. We have tested both models using real, investable assets through the MSCI USA enhanced factor indices. Portfolios constructed using information from the FSHMM trained with relevant features show higher performance than the same portfolios constructed using an HMM trained with the full set of features.
A possible extension of the model for future work would be to include macroeconomic series in the HMM, where the embedded feature selection could potentially solve the problem of selecting relevant economic series, allowing for a more precise identification of economic cycles. This would be particularly interesting for other asset classes such as fixed income, but is outside the scope of this paper.
A drawback of using HMMs is that the number of latent states has to be known in advance, selected through BIC (which is not always effective), or chosen with a greedy approach that picks the model with the highest performance. This could be addressed using an infinite HMM (Beal et al., 2002).
Appendix A. FSHMM update equations

The FSHMM algorithm as developed by Adams, Beling and Cogill has the following EM update steps (for simplicity we follow their notation), with γ_t(i) and ξ(i, j) calculated with the forward-backward algorithm: equations A.1 to A.7 give the E-step probabilities, and the additional updates A.8 to A.14 give the parameter updates, where T̄ = T + 1 + k_l. Table A.5 shows the feature saliency of 5 relevant features and 3 irrelevant features generated from N(0, 1), for different numbers of observations and hidden states. Table A.6 shows the same with 10 relevant features and 5 added noise series, for different numbers of states and values of the k parameter.
Appendix B. Portfolio description
All portfolios constructed are long only, i.e. w ≥ 0.
• Max return: Given an estimated vector of mean returns, it maximizes the portfolio return subject to the constraint that no asset can have a weight greater than 80%.
• Dyn: If all estimated mean asset returns are positive, it weights the assets proportionally to their means; otherwise, it weights them equally.
• Sharpe: a classic mean-variance portfolio that maximizes return for a given level of risk.
• Risk parity: focuses on the allocation of risk; each asset in the portfolio contributes the same risk, as defined by w_i (Vw)_i = w_j (Vw)_j for all i, j, where V is the covariance matrix.
• Max diversification: maximizes the diversification ratio, defined as wᵀΣ / √(wᵀVw), where Σ is the vector of asset volatilities and V is the covariance matrix.
• Min Var: finds the portfolio with minimum variance, i.e. minimizes wᵀVw, where V is the covariance matrix.

Table A.5: Feature saliency of five factor return time series (ρ_1 to ρ_5) and three irrelevant series of random noise (ρ_6 to ρ_8), all calculated with k = 50. All irrelevant features have saliency below 0.25, and most of the financial series have saliency close to one, except ρ_3, which has a small saliency in most cases.

Table A.6: Feature saliency of ten factor return time series (ρ_1 to ρ_10) and five irrelevant series of random noise (ρ_11 to ρ_15). With a small value of k all irrelevant features are discarded and all relevant features have high saliency. With a larger k, noise features are discarded, but financial features also start to be discarded. All series have 2000 observations. | 2019-02-28T00:40:17.000Z | 2019-02-28T00:00:00.000 | {
"year": 2019,
"sha1": "56fa5cc77d43c6c312f17c55b6f603b08dd12446",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1902.10849",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "56fa5cc77d43c6c312f17c55b6f603b08dd12446",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Economics"
]
} |
174809444 | pes2o/s2orc | v3-fos-license | Predictive MGMT status in a homogeneous cohort of IDH wildtype glioblastoma patients
Methylation of the O(6)-Methylguanine-DNA methyltransferase (MGMT) promoter is predictive for treatment response in glioblastoma patients. However, precise predictive cutoff values to distinguish “MGMT methylated” from “MGMT unmethylated” patients remain highly debated in terms of pyrosequencing (PSQ) analysis. We retrospectively analyzed a clinically and molecularly very well-characterized cohort of 111 IDH wildtype glioblastoma patients, who underwent gross total tumor resection and received standard Stupp treatment. Detailed clinical parameters were obtained. Predictive cutoff values for MGMT promoter methylation were determined using ROC curve analysis and survival curve comparison using Log-rank (Mantel-Cox) test. MGMT status was analyzed using pyrosequencing (PSQ), semi-quantitative methylation specific PCR (sqMSP) and direct bisulfite sequencing (dBiSeq). Highly methylated (> 20%) MGMT correlated with significantly improved progression-free survival (PFS) and overall survival (OS) in our cohort. Median PFS was 7.2 months in the unmethylated group (UM, < 10% mean methylation), 10.4 months in the low methylated group (LM, 10-20% mean methylation) and 19.83 months in the highly methylated group (HM, > 20% mean methylation). Median OS was 13.4 months for UM, 17.9 months for LM and 29.93 months for HM. Within the LM group, correlation of PSQ and sqMSP or dBiSeq was only conclusive in 51.5% of our cases. ROC curve analysis revealed superior test precision for survival if additional sqMSP results were considered (AUC = 0.76) compared to PSQ (cutoff 10%) alone (AUC = 0.67). We therefore challenge the widely used, strict PSQ cutoff at 10% which might not fully reflect the clinical response to alkylating agents and suggest applying a second method for MGMT testing (e.g. MSP) to confirm PSQ results for patients with LM MGMT levels if therapeutically relevant. Electronic supplementary material The online version of this article (10.1186/s40478-019-0745-z) contains supplementary material, which is available to authorized users.
Introduction
Glioblastoma (GBM) is the most common and most aggressive primary brain tumor. The histological examination of neurosurgical tumor specimens as well as the immunohistochemical or molecular determination of the IDH1/2 status remain the gold standard for the diagnosis of GBM [13]. Despite aggressive therapy, the survival of patients with GBM is approximately 15-17 months [21]. The current standard GBM therapy usually consists of neurosurgical resection, radiotherapy and additional chemotherapy with temozolomide (TMZ), an alkylating agent. However, chemosensitivity to TMZ strongly depends on epigenetic silencing by methylation of the O(6)-Methylguanine-DNA methyltransferase (MGMT) promoter [15]. Different randomized trials have shown that methylation of the MGMT promoter in GBM patients is associated with significantly higher survival rates if treated with radiotherapy and TMZ [4]. At the stage of recurrent disease, a TMZ rechallenge seems reasonable only in patients with clear methylation of the MGMT promoter, based on the results of the DIRECTOR trial [24]. Recent data from the NOA-09 trial showed that newly diagnosed GBM patients with methylated MGMT promoter might benefit from a more intense first-line treatment regimen with CCNU in combination with TMZ [8], accepting increased toxicity for an improved prognosis. These trials emphasize the importance of reliable MGMT status assessment and the need for predictive cutoff levels for clinical decision-making.
The methylation status of the MGMT promoter is widely determined by quantitative pyrosequencing (PSQ) [12,28]. PSQ analysis uses a defined cutoff value to classify cases as "methylated" or "unmethylated" [1]. In many neurooncological centers, the biological cutoff is 10% [27]. However, a very strict cutoff value might not fully reflect the clinical response to TMZ therapy. Various previous studies that focused on the technical assessment of the MGMT status have suggested higher predictive cutoff levels above 10% [14,17,18].
Here, we aimed to determine a predictive cutoff level for clinical decision-making on the basis of a well-defined patient cohort of 111 IDH wildtype GBM patients. Three methylation groups were identified, which showed very distinct clinical courses in terms of PFS and OS: unmethylated 0-9% (UM), low methylated 10-20% (LM), and highly methylated > 20% (HM).
Methods and Material
Tissue samples, clinical and patient data

Two hundred and ninety patients with newly diagnosed, previously untreated GBM (WHO grade IV) were diagnosed between 2010 and 2015 at the Departments of Neurosurgery and Neuropathology, Charité Berlin, Germany. The GBM diagnosis was confirmed by at least two experienced neuropathologists after surgical resection or stereotactic biopsy. According to the current WHO classification of CNS tumors [13], IDH mutation status was determined by IDH1 R132H immunohistochemistry (IHC) and bidirectional Sanger sequencing of exon 4 of the IDH1 and IDH2 genes for all GBM patients younger than 55 years [13]. Gliosarcoma, epithelioid glioblastoma, giant cell glioblastoma and IDH mutant tumors were excluded. The following clinical data were assessed: age at diagnosis, Karnofsky performance status (KPS), tumor localization, extent of resection and residual tumor volume, type and timing of adjuvant therapy, second-line therapy at recurrence, follow-up time, and progression-free (PFS) and overall survival (OS) in months. The extent of tumor resection was determined by measuring the contrast-enhancing tumor volume in mm³ on T1-subtraction MRI pre- and 48 hours postoperatively using the Brainlab iMRI software (Brainlab AG, Munich, Germany). Gross total resection (GTR) was defined as a residual tumor volume < 2% [22]. PFS was assessed according to the RANO criteria [25]. We identified 205 IDH wildtype GBM patients who matched the criteria mentioned above. Three long-term survivors (LTS; OS > 5 years) were identified in our cohort. For two LTS cases, sufficient DNA was available to perform genome-wide methylation analysis (EPIC array), which confirmed the diagnosis of GBM, IDH wildtype (Additional file 1: Figures S3 and S4).
Ethical statement
This study was conducted in accordance with the ethical principles of medical research involving human subjects laid out in the Declaration of Helsinki. The clinical data were assessed and anonymized for patients' confidentiality. Ethical approval (EA2/064/17) was granted by the institutional ethics board of the Charité Ethics Committee.
DNA extraction, bisulfite treatment and analysis of MGMT promoter methylation status in tumor samples
Areas of high tumor cell content (≥ 80%) were chosen and macro-dissected for further analysis (Additional file 1: Figure S1a, dashed line; S1b). Genomic DNA was extracted from formalin-fixed and paraffin-embedded (FFPE) samples using the Qiagen DNeasy blood and tissue DNA extraction kit according to the manufacturer's protocol (Qiagen, Hilden, Germany). The DNA was sodium bisulfite-modified using the EZ DNA Methylation-Gold™ Kit (Zymo Research, Irvine, CA).
Pyrosequencing (PSQ)

Quantitative methylation analyses were performed using the PyroMark Q24 MGMT kit (Qiagen, Hilden, Germany) and an automated PyroMark Q24 System (Qiagen, Hilden, Germany) following the manufacturer's instructions. Data were analyzed with the PyroMark Q24 Software 2.0 (Qiagen, Hilden, Germany). The percentage of methylated alleles was calculated as the mean of the methylation percentages obtained. A cutoff value of ≥ 10% was used to classify MGMT methylated vs. unmethylated cases, which is commonly used and has been validated for routine clinical diagnostics [27]. Standardized positive and negative controls were included in every PSQ run. The PSQ results were evaluated by at least two experienced neuropathologists.
Semi-quantitative methylation-specific PCR (sqMSP)

sqMSP was performed with primers specific for either "methylated" or "unmethylated" DNA as previously described [5]. Original MSP PCR gels are shown in Additional file 1: Figure S2. Primers and PCR programs are listed in the methods and material section of Additional file 1. Semi-quantitative analysis of the optical band intensity (I) was performed using ImageJ (National Institutes of Health, Bethesda, USA), using the following equation:

Band intensity unmethylated (%) = I_unmethylated / (I_methylated + I_unmethylated) × 100.

Direct bisulfite sequencing (dBiSeq)

dBiSeq was carried out as previously described [16] with minor adaptations. Primers and PCR program are listed in the methods and material section of Additional file 1.
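Assuming the band-intensity ratio reconstructed above, the semi-quantitative sqMSP readout is a one-line computation; the function and variable names below are ours, not part of the published protocol.

```python
def unmethylated_fraction(i_methylated: float, i_unmethylated: float) -> float:
    """Percentage of unmethylated signal from the two MSP band
    intensities measured in ImageJ (illustrative, per the ratio above)."""
    total = i_methylated + i_unmethylated
    return 100.0 * i_unmethylated / total
```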
Analysis of MGMT promoter methylation status in positive and negative controls
Both positive and negative controls (listed in Additional file 1: Table S1) were assessed by PSQ, sqMSP, and dBiSeq. Samples of non-neoplastic brain tissue and one sample of genomic DNA extracted from whole peripheral blood served as negative controls. The primary cell line SF126 and 7 tumor samples with clear MGMT promoter methylation levels > 30% were used as positive controls.
Genome-wide DNA methylation analysis

DNA methylation signature analysis was performed using the Illumina Infinium Methylation EPIC array as previously described [2].
IDH1 and IDH2 Sanger sequencing
Bidirectional Sanger sequencing of exon 4 of IDH1 and IDH2 was performed in all IDH1 R132H IHC-negative or -equivocal patients < 55 years of age. PCR primers for the genomic regions corresponding to IDH1 exon 4 (codon R132) and IDH2 exon 4 (codon R172) and the flanking intronic sequences are listed in the methods and material section of Additional file 1. Sequencing was performed at Eurofins Genomics, Ebersberg, Germany.
Immunohistochemical procedures
When no agreement was reached, the sections were reviewed by our team of neuropathologists at our department (Charité) and further molecular diagnostics (e.g. IDH1/IDH2 bidirectional Sanger sequencing, genome-wide DNA methylation analysis (EPIC array)) were performed.
Statistical analysis
Statistical analysis was performed in cooperation with the Charité's Institute for Biometrics and Clinical Epidemiology using GraphPad Prism 5 (GraphPad Software, La Jolla, CA, USA). Kaplan-Meier survival curves were obtained, and differences in PFS and OS were tested for statistical significance using the log-rank test. The significance level was set at p < 0.05. ROC analysis was used for diagnostic test evaluation: the true positive rate (sensitivity) was plotted as a function of the false positive rate (100 − specificity) for different cutoff points, and the area under the ROC curve (AUC) measured accuracy. An AUC of 1 represents a perfect test; 0.8-0.9 a good test; 0.7-0.8 a fair test; 0.6-0.7 a poor test; and an AUC of ≤ 0.5 represents a worthless test.
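For orientation, both analyses map onto standard library calls; the sketch below uses scikit-learn and lifelines as illustrative tools (the authors used GraphPad Prism) and entirely synthetic data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical data: mean MGMT methylation (%) and a binary outcome.
methylation = rng.uniform(0, 60, size=100)
responder = (methylation + rng.normal(0, 15, size=100)) > 25

auc = roc_auc_score(responder, methylation)  # AUC of methylation as marker

# Log-rank comparison of OS (months) between two methylation groups;
# event flags are 1 for death, 0 for censoring.
os_um, os_hm = rng.exponential(14, 50), rng.exponential(28, 50)
events_um, events_hm = np.ones(50), np.ones(50)
result = logrank_test(os_um, os_hm,
                      event_observed_A=events_um,
                      event_observed_B=events_hm)
print(round(auc, 2), result.p_value)
```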
Study cohort
Heterogeneity of the patient cohort (e.g. in terms of IDH status) has been a major point of criticism of previous studies in which a predictive mean MGMT promoter methylation cutoff was determined. We therefore selected a homogeneous group of IDH wildtype GBM patients with KPS > 70% who (i) received GTR of the GBM manifestation, (ii) started the Stupp regimen within 4-6 weeks after initial surgery [20], and (iii) completed the Stupp regimen after 6 cycles or until progression of disease, assessed according to the RANO criteria (n = 111). All clinical information is displayed in Table 1. GBM diagnosis was confirmed by at least two experienced neuropathologists using a standardized panel of conventional and immunohistochemical stainings (Additional file 1: Figures S1a-f). All cases were proven IDH wildtype by bidirectional Sanger sequencing. Patients with IDH1 (Additional file 1: Figures S1g-j) and IDH2 (Additional file 1: Figure S5) mutant tumors were excluded.
Defining a transition zone
LM patients demonstrated a clinical course similar to that of UM patients in terms of PFS and OS, indicating that the widely used PSQ cutoff of 10% does not fully reflect the clinical response to alkylating agents. We therefore defined the LM group (10-20%) as a "transition zone" between unmethylated and clearly methylated cases. To validate the PSQ MGMT results in this particular subgroup of the unselected study cohort, these cases (LM, n = 35) were additionally analyzed by sqMSP (n = 32/35). In 53.1% (n = 17/32), sqMSP and PSQ results were discordant (representative MSP and PSQ results are shown in Figures 2c, d). For n = 22/35 cases, dBiSeq was additionally performed; results are listed in Additional file 1: Table S1. In general, in cases with PSQ ≥ 16%, we observed a very high consistency between PSQ, MSP and dBiSeq results. We additionally investigated the survival profiles of all transition zone patients after combining PSQ and MSP results. First, we redistributed the LM patients to either the UM or HM category based on MSP testing. As expected, the differences between UM vs. HM were highly significant: PFS (***p < 0.0001, HR 3.002, CI 1.886 to 4.778) and OS (***p < 0.0001, HR 2.629, CI 1.729 to 3.997; Additional file 1: Figure S6a, b). Next, we defined the following four, more detailed groups to investigate whether the integration of MSP resulted in a redistribution of LM patients to either the UM or HM category: UM, LM + MSP unmethylated, LM + MSP methylated, and HM. The results still clearly indicated a transition zone for median PFS and OS, which seemed independent of the MSP results (Additional file 1: Figure S6c, d). Moreover, curve comparison between PSQ LM + MSP unmethylated and PSQ LM + MSP methylated showed no significant difference, most likely due to the small sample size and the presence of one LTS patient within the LM group.
In view of the aforementioned results, we performed ROC curve analysis for prognostic test evaluation for PSQ (cutoff 10%) alone and for PSQ (cutoff 10%) combined with sqMSP results. LM cases that were considered MGMT unmethylated by sqMSP were assigned to the UM group, and LM cases that were considered MGMT methylated by sqMSP were assigned to the HM group. ROC curve analysis revealed superior test precision, with an AUC = 0.76, for PSQ (cutoff 10%) combined with sqMSP results compared to PSQ (cutoff 10%) alone (AUC = 0.67; Figure 2a). Additionally, we performed step-wise cutoff testing for PSQ results at 10%, 12%, 15%, 17%, and 20%. At a cutoff of 17%, the highest test precision was reached, with an AUC of 0.77 (Figure 2b).
Discussion
We demonstrate that IDH wildtype GBM patients with low methylation of the MGMT promoter (mean 10-20%) represent a "transition zone" in terms of PFS and OS compared to clearly unmethylated (0-9%) and highly methylated (> 20%) patients. For patients with a low methylated MGMT promoter (10-20%), PSQ results could be validated as clearly methylated by one other method (sqMSP or dBiSeq) in only 51.5% (n = 17/33 samples; Additional file 1: Table S1).
Both MSP and PSQ have independently been suggested as the "gold standard" for methylation analysis of the MGMT gene promoter [3,11]. As to which method to use, the scientific community has not yet reached a consensus [3,19]. Several studies have demonstrated the prognostic value of MSP. Nevertheless, MSP primers are designed to detect either unmethylated or fully methylated MGMT promoter sites, which may in turn result in a lower sensitivity of this method [10]. Furthermore, MSP lacks international standardization [19]. In contrast to MSP, PSQ provides information about the extent of methylation at each individual CpG site, which improves the sensitivity of analyzing heterogeneous methylation patterns within a tumor sample [10]. Nevertheless, the optimal cutoff value is still a matter of scientific debate [1]. The predictive cutoff is strongly influenced by (i) interlaboratory differences, (ii) technical challenges of MGMT testing, which strongly depend on successful bisulfite treatment of the DNA [6], and particularly (iii) tissue processing, such as formalin fixation and paraffin embedding [17,18]. Therefore, determining a "grey zone" seems to be a more reasonable approach than setting a very strict cutoff. Even though previous studies have identified 10% as the PSQ cutoff to distinguish methylated from unmethylated samples - often based on biological determinants comparing non-neoplastic to neoplastic tissue [4,17,27,28] - several more recent studies have suggested introducing a "transition" or "grey zone" [7,17,18,26] for partly methylated tumors that cannot readily be assigned to either the methylated or unmethylated category. Many of these studies were criticized for small sample sizes and heterogeneous patient populations [28], including different therapeutic regimens and both IDH mutant and IDH wildtype GBM patients.

Fig. 1 a, b: Kaplan-Meier curves for progression-free (PFS) and overall survival (OS) in a subgroup analysis comparing the different methylation groups (mean MGMT promoter methylation): 0-9%, 10-20%, 21-30%, 31-40%, and > 40%. c, d: Kaplan-Meier curves for progression-free (PFS) and overall survival (OS) in a subgroup analysis comparing the groups UM, LM, and HM according to mean MGMT methylation PSQ results.
Seeing that IDH mutant GBMs demonstrate a hypermethylator phenotype and show a favorable clinical course, the impact of MGMT methylation on survival may have been overestimated in those studies [23].
Clearly, our study also has some limitations that restrict the interpretation of our data, including its retrospective character and single-center design. Nevertheless, a key advantage of this study is that it provides a large dataset (n = 111) from a clinically and molecularly very well-documented and characterized subgroup of IDH wildtype GBM patients (according to the most recent WHO classification).
As the different methylation groups demonstrate very distinct clinical courses in terms of PFS and OS, and PSQ and sqMSP/dBiSeq results are concordant in only 51.5% of LM patients - which might partly be explained by heterogeneous methylation patterns and the technique-dependent analysis of different CpG sites within the MGMT promoter [19] - we conclude that PSQ results in patients with low MGMT promoter methylation (10-20%) should be interpreted with caution. If therapeutically relevant, a second technique, e.g. MSP, could additionally be used to substantiate the results in MGMT PSQ transitional (10-20%) cases. Our ROC curve analysis indicates that the combination of PSQ and MSP results is diagnostically beneficial in the LM patient cohort. Our results furthermore suggest 17% as the most accurate cutoff value for PSQ analysis. It has been the consensus in clinical practice to also treat patients with low-level MGMT methylation, as a potential benefit cannot be excluded. Nevertheless, further scientific investigation is necessary to establish this efficacy. Especially in elderly (≥ 70 years) or fragile GBM patients, further stratification would be favorable, as these patients have a higher risk of chemotherapy-related toxicity and derive less survival benefit from alkylating agents if MGMT is unmethylated [19]. To conclude, we recommend the following classification system (particularly if FFPE samples are used): clearly unmethylated (< 10%), low methylated (10-20%), and clearly methylated (> 20%), which correlated with significantly improved PFS and OS in our cohort. | 2019-06-07T20:32:30.496Z | 2019-06-05T00:00:00.000 | {
"year": 2019,
"sha1": "3787529593a65054f7d7f7bc84495d0ef7a194fb",
"oa_license": "CCBY",
"oa_url": "https://actaneurocomms.biomedcentral.com/track/pdf/10.1186/s40478-019-0745-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6132f6743e40726acd5584cd4143f1bc6fc515f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256494227 | pes2o/s2orc | v3-fos-license | IQ trajectories in autistic children through preadolescence
Abstract Background We extended our study of trajectories of intellectual development of autistic individuals in early (mean age 3 years; T1), and middle childhood (mean age 5 years, 7 months; T2) into later middle childhood/preadolescence (mean age 11 years, 6 months; T3) in the longitudinal Autism Phenome Project cohort. Participants included 373 autistic children (115 females). Methods Multivariate latent class growth analysis was used to identify distinct IQ trajectory subgroups. Baseline and developmental course group differences and predictors of trajectory membership were assessed using linear mixed effects models for repeated measures with pairwise testing, multinomial logistic regression models, and sensitivity analyses. Results We isolated three IQ trajectory groups between T1 and T3 for autistic youth that were similar to those found in our prior work. These included a group with persistent intellectual disability (ID; 45%), a group with substantial increases in IQ (CHG; 39%), and a group with persistently average or above IQs (P‐High; 16%). By T3, the groups did not differ in ADOS‐2 calibrated severity scores (CSS), and there were no group differences between Vineland (VABS) communication scores in CHG and P‐High. T1‐T3 externalizing behaviors declined significantly for CHG, however, there were no significant T3 group differences between internalizing or externalizing symptoms. T1 correlates for CHG and P‐High versus ID group membership included higher VABS communication and lower ADOS‐2 CSS. A T1 to T2 increase in VABS communication scores and a decline in externalizing predicted CHG versus ID group membership, while T1 to T2 improvement in VABS communication and reduction in ADOS‐2 CSS predicted P‐High versus ID group membership. Conclusions Autistic youth exhibit consistent IQ developmental trajectories from early childhood through preadolescence. Factors associated with trajectory group membership may provide clues about prognosis, and the need for treatments that improve adaptive communication and externalizing symptoms.
INTRODUCTION
Given the heterogeneity of autism (Geschwind & Levitt, 2007), it remains difficult to provide reliable answers about what the future holds for young autistic children. Some never acquire functional spoken language, sustain close interpersonal relationships outside of family members or caregivers, or live independently. Others develop meaningful reciprocal friendships, obtain post-secondary education, and work and live in the community (Mason et al., 2021). Some even "lose" their autism diagnoses (Fein et al., 2013). Intellectual ability level, as assessed using IQ or a developmental quotient (DQ) (both referred to here as IQ), is perhaps the most significant predictor of outcomes across key life domains for autistic individuals (Miller & Ozonoff, 2000; Munson et al., 2008). Early IQ also is the strongest predictor of adult outcomes in autistic individuals.
While there have been multiple studies examining the association between intellectual functioning in childhood and later outcomes, few have been longitudinal, and fewer still have investigated IQ-based subgroups/phenotypes using data-driven or clinically based clustering strategies. A first study to isolate IQ-based subgroups using data-driven methods identified four unique groups based on IQ level and the relative strength of verbal versus non-verbal abilities in 2- to 5½-year-olds (Munson et al., 2008). Two subsequent studies employed clinical grouping methods. The first examined a prospective longitudinal cohort of 85 children assessed at 2, 3, and 19 years. They used age-19 IQ to group participants into VIQ < 70 and VIQ > 70 subgroups who did and did not retain their diagnoses.
Eighty-five percent of the group remaining intellectually disabled could be identified from early IQ scores. Participants losing their autism diagnosis received more early intervention and exhibited early reductions in restricted and repetitive behaviors. The second study using a clinical grouping approach assessed participants at ages 2 and 13 years and assigned children to best outcomes (IQ > 80 with no diagnosis of autism by the second assessment; 16%), more able (IQ > 80 throughout; 20%), and more challenged (IQ < 80; 63%) groups (Zachor & Ben-Itzchak, 2020). The more challenged group showed decreased cognitive ability and increased social and repetitive behavior severity over time.
To the best of our knowledge, a study by our group has been the only prospective longitudinal study to use an empirical, data-driven approach to isolate developmental trajectories of intellectual functioning in children as young as 2-8 years old (Solomon et al., 2018). Four distinct groups were identified. Two had persistent intellectual disability (ID) (43% of the sample), one had IQs starting in the intellectual disability range that then increased by at least 2 standard deviations (35%), and one had IQs remaining in the average or better range over time (22%). Communication and social adaptive functioning lagged IQ in all autism groups but not in non-autistic groups. While internalizing symptoms decreased over time for all groups, externalizing symptoms declined only for the group experiencing substantial increases in IQ.
The current study aims to extend our past investigation of trajectories of IQ development in one of the few relatively large, cognitively heterogeneous, and recent longitudinal cohorts - the Autism Phenome Project (APP) - by adding a third data point from our middle childhood assessment and by investigating additional developmental issues pertinent to the preadolescent period. We again isolate phenotype groups based on IQ and characterize them based on autism symptoms, communication adaptive functioning, and problem behavior symptoms, including internalizing and externalizing. To gain insight into predictors of later childhood/preadolescent outcomes, we then investigate variables assessed at or before T1 and changes in variables between T1 and T2. These analyses focus on group differences in potential predictors for children who remained in the ID group versus those who did not by T3.
Key points

• In this study of the intellectual development of autistic individuals from early childhood through age 12, we found there were three IQ trajectories: a group with intellectual disability from early childhood through preadolescence (ID; 45%), a group whose IQs increased at least 1 standard deviation, referred to as Changers (CHG; 39%), and a group whose IQs were in the average or above range throughout the period (P-High; 16%).
• Although autistic youth exhibited lower adaptive functioning than would be expected based on IQ, by preadolescence there were no significant differences in adaptive communication between the CHG and P-High groups.
• Early correlates of being in the CHG or P-High groups versus the ID group included stronger early VABS communication scores and lower ADOS CSS.
• Improved communication adaptive functioning and decreased externalizing between T1 and T2 were markers of becoming a member of CHG versus ID, while reduced ADOS-2 CSS and improved adaptive communication were predictive of being in P-High versus ID.
• Findings suggest that early communication adaptive functioning may be a stronger prognostic marker than IQ scores, and that communication adaptive functioning and externalizing symptoms may be treatment targets that are associated with later improvements in intellectual ability levels.

METHOD

Participants

Participants were members of the longitudinal APP cohort, which began recruiting both autistic and typically developing children through an internal database and advertisements placed with local providers and other organizations and groups known to be involved with young autistic children and their families, starting in 2006.
Baseline assessments were conducted in children at 2-5 years of age, followed by longitudinal assessments across childhood. Four total assessments have been completed. A fifth is in progress and the cohort has been expanded. To increase female representation within the APP cohort, we initiated the Girls with Autism-Imaging of Neurodevelopment (GAIN) study in 2014. All participants in the GAIN study are automatically included in the APP dataset. This explains why the gender ratio in new participants is enriched for females. Inclusion criteria for autism were based on the NIH Collaborative Programs of Excellence in Autism as described in our prior study (Solomon et al., 2018). Although the full cohort included TD children, we examined IQ trajectory classes within the autistic group and thus excluded TD participants from analyses. IQ/DQ assessments were completed at three of these assessment points, which we refer to as T1 (mean age = 3.0 years, SD = 0.5, n = 373); T2 (mean age = 5.6 years, SD = 0.9, n = 154); and T3 (mean age = 11.5 years, SD = 0.9, n = 116). One hundred and eighty-two autistic participants had IQ data only at T1, 112 participants had data at two timepoints (T1 and T2: 75, T1 and T3: 37), and 79 participants had data at all three timepoints. See Table 1 for a summary of demographic and clinical characteristics including the IQ scores of the entire sample across the three assessments. We included all autistic APP participants with IQ data at T1 in our analyses. Supplementary Table S1 compares the demographic and clinical characteristics of the participants with complete data versus those with only 1 follow-up visit and those with only baseline data to illustrate their similarity to the entire sample. We did not find a systematic pattern of IQ differences for children having fewer visits as compared to those with complete data. The only other observed characteristic significantly related to missingness was sex, because the most recent participants were from the GAIN cohort. Thus, sex was included as a covariate in all models.
Statistical analyses
We first identified distinct IQ-based subgroups and their differential developmental trajectories by conducting a latent class growth analysis (LCGA) of autistic participants' full-scale IQ scores using Mplus 8 (Muthen, 2017). All participants with at least one timepoint were included (n = 373), and both linear and quadratic age-based models were evaluated for best fit. Models were estimated using full-information maximum likelihood, which permitted us to include the participants with missing data, under the missing-at-random assumption. Information-heuristic (e.g., information criterion values) and inferential (e.g., likelihood ratio tests) relative fit comparisons were used to select the best-fitting solution. Information-heuristic indices include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and sample size-adjusted BIC (SBIC), for which lower values indicate better fit, as well as the approximate Bayes Factor (BF) (Wasserman, 2000). BF compares a larger model with a smaller one and a higher score indicates the larger model is the more probable correct model (values between 1-3 represent weak, 3-10 moderate, and >10 strong evidence for the larger model). As an inferential index, we used the approximate correct model probability (CMP) (Schwarz, 1978), which compares a single model versus all other models under consideration; models with a CMP >0.10 should be considered as candidate models. We used the highest posterior probability from the best fitting model to assign each participant to their most likely subgroup.
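To make the selection indices concrete, here is a minimal Python sketch that turns a set of BIC values into Schwarz weights, approximate correct model probabilities (CMP), and an approximate Bayes factor; the BIC values are hypothetical placeholders, not the fitted values from this study.

import math

def compare_models(bic):
    # Schwarz weights: proportional to exp(-BIC/2); shifting by the minimum
    # BIC keeps the exponentials numerically stable.
    b_min = min(bic.values())
    weights = {m: math.exp(-(b - b_min) / 2.0) for m, b in bic.items()}
    total = sum(weights.values())
    for model, w in sorted(weights.items()):
        # Approximate correct model probability: each model against all others.
        print(f"{model}: CMP = {w / total:.3f}")
    # Approximate BF for the larger (3-class) vs. smaller (2-class) model:
    bf = math.exp((bic["2-class"] - bic["3-class"]) / 2.0)
    print(f"BF(3-class vs 2-class) = {bf:.2f}")  # 1-3 weak, 3-10 moderate, >10 strong

# Hypothetical BIC values, for illustration only.
compare_models({"1-class": 8470.0, "2-class": 8395.0, "3-class": 8381.0, "4-class": 8388.0})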
Next, we examined differences in trajectories of clinical characteristics for the identified subgroups using linear mixed effects models (Laird & Ware, 1982). To account for uncertainty in subgroup assignment, sensitivity analyses were performed 100 times (i.e., once for each draw of class membership) and results were combined across draws using standard methods for multiple imputation for missing data (Rubin, 1987). The same strategy was employed to examine the robustness of the predictors of trajectory membership.
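As an illustration of how estimates can be combined across repeated draws, the sketch below applies Rubin's (1987) rules to pool a point estimate and its standard error over M analyses; the numbers are toy values standing in for the 100 draws described above.

import math

def pool_rubin(estimates, variances):
    # Pool one parameter across M draws: estimates are per-draw point
    # estimates, variances are their squared standard errors.
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled estimate
    u_bar = sum(variances) / m                              # within-draw variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-draw variance
    t = u_bar + (1 + 1 / m) * b                             # total variance
    return q_bar, math.sqrt(t)

# Toy numbers standing in for repeated draws of one fixed-effect estimate.
est = [0.42, 0.45, 0.40, 0.44]
var = [0.010, 0.012, 0.011, 0.010]
print(pool_rubin(est, var))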
RESULTS
At T1, a significant proportion of autistic participants achieved the lowest possible MSEL standard score, so verbal, nonverbal, and full-scale scores were expressed as developmental quotients (ratio IQs). The best-fitting solution comprised three IQ trajectory groups: a group with persistent intellectual disability ("ID"), a group with substantial IQ increases ("Changers" [CHG]), and a third group ("Persistently High IQ" [P-High]) that presented a trajectory that demonstrated relative stability with a gradual increase during childhood.
See Figure 1. The average assignment probabilities for the subgroup classes were 0.80, 0.85, and 0.86, respectively. Group membership was very similar to that identified in our previous manuscript using data from T1 and T2 only (Table S3). Demographic and clinical characteristics for all subgroups across the three timepoints are presented in Table S4.
To affirm that the IQ increases of the CHG and other groups did not simply reflect language acquisition and the consequent increase in VIQ, we examined those participants with FSIQ changes of 15 points or more (1 standard deviation) from T1 to T3. Notably, for CHG, 89% also showed increases in NVIQ, while 84% experienced changes in both VIQ and NVIQ (Table S5). We also completed trajectory analyses using NVIQ and VIQ. Here we found that 87.8% of those categorized in CHG in the current analysis would continue to be so classified if NVIQ were used. These percentages were 55.9% for P-High and 82.6% for ID. Values were all over 80% when VIQ was used (Table S6).
(2) ADOS-2 Calibrated Severity Score (CSS): Parameter estimates for all mixed-effects models fitted to clinical variables and adjusted for sex are summarized in Table S7. In the CHG and P-High groups, ADOS-2 CSS decreased from T1 to T2, although it returned to T1 levels by late middle childhood/preadolescence (T1 vs. T3, CHG: p = 0.95; P-High: p = 0.94). For the ID group, ADOS-2 CSS scores remained consistent from T1 to T3 (p = 0.98). By T3, the three groups did not differ in ADOS-2 CSS. See Figure 2A.
The developmental pattern of autism symptom severity change has been studied previously by our group in a smaller sample not including all data points (Waizbard-Bartov et al., 2022). The current results do not simply recapitulate that study, given that there were no significant associations between IQ trajectory membership and its three groups (defined by increasing, decreasing, and stable calibrated ADOS-2 CSS). See Table S8.
(3) Communication Adaptive Functioning: From T1 to T3, CHG significantly increased in VABS communication score (p = 0.03), while ID decreased and P-High remained relatively stable (ID: p < 0.001, P-High: p = 0.11; Figure 2B). Thus, while differences between the three subgroups were present at T1, CHG and P-High showed no communication score differences by T3 (p = 0.66), and both were significantly higher than ID (both p < 0.001).
(4) Internalizing and Externalizing Symptoms: The three autistic subgroups had similar CBCL internalizing subscale scores at T1.
By T3, the score for the ID group decreased, although this reduction was not significantly different from that found in the other groups, and there were no group differences in scores at T3 (after adjusting for multiple comparisons, all p > 0.06, Figure 2C). On the externalizing subscale, the three groups also had comparable scores at T1. The CHG group showed a significant externalizing score decline from T1 to T3 (p < 0.001); however, here too, none of the groups differed on this variable at T3.

FIGURE 1 IQ trajectories of the three full-scale IQ subgroups: Changers (CHG), persistently high IQ (P-High) and persistent intellectual disability (ID).
(5) Demographic Characteristics and Loss of Diagnosis: The three autism subgroups did not differ in sex composition or maternal and paternal age at childbirth. Sensitivity analysis results (Supplementary Tables S9 and S10) supported the primary analyses. While the magnitude of the estimates generally decreased slightly after accounting for uncertainty in group assignment, all primary analysis findings remained significant.
DISCUSSION
We extended the study of the trajectories of intellectual development of autistic individuals into late middle childhood/preadolescence in the cognitively heterogeneous APP cohort. Consistent with our prior work, autistic participants were assigned to a group with intellectual disability from early childhood through preadolescence (ID; 45%), a group whose IQs increased substantially during early childhood, referred to as Changers (CHG; 39%), or a group whose IQs were in the average or above range throughout the period (P-High; 16%). Unlike our prior study, where P-High ADOS-2 CSS scores declined, the new groups did not differ with respect to autism severity at T3. Between middle childhood and preadolescence, VABS communication scores increased in CHG, decreased in ID, and stayed the same in P-High, such that there were no T3 group differences between CHG and P-High. T1-T3 externalizing declined significantly for CHG, although there were no T3 group differences for internalizing or externalizing.
T1 correlates for CHG and P-High versus ID group membership at T3 included higher VABS communication and lower ADOS-2 CSS. A T1 to T2 increase in VABS communication scores and a decline in externalizing predicted CHG versus ID group membership at T3, while a T1 to T2 improvement in VABS communication and a reduction in ADOS-2 CSS predicted P-High versus ID group membership.
The rapid IQ gains in the CHG group that we found in prior work slowed after middle childhood. While this is not consistent with two recent studies that report mean IQ improvements through adolescence (Prigge et al., 2021; Simonoff et al., 2020), these studies examined mean differences rather than trajectories, and Prigge et al.
investigated only intellectually able participants. Also noteworthy is that the positive T1-T2 autism symptom severity and communication adaptive functioning changes in CHG and P-High also slowed between T2 and T3. Waizbard and colleagues observed a similar pattern when they focused on autism symptom severity (Waizbard-Bartov et al., 2022). While we cannot entirely rule out that the reversion back to original scores was a statistical artifact, this pattern was not present for all measures or groups, providing support for a true reversion. Perhaps the complexity of the social and cognitive developmental tasks of early adolescence exposes more autism-related traits, resulting in relative skill declines. In fact, there is a growing consensus that the period of transition to school may be a turning point in autistic development (Georgiades et al., 2022), with age 6 representing a time of plateauing in early symptom improvement.
Only the CHG group experienced significant reductions in externalizing symptoms between T1 and T3. While internalizing scores in P-High and CHG did not increase with the beginning of adolescence, as might be expected (Solomon et al., 2012), the ID group experienced some reduction in these symptoms, as has been found by others (Edirisooriya et al., 2021). However, it is not clear that internalizing symptoms, and especially anxiety, can be well measured in children with intellectual disability (Kerns et al., 2021), so these findings must be interpreted with caution. Adaptive functioning is also known to lag IQ in autism (Duncan & Bishop, 2015). In fact, in our sample, although T1 IQ and VABS communication were highly correlated overall (r = 0.70), correlations between the VABS and IQ differed substantially across the trajectory groups, ranging from r = 0.5 for ID and 0.46 for P-High to 0.25 for CHG.
Another clinically interesting observation with prognostic implications was that, contrary to popular clinical belief, language acquisition and VIQ change were not the sole drivers of overall intellectual development. Instead, we found that for CHG, 89% showed increases in NVIQ, while 84% experienced changes in both VIQ and NVIQ (Table S5). Thus, NVIQ did not become stable by age 3 in most participants, and even individuals with moderate intellectual disability could become members of CHG.
A second set of clinically and potentially intervention-relevant markers were those associated with T1-T2 changes. Here we found that T1 to T2 increases in VABS communication scores predicted membership in CHG versus ID, and that continued increases in VABS communication rendered the CHG and P-High groups equivalent by T3 and distinguished both from ID. We also found that T1-T2 decreases in externalizing symptoms were more characteristic of the CHG versus the ID group, and that T1-T2 decreases in ADOS-2 CSS were more characteristic of participants in the P-High group. Although the precise cause-and-effect associations between IQ, adaptive functioning, externalizing, and autism symptoms remain unclear, our results suggest that each of these areas can improve. Furthermore, they may be critical treatment targets that drive the development of intellectual functioning, and fortunately, effective interventions in these areas have been developed (Kenworthy et al., 2014; Kim et al., 2021; Solomon et al., 2008).
This study had several limitations. First, some have shown that the DAS and MSEL are not entirely comparable, especially in the middle IQ ranges (Farmer et al., 2016), although others find no systematic differences (Bishop et al., 2011). Additionally, neither measure does a good job of assessing profound intellectual disability, requiring us to use floor scores for 28 participants. Second, while the LCGA and linear mixed-effects models used were able to handle missing data and produce valid results in the presence of data missing at random, their results may be biased if missingness depends on the missing values themselves. While formally testing whether the assumption of missingness at random (MAR) holds would require data from non-responders, our examination of missingness thus far suggests that MAR is a plausible assumption here. Finally, by focusing on IQ, we adopted a very narrow definition of future outcomes. Recent studies have rightly encouraged broadening the meaning of outcomes (Mason et al., 2021).
In conclusion, we showed that autistic youth from our middle childhood assessment continued to display IQ trajectories that were similar to those we observed earlier in childhood. We identified early and ongoing correlates of late middle childhood/preadolescence outcomes which hold the potential to provide critical information related to prognosis and treatment development.
"year": 2023,
"sha1": "d15d803fbcf7bd45f98713223545c5d950ca75c5",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcv2.12127",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4af4ff0cff513137d04ef8596a6751ed3ac0c969",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
On Non-central Stirling Numbers of the First Kind
It is shown in this note that non-central Stirling numbers s(n, k, α) of the first kind naturally appear in the expansion of derivatives of the product of a power function and a logarithm function. We first obtain a recurrence relation for these numbers, and then, using the Leibniz rule, we obtain an explicit formula for them. We also obtain an explicit formula for s(n, 1, α), and then derive several combinatorial identities related to these numbers.
Introduction
We are dealing here with a special kind of numbers introduced by D. S. Mitrinović in his note [4]. In the paper [5], tables are given for the numbers which we call non-central Stirling numbers of the first kind. Following [3], they will be denoted by s(n, k, α). Several other names are in use for these numbers. One of them is r-Stirling numbers, as in [1]; the definition in that paper is restricted to the case when α is a nonnegative integer and α ≤ n. L. Carlitz [2] used the name weighted Stirling numbers. In the well-known encyclopedia [6] they are called the generalized Stirling numbers. Here we use the name and the notation from the book [3]. For instance, (α)_n = α(α − 1) · · · (α − n + 1) are falling factorials, and s(n, k) are Stirling numbers of the first kind.
We shall investigate derivatives of the function F(t) = t^{-\alpha} f(\ln t), where f is a sufficiently differentiable function, obtaining them in two different ways.

Theorem 1. Let α be real, and n a nonnegative integer. Then

(1)   \frac{d^n}{dt^n} \big( t^{-\alpha} f(\ln t) \big) = t^{-\alpha-n} \sum_{i=0}^{n} s(n,i,\alpha)\, f^{(i)}(\ln t),

where s(n, i, α), (0 ≤ i ≤ n), are polynomials in α with integer coefficients.
Proof. The assertion is true for n = 0 if we take s(0, 0, α) = 1. Taking

\frac{d}{dt} \big( t^{-\alpha} f(\ln t) \big) = t^{-\alpha-1} \big( -\alpha f(\ln t) + f'(\ln t) \big),

we see that the assertion is true for n = 1, with s(1, 0, α) = −α and s(1, 1, α) = 1. Suppose that the assertion is true for n ≥ 1.

Taking the derivative in (1) we obtain

\frac{d^{n+1}}{dt^{n+1}} \big( t^{-\alpha} f(\ln t) \big) = t^{-\alpha-n-1} \Big( -(\alpha+n) \sum_{i=0}^{n} s(n,i,\alpha) f^{(i)}(\ln t) + \sum_{i=0}^{n} s(n,i,\alpha) f^{(i+1)}(\ln t) \Big).

Replacing i + 1 by i in the second sum we obtain

(2)   s(n+1, i, \alpha) = s(n, i-1, \alpha) - (\alpha+n)\, s(n, i, \alpha), \quad (1 ≤ i ≤ n),

together with s(n+1, 0, α) = −(α + n) s(n, 0, α). It follows that the assertion is true if we take s(n + 1, n + 1, α) = s(n, n, α).
The preceding equations are the well-known recurrence relations for non-central Stirling numbers of the first kind [3, pp. 316]. Note 1. It is obvious that s(n, i, 0) = s(n, i) are the Stirling numbers of the first kind.
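The recurrence relations are easy to check numerically. The following sketch assumes the convention reconstructed above (equation (2), anchored by Note 1 and by s(2, 1, α) = −2α − 1) and generates rows of s(n, k, α) symbolically.

from sympy import symbols, expand

a = symbols('alpha')

def s_rows(n_max):
    # Rows of s(n, k, alpha) from s(n+1, k, a) = s(n, k-1, a) - (a + n) s(n, k, a),
    # starting from s(0, 0, a) = 1.
    rows = [[1]]
    for n in range(n_max):
        prev = rows[-1]
        new = [0] * (n + 2)
        for k, c in enumerate(prev):
            new[k] = expand(new[k] - (a + n) * c)
            new[k + 1] = expand(new[k + 1] + c)
        rows.append(new)
    return rows

rows = s_rows(4)
print(rows[2][1])                       # -2*alpha - 1, i.e. s(2, 1, alpha)
print([c.subs(a, 0) for c in rows[4]])  # [0, -6, 11, -6, 1] = s(4, k), as in Note 1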
By the use of the Leibniz formula we shall obtain an explicit expression for s(n, k, α). The following equation holds:

(3)   \frac{d^n}{dt^n} \big( t^{-\alpha} f(\ln t) \big) = \sum_{k=0}^{n} \binom{n}{k} \big( t^{-\alpha} \big)^{(n-k)} \big( f(\ln t) \big)^{(k)}.

First we have (t^{-\alpha})' = -\alpha t^{-\alpha-1}. Using induction it is easy to prove that

\big( t^{-\alpha} \big)^{(m)} = (-\alpha)_m\, t^{-\alpha-m}.

Taking in particular f(x) = e^{\beta x}, so that f(\ln t) = t^{\beta}, we obtain f^{(i)}(\ln t) = \beta^{i} t^{\beta} and (f(\ln t))^{(k)} = (\beta)_k\, t^{\beta-k}. Replacing these in (1) and (3), and comparing the coefficients of the powers of β with the help of (\beta)_k = \sum_{i} s(k, i)\, \beta^{i}, we have the following.

Theorem 2. Let α be a real number, and let n and i be nonnegative integers with i ≤ n. Then

(4)   s(n, i, \alpha) = \sum_{k=i}^{n} \binom{n}{k} (-\alpha)_{n-k}\, s(k, i).
Some combinatorial identities
Taking i = 1 in (4), and using s(k, 1) = (−1)^{k−1} (k − 1)!, we obtain the following:

Corollary 1. Let α be a real number, and n be a positive integer. Then

s(n, 1, \alpha) = \sum_{k=1}^{n} (-1)^{k-1} (k-1)! \binom{n}{k} (-\alpha)_{n-k}.

For s(n, 1, α) we have the following recurrence relation:

(5)   s(n+1, 1, \alpha) = -(\alpha + n)\, s(n, 1, \alpha) + (-\alpha)_n.

In particular, we have s(2, 1, α) = −2α − 1.
We shall now prove that the polynomials r(n, 1, α), (n = 1, 2, . . .), defined by

r(n, 1, \alpha) = (-1)^{n-1} \sum_{k=0}^{n-1} (k+1)\, \bar{s}(n, k+1)\, \alpha^{k},

where \bar{s}(n, k) denote the unsigned Stirling numbers of the first kind, satisfy the above recurrence relation (5). For n = 1 it is obviously true. Using the two-term recurrence relation \bar{s}(n, k) = \bar{s}(n-1, k-1) + (n-1)\, \bar{s}(n-1, k) for Stirling numbers of the first kind we obtain

r(n, 1, \alpha) = (-1)^{n-1} \sum_{k=0}^{n-1} (k+1) \big( \bar{s}(n-1, k) + (n-1)\, \bar{s}(n-1, k+1) \big) \alpha^{k}.

Since \bar{s}(n − 1, 0) = 0, by replacing k + 1 instead of k in the first sum on the right we can express both sums through row n − 1, and the second sum equals −(n − 1) r(n − 1, 1, α). Furthermore, a well-known property of Stirling numbers implies

\alpha(\alpha+1) \cdots (\alpha+n-2) = \sum_{j=1}^{n-1} \bar{s}(n-1, j)\, \alpha^{j-1} \cdot \alpha, \quad \text{so that} \quad (-1)^{n-1} \sum_{j} \bar{s}(n-1, j)\, \alpha^{j} = (-\alpha)_{n-1}.

We thus obtain that r(n, 1, α) satisfies (5). In this way we have proved the following identity:

Theorem 3. Let α be a real number, and n ≥ 1 be an integer. Then:

(6)   s(n, 1, \alpha) = (-1)^{n-1} \sum_{k=0}^{n-1} (k+1)\, \bar{s}(n, k+1)\, \alpha^{k},

where \bar{s}(n, k + 1) are unsigned Stirling numbers of the first kind.
Some particular values of α in (6) give several interesting combinatorial identities. For α = −1 we obtain an identity expressing factorials in terms of Stirling numbers of the first kind:

(n-2)! = \sum_{k=0}^{n-1} (-1)^{k+1} (k+1)\, \bar{s}(n, k+1), \quad (n ≥ 2).

For α = 1 we obtain a formula for the sum of reciprocals of natural numbers:

n! \Big( 1 + \frac{1}{2} + \cdots + \frac{1}{n} \Big) = \sum_{k=0}^{n-1} (k+1)\, \bar{s}(n, k+1).
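Both particular identities can be verified with exact arithmetic. The short script below checks them for small n, generating the unsigned Stirling numbers from their standard recurrence.

from fractions import Fraction
from math import factorial

def stirling1_unsigned(n):
    # Row n of unsigned Stirling numbers of the first kind:
    # x(x + 1)...(x + n - 1) = sum_k |s(n, k)| x^k.
    row = [1]
    for m in range(n):
        new = [0] * (len(row) + 1)
        for k, c in enumerate(row):
            new[k] += m * c
            new[k + 1] += c
        row = new
    return row

for n in range(2, 8):
    sbar = stirling1_unsigned(n)
    harmonic = sum(Fraction(1, j) for j in range(1, n + 1))
    # alpha = 1: n!(1 + 1/2 + ... + 1/n) = sum_k (k + 1) |s(n, k + 1)|
    assert sum((k + 1) * sbar[k + 1] for k in range(n)) == factorial(n) * harmonic
    # alpha = -1: (n - 2)! = sum_k (-1)^(k + 1) (k + 1) |s(n, k + 1)|
    assert sum((-1) ** (k + 1) * (k + 1) * sbar[k + 1] for k in range(n)) == factorial(n - 2)
print("identities verified for n = 2, ..., 7")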
"year": 2009,
"sha1": "dea307af7f8b31f53c30f24284888d8420aabf10",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "dea307af7f8b31f53c30f24284888d8420aabf10",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
THE IMPACT OF DIGITAL TRUST ON CUSTOMER SATISFACTION AND LOYALTY (THE CASE OF DIGITALIZATION IN CONTAINER SHIPPING SERVICES IN INDONESIA)
The COVID-19 pandemic has driven digital transformation in many industries. Digitalization is also developing in and affecting non-technology industries, including the containerized cargo shipping industry in Indonesia, which is experiencing digitalization of its services and operations. Indonesian container shipping companies have prepared strategies to maintain long-term relationships by increasing customer satisfaction and loyalty through digitalization. The purpose of this study was to determine the effect of digital trust, compared to service quality, on customer satisfaction and loyalty. Survey data were collected from 152 freight forwarding companies, with respondents working in operations, customer service, sales, and marketing departments. The hypotheses were tested using a structural equation model (SEM). The research found that digital trust has no direct relationship with customer loyalty; however, when mediated by customer satisfaction, digital trust affects customer loyalty. Service quality remains a determinant of customer satisfaction and loyalty. This research shows that customer loyalty is influenced not only by traditional service quality but also by digital trust in the modern era, which is still relatively new in the container shipping industry in Indonesia.
INTRODUCTION
The container shipping industry uses a B2B business model, with customers from EMKL or freight forwarding companies providing services to freight owners. According to Kalafatis and Cheston (1997), cited by Balci and Cetin (2017), B2B business models are more complex than B2C in the container shipping industry in Turkey. Customer loyalty depends not only on the value of the product or service, but also on the value of the benefits that customers obtain. Russo and Confente (2017) explained that in the B2B business model, the customer's focus is on the company's financial performance, and customer loyalty is the main target of service providers. Several factors influence customer loyalty in B2B business models; according to Russo and Confente (2017), these include high switching costs, customer satisfaction, service quality, trust, and commitment to continue using the service provider.
The rapid growth in ocean cargo shipping is encouraging container shipping businesses to purchase larger-capacity container carriers. This continuous purchase of container vessels leads to oversupply, creating an imbalance between supply and demand and resulting in fierce competition and decreased profitability (Glave, Joerss, & Saxon, 2014). Pressure on the sales division to fill ship capacity has triggered hasty decisions on shipping rates, causing price competition to the detriment of the industry (Glave et al., 2014).
To maintain profitability, container shipping companies need to maintain and win customer loyalty by providing superior service. Increasing customer satisfaction is also key in responding to intensive competition (Balci et al., 2019; Chao and Chen, 2015).
A. The Effect of Service Quality on Customer Satisfaction
The first hypothesis in this study implies that service quality has a significant impact on customer satisfaction in the context of container shipping services. Respondents' assessments of customer satisfaction are based on service experiences with the container shipping company that they consider to be of high quality. The hypothesis test showed a critical t-value of 10.046 and a p-value of 0.000. This finding is consistent with the results of previous studies confirming that service quality affects customer satisfaction, especially in the service industry, B2B services, the logistics industry, shipping and, specifically, the container shipping industry in various countries. The implication is that container shipping companies need to prioritize and improve their service quality to achieve optimal levels of customer satisfaction (Akıl and Ungan, 2021; Chen et al., 2009; Fachmi et al., 2020; Kang and Kim, 2009; Lie et al., 2019; Roh et al., 2021; Yadav and Rai, 2019; Yorulmaz and Taş, 2022; Yuen and Thai, 2015).
Service quality plays a crucial role in the service industry, especially in the context of container shipping services. The study notes the importance of service quality in influencing customer satisfaction, not just in one particular generation but across the entire spectrum of generations, including baby boomers, Gen X, and Gen Y. The majority of respondents, in the age range of 31-50 years, indicated that service quality indicators such as sales, customer service, and digitalization contribute positively to customer satisfaction. These findings support previous research, as expressed by Hirata (2019), that the three strongest characteristics of service quality are sales, customer service, and digitalization. In the container shipping industry in Indonesia, service quality is also one of the most important factors for customer satisfaction, because the quality of service obtained from container shipping companies determines the speed with which customers can provide shipping services, which in turn improves the efficiency of the delivery services that customers provide to the owners of the goods.
B. The Effect of Service Quality on Customer Loyalty
The second hypothesis test confirms that service quality also has an effect on customer loyalty in the context of container shipping companies. Respondents who experience quality service tend to show loyalty to the company. The analysis resulted in a critical t-value of 3.161 and a p-value of 0.002. Some research, such as in the banking industry (Fattah Al-Slehat, 2021; Yadav and Rai, 2019) and in the container shipping industry, also shows an influence of service quality on customer loyalty (Balci, 2021a; Balci et al., 2019; Gil-Saura et al., 2018; Subaebasni et al., 2019). However, with a p-value of 0.002, the significance of service quality for customer loyalty is not as strong as that of service quality for customer satisfaction. These results partly corroborate research conducted in other industries, by Lie et al. (2019) on app-based transportation services and by Fachmi et al. (2020) in the insurance industry, which found that service quality does not always have a significant influence on customer loyalty. The implication is that container shipping companies need to understand that while service quality can affect customer satisfaction, it does not always directly create customer loyalty. Additional efforts may be needed to reinforce other factors that influence customer loyalty within this industry.
C. The Effect of Digital Trust on Customer Satisfaction
The third hypothesis asserts that digital trust affects customer satisfaction in the context of container shipping companies. Respondents who experience the benefits of digital information or applications are likely to be satisfied with the company's services. Analysis of the hypothesis test yielded a critical t-value of 6.285 and a p-value of 0.000. Trust in digital information and the benefits of these applications provide efficiency in customers' operations. This finding is also supported by direct interviews with customers, who state that trust in digital applications has saved them operational costs. Previous research has also shown that digital trust has a positive impact on customer satisfaction. The implication is that container shipping companies need to continue to build customers' digital trust by strengthening and optimizing the use of digital information and applications in order to provide more significant benefits for customers and thereby increase the overall level of customer satisfaction. Balci (2021a) likewise found that, in the container shipping industry, digital trust significantly affects customer satisfaction.
D. The Impact of Digital Trust on Customer Loyalty
From the results of the hypothesis test on the fourth hypothesis, it appears that digital trust does not affect customer loyalty. Having trust in digital applications, according to respondents, does not influence their loyalty. The hypothesis test produced a critical t-value of 0.657 and a p-value of 0.511. This result contradicts research conducted on the container shipping industry in Turkey by Balci (2021a), which states that digital trust has an influence on customer loyalty; even there, however, the digital trust variable showed only a moderate level of significance for customer loyalty. In addition, 50% of the respondents were baby boomers or Gen X, which likely affects the relationship between digital trust and customer loyalty. According to Chee (2023), there is a digital divide in the baby boomer generation which causes difficulties in adapting to technology. Certain components must be present for baby boomers and Gen X to form sustained intentions to use technology, so that use becomes a habit or a dependency (Santosa et al., 2021).
E. The Effect of Customer Satisfaction on Customer Loyalty
The fifth hypothesis test shows that customer satisfaction has an influence on customer loyalty. Satisfaction encourages respondents to continue making transactions with container shipping companies. This is shown by the hypothesis test, which produced a critical t-value of 5.929 and a p-value of 0.000. This result further strengthens research on the effect of customer satisfaction on customer loyalty in the service industry (Fachmi et al., 2020; Gecit and Taskin, 2020; Lie et al., 2019; Uyar, 2019) and in the container shipping industry (Akıl and Ungan, 2021; Balci, 2021a; Gil-Saura et al., 2018; Subaebasni et al., 2019; Wen, 2020). In the container shipping industry in Indonesia, achieving customer satisfaction can thus also sustain customer loyalty.
F. The Effect of Service Quality on Customer Loyalty mediated by Customer Satisfaction
The sixth hypothesis test shows that customer satisfaction mediates the effect of service quality on customer loyalty. Respondents who receive quality service from the provider are satisfied, and through this satisfaction container shipping companies gain customer loyalty. The hypothesis test produced a critical t-value of 5.040 and a p-value of 0.000. This result further corroborates previous research on the service industry, which has shown that service quality, mediated by customer satisfaction, affects customer loyalty (Fachmi et al., 2020; Giao et al., 2020; Lie et al., 2019; Yadav and Rai, 2019).
G. The Effect of Digital Trust on Customer Loyalty mediated by Customer Satisfaction
The seventh hypothesis test shows that customer satisfaction mediates the effect of digital trust on customer loyalty. Respondents who have digital trust become loyal to container shipping companies if they also obtain service satisfaction. The hypothesis test produced a critical t-value of 4.462 and a p-value of 0.000. This result enriches the existing literature, especially for the container shipping industry, and corroborates the results of Balci (2021a) on the mediating effect of customer satisfaction between digital trust and customer loyalty.
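Hypotheses H6 and H7 are tests of indirect (mediated) effects. As a generic illustration of that logic, and not of the study's actual SEM-PLS estimation, the sketch below bootstraps the indirect effect a*b in a simple X -> M -> Y regression setup using simulated data.

import numpy as np

rng = np.random.default_rng(7)

def boot_indirect(x, m, y, n_boot=2000):
    # Percentile bootstrap of the indirect effect a*b in X -> M -> Y,
    # with both paths estimated by ordinary least squares.
    n = len(x)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a_path = np.polyfit(xb, mb, 1)[0]               # X -> M
        design = np.column_stack([mb, xb, np.ones(n)])  # M -> Y, controlling for X
        b_path = np.linalg.lstsq(design, yb, rcond=None)[0][0]
        stats[i] = a_path * b_path
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return stats.mean(), (lo, hi)

# Simulated data in which the effect of X on Y runs mainly through M.
n = 152
x = rng.normal(size=n)                        # e.g., digital trust
m = 0.6 * x + rng.normal(size=n)              # e.g., customer satisfaction
y = 0.5 * m + 0.05 * x + rng.normal(size=n)   # e.g., customer loyalty

est, (lo, hi) = boot_indirect(x, m, y)
print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")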
CONCLUSION
This research analyzes the effect of service quality and digital trust on customer satisfaction and customer loyalty, as a case study of container shipping companies in Indonesia. Customer satisfaction is also analyzed as a mediating factor between service quality and digital trust on the one hand and customer loyalty on the other. The survey conducted was limited to the freight forwarding customer segment, namely Marine Cargo Expeditions (EMKL) or freight forwarders located in DKI Jakarta, Indonesia.
The analysis technique used is SEM-PLS to test the conceptual model. Validity and reliability tests were carried out to determine how valid and reliable the indicators used in this study were. One indicator was not valid, namely an indicator of the Digital Trust variable (DT1); it is suspected that the statement for this indicator does not specifically show a correlation with the Digital Trust variable, causing ambiguity and leading respondents to give neutral responses. This indicator was excluded from further analysis in order to obtain more accurate results. The R-square test showed good values, while hypothesis testing of the seven hypotheses proposed in this study rejected one hypothesis.
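For reference, the two reliability criteria commonly reported alongside SEM-PLS validity tests can be computed directly from standardized indicator loadings, as in the sketch below; the loadings shown are hypothetical, not the study's estimates.

def composite_reliability(loadings):
    # Composite reliability (CR) from standardized outer loadings.
    s = sum(loadings)
    e = sum(1 - l ** 2 for l in loadings)  # indicator error variances
    return s ** 2 / (s ** 2 + e)

def ave(loadings):
    # Average variance extracted (AVE).
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a Digital Trust construct
# after dropping the invalid DT1 indicator.
dt = [0.78, 0.83, 0.75, 0.81]
print(f"CR = {composite_reliability(dt):.3f} (>= 0.7 desired)")
print(f"AVE = {ave(dt):.3f} (>= 0.5 desired)")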
Service quality influences customer satisfaction in the container shipping industry. Service quality rests on three foundations, namely representative sales, representative customer service, and digitalization, all of which affect customer satisfaction. Based on the correlations between indicators, the four indicators most crucial to customer satisfaction are the ease of contacting customer service representatives, having a good relationship with sales representatives, quickly getting responses from sales representatives, and customer service that can provide solutions.
Based on the conclusions of the research results and the discussion of the effect of service quality and digital trust on customer loyalty mediated by customer satisfaction, sales representatives and customer service are vital to customer satisfaction. Indonesian container shipping companies are advised to provide continuous training to sales representatives to develop relationships with customers and to communicate with customers quickly and responsively, and to train customer service representatives to provide excellent service, respond quickly to customers, and always be able to provide solutions.
To gain customer loyalty, sales representatives play the most important role. It is recommended that Indonesian container shipping companies set a regular schedule for sales representatives to approach customers on an ongoing basis, so that customers remain loyal to the container shipping company. Indonesian container shipping companies are also advised to improve their organizational and structural approach to continue to earn customer loyalty.
A limitation of the research is that only four theoretical constructs were used as research variables: Service Quality, Digital Trust, Customer Satisfaction, and Customer Loyalty. The research was also limited to 135 respondents domiciled in Jakarta, with a focus on the forwarder segment (EMKL).
"year": 2023,
"sha1": "ee610db58d3cd0f0ea24af55a22969896e6b90bb",
"oa_license": "CCBYSA",
"oa_url": "https://opsearch.us/index.php/us/article/download/87/83",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "82c6f8e88133bbe32662041ae315cf7a185ed7d5",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
227164878 | pes2o/s2orc | v3-fos-license | Impaired NDRG1 functions in Schwann cells cause demyelinating neuropathy in a dog model of Charcot-Marie-Tooth type 4D
Mutations in the N-myc downstream-regulated gene 1 (NDRG1) cause degenerative polyneuropathy in ways that are poorly understood. We have investigated Alaskan Malamute dogs with neuropathy caused by a missense mutation in NDRG1. In affected animals, nerve levels of NDRG1 protein were reduced by more than 70% (p < 0.03). Nerve fibers were thinly myelinated, loss of large myelinated fibers was pronounced, and teased fiber preparations showed both demyelination and remyelination. Inclusions of filamentous material containing actin were present in adaxonal Schwann cell cytoplasm and Schmidt-Lanterman clefts. This condition strongly resembles the human Charcot-Marie-Tooth type 4D. However, the focally folded myelin with adaxonal infoldings segregating the axon found in this study are ultrastructural changes not described in the human disease. Furthermore, lipidomic analysis revealed a profound loss of peripheral nerve lipids. Our data suggest that the low level of mutant NDRG1 is insufficient to support Schwann cells in maintaining myelin homeostasis.
Introduction

Degenerative neuropathies caused by mutations in NDRG1 occur in humans, classified as Charcot-Marie-Tooth type 4D (CMT4D) [1], in Greyhound show dogs [2] and in Alaskan Malamute dogs [3]. Cases of Alaskan Malamute polyneuropathy (AMP) were first described in Norway in the 1980s [4] and the disease was believed eradicated due to breeding programs, but re-emerged in Scandinavia several decades later [3]. AMP is inherited in an autosomal recessive manner and associated with a missense mutation in NDRG1 (p.Gly98Val) [3]. Clinically, the disease is slowly progressive and characterized by tetraparesis, pelvic limb ataxia, exercise intolerance and inspiratory stridor, with onset of clinical signs in adolescence [3][4][5].
The NDRG1 protein is not specific to peripheral nerves and is detected in a wide variety of human, rodent and dog tissues, with the highest levels in epithelial cells and myelinating glial cells [6][7][8]. Still, how NDRG1 mutations lead to neuropathies without clinical signs from other body systems, as well as the specific function of NDRG1 in the peripheral nervous system, remain unclear [8, 9]. The protein is functionally diverse, being involved in several cellular processes, such as vesicular transport [10][11][12], microtubule dynamics [13], centrosome homeostasis [14] and lipid metabolism [15, 16]. The posttranslational processing of NDRG1 is complex and tissue- and cell-specific [7]. Notably, in myelinating Schwann cells high levels of phosphorylated NDRG1 localize to the abaxonal cytoplasm and the outer parts of the Schmidt-Lanterman clefts [7, 17]. In addition to its role in neuropathies, the NDRG1 protein is also reported to be involved in carcinogenesis [18] and metastasis suppression [19], and it counteracts epithelial-mesenchymal transition [20].

Charcot-Marie-Tooth disease (CMT) denominates the most frequent forms of inherited neuropathies in humans. This is a heterogeneous group of diseases, further classified into subtypes based on clinical and pathological phenotype, mode of inheritance, nerve conduction velocity and causative gene [21]. The CMT4 subgroup includes demyelinating neuropathies with autosomal recessive inheritance [22]. One of them, CMT4D, also known as hereditary motor and sensory neuropathy-Lom (HMSNL), is a primary demyelinating neuropathy with onion bulb formation, accumulation of pleomorphic material in the Schwann cell cytoplasm and secondary axonal loss [23]. In contrast, the NDRG1-associated polyneuropathy of Greyhound show dogs was reportedly dominated by axonal changes [2], while descriptions from Alaskan Malamutes differ [3, 4]. However, in-depth studies of nerves from affected Alaskan Malamute dogs have not previously been performed.
Naturally occurring neuropathies in dogs are increasingly recognized as models for human neuropathies [24 , 25] . As opposed to experimental rodents, dogs naturally develop similar diseases to humans. Dogs also have a larger body size and a life-expectancy that is more comparable to this species. Furthermore, dogs share environmental conditions and lifestyle with humans. Together this makes them excellent translational disease models [2] . The fact that dogs can be investigated with sophisticated standardized neurological and electrophysiological tests is a further advantage, as it allows for a detailed characterization of the disease phenotype . Note: All cases except case 1 and 2 were included in [5] . Furthermore, case 3, 5 and 6 were included in [3] .
The aim of this study was to describe in detail the morphology of AMP nerves and to discuss these changes in relation to the cell biology of NDRG1 and the overall clinical presentation. Furthermore, studying Alaskan Malamutes with an NDRG1 mutation is relevant for understanding more about the involvement of NDRG1 in human diseases.
Animals
Nineteen privately owned purebred Alaskan Malamute dogs (14 affected dogs and 5 controls free from clinical signs of polyneuropathy) were included in the study (Table 1; detailed information in Suppl. Table A.1; the number of dogs analyzed with each method and the ages of depicted animals are also provided in the figure legends). Sixteen of the nineteen were genotyped for the NDRG1 allele using the previously described TaqMan assay [3]; twelve dogs were classified as homozygous mutants (mut/mut) and four dogs as homozygous wild type (wt/wt). Whether genotyped or not, all affected dogs (n = 14) were closely related to each other and presented with neurological signs classically associated with AMP. All samples for the study were collected by veterinarians after written consent from the dog owners. No ethics committee approval was required, as all samples were taken as part of standard diagnostic procedures, in vivo (n = 7) and/or postmortem (n = 15), and the investigation did not interfere with or impede other tests. Information regarding sex, age at sampling, results from electrodiagnostic testing (electromyography (EMG) and motor nerve conduction velocity (MNCV)), and clinical course was collected from the medical records.
Tissue sampling
Biopsies from the common fibular nerve and the cranial tibial, biceps femoris and gastrocnemius muscles were taken under general anesthesia as part of the diagnostic workup. Formalin-fixed and fresh samples from both nerve and muscles were shipped by courier to diagnostic laboratories for evaluation. Fixed nerve biopsies were resin-embedded and evaluated in semithin sections (1 μm), while fixed muscle biopsies were paraffin-embedded and routinely stained with hematoxylin and eosin. Unfixed biopsies were transported on cold packs and evaluated cryohistologically with a standard panel of histochemical stains and reactions [26] .
Postmortem examinations were carried out shortly after pentobarbital-euthanasia. Samples for immunohistochemistry and immunofluorescence were fixed in 10% buffered formalin and subsequently paraffin-embedded. Samples for Western blotting and RT-qPCR were snap frozen in isopentane, transferred to liquid nitrogen and stored at −80 °C until analysis. Samples for electron microscopy and nerve fiber teasing were gently separated into individual fascicles and fixed in 2.5% glutaraldehyde in Sorensen's phosphate buffer (0.1 M, pH 7.4) for 4 h at room temperature. For details about sampled nerves from individual dogs see Suppl. Table A.1. In addition, a routine postmortem examination was performed, including sampling from cranial tibial, biceps femoris and gastrocnemius muscles.
Western blotting
Nerve samples from four NDRG1 mut/mut and four NDRG1 wt/wt Alaskan Malamutes were thawed, and the epineurial fat removed. Western blotting was performed as previously described [7]. Protein transfer efficiency and protein loading were assessed by staining total protein on the PVDF membranes with SYPRO® Ruby Protein Blot Stain (Molecular Probes, Thermo Fisher Scientific). Band signals were quantified with ImageQuant TL (GE Healthcare), and statistical analyses were performed with a non-parametric test (Mann-Whitney U-test) in GraphPad Prism (GraphPad Software, San Diego, California, USA).
Statistics (Mann Whitney U-test) were performed in GraphPad Prism.
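The same comparison is easy to reproduce outside GraphPad Prism. The sketch below runs a two-sided Mann-Whitney U-test in Python on hypothetical normalized band intensities with n = 4 per group, as in this study.

from scipy.stats import mannwhitneyu

# Hypothetical normalized band intensities (arbitrary units) standing in
# for the ImageQuant read-outs; these are not the measured values.
wt = [1.00, 0.92, 1.08, 0.97]    # NDRG1 wt/wt controls
mut = [0.27, 0.31, 0.24, 0.33]   # NDRG1 mut/mut cases

stat, p = mannwhitneyu(wt, mut, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
# With n = 4 vs 4 and complete separation, the smallest attainable
# two-sided p is 2/70, i.e. about 0.029, matching the values reported here.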
Processing for transmission electron microscopy and nerve fiber teasing
Processing for transmission electron microscopy and nerve fiber teasing were performed as previously described [27] .
Antibodies
Details about the antibodies used in the different analyses are specified in Table 2 .
Immunofluorescence and immunohistochemistry
Sections of 3-4 μm from formalin-fixed and paraffin-embedded tissues were placed on glass slides (Superfrost Plus®, Menzel Gläser, Thermo Fisher Scientific) and stored at 4 °C until staining. Previously described protocols were used for immunofluorescence [7] and immunohistochemistry [27].
Morphometry
Images from semithin sections of n. fibularis communis (n = 8) or n. tibialis (n = 3) were evaluated with Image-Pro Plus (Media Cybernetics, Rockville, Maryland, USA). The areas of the nerve fibers and axons were measured. Thereafter, their diameters were derived as the diameters of circles of equivalent area [28], and the g-ratios were calculated. Statistics (Mann-Whitney U-test) were performed in GraphPad Prism. An example of the image analysis is provided in Supplementary Fig. 1.
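The diameter and g-ratio computations reduce to simple geometry; a minimal sketch with one hypothetical fiber as input:

import math

def equivalent_diameter(area):
    # Diameter of a circle whose area equals the measured profile area [28].
    return 2.0 * math.sqrt(area / math.pi)

def g_ratio(axon_area, fiber_area):
    # g-ratio = axon diameter / whole-fiber diameter (axon plus myelin).
    return equivalent_diameter(axon_area) / equivalent_diameter(fiber_area)

# One hypothetical fiber (areas in square micrometers): a thinly
# myelinated profile yields a high g-ratio.
print(round(g_ratio(28.0, 38.0), 2))  # 0.86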
Extraction and analysis of lipids from peripheral nerves
Lipid extraction from nerves and non-targeted lipid analysis were performed as previously described [27]. Briefly, snap-frozen nerve tissue (50 mg) was homogenized with a bead homogenizer and lipids were extracted using a mixture of chloroform and methanol. The lipid analysis was carried out using the supercritical fluid chromatography system Acquity UPC2® coupled to a quadrupole time-of-flight mass spectrometer SYNAPT G2-S HDMS (both Waters, Milford, Massachusetts, USA). The method used allows detection of TG, DG, MG, Cer, HexCer, HexCer(OH), SM, FC, CE, PG, PC, LPC, PE and LPE. Non-targeted data were processed with Progenesis QI, enabling export of a list of the compounds found along with their abundances. Data were further filtered using an in-house developed script collecting total abundances for each individual lipid class. Response factors (Rf) were used to correct raw abundances and obtain the semi-quantitative composition of lipid classes in the studied samples. Rf were determined experimentally by comparing abundances of lipid standards, one representative per class, of equal concentration. A two-tailed independent t-test was performed to evaluate statistical differences in lipid class distribution between the groups.
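A minimal sketch of the response-factor correction and per-class group comparison described above, using hypothetical abundances and Rf values:

import numpy as np
from scipy.stats import ttest_ind

def class_distribution(raw, rf):
    # Correct raw class abundances by response factors (Rf) and
    # return the relative (percent) lipid class distribution.
    corrected = {cls: raw[cls] / rf[cls] for cls in raw}
    total = sum(corrected.values())
    return {cls: 100.0 * v / total for cls, v in corrected.items()}

# Hypothetical raw abundances and response factors for three classes.
raw = {"SM": 4.2e6, "HexCer": 2.1e6, "PC": 9.5e6}
rf = {"SM": 0.8, "HexCer": 1.2, "PC": 1.0}
print(class_distribution(raw, rf))

# Per-class group comparison: two-tailed independent t-test on the
# relative values (toy sphingomyelin percentages shown).
sm_wt = np.array([11.8, 12.4, 11.1, 12.0])
sm_mut = np.array([8.9, 9.4, 8.1, 9.0])
print(ttest_ind(sm_wt, sm_mut))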
Long-term clinical course and electrodiagnostic examination
Four of the 14 affected dogs were euthanized, at the owner's request, in conjunction with the diagnosis of AMP. Ten of the affected dogs were allowed to survive this disease stage and followed up (median 44 months, range 12-100 months) by repeated examinations or contact with clinicians in the research group. In three of these 10 dogs, the clinical signs gradually progressed until euthanasia. In one other dog, the clinical signs progressed in the two years following diagnosis, but the dog was subsequently lost to follow up. In the remaining six dogs followed up, both the gait abnormalities and the exercise intolerance slowly improved during the months after nadir and then stabilized. However, none of the dogs returned to normal and the inspiratory stridor persisted. At a later stage (at the age of 3 and 6 years, respectively), two affected dogs presented with regurgitation due to development of megaoesophagus (Suppl. Fig. 2 ) and were then euthanized. Eleven of the 14 affected dogs were subjected to postmortem examination and autopsy confirmed megaoesophagus in three dogs (including the two dogs with regurgitation).
MNCV in the fibular (n = 4, mean 23.13 m/s, SD 14.24, reference 79.8 ± 1.9 [29]) and ulnar nerves (n = 10, mean 37.5 m/s, SD 12.73, reference 58.9 ± 1 [29]) was decreased in all the examined dogs (Suppl. Table 1). In two dogs, MNCV could not be determined, as no compound muscle action potential (CMAP) was produced by stimulation. EMG revealed spontaneous activity in several muscles in all dogs tested (n = 10). In two dogs, repeated MNCV measurements were performed. For case three, the MNCV in the ulnar nerve was 30, 40 and 56.3 m/s at the age of three, eight and nine years, respectively. Furthermore, the MNCV in the fibular nerve was 31.4 m/s at the age of three years but could not be measured at the age of nine. For case four, the MNCV in the fibular nerve was 29 m/s and 26.9 m/s at the age of two and five years, respectively.
Levels of NDRG1 protein and mRNA
Nerves from affected dogs had reduced NDRG1 protein levels compared to controls (Figs. 1A, C). The intensity of the 42 kDa band, corresponding to the full-length protein, as well as of the bands with molecular weights between 32 and 40 kDa, was reduced by approximately 70% in the NDRG1 mut/mut dogs (p = 0.029). Additionally, there was a significant reduction in signal intensity of the band corresponding to NDRG1 phosphorylated at residue Thr346 (pNDRG1) in this group of dogs (p = 0.029, Figs. 1B, C). In contrast, the mRNA levels in nerves of NDRG1 mut/mut dogs, normalized to GAPDH, were not significantly different from the controls (p = 0.2, Fig. 1D). It should be noted that GAPDH has not been validated as a reference gene for mRNA expression analysis in dog nerves, so this result should be interpreted with caution.
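The relative standard curve calculation used for the qPCR data (see the legend to Fig. 1 below) can be sketched as follows: a standard curve of Ct versus log10 dilution is fitted for each gene, sample quantities are interpolated from their Ct values, and NDRG1 is normalized to GAPDH. All numbers are hypothetical.

import numpy as np

def standard_curve(log10_dilution, ct):
    # Fit Ct = slope * log10(quantity) + intercept on a dilution series.
    slope, intercept = np.polyfit(log10_dilution, ct, 1)
    return slope, intercept

def quantity(ct, slope, intercept):
    # Interpolate a relative quantity for one sample from its Ct value.
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series of the calibrator sample.
dil = np.array([0.0, -1.0, -2.0, -3.0])
ct_ndrg1 = np.array([22.1, 25.5, 28.8, 32.2])
ct_gapdh = np.array([18.0, 21.4, 24.7, 28.1])

s1, i1 = standard_curve(dil, ct_ndrg1)
s2, i2 = standard_curve(dil, ct_gapdh)

# One test sample: NDRG1 quantity normalized to GAPDH.
rel = quantity(27.0, s1, i1) / quantity(23.5, s2, i2)
print(f"NDRG1 / GAPDH = {rel:.2f}")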
Fig. 1. A, B. Western blot of nerve lysate from Alaskan Malamute controls (n = 4) and cases (n = 4) with antibodies against NDRG1 (A) and phospho-NDRG1 (B). C. Semiquantification of NDRG1 protein in nerve lysates based on band intensity in Western blots (mean + SD). There is a significant reduction in the levels of both total NDRG1 (p = 0.029) and pNDRG1 (p = 0.029) in NDRG1 mut/mut Alaskan Malamutes compared to controls. D. Relative expression of NDRG1 mRNA in nerve samples (mean + SD) from Alaskan Malamute controls (n = 4) and cases (n = 4). Relative expression levels were calculated using the Relative Standard Curve method with standard curves obtained from a random sample, and NDRG1 expression was normalized to GAPDH. The difference between the groups was not significant (p = 0.2).

Teased nerve fibers

In nerves from affected dogs examined by nerve fiber teasing, internodal lengths and myelin thickness varied (Fig. 2). Demyelinated segments and short internodes with reduced myelin thickness (intercalated internodes), consistent with remyelination, were present. The changes had a multifocal distribution, and severely affected internodes intermingled with internodes without observable changes. This distribution is typical for a demyelinating disease [30, 31]. In some cases, paranodal retraction and widening of the nodal gap were evident. Focal thickenings of the nerve fibers were present, mostly internodally, occasionally close to the Schwann cell perikaryon (Fig. 2d), but paranodal localization was also observed. At this level, it was not possible to ascertain whether the swellings derived from the axon, the Schwann cell or both. Wallerian-like axonal degeneration was observed in only a few fibers (not shown).
Light microscopy
Nerves from NDRG1 mut/mut Alaskan Malamutes exhibited a loss of large myelinated fibers (Fig. 3A) accompanied by a concurrent increase in endoneurial connective tissue. These changes varied inter- and intraindividually, from only mild affection to severe loss of fibers with concomitant fibrosis. As shown morphometrically for the common fibular nerve, there was a shift in the distribution of myelinated nerve fibers towards smaller-diameter fibers (Fig. 3B). As the same shift was observed in axonal diameter-frequency histograms, it is probably caused by a combination of loss of large myelinated fibers and reduced myelin thickness. While the fibular nerves from the NDRG1 wt/wt Alaskan Malamutes had the expected bimodal diameter distribution of myelinated fibers [32], the distribution in nerves from some of the cases approached unimodality (for example, case four). For cases three and five, biopsies taken at different ages allowed assessment of potential disease progression; these investigations showed a shift towards thinner fibers at greater age. The myelinated fiber density was not significantly different between the groups (p = 0.1429; controls: n = 2, mean 4798.2 MF per mm², SD 959.2; cases: n = 6, mean 6058.9 MF per mm², SD 1021.4). When nerves were studied at higher magnification, many of the remaining fibers had thin myelin sheaths in relation to the axonal size (Figs. 4E-G), in agreement with results from the study of teased fibers and with our finding that the g-ratios of the NDRG1 mut/mut Alaskan Malamutes were shifted towards higher values compared to the controls (Suppl. Fig. 3). Presumptive regenerative clusters were observed in some of the nerves (Fig. 4E). Swollen nerve fibers were present in the nerves from the NDRG1 mut/mut Alaskan Malamutes (Figs. 4F, G) and were studied more closely at the ultrastructural level (see Section 3.3.3).
Lesions were observed in both proximal (for example nerve roots and sciatic nerves) and distal nerve segments (such as tibial, fibular and recurrent laryngeal nerves), long (for example recurrent laryngeal nerve) and short nerves (obturator nerve), and involved both mixed and purely sensory nerves (superficial radial nerve).
In skeletal muscle, angular atrophy of myofibers (varying from scattered single fibers to small and large groups) was present (Fig. 4A), in accordance with denervation atrophy following axonal loss. The angular atrophied fibers were of both fiber types, as shown by the ATPase reaction (Fig. 4B). The normal mosaic pattern of muscle fiber types was regionally absent in some cases, with fiber type grouping, supporting attempts at reinnervation (Fig. 4B).
Ultrastructural pathology
The ultrastructural examination confirmed the presence of thinly myelinated nerve fibers and small onion bulbs (not shown). Onion bulbs and thinly myelinated nerve fibers suggest repeated episodes of demyelination and remyelination. Macrophages with intracytoplasmic vacuoles were present around demyelinated nerve fibers and were also observed within endoneurial tubes (not shown). The presence of Iba1-positive macrophages in the endoneurium was confirmed by immunohistochemistry (see Section 3.3.5).
A frequent finding was accumulation of filamentous material in the cytoplasm of myelinating Schwann cells. This material was observed in the adaxonal Schwann cell cytoplasm or in the inner part of dilated Schmidt-Lanterman clefts (Fig. 5A). Occasionally, the Schmidt-Lanterman clefts were disrupted and then associated with dyscompacted myelin sheaths mixed with a pleomorphic, coarsely granular osmiophilic material (Figs. 5B, C) dispersed between the sheets. This morphologically heterogeneous material probably consists of a mixture of the aforementioned filamentous material and lipids from myelin degradation, as it intermingled with fragments of periodically structured lamellae [33].

Focally folded myelin was often observed, consisting of infoldings derived from the inner part of the myelin sheaths (Figs. 5D, E, F). The folds evolved from the Schmidt-Lanterman clefts (Fig. 5D) and occasionally seemed to subdivide the axon into pockets (Figs. 5E, F). This resulted in several axonal structures enclosed by the same myelin sheath, separated by thin myelin septa derived from the adaxonal part of the sheath (Figs. 5B, D-F). Degenerating organelles were present in the myelin-enclosed axonal pockets (Figs. 5B, D-F), suggestive of disrupted axonal transport and early axonal degeneration. Despite an overall increase in nerve fiber diameter, the diameter of the axon was often reduced and the axonal outline distorted in the segments with focally folded myelin, seemingly compressed by the myelin infoldings and adaxonal Schwann cell material (Figs. 5B, E). Artefactual changes can be produced by delayed fixation; however, as the ultrastructural changes reported here were also present in nerve biopsies fixed immediately after surgical removal, we consider it unlikely that they are artefacts.
Immunofluorescence
As structures resembling Hirano bodies, containing actin and actin-related proteins [34], have been described in the Schwann cell cytoplasm of rodents with Ndrg1 mutations [9], immunofluorescence was performed with antibodies against β-actin and neurofilament. In nerves from the NDRG1 mut/mut Alaskan Malamutes, β-actin-positive aggregates were present multifocally in myelinating Schwann cells (Fig. 6, Suppl. Fig. 4). More specifically, the β-actin signal was present in thin strands and occasionally formed circular or semi-circular structures. The diameter of the neurofilament-positive axon was reduced in these areas, but axonal swellings were present in adjacent segments. Occasionally, the actin-positive material surrounded small axonal structures only coupled to the main axonal structure through thin connections.

Table 3. Relative lipid class distribution in peripheral nerves from Alaskan Malamute polyneuropathy cases (NDRG1 mut/mut) and controls (NDRG1 wt/wt).
Immunohistochemistry
Infiltration and/or proliferation of macrophages, T- and B-lymphocytes in the nerves were investigated with antibodies against Iba1, CD3 and CD79, respectively. While increased numbers of Iba1+ cells in the endoneurium were observed in NDRG1 mut/mut Alaskan Malamutes compared to NDRG1 wt/wt, no difference between the genotypes was observed for CD3 and CD79 (not shown).
Lipid analysis
Analysis of peripheral nerve lipid composition revealed significant decreases in hexosylceramides (HexCer) and sphingomyelins (SM) in the relative lipid class distribution in the NDRG1 mut/mut Alaskan Malamutes compared to NDRG1 wt/wt (Suppl. Fig. 5 and Table 3 ).
Discussion
Neuropathies can be caused by malfunctions at either end of the axo-glial communication axis; i.e., be primary axonal or primary glial cell disorders. This distinction is important for understanding the etiology and molecular pathology of a given disease, but can be difficult to ascertain due to overlapping clinical and pathological features, regardless of the primary defect [35][36][37]. Since NDRG1 is expressed in Schwann cells and not axons, polyneuropathies associated with mutations in NDRG1 are expected to result from compromised Schwann cell functions. In accordance with this, human CMT4D patients show demyelinating changes in childhood, rapidly followed by axonal loss [9,38] and severe clinical signs [38,39]. In mouse models of this disease, demyelination is the dominant feature, with less pronounced axonal loss [9]. In this report, we provide a detailed characterization of the NDRG1-associated Alaskan Malamute polyneuropathy, revealing previously unrecognized features.

Fig. 6. Immunofluorescence on nerve sections (cases n = 3, controls n = 2). Representative images from case 5 (3 years old) with antibodies against β-actin (green) and neurofilament (red); two different nerve fibers are shown. Nuclei are stained with DAPI (blue). Aggregates of actin are present in the nerves of affected Alaskan Malamutes. Note the difference between the aggregates and the sparse amount of actin normally present in the Schmidt-Lanterman clefts (arrowheads). Although the axonal diameter was reduced in the areas with aggregates, the axon was swollen adjacent to these segments (asterisk). Note the small axonal structures within the actin-positive areas only coupled to the main axon through thin connections (arrows).
The changes observed in the nerves of affected Alaskan Malamutes in this study indicate a demyelinating disease with remyelination, characteristic axonal changes and eventual axonal loss. Thus, Alaskan Malamutes with NDRG1 mutations are apparently more similar to humans with CMT4D than the rodent models, where axonal involvement is milder [9,23,39,40]. Our findings contrast with previous reports from dogs [2,3]. In a study of Greyhounds lacking NDRG1 [2], and in a previous report on the same AMP as presented here [3], it was concluded that the disease was predominantly axonal or mixed due to the presence of degenerative axonal changes in segments without concurrent myelin abnormalities. In the Greyhounds, thinly myelinated (i.e. remyelinated) nerve fibers, dyscompaction of the adaxonal myelin sheath and granulofilamentous inclusions in the Schwann cell cytoplasm were also observed [2]. Thus, the changes in nerves of humans, rodents and dogs with NDRG1 abnormalities share certain similarities, and AMP is indeed a new model for human CMT4D, replicating both the demyelination and the axonal changes, in both motor and sensory nerve fibers, present in the human disease [23,40].
A progressive polyneuropathy is described in human CMT4D patients, with gait disturbance in their first decade, upper limb involvement in their second and sensorineural deafness in their third decade of life [38]. Disease progression in affected Alaskan Malamutes was documented by the diameter shift observed in morphometric analyses of semi-thin nerve sections from a few dogs. Results from electrodiagnostic examinations were in agreement with a polyneuropathy involving motor nerve fibers, but in Case 3, serial measurements revealed improved MNCV with increasing age, in accordance with her clinical development during adulthood. The observed increase in MNCV is consistent with remyelination of previously demyelinated internodes, through which the nerve conduction velocity might recover to at least 60% of normal [41]. The remyelinated internodes remain thinner than normal, explaining the reduction in myelinated fiber diameter observed by morphometry in the same dog.
The filamentous material present in the adaxonal cytoplasm and inner part of the Schmidt-Lanterman cleft resembles inclusion material reported from the same location in nerves of human CMT4D patients and rodent models of this disease [9,23,39,40]. In humans with CMT, this material is seemingly specific for the 4D subtype [40,42]; however, to the best of our knowledge, the content of the material has not been ascertained. From studies in rodents, the inclusions have been proposed to represent Hirano bodies based on morphological criteria [9], but in human CMT4D nerves, a similar material did not have the structured morphology of true Hirano bodies [23]. Hirano bodies are described as paracrystalline inclusions consisting of sheets of parallel actin filaments [34]. The filamentous material observed in the nerves of affected Alaskan Malamutes lacked the paracrystalline structure reported from rodents [9,43], but otherwise resembles the inclusion material reported from humans and rodents by its ultrastructural morphology and localization. Furthermore, we confirm its richness in actin by immunofluorescence.
Actin polymerization occurs in Schwann cells in both health and disease. Actin remodelling drives the membrane extension during normal myelination [44] as well as in conditions with excessive myelin growth [45]. Furthermore, actin polymerization occurs in Schmidt-Lanterman clefts during Wallerian degeneration [46], and recently, signaling from injured axons was shown to trigger the formation of constricting actin spheres in Schwann cells, important for swift removal of the degenerating axon [47]. Actin polymerization was also found in the Schmidt-Lanterman clefts and adaxonal Schwann cell cytoplasm of Tibetan Mastiffs with Inherited Hypertrophic Neuropathy, where the filaments ultimately caused distension of the Schwann cell cytoplasm and subdivision of the axon [31,48,49], strongly resembling the apparent regional division of the axon within one myelin sheath observed in the AMP nerves. This change is also described in Greyhounds lacking NDRG1 [2], but not reported in CMT4D [9,23,39,40,50-52]. It remains to be investigated how NDRG1 functions relate to actin polymerization in Schwann cells and what role the filaments may play in the intrusion of the Schwann cell into the axon. The intrusions with myelin infoldings could represent focal hypermyelination caused by reduced NDRG1 activity in Schwann cells, and the actin aggregates observed in the Schmidt-Lanterman clefts of AMP nerves could conceivably be an early stage in the uncontrolled membrane growth ultimately leading to the formation of myelin folds [45] and axonal degeneration.
We have previously shown by immunofluorescence that phosphorylated NDRG1 preferentially localizes to the abaxonal cytoplasm and outer aspects of the Schmidt-Lanterman clefts in myelinating Schwann cells of normal dogs, while no pNDRG1 signal was observed in an affected Alaskan Malamute [7]. Phosphorylated NDRG1 (Thr346) has been suggested to participate in the termination of myelination, as loss of serum glucocorticoid kinase 1 (Sgk1), with reduced pNDRG1 as a consequence, caused hypermyelination in mice [17]. We did not observe decreased g-ratios in the NDRG1 mut/mut Alaskan Malamutes, as would be expected in a condition with diffuse hypermyelination. However, in conditions with focal hypermyelination, a reduced g-ratio may not be found [53], as most cross-sections of nerves with focally folded myelin will be represented in a semi-thin section by a nerve segment with normal myelination. Further studies are needed to investigate whether the NDRG1 mutation disrupts signalling in the Schwann cells by affecting the phosphorylation of the encoded protein, either directly or indirectly.
Analysis of peripheral nerve lipid composition revealed several differences between the genotype groups. Loss of NDRG1 function can conceivably affect the lipid composition of the nerves directly, as NDRG1 participates in vesicular recycling of the low-density lipoprotein receptor (LDLR) in epithelial cells [12] and regulates lipid metabolism in breast cancer cells [16]. In the latter, silencing of NDRG1 resulted in increased triacylglycerol levels. However, unspecific changes caused by loss of myelin and axons in the AMP nerves preclude interpretation of changes specifically related to loss of NDRG1 function. These include, for example, the observed significant reduction in the levels of sphingomyelins and glycolipids [54]. Thus, a specific contribution of loss of NDRG1 function to the observed differences in lipid composition cannot be ruled out, but needs further investigation.
The Western blots showed that the nerve levels of NDRG1 in the affected Alaskan Malamutes were significantly reduced, but not completely lost. In contrast, Greyhounds with NDRG1-associated neuropathy were reported to have a total NDRG1 deficiency [2], just as in humans with CMT4D caused by a nonsense mutation [9]. The incomplete loss of NDRG1 function in the Alaskan Malamutes results in a later onset and milder clinical course of AMP as compared with the neuropathies in Greyhounds and the stretcher mouse model [9], in which there is complete loss of NDRG1.
In conclusion, Alaskan Malamutes with NDRG1 mutations constitute a unique spontaneous model that demonstrates morphological features resembling human CMT4D, but also reveals some previously undescribed changes. | 2020-11-26T14:44:10.853Z | 2020-11-26T00:00:00.000 | {
"year": 2021,
"sha1": "3ca6b9cfa953b9787711b4f782cbe2f0b7de5777",
"oa_license": "CCBY",
"oa_url": "http://www.nmd-journal.com/article/S0960896620306751/pdf",
"oa_status": "HYBRID",
"pdf_src": "Elsevier",
"pdf_hash": "3ca6b9cfa953b9787711b4f782cbe2f0b7de5777",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253458048 | pes2o/s2orc | v3-fos-license | Problematic Internet Use among Adolescents 18 Months after the Onset of the COVID-19 Pandemic
Studies in recent years, and especially since the beginning of the COVID-19 pandemic, have shown a significant increase in the problematic use of computer games and social media. Adolescents having difficulties in regulating their unpleasant emotions are especially prone to Problematic Internet Use (PIU), which is why emotion dysregulation has been considered a risk factor for PIU. The aim of the present study was to assess problematic internet use (PIU) in adolescents after the third wave (nearly 1.5 years after the onset in Europe) of the COVID-19 pandemic. In the German region of Siegen-Wittgenstein, all students 12 years and older from secondary-level schools, vocational schools and universities were offered a prioritized vaccination in August 2021 with an approved vaccine against COVID-19. In this context, the participants filled out the Short Compulsive Internet Use Scale (SCIUS) and two additional items to capture a possible change in digital media usage time and regulation of negative affect due to the COVID-19 pandemic. A multiple regression analysis was performed to identify predictors of PIU. The original sample consisted of 1477 participants, and after excluding invalid cases the final sample size amounted to 1268 adolescents aged 12-17 (mean = 14.37 years, SD = 1.64). The prevalence of PIU was 43.69%. Gender, age, digital media usage time and the intensity of negative emotions during the COVID-19 pandemic were all found to be significant predictors of PIU: female gender, increasing age, longer digital media usage time and higher intensity of negative emotions during the COVID-19 pandemic were associated with higher SCIUS total scores. This study found a very high prevalence of PIU among 12- to 17-year-olds for the period after the third wave of the COVID-19 pandemic, which has increased significantly compared to pre-pandemic prevalence rates. PIU is emerging as a serious problem among young people in the pandemic. Besides gender and age, pandemic-associated time of digital media use and emotion regulation have an impact on PIU, which provides starting points for preventive interventions.
Introduction
The access to information and communication technologies (ICT) has constantly risen among adolescents over the past years. Due to the deprivation of normal activities, social isolation, lockdown and home-schooling during the coronavirus pandemic (COVID-19), the frequency of ICT use and its ramifications are of vital importance, now more than ever [1][2][3]. Specific, pronounced internet-related problems concomitant with this development seem to have appeared and additionally increased in recent years: gaming disorder, social network disorder, excessive internet shopping and internet pornography use.

Regarding previous literature, gender differences in PIU are still unclear and culturally determined [37]. No differences in PIU between boys and girls were found with the CIUS-9 [38].
The relation between PIU, generalized social beliefs and emotional problems plays an important role in the treatment for PIU (cognition-based intervention strategies for reducing PIU), especially since the COVID-19 outbreak [39]. Children who experienced quarantine in other pandemics report higher depressive and stress symptoms. Stress can have negative consequences on emotional and cognitive functions, such as decreases in self-esteem over time [40,41]. Therefore, internet consumption increases when people feel lonely, depressed, anxious, or desire emotional support [42]. Family involvement can thereby serve as a protective factor against PIU [43].
Especially during the COVID-19 pandemic, the risk of developing PIU in youth has increased [44,45]. In Taiwan, for example, the prevalence rate among adolescents was 17.4% before the pandemic, while it increased to 24.4% during the pandemic [46,47].
The aim of the present study was to assess the prevalence of PIU in a representative cohort of adolescents aged 12-17 after one and a half years of the COVID-19 pandemic. Additionally, we examined the influence of age and gender on the PIU prevalence. Furthermore, we investigated the influence of pandemic-related changes in digital media usage time and emotion regulation strategies (restlessness, irritability, anger, anxiety, or sadness) on PIU when digital media could not be used.
Study Design
The present study is part of a pilot project conducted in the Siegen-Wittgenstein region from July to September 2021 (the actual data collection took place in the period from 30 July to 30 September 2021). The study was conducted by the University of Siegen with the vaccination center Siegen in collaboration with other investigators: University of Saarland (Department of Clinical Pharmacy), University Children's Hospital Bochum, Saarland University Hospital Clinic for General Pediatrics and Neonatology and Saarland University Hospital Clinic for Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy.
Adolescents and their caregivers were offered a prioritized SARS-CoV-2 vaccination (BNT162b2 by Biotech/Pfizer). This was announced in local newspapers, radios and on the homepage of the University of Siegen. All interested adolescents and families could participate.
In this context, supplementary data were collected in a survey, and participation in the survey was independent of receiving the vaccination. The survey included questions on sociodemographic data, history of COVID-19 infection and the vaccination status of the adolescent. Further, participants were invited to complete questions on PIU, on changes in their temporal usage behavior and on regulation of negative feelings during the pandemic. The results of further questions on mental health and health-related quality of life have already been published [48].
All participants and their caregivers were informed and provided written consent prior to their participation in the survey. Compliance with the Declaration of Helsinki and approval of the Ethics Committee of the Medical Association of Westphalia-Lippe and the Westphalian Wilhelms University were obtained (file number: 021-372-f-S).
Participants
The present sample was an ad-hoc sample. Inclusion criteria for this study were as follows: (a) minimum age of 12 years, (b) attending a secondary school or vocational school in the district of Siegen or a matriculation at the University of Siegen, (c) COVID-19 vaccination and (d) voluntary participation in the survey. Exclusion criteria were based on contraindications to the vaccine, such as an age under 12 years because of the lack of an approved vaccine for this age group as well as a known hypersensitivity to any vaccine ingredient (for all contraindications see the RKI information leaflet on vaccination against COVID-19 in the version of 29 November 2021). From an initial 1477 participants, 49 (3.3%) were excluded because of their age of 18 years and older, 16 (1.1%) showed a missing value in age, 30 (2.0%) had a missing value in gender and 114 (7.7%) were excluded due to 1 or more missing values in SCIUS (see chapter measures). Thus, the final sample size was 1268 participants, 85.9% of the initial sample. Participant flow can also be taken from Figure 1. The recruitment and education about the study took place in the vaccination center. The questionnaires were voluntarily filled out online on a tablet provided by the study leaders or via a QR-Code or internet address on the participants' own mobile terminal in the medically determined waiting period after the vaccination.
Measures
The Compulsive Internet Use Scale (CIUS) is a validated self-report questionnaire measuring the severity of internet addiction on a continuum of 0 to 56 points. The scale consists of 14 items and the response options are presented in a 5-point Likert scale. The CIUS shows a good factorial stability, a high internal consistency and therefore good reliability as well as good validity [16,[49][50][51][52]. Furthermore, there are some reliable short forms of the CIUS such as the short French version of the CIUS with 9 items [38] and valid 5-, 7-and 9-item versions of the Lithuanian CIUS [53].
The SCIUS (Short Compulsive Internet Use Scale) used in this study is a short form of the CIUS. It consists of 5 of the original 14 items (see Table 1), rated on a 5-point Likert scale with the response options "0 = never, 1 = seldom, 2 = sometimes, 3 = frequent, 4 = very frequent". The scale assesses PIU as a short screening procedure. Regarding the test quality, the SCIUS has an acceptable reliability, with a Cronbach's Alpha of 0.77 for internal consistency. In addition, there is no significant deviation from the original CIUS regarding specificity and sensitivity [54]. As a cut-off for PIU, the value 7 (sensitivity = 0.95, specificity = 0.86 for females and 0.87 for males) or, for a higher specificity (0.96), the value 9 (sensitivity = 0.76 for females and 0.78 for males) could be used according to the manual [54]. In our analysis, we used only the cut-off value 9 for a higher specificity. Furthermore, to capture the change due to the COVID-19 pandemic, 2 more items were introduced. The first item captures digital media usage time: "To what extent does the COVID-19 pandemic change the time you use digital media (smartphone, smartwatch, tablet, laptop, stationary/portable game console)?" The response options were in the form of a five-point Likert scale: Since then I have been using digital media "much less/much shorter", "somewhat less", "same amount/unchanged", "more often/longer", "much more often/longer". The second item asks about the intensity of negative feelings due to the COVID-19 pandemic: "To what extent does the COVID-19 pandemic change negative feelings (e.g., restlessness, irritability, anger, anxiety, or sadness) when you do not have the opportunity to use digital media?" Again, the response options were given with a five-point Likert scale: I have been reacting in the intensity of my negative feelings since then "much less strongly/severely", "less strongly/severely", "equally/unchanged", "more strongly/more severely", "much more strongly/severely".
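To make the scoring concrete, the following minimal Python sketch (not part of the original study) sums the five items and applies the higher-specificity cut-off of 9; the assumption that totals at or above the cut-off count as PIU, and the example responses, are ours.

```python
# Minimal sketch of SCIUS scoring: five items rated 0-4 are summed to a
# 0-20 total; we assume totals at or above the cut-off of 9 count as PIU.
def scius_total(items):
    assert len(items) == 5 and all(0 <= v <= 4 for v in items)
    return sum(items)

def has_piu(items, cutoff=9):
    return scius_total(items) >= cutoff

example = [2, 3, 1, 2, 2]                      # hypothetical responses
print(scius_total(example), has_piu(example))  # -> 10 True
```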
Exploratory Factor Analysis
The structure of the short form of the Compulsive Internet Use Scale (SCIUS) to assess problematic internet use (PIU) (5 items) was examined using an exploratory factor analysis on the sample of 12- to 17-year-old adolescents. Both Bartlett's test (χ²(10) = 1250.15, p ≤ 0.001) and the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO = 0.78, "middling" according to Kaiser and Rice [55]) indicated that the variables were suitable for a factor analysis. Thus, a principal component analysis with varimax rotation was performed. The result was one factor with eigenvalue > 1.0. The graphical representation in the form of a scree plot (see Figure 2) also suggested a one-factor solution, which explains 48.89% of the variance. The factor appears interpretable according to the criteria of Guadagnoli and Velicer [56], since 4 items load > 0.60 on the factor. In terms of content, naming this factor "PIU" is conceivable, as it elicits topics such as dysfunctional internet use or negative consequences of internet use. Further information can be found in Table 1.
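The suitability checks and the one-factor principal component extraction described above could be reproduced along the following lines. This is a sketch only: the `factor_analyzer` package is one possible tool, the item responses are simulated stand-in data (real responses would be needed to reproduce the reported statistics), and with a single factor the varimax rotation has no effect.

```python
# Sketch of the reported suitability checks and one-factor principal
# component extraction, using the factor_analyzer package on simulated data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

rng = np.random.default_rng(0)
trait = rng.integers(0, 5, size=(300, 1))  # shared "trait" to correlate items
items = np.clip(trait + rng.integers(-1, 2, size=(300, 5)), 0, 4)
df = pd.DataFrame(items, columns=[f"scius_{i}" for i in range(1, 6)])

chi2, p = calculate_bartlett_sphericity(df)   # suitability check 1
kmo_per_item, kmo_total = calculate_kmo(df)   # suitability check 2
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.4f}), KMO = {kmo_total:.2f}")

fa = FactorAnalyzer(n_factors=1, rotation="varimax", method="principal")
fa.fit(df)
print("loadings:", fa.loadings_.ravel().round(2))
print("proportion of variance:", fa.get_factor_variance()[1].round(3))
```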
Item Analysis and Reliability
Following Ebel and Frisbie [57], items 1-4 of the SCIUS showed a very good discrimination power and item 5 a reasonably good discrimination power. The item difficulty was in the acceptable range for all items. Cronbach's α is 0.73 and thus acceptable according to George and Mallery [58]. Detailed item statistics are shown in Table 1.
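As an illustration of the reliability statistic, here is a small Python sketch of Cronbach's alpha computed from a respondents-by-items score matrix; the simulated scores are stand-ins and will not reproduce the reported α = 0.73.

```python
# Sketch of Cronbach's alpha for an (n_respondents x k_items) score matrix.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Simulated correlated items (shared "trait" plus item noise), clipped to 0-4.
rng = np.random.default_rng(1)
trait = rng.integers(0, 5, size=(100, 1))
scores = np.clip(trait + rng.integers(-1, 2, size=(100, 5)), 0, 4)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```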
Descriptive Prevalence Statistics: SCIUS
In total, 43.69% of the participants showed problematic internet use (PIU). It should be emphasized that more girls than boys were above the cut-off value (female = 49.38%, male = 37.50%). Furthermore, there was a small difference between the group of early adolescents (12-14 years) (42.06%) and the group of late adolescents (15-17 years) (45.58%). In terms of digital media usage time, PIU was found in 53.27% of those who had a higher digital media usage time in the pandemic, but only in 29.81% of those who used digital media less in the pandemic than they did before.
Moreover, 66.67% of the participants who reported a higher intensity of negative emotions during the COVID-19 pandemic, if there was no possibility to use digital media, reached the criteria for PIU, but only 39.38% of those reporting a lower intensity of negative emotions during the COVID-19 pandemic did. See Table 3 for further information.
Descriptive Statistics: COVID-19 Items Usage Time and Emotion Regulation
Out of 1268 participants, 104 (8.21%) indicated that their digital media usage time was much less/much shorter during the COVID-19 pandemic, whereas 734 (57.89%) indicated having a longer or much longer digital media usage time during the COVID-19 pandemic. Regarding the intensity of negative emotions during the COVID-19 pandemic, 160 (12.61%) mentioned that they have been reacting in the intensity of their negative feelings less strongly or much less strongly, and 303 (23.9%) more strongly or much more strongly during the COVID-19 pandemic. See Table 4 for further data. Note: "Shorter digital media usage time" = sum of the values of the categories "much less/much shorter" and "somewhat less". "Longer digital media usage time" = sum of the values of the categories "more often/longer" and "much more often/longer". "Lower intensity of negative emotions during the COVID-19 pandemic" = sum of the values of the categories "much less strongly/severely" and "less strongly/severely". "Higher intensity of negative emotions during the COVID-19 pandemic" = sum of the values of the categories "more strongly/more severely" and "much more strongly/severely".
Influence of Gender, Age, Usage Time and Emotion Regulation on SCIUS
Gender was limited to male and female, as the group of X-genders was too small, with 10 cases. Girls (M = 8.50, SD = 4.10, n = 650) scored higher SCIUS total scores than boys (M = 7.57, SD = 3.88, n = 608). A Mann-Whitney U-test indicated that this difference was statistically significant (U(n girls = 650, n boys = 608) = 171,952.50, z = −3.99, p ≤ 0.001). The effect size according to Cohen [59] was Pearson r = 0.11, and it corresponds to a small effect.
Adolescents with a longer digital media usage time (answered "more often/longer" or "much more often/longer") (M = 8.99, SD = 3.94, n = 734) showed higher SCIUS total scores than adolescents with a shorter digital media usage time (answered "somewhat less" or "much less/much shorter") (M = 6.53, SD = 3.98, n = 104) (see Table 4). This difference was statistically significant, as determined by a Mann-Whitney U-test (U(n longer digital media usage time = 734, n shorter digital media usage time = 104) = 24,962.00, z = −5.73, p ≤ 0.001). The effect size was Pearson r = 0.20. Following Cohen [59], this is a small effect.
Adolescents with a higher intensity of negative emotions during the COVID-19 pandemic (answered "more strongly/more severely" or "much more strongly/severely") (M = 10.40, SD = 4.03, n = 303) scored higher SCIUS total scores than adolescents with a lower intensity of negative emotions during the COVID-19 pandemic (answered "less strongly/severely" or "much less strongly/severely") (M = 7.28, SD = 3.77, n = 160) (see Table 4). A Mann-Whitney U-test indicated that this difference was statistically significant (U(n higher intensity of negative emotions = 303, n lower intensity of negative emotions = 160) = 13,690.50, z = −7.72, p ≤ 0.001). In this case, the effect size is Pearson r = 0.36, which, according to Cohen [59], is equivalent to a moderate effect.
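The group comparisons above pair a Mann-Whitney U-test with the effect size r = z/√N. A hedged Python sketch of that computation follows; the simulated SCIUS totals only mimic the reported means and SDs, and the conversion of U to z omits the tie correction for brevity.

```python
# Mann-Whitney U with the effect size r = z / sqrt(N); z uses the normal
# approximation and, for brevity, ignores the tie correction.
import numpy as np
from scipy.stats import mannwhitneyu

def mwu_with_r(a, b):
    n1, n2 = len(a), len(b)
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    mu = n1 * n2 / 2
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    r = abs(z) / np.sqrt(n1 + n2)  # Cohen: ~0.1 small, ~0.3 medium, ~0.5 large
    return u, p, r

rng = np.random.default_rng(2)
girls = np.clip(rng.normal(8.5, 4.1, 650), 0, 20).round()  # stand-in totals
boys = np.clip(rng.normal(7.6, 3.9, 608), 0, 20).round()
print(mwu_with_r(girls, boys))
```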
Regression Analysis
A multiple regression analysis was used to predict the SCIUS total score from gender (limited to male and female, as the group of X-genders was too small, with 10 cases), age, digital media usage time and intensity of negative emotions during the COVID-19 pandemic. The model explained a statistically significant amount of variance in the SCIUS total score, F(4, 1111) = 53.16, p ≤ 0.001, R 2 = 0.16, R 2 adjusted = 0.16. All were significant predictors: gender (ß = −0.09, t = −3.40, p = 0.001), age (ß = 0.09, t = 3.18, p = 0.002), digital media usage time (ß = 0.28, t = 9.95, p ≤ 0.001), intensity of negative emotions during the COVID-19 pandemic (ß = 0.20, t = 7.05, p ≤ 0.001). Therefore, the final predictive model was: SCIUS total score = 0.93 − 0.76 (gender) + 0.22 (age) + 1.22 (digital media usage time) + 1.00 (intensity of negative emotions during the COVID-19 pandemic). Female gender, increasing age, longer digital media usage time and higher intensity of negative emotions during the COVID-19 pandemic were associated with higher SCIUS total scores. The R 2 for the overall model indicates a moderate goodness of fit according to Cohen [59], f 2 = 0.19 (medium effect). See Table 5 for further multiple regression results.
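A sketch of this regression in Python is given below. The data are simulated from the reported unstandardized coefficients, and the predictor codings and residual spread are assumptions, so the fitted estimates will only approximate the paper's values.

```python
# Sketch of the reported multiple regression, assuming predictor codings
# (e.g., gender: 0 = female, 1 = male) that the text does not state.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1116  # consistent with the reported F(4, 1111): 1111 + 4 + 1
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),        # 0 = female, 1 = male (assumed)
    "age": rng.integers(12, 18, n),
    "usage_time": rng.integers(0, 5, n),    # 5-point Likert coding (assumed)
    "neg_emotions": rng.integers(0, 5, n),  # 5-point Likert coding (assumed)
})
y = (0.93 - 0.76 * df["gender"] + 0.22 * df["age"]
     + 1.22 * df["usage_time"] + 1.00 * df["neg_emotions"]
     + rng.normal(0, 3.5, n))               # residual SD is an assumption

model = sm.OLS(y, sm.add_constant(df)).fit()
print(model.summary().tables[1])  # unstandardized coefficient table
```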
Discussion
The present study aimed to assess problematic internet use (PIU) among adolescents after the third wave of the COVID-19 pandemic. In an online study, the SCIUS was used to measure PIU among adolescents aged 12 to 17. In addition, changes in digital media usage time and intensity of negative emotions during COVID-19 were assessed. Multiple regression revealed that gender, age, digital media usage time and intensity of negative emotions during the COVID-19 pandemic were all significant predictors of PIU. Thus, female gender, increasing age, longer digital media usage time and higher intensity of negative emotions during the COVID-19 pandemic were associated with a higher PIU.
In summary, the research questions of this study were to determine the prevalence of PIU and to investigate the influence of gender and age on the prevalence of PIU. Additionally, the influence of pandemic-related changes in digital media usage time and emotion regulation strategies (restlessness, irritability, anger, anxiety, or sadness) on PIU when digital media could not be used were examined.
Prevalence of PIU
A growing prevalence of excessive internet use has been described in most industrialized countries (especially Asian, European and North American countries).
One of the main findings of our study is that one and a half years after the start of the COVID-19 pandemic, a prevalence of PIU of 43.69% was found in adolescents aged 12-17 years. This value is very high compared to other studies conducted before the pandemic, with a prevalence range of 10-24% [60][61][62][63] (data collection period 2017-2018). Overall, prevalence rates in adolescents range from 1.5% in Greece and 10.7% in South Korea to 11.6% in Latin America [64][65][66]. In Hong Kong, prevalence rates among secondary students are estimated to even reach up to 20% [67]. Comparably high values were found in a Spanish study: 24% of adolescents between 14 and 18 used the internet in a problematic way, with the intensity being highest among those between 16 and 17 years [68]. Chandrima et al. also reported that 24.0% of adolescents from Bangladesh were problematic internet users and 2.6% had severe PIU [23].
In summary, data from various studies conducted in different countries prior to the pandemic show prevalence rates of about 25% of PIU. One interpretation of the prevalence of PIU of 43.69% in adolescents aged 12-17 years after the third wave of the COVID-19 pandemic is that the data collections of the other studies almost all occurred before the COVID-19 pandemic. In contrast, the data of the present study were collected after the third wave of the pandemic, which has been in progress for one and a half years. Thus, the context of the pandemic (lockdown, school closures, reduction in real-world social contact with peers, increase in time spent using the internet) could have led to a significant increase in the prevalence of PIU, as shown by several other studies conducted during the pandemic with prevalence rates between 24% and 28% [47,69,70]. Oka et al. reported, in their study on internet gaming disorder and problematic internet use before and during COVID-19, that during the pandemic, the prevalence of IGD increased 1.6-fold and the prevalence of PIU increased 1.5-fold compared to before the pandemic [71]. An overview of a selection of previous PIU prevalence rates with the distinction "before" vs. "during" the pandemic, compared to the prevalence found in our study, can be found in Figure 3.
A major problem in estimating the prevalence of PIU is the lack of consensus on criteria and definitions, cut-off values and a unified terminology. It is also conceivable that the use of different measurement instruments to measure PIU (CIUS, SCIUS, IAT) plays a role. Regardless, the present study found an enormously high prevalence of 43% of PIU in a representative sample of adolescents one and a half years after the start of the pandemic.
A complementary explanation for the increase in the prevalence of PIU (at least for the increase in Germany) could lie in German policies concerning internet use.
On the one hand, the digitalization campaign currently being conducted by the German government creates a framework for a significant increase in the use of digital screen media. The "DigitalPakt Schule" (Digital Pact for Schools) aims to promote digitalization in German schools [72]. This brings many advantages, but at the same time also carries the risk of an increase in the prevalence of PIU, as the use of the internet is becoming more and more self-evident and extensive for children and young people in many areas of their everyday lives. The implementation strategies for shaping the digital transformation in Germany include five fields of action: digital competence, infrastructure and equipment, innovation and digital transformation, society in the digital transformation and modern state [73]. All these fields of action promote the use of digital media and at the same make digital media and the internet more accessible. This indicates the need for clear guidelines on the use of the internet and digital media, such as those issued by the Federal Centre for Health Education (BZgA) [74].
Additionally, the legislation of the Youth Protection Act (Second Act Amending the Youth Protection Act, 2021 [75]) has been assessed by pediatric professional associations to be in need of improvement [76]. The developmentally impairing consequences of excessive use of digital screen media (in terms of duration of use, device use and content) are insufficiently considered in the law. For example, it is urgently recommended to correct the current age rating of digital applications "from 0 years" to "from 3 years". Scientific process support through developmental neurological and psychological research is lacking.
Gender and PIU
Gender predicted PIU. More girls than boys were above the cut-off for PIU (female = 49.38%, male = 37.50%). Girls scored higher on PIU than boys.
On the one hand, findings consistent with the results of this study, with females showing significantly higher PIU than males, were found by Laconi et al. and Mihara et al. [78,79]. In addition, differences were also found between males and females in terms of school-or work-related internet use, with female participants showing higher usage in this area [80].
On the other hand, many studies [64,81] show contrary findings, with males having significantly higher PIU than females and comparably higher prevalence rates for boys regarding IA [82][83][84][85][86][87] as well as a higher propensity for IA [88]. Especially male adolescents with low life satisfaction and low academic performance are more at risk for PIU [49].
However, there are also studies that found no correlation between gender and the prevalence of PIU for either internet use disorder [17], IAT [89][90][91] or CIUS [92].
Given these ambiguous findings, it can be speculated that the female dominance in PIU scores found here may be due to the significantly increased social media use during the pandemic, as several previous studies note a significant preference for computer games among boys and a preference for SM use among girls [93][94][95][96][97][98]. Nevertheless, further investigations of moderating effects regarding gender differences in PIU are needed.
Age and PIU
There was a small difference in the PIU prevalence between the group of early adolescents (12-14 years old) with 42.06% and the group of late adolescents (15-17 years old) with 45.58%. Age significantly predicted PIU. Late adolescents, defined as age 15-17 years, showed higher PIU than early adolescents (12-14 years).
In contrast, the study by Schimmenti et al. [99] did not find age to be a predictor of PIU. To summarize the findings to date on this topic, there have been inconsistencies across the literature, as some studies claim to have found an effect of age on PIU [47,82] and others that there is no such effect [100]. Another interesting aspect is the different possible understandings of PIU. Perhaps a meta-analysis summarizing and comparing the findings and aligning them with their exact diagnostic criteria could shed some light on the different findings on age.
In order to precisely pinpoint which age group is most prone to PIU, further research should focus on a large sample across multiple age groups. This could be helpful in determining at what age preventive measures should be taken and when they are most effective. In general, the finding of increased PIU among older youths compared to younger ones remains plausible based on the overall higher time use of the internet with increasing age, as well as a better availability of media devices and less regimentation by parents.
Digital Media Usage Time during COVID-19 Pandemic
PIU was found in 53.27% of those who had a higher digital media usage time during the pandemic, but only in 29.81% of those who used digital media less during the pandemic than before. Digital media usage time serves as a predictor of PIU. Adolescents with a longer digital media usage time during the pandemic scored higher on PIU than adolescents with a shorter digital media usage time during the pandemic.
This finding replicates prior work by Schimmenti et al. [99], who likewise found time spent online to be a predictor of PIU. Time of use and PIU are significantly related, but this cross-sectional study cannot clarify which is the cause and which is the effect. Nevertheless, this finding may help in both prevention and treatment of PIU. Furthermore, Lai et al. [82] confirmed that the amount of time spent online is a risk factor for IA, going so far as to say that one additional hour spent online already increases addiction or problematic behavior. Independent of the pandemic and specific to gaming, Gentile et al. showed in a longitudinal design that more time spent gaming is a significant predictor of a subsequent gaming disorder (GD) [101]. In contrast, Yildiz Durak found no significant correlation between duration of social media usage and problematic social media usage [102]. However, it should be noted that this study refers specifically to problematic social media usage and not to PIU in general, which could explain the different results.
In the face of the pandemic, these results concerning digital media usage time become relevant due to the increasing amount of time children and adolescents spend online, with significantly higher rates of dependency following [103]. Related to the higher digital media usage time during the COVID-19 pandemic found in the study presented here, Eales et al. found a significant increase in screen media use and problematic media use among children in the United States during the COVID-19 pandemic [104]. Drouin et al. noted an increase of SM usage during COVID-19 [105]. The increase in the time adolescents spend using games and SM increases the risk of problematic patterns of use [106]. It would be of great interest to undertake further research on prevention measures, e.g., time limitation.
Intensity of Negative Emotions during the COVID-19 Pandemic
As a transdiagnostic construct, emotional dysregulation (ED) encompasses the inability to regulate the intensity and quality of emotions. Regulation of one's own emotions is important in eliciting adequate emotional responses, dealing with excitability, mood instability and emotional over-reactivity, as well as returning to an emotional baseline [31]. A total of 66.67% of the participants who reported a higher intensity of negative emotions during the COVID-19 pandemic (if there was no possibility to use digital media) reached the criteria for PIU, but only 39.38% of those reporting a lower intensity of negative emotions during the COVID-19 pandemic did. This increased intensity of negative emotions during the COVID-19 pandemic predicted PIU: adolescents with a higher intensity of negative emotions during the COVID-19 pandemic scored higher on PIU than adolescents with a lower intensity of negative emotions during the COVID-19 pandemic. Therefore, one might conclude that the internet is used to regulate negative emotions and/or that PIU promotes ED, such as difficulties in recognizing emotions.
Morahan-Martin et al. found that internet utilization increases when people are feeling lonely [42]. Several studies found associations between PIU, anxiety, psychological distress and depression during COVID-19 [45,77,107]. Adolescents with higher levels of anxiety are more likely to increase their internet use [105]. Equally, Mamun et al. found an association between PIU and loneliness as well as psychological distress [108]. Social distancing and greater anxiety due to the COVID-19 pandemic led to an increased technology and social media use among children as well as parents [105]. Moreover, stronger emotional distress [109] and negative affectivity [99] were found to be predictors for PIU. Especially children with emotion dysregulation are at risk of problematic technology use, and in turn, PIU may lead to emotion dysregulation [31]. The correlation between emotion dysregulation and internet addiction has been identified several times across the literature [110]. Adolescents with PIU have more difficulties in identifying and describing emotions, understanding emotional reactions and controlling spontaneous behavior when negative emotional experiences occur [22].
This may result in a vicious cycle, making both the PIU and the emotion dysregulation more severe. Paulus et al. also describe several studies that identify different aspects of emotion dysregulation as predictors of gaming disorder across different age groups [32]; they outline poor impulse control and a lack of social skills among those predictors. Other studies suggest PIU being responsible for emotion dysregulation rather than vice versa [111]. They also specify that PIU only affects the more complex forms of emotion dysregulation, such as pursuing life goals, rather than impulse control. These findings about emotion dysregulation and PIU call for a closer investigation of these variables in the period of the current pandemic, as this is known to affect both variables independently [2,28].
The pandemic has led to significant stress and an increase in emotional disorders as well as an increase in PIU. PIU is closely related to emotion regulation strategies: those who reported having more negative emotions when they cannot use digital media during the pandemic also had significantly higher levels of PIU.
Limitations
A relevant issue is the range of possible measurement instruments for PIU. In this study, only a short version of the CIUS (SCIUS [54]), which consists of 5 of the original 14 items, was used, whereas other studies applied, for example, different CIUS versions or the IAT [4]. The use of varying instruments to measure the same concept (PIU) could affect the results, which may be a possible explanation for the wide range of prevalence rates of PIU found across different studies. Second, this study only collected data from 12- to 17-year-old adolescents. However, as far as the relationship between age and PIU is concerned, there are still inconsistencies in research. To gain a better overview of which age groups are particularly affected by PIU, it would be necessary to conduct a study with a large sample across multiple age groups. Third, the data collected in this study are cross-sectional; hence the study design precludes conclusions about the extent to which the pandemic itself impacted PIU for the study sample. In addition, this study examined PIU in general. To gain a more in-depth insight into the relationship between gender and PIU, a more detailed survey on sub-areas of PIU (specific PIU) such as problematic gaming or problematic social media use would be useful. Furthermore, the SCIUS is a self-report questionnaire. Therefore, response biases such as wrong answers, socially desirable answers or under- and over-statements cannot be ruled out. As some questionnaires were also filled out by parents, exaggerations would also be possible here, since the behavior could be ubiquitous for the parents in everyday life and associated with negative feelings.
Moreover, although we searched thoroughly, our overview of studies on PIU prevalence rates (pre- vs. during the pandemic) does not claim to be exhaustive.
Outlook
Prevention at all three levels of preventive intervention (universal, selective and indicated prevention [112]) should be reinforced to reduce PIU, including school-based preventive interventions. Regarding future prevention, limiting the time spent on internet activities could be a promising approach. Children prone to difficulties in dealing with negative emotions should be restricted in their internet use or monitored more closely. At the same time, more adequate coping and action alternatives should be offered that both act as an adaptive strategy for dealing with negative emotions as well as contribute to the experience of positive emotions [32].
Specific, pronounced internet-related problems or disorders such as gaming disorder or social network disorder, excessive internet shopping or internet pornography should be treated as well. These treatments could include focusing on a variety of non-screen-time activities, as the lack of face-to-face activities offered leads to an increase in internet usage [103]. Measures can come from the adolescents themselves, their parents, or their schools [113]. Adolescents can make sure to have a structured routine or learn about emotion regulation strategies on their own. Parents could reduce general access to the internet, especially for younger children, or restrict certain websites to protect their children from harmful information. Schools that nowadays teach online could try to reduce screen time by assigning offline homework and by including mental health education in their curricula.
The effects of the digitalization of society on child development have not yet been investigated.
Conclusions
The present study found a very high prevalence of 43.69% of PIU among 12- to 17-year-olds for the period after the third wave of the COVID-19 pandemic, which has increased significantly compared to pre-pandemic prevalence rates. Besides gender and age, pandemic-associated time of digital media use and emotion regulation were found to have an impact on PIU: the time criterion and emotional dysregulation could be assumed to be risk factors for the development of PIU.
What is needed is a consistent and competent education of parents, children and young people by medical, psychological and psychotherapeutic professionals about the health consequences of too-early and too-excessive media consumption. A reflective, critical, self-determined and meaningful use of electronic/digital media (devices) and the content accessible through those devices regarding media literacy (and not just media competence oriented towards feasibility) is the desirable and primary goal. | 2022-11-12T06:18:16.265Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "c5a4088f043938b842d4ffbdd5803932f2fb810c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2c2eac90d855fce4dc66aee938decb9c87045a36",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85519847 | pes2o/s2orc | v3-fos-license | Z/2-equivariant and R-motivic stable stems
We establish an isomorphism between the stable homotopy groups of the 2-completed motivic sphere spectrum over the real numbers and the corresponding stable homotopy groups of the 2-completed Z/2-equivariant sphere spectrum, in a certain range of dimensions.
Introduction
This paper is a sequel to [3], where we computed some of the stable homotopy groups of the 2-completed motivic sphere spectrum over the ground field R. Here we explain that in a certain range these groups agree with the analogous Z/2-equivariant (but non-motivic) stable homotopy groups.
There is an equivariant realization functor from R-motivic stable homotopy theory to Z/2-equivariant homotopy theory, induced by assigning to every scheme X over R the associated analytic space X(C) with complex conjugation [9, Section 3.3]. This induces a map

π̂^R_{*,*} → π̂^{Z/2}_{*,*}    (1.1)

of bigraded rings, where the domain is the stable homotopy ring of the 2-completed R-motivic sphere spectrum, and the target is the stable homotopy ring of the 2-completed Z/2-equivariant sphere spectrum. Each group π̂^{Z/2}_{s,w} is finitely generated, so the 2-completion on the right is very mild. The reader should beware that the stable homotopy groups of the 2-completed motivic sphere are not necessarily the same as the algebraic 2-completions of the stable homotopy groups of the uncompleted motivic sphere. One must account for η-completion as well [5, Theorem 1]. For the purposes of this paper, π̂^R_{*,*} and π̂^{Z/2}_{*,*} can be defined as the objects to which the R-motivic and Z/2-equivariant Adams spectral sequences converge, respectively.
The Z/2-equivariant stable homotopy groups were computed in a range by Araki and Iriye [1,6], although the method of computation and statements of results are difficult to navigate. A goal of the work begun in [3] is to better understand the Araki-Iriye results by lifting as much as possible back to R-motivic homotopy theory via the map (1.1). The present paper demonstrates that this is possible in a range.
1.2. Equivariant homotopy groups. Recall that R^{1,1} denotes the real line with the sign representation of Z/2, whereas R^{1,0} denotes the real line with the trivial representation. For p ≥ q one sets

R^{p,q} = (R^{1,0})^{⊕(p−q)} ⊕ (R^{1,1})^{⊕q},

and S^{p,q} is the one-point compactification of R^{p,q}. These are the bigraded Z/2-equivariant spheres, and we write π^{Z/2}_{p,q} for the Z/2-equivariant stable homotopy group [S^{p,q}, S^{0,0}]. These groups were computed by Araki-Iriye in the range p ≤ 13, although the calculations for p = 12 and p = 13 were announced without proof [1,6].
One way to understand the global structure of π^{Z/2}_{*,*} is to break the calculation into pieces as follows. The nth Z/2-equivariant Milnor-Witt stem is the collection of groups ⊕_p π^{Z/2}_{p+n,p}. The 0th Milnor-Witt stem is a subring of π^{Z/2}_{*,*}, and the nth Milnor-Witt stem is a module over this subring. Table 1 at the end of the article gives some partial results about Z/2-equivariant stable homotopy groups, arranged so that the groups in each row belong to a common Milnor-Witt stem.
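For readability, the sphere and Milnor-Witt stem conventions just described can be restated in LaTeX as follows; the shorthand Π^{Z/2}_n is ours and does not appear in the original text.

```latex
% LaTeX restatement of the conventions above; \Pi^{\mathbb{Z}/2}_n is our
% shorthand for the n-th Milnor--Witt stem, not notation from the text.
\[
  \mathbb{R}^{p,q} = (\mathbb{R}^{1,0})^{\oplus (p-q)} \oplus (\mathbb{R}^{1,1})^{\oplus q},
  \qquad
  \pi^{\mathbb{Z}/2}_{p,q} = [S^{p,q}, S^{0,0}],
\]
\[
  \Pi^{\mathbb{Z}/2}_{n} = \bigoplus_{p} \pi^{\mathbb{Z}/2}_{p+n,\,p}.
\]
```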
We will give a global picture of the current knowledge of Z/2-equivariant stable homotopy groups. One piece of the global structure relates to the fixed-point map φ : π^{Z/2}_{s,w} → π_{s−w} from the equivariant to the non-equivariant groups. This map is known to be split for s ≥ 2w [2, p. 284], and is an isomorphism for s < 0 [1, Proposition 7.0]. These splittings are represented by copies of π_{s−w} in Table 1.
The second piece of global structure consists of periodicity, for each fixed s, of the kernel of φ : π^{Z/2}_{s,*} → π_{s−*}. Note that when * > s this is π^{Z/2}_{s,*} itself, whereas when s ≥ 2w it is a summand (by the preceding paragraph). There are two difficulties with the periodicity phenomenon. First, the orders of the periodicities and the values of the periodic groups are rather complicated. See Table 2 of [6] for a complete description in the range s ≤ 13, and beware that the indexing in that table differs from ours: the correspondence is given by the equations s = p + q and w = p. Second, there are exceptions to the periodicity in the range 2w ≥ s ≥ w − 1 [1, Proposition 4.8]. These exceptions are shown in red in Table 1. Note, however, that some of the groups in the range 2w ≥ s ≥ w − 1 actually do assume the periodic values.
The groups π^{Z/2}_{p,0} and π^{Z/2}_{p,1} are also computed in [10] for p ≤ 13, using the equivariant Adams spectral sequence based on Borel cohomology.
1.3. Motivic homotopy groups. The motivic setup [9] is similar to the equivariant setup. Now S^{1,0} is the simplicial circle, S^{1,1} is the scheme A^1 − 0, and S^{p,q} is the appropriate smash product of copies of S^{1,0} and S^{1,1}. We use the same notation S^{p,q} for motivic spheres and equivariant spheres. Equivariant realization sends one to the other, so this abuse of notation generally does not lead to confusion.
We write π^R_{p,q} for the R-motivic stable homotopy group [S^{p,q}, S^{0,0}]. The nth R-motivic Milnor-Witt stem is the collection of groups ⊕_p π^R_{p+n,p}. As in the equivariant case, the 0th Milnor-Witt stem is a subring, and the nth Milnor-Witt stem is a module over the 0th Milnor-Witt stem. Morel's connectivity theorem [8] shows that the negative Milnor-Witt stems are zero. Moreover, Morel has calculated the 0th Milnor-Witt stem in terms of Milnor-Witt K-theory [7, Section 6].
Morel's calculation gives an explicit description of π^R_{−1,−1}, but it turns out to be a complicated uncountable group. In order to carry out further calculations, we find it convenient to work with the stable homotopy groups of the 2-completed R-motivic sphere. One could complete at odd primes as well, but we do not address that here.
We will now set aside the R-motivic stable homotopy ring π^R_{*,*}, and instead work with the stable homotopy ring π̂^R_{*,*} of the 2-completed R-motivic sphere. This ring splits into Milnor-Witt stems as before. The 2-complete negative Milnor-Witt stems are still zero, and the 2-complete 0th Milnor-Witt stem can be easily described with generators and relations. Moreover, the first, second, and third Milnor-Witt stems have been completely described [3]. The authors have preliminary data on the nth Milnor-Witt stems for n ≤ 15; these results will appear in a future article.
1.4. The comparison. The map (1.1) is not an isomorphism in general. We know that the negative R-motivic Milnor-Witt stems vanish, whereas Table 1 shows that in the Z/2-equivariant context the negative Milnor-Witt stems are non-trivial. In the 0th Milnor-Witt stems, the map (1.1) is an isomorphism when p ≤ 4 but not in general [1, Theorem 12.4(iii)]. Likewise, the computations of [3] show that π̂^R_{*,*} vanishes in the first Milnor-Witt stem for weights larger than 2, whereas the Z/2-equivariant analogue of this is false.
Nevertheless, we find that the map (1.1) is an isomorphism in a certain range. The following is the main result of the paper.
Theorem 1.5. The map π̂^R_{s,w} → π̂^{Z/2}_{s,w} is an isomorphism in the range s ≥ 3w − 5 or s ≤ −1.
In Table 1 the range from the above theorem is shaded. All of the groups in that region coincide, up to 2-completion, with their 2-completed R-motivic analogues.

Example 1.6. We computed in [3] that π̂^R_{7,4} contains an element of order 32. Theorem 1.5 implies that π̂^{Z/2}_{7,4} also contains an element of order 32. This is somewhat surprising because the classical image of J in the 7-stem has order 16. In fact, this phenomenon is already apparent in the results of Araki and Iriye [1]. This observation calls strongly for a more careful study of the motivic and equivariant images of J.
We note two immediate consequences of Theorem 1.5. First, consider the map π̂^R_{s,w} → π̂_{s−w} induced by taking fixed points of equivariant realization. Theorem 1.5 implies that this map is an isomorphism in the range s ≤ −1 and a split surjection for s ≥ max{3w − 5, 2w}, based on the analogous facts for φ : π^{Z/2}_{s,w} → π_{s−w}. Secondly, the known periodicity phenomena in the groups π^{Z/2}_{s,*} can now be transplanted into the R-motivic context, as in Corollary 1.7.

Corollary 1.7. For fixed s in the range s ≥ max{3w − 5, 2w}, the complementary summands of π̂_{s−w} in π̂^R_{s,w} are periodic in w.

We do not give the periods in Corollary 1.7, but specific formulas for these are known from the equivariant context. Corollary 1.7 describes a qualitative property of R-motivic stable homotopy groups that deserves further study and is related to τ^{2^n}-periodic families in the R-motivic Adams spectral sequence (see [3] for an introduction to this basic phenomenon). We expect to return to the topic of motivic periodicity in future work.
The proof of Theorem 1.5 is straightforward. Equivariant realization induces a map from the R-motivic Adams spectral sequence to the Z/2-equivariant Adams spectral sequence. The R-motivic and Z/2-equivariant Steenrod algebras agree in a range of dimensions. This gives an isomorphism of cobar complexes in a range, which shows that the R-motivic and Z/2-equivariant Ext groups agree in a range. In other words, the Z/2-equivariant and R-motivic Adams E_2-pages agree in a range. Finally, this induces an isomorphism of homotopy groups in a range. The only complications arise as matters of bookkeeping.
1.8. Notation. For the reader's convenience, we record here notation used in the article.
• M^R_2 is the R-motivic homology of a point with F_2 coefficients.
• A^R is the dual R-motivic Steenrod algebra. We grade elements in the form (t, w), where t is the internal Steenrod degree and w is the motivic weight.
• Ext_R is the cohomology of the R-motivic Steenrod algebra. We grade elements in the form (s, f, w), where s = t − f is the stem, f is the Adams filtration, and w is the motivic weight.
• Ext_{Z/2} is the cohomology of the Z/2-equivariant Steenrod algebra. We grade elements in the form (s, f, w), where s = t − f is the stem, f is the Adams filtration, and w is the equivariant weight.
• π̂^R_{*,*} is the stable homotopy ring of the 2-completed R-motivic sphere. We grade elements in the form (s, w), where s is the stem and w is the motivic weight.
• π̂^{Z/2}_{*,*} is the stable homotopy ring of the 2-completed Z/2-equivariant sphere. We grade elements in the form (s, w), where s is the stem and w is the equivariant weight.

For the sake of tradition, we refer to A^R and A^{Z/2} as Steenrod algebras. More precisely, … is a non-trivial A^{Z/2}-module.
When we build the cobar complex, we will use the augmentation ideal Ā^R of A^R, i.e., the kernel of the augmentation map A^R → M^R_2. Observe that Ā^R is also free as a left M^R_2-module, with the same basis as for A^R except that the monomial 1 is excluded.
We will now recall an explicit description of M^{Z/2}_2 [4, Proposition 6.2]. It contains M^R_2 as a subring, but it also contains a "dual copy" in opposing dimensions. Figure 2 gives a complete description of M^{Z/2}_2: every dot denotes a copy of F_2, vertical lines represent multiplication by τ, and diagonal lines represent multiplication by ρ. In bidegree (t, w), M^{Z/2}_2 consists of a copy of F_2 when: (1) t ≥ 0 and w ≥ t + 2, or (2) t ≤ 0 and w ≤ t. The element in bidegree (0, 2) is called θ, and the other elements of the "dual copy" are typically named θ/(τ^k ρ^l) for k ≥ 0 and l ≥ 0. This naming convention respects the product structure, although one must remember that neither τ nor ρ is actually invertible. Any two elements of the form θ/(τ^k ρ^l) multiply to zero. These details about the product structure will not be needed in our analysis.
The dual Z/2-equivariant Steenrod algebra A^{Z/2} has the same description as the R-motivic one, with coefficients M^{Z/2}_2 in place of M^R_2.

Lemma 2.1. If ε_0 = 0, then the bidegree (t, w) of the monomial τ^ε ξ^n satisfies the inequality t ≤ 3w.

Proof. The bidegree of each ξ_i satisfies the inequality t ≤ 3w. Similarly, if i ≥ 1, then the bidegree of τ_i also satisfies t ≤ 3w. Therefore the bidegree of τ^ε ξ^n satisfies the inequality t ≤ 3w if ε_0 = 0.
Remark 2.2. In fact, one can make a much stronger statement about the bidegrees of the elements τ^ε ξ^n. In general, the bidegree of such an element satisfies the inequality t ≥ 2^{c+1} − c − 2, where c = t − 2w is the "Chow degree". However, this stronger inequality does not end up yielding a stronger result about stable homotopy groups. Likewise, the result in the following lemma is non-optimal, but the slope of 3 is chosen precisely because it interacts well with the bound from the previous lemma.

Lemma 2.3. The bidegree (t, w) of every element of the dual copy of M^{Z/2}_2, i.e., of every element of the form θ/(ρ^k τ^l), satisfies the inequality t ≤ 3w − 6.

Proof. In Figure 2, the elements of the form θ/(ρ^k τ^l) all lie on or above the line t = 3w − 6.
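This bound is easy to check mechanically. The following Python sketch is ours; the bidegree conventions are read off from the description of Figure 2 above (θ sits in bidegree (0, 2), division by τ adds (0, 1), and division by ρ adds (1, 1)):

```python
# Bidegree (t, w) of the dual-copy element theta/(rho^k tau^l) of M^{Z/2}_2.
def dual_copy_bidegree(k: int, l: int) -> tuple:
    t = k          # each division by rho adds (1, 1)
    w = 2 + k + l  # theta sits at (0, 2); each division by tau adds (0, 1)
    return (t, w)

# Every such element satisfies t <= 3w - 6, i.e., lies on or above
# the line t = 3w - 6 in the (t, w)-chart, as the lemma asserts.
for k in range(50):
    for l in range(50):
        t, w = dual_copy_bidegree(k, l)
        assert t <= 3 * w - 6

print(dual_copy_bidegree(0, 0))  # (0, 2): theta itself, exactly on the line
```

Equality holds only for θ itself, which is why the line t = 3w − 6 is sharp.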
Cobar complexes and Ext groups
Next we proceed to the cobar complexes of A^R and A^{Z/2}, respectively. These cobar complexes are differential graded algebras whose homologies give the R-motivic and Z/2-equivariant Ext groups. We will obtain an isomorphism of Ext groups in a range by establishing an isomorphism of cobar complexes in a range.
Let C^*_R and C^*_{Z/2} be the R-motivic and Z/2-equivariant cobar complexes. By definition, C^f_R is equal to the f-fold tensor product Ā^R ⊗_{M^R_2} ··· ⊗_{M^R_2} Ā^R.

Lemma 3.2. The map C^f_R → C^f_{Z/2} is:
• an injection in all degrees.
• an isomorphism in degrees satisfying t − f ≥ 3w − 5 or t ≤ f − 1.
Remark 3.3. The inequalities in Lemma 3.2 are sharp in the following sense. The element θ[τ_0τ_1|τ_0τ_1|···|τ_0τ_1] of C^f_{Z/2} lies on the line t − f = 3w − 6, and it does not belong to the image of C^f_R. Also, the element θ[τ_0|τ_0|···|τ_0] of C^f_{Z/2} lies on the line t = f, and it does not belong to the image of C^f_R.

The following lemma from homological algebra will let us deduce an Ext isomorphism from the cobar isomorphism of Lemma 3.2. The two parts are dual, and the proofs are simple diagram chases. We will use the grading (s, f, w) for Ext groups, where s is the stem, f is the Adams filtration, and w is the weight. An element of degree (s, f, w) occurs at Cartesian coordinates (s, f) in a standard Adams chart. Recall that s = t − f, where t is the internal Steenrod degree.
Proposition 3.5. In degree (s, f, w), the map Ext_R → Ext_{Z/2} is an isomorphism if s ≥ 3w − 5 or s ≤ −2, and a surjection if s = −1.

Proof. The claims follow immediately from Lemmas 3.2 and 3.4 because Ext can be computed as the homology of the cobar construction.
Proposition 3.6. In degree (s, f, w), the map Ext_R → Ext_{Z/2} is an isomorphism if s ≤ −1.
Proof. Lemmas 3.2 and 3.4 imply that the map is an isomorphism if s ≤ −2 and is a surjection if s = −1. In order to obtain the isomorphism for s = −1, we need to investigate the cobar complex a little further.
In degrees satisfying s = 0, i.e., t = f, the cokernel of the map C^*_R → C^*_{Z/2} of cobar complexes consists of elements of the form θ/τ^a[τ_0|τ_0|···|τ_0]. All of these elements are cycles in the Z/2-equivariant cobar complex. A diagram chase now shows that Ext_R → Ext_{Z/2} is an isomorphism if s = −1.
The following finiteness condition for Ext_R implies that there are only finitely many Adams differentials in any given degree. We will need this fact in Section 4 when we analyze the Adams spectral sequence.
Lemma 3.7. In each degree (s, f, w), the group Ext_R^{(s,f,w)} is a finite-dimensional F_2-vector space.

Proof. As described in [3, Section 3], there is a ρ-Bockstein spectral sequence converging to Ext_R. It suffices to show that the E_1-page of this spectral sequence is finite-dimensional over F_2 in each tridegree. In degree (s, f, w), this E_1-page consists of elements of the form ρ^k x, where k ≥ 0 and x belongs to the C-motivic Ext group in degree (s + k, f, w + k).

The C-motivic Ext groups have a vanishing plane, as described in [3, Lemma 2.2]. In this case, the vanishing plane implies that k ≤ s + f − 2w if x is non-zero. Since k is non-negative, this means there are only finitely many values of k that contribute to the E_1-page of our spectral sequence in degree (s, f, w).

Finally, the C-motivic Ext groups are degreewise finite-dimensional. This follows from the fact that the E_1-page of the motivic May spectral sequence is degreewise finite-dimensional.
Homotopy groups
We now come to our main results comparing R-motivic and Z/2-equivariant homotopy groups.

Theorem 4.1. The map π̂^R_{s,w} → π̂^{Z/2}_{s,w} is:
• an isomorphism if s ≥ 3w − 5.
Proof. Proposition 3.5 gives an isomorphism (in a range) between the E_2-pages of the R-motivic and Z/2-equivariant Adams spectral sequences. Inductively, Lemma 3.4 gives isomorphisms (in a range) between the E_r-pages of the spectral sequences for all r. The finiteness condition of Lemma 3.7 guarantees that for each degree (s, f, w), there exists an r such that the E_∞-page is isomorphic to the E_r-page. Therefore, we obtain an isomorphism of E_∞-pages in a range.
The E ∞ -pages are associated graded objects of the stable homotopy groups. This implies that the stable homotopy groups are isomorphic as well.
The same style of argument applies to the claim about injections.
Theorem 4.2. The map π̂^R_{s,w} → π̂^{Z/2}_{s,w} is an isomorphism if s ≤ −1.

Proof. The argument from Theorem 4.1 implies that the map is an isomorphism for s ≤ −2 and a surjection for s = −1. In order to obtain the isomorphism for s = −1, we have to investigate the Adams E_2-pages slightly further.
Recall from the proof of Proposition 3.6 that in degrees satisfying s = 0, the cokernel of the map C^*_R → C^*_{Z/2} of cobar complexes consists of elements of the form θ/τ^k[τ_0|τ_0|···|τ_0]. Therefore, in degrees satisfying s = 0, the cokernel of the map Ext_R → Ext_{Z/2} consists of elements of the form θ/τ^k h_0^i. These elements are all permanent cycles in the Z/2-equivariant Adams spectral sequence. In other words, there is a one-to-one correspondence between R-motivic and Z/2-equivariant Adams differentials from the 0-stem to the (−1)-stem.
A diagram chase now establishes that the R-motivic and Z/2-equivariant E_∞-pages are isomorphic for s = −1. This passes to an isomorphism of stable homotopy groups.
We restate Theorem 4.1 in an equivalent form that is useful from the Milnor-Witt degree perspective.

Corollary 4.3. In Milnor-Witt degree n, the map π̂^R_{s,w} → π̂^{Z/2}_{s,w} is an isomorphism if s ≤ (3n + 5)/2.

Proof. This is a straightforward algebraic rearrangement of Theorem 4.1, using that n = s − w.

Table 1 summarizes some of the calculations of Araki and Iriye [1,6]. The indices across the top indicate the stem s, while the indices at the left indicate the Milnor-Witt degree s − w. The R-motivic and Z/2-equivariant stable homotopy groups are isomorphic in the shaded region, as described in Theorem 1.5.
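The rearrangement invoked in the proof of the corollary is the following one-line computation, spelled out here for convenience by substituting w = s − n into the bound of Theorem 4.1:

```latex
s \ge 3w - 5
  \iff s \ge 3(s - n) - 5
  \iff 2s \le 3n + 5
  \iff s \le \tfrac{3n+5}{2}.
```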
Equivariant stable homotopy groups
For compactness, we use the following notation to indicate abelian groups:
(1) ∞ = Z.
(2) n = Z/n.
(3) n · m = Z/n ⊕ Z/m.
(4) n^k = (Z/n)^k.

The symbols π_k indicate that the classical stable homotopy group π_k splits via the fixed-point map. Table 1 is a companion to [6, Table 2], which gives the values of the periodic summands. The red symbols in Table 1 are exceptions to the periodicity.

Table 1. Some values of π^{Z/2}_{s,w} | 2016-03-30T18:42:26.000Z | 2016-03-30T00:00:00.000 | {
"year": 2016,
"sha1": "73a082f5de4c1ec7841ec46cdae41a7969a5d1d4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "73a082f5de4c1ec7841ec46cdae41a7969a5d1d4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
54210737 | pes2o/s2orc | v3-fos-license | Large Wood Volume and Longitudinal Distribution in Channel Segments Draining Catchments with Different Land Use, Chile
The storage, longitudinal distribution and recruitment processes of in-stream large wood (LW) were studied by comparing channel segments draining four Chilean mountain catchments with different land use. The segments were divided into relatively uniform reaches of different lengths and surveyed for LW (piece dimensions, position in the channel, orientation to flow and aggregation) and stream morphology (slope, bankfull channel width and depth) characterizations. LW volumes stored in the Pichun, El Toro and Vuelta de Zorra study channels are within the range reported in international studies of streams draining catchments with similar forest covers. However, the 1057 m3/ha of LW stored in Tres Arroyos is extremely high, of the same order of magnitude as reports from old-growth forests in the Pacific Northwest of the USA. The size of the area that can potentially provide wood to streams depends on the wood supply mechanisms within any catchment, and the LW stored in the study segments increases as the size of this area increases. This study aims to contribute to the knowledge on the effects of LW in mountain channels, gathering new information and expanding investigations developed in Chile since 2008. This research was carried out within the framework of Project FONDECYT 11106209.
Introduction
Large wood (LW, wood pieces with diameter ≥ 0.1 m and length ≥ 1 m) is an important component of fluvial systems. When deposited within the channel bed and on floodplains and islands, wood pieces exert significant morphologic and ecologic influences on riverine ecosystems [1]. The amount and characteristics of LW, and consequently their eco-morphologic role in rivers, depend on the location, magnitude and timing of log recruitment, and on log transport through reaches. Wood budgets at the reach scale depend on the balance between wood supply from the fall of trees from the riverine vegetation, the input of wood pieces recruited from landslides and tributary channels, in-situ decay, and transport during floods [2]. Wood budgets and dominant wood recruitment processes are strongly dependent on the history of land use within the catchments, although the occurrence of episodic disturbances such as wildfires, pest outbreaks or volcanic eruptions can dramatically change the conditions and relative contributions of the areas that supply wood into the streams [3].
Longitudinal transfer of LW along a stream is influenced by the characteristics of the wood pieces (diameter, length and wood density), the morphology of the channel (bankfull channel dimensions, slope, bed forms and roughness) and flow conditions [4] and [5]. The amount and longitudinal distribution of wood have been studied in many rivers of the world, and findings demonstrate that wood distribution depends on land use at the basin scale, size of the basin, fire disturbance, flow regime, riparian vegetation and morphology of the river [4]-[6].
LW has been investigated in Chile since 2005 [7], with further developments reported by [8]-[12]. This study adds to existing information on LW volume and wood piece characteristics in Chilean streams: four stream channels draining mountain catchments with different land use histories were surveyed for this purpose. In addition, LW longitudinal distribution patterns along each channel were analyzed, as well as the relation between stored wood volumes and the dimensions of the catchments, the supply mechanisms, and the surface of the areas that can potentially supply wood to streams. Existing and new information is gathered to expand research that has been under way since 2008 in four catchments located in both the Coastal and Andes mountain ranges, southern Chile.
Study Sites
The study was carried out in four catchments located in the Coastal and Andes mountain ranges of southern Chile, namely Pichún, El Toro, Tres Arroyos and Vuelta de Zorra (Figure 1).
Pichún (37˚30'12''S; 72˚45'54''W) is located in the Coastal mountain range. 84% of the 431 ha catchment area is covered by Eucalyptus globulus plantations established during 2005-2008, and the remaining 16% corresponds to forest roads and riparian vegetation. The original native forests of the area were almost completely clear cut at the end of the 1800's and transformed into crop lands, and from the mid-1950's the area was re-afforested with Pinus radiata plantations. These plantations were managed with rotations of ~25 years, and after two consecutive pine rotations this species was replaced by Eucalyptus spp. The dominant soil type is a clayey to loamy Luvisol, and homogeneous schist forms the geological basement and parental material, which is overlaid by a loamy-sandy saprolite. The stream has a pluvial regime associated with mean annual rainfalls of 1150 mm concentrated between April and September. Large wood is supplied to the main channel only from the riparian area (some native but mainly P. radiata trees from the previous rotations) through natural mortality, windthrow or bank erosion. Further details on this site can be found in [10] and [11].
El Toro (38˚09'11''S; 71˚48'12''W) is located in the Andes range. The entire catchment area is part of the Malleco Forest Reserve, managed and protected by the National Forest Service (CONAF). The 1750 ha catchment area was originally covered by a Coigüe-Raulí-Tepa (Nothofagus dombeyi, N. alpina and Laureliopsis philippiana) evergreen forest type. During the 2001-2002 fire season, catastrophic fires burned ~20000 ha of temperate forests in the Andean areas of the Araucania region of Chile, and nearly 98% of the El Toro forest cover was severely affected. The vegetation in these burnt sites is regenerating naturally. Soils derive from volcanic ashes, and the geology is characterized mainly by basaltic and andesite rocks, with some granitic rocks also present. The stream has a mainly pluvial regime dominated by annual rainfalls exceeding 2500-3000 mm at lower elevations and 4000-5000 mm, with snowfall, at higher altitudes. In-stream large wood is supplied by the fall of trees associated with natural mortality, windthrow or bank erosion and, in recent years, by the fall of burnt dead trees. Further information on this study site is provided in [8].
Tres Arroyos (38˚27'57''S; 71˚33'44''W) is located on the foothills of the Andes range. 64% of the 907 ha catchment area is covered with old-growth native forests belonging to the Araucaria (Araucaria araucana) and Roble-Raulí-Coigüe (N. obliqua, N. alpina and N. dombeyi) forest types, 6.5% is exotic conifer plantations established in the 1970's, 23% is herbs and shrubs near the tree line, and 6.5% is volcanic ashes. Soils are sandy-rich, and the geology of the area is characterized by a Miocenic formation with pyroclastic rocks such as andesite breccias, tuffs and ignimbrites, lavas and sedimentary layers. The stream has a mainly pluvial regime dominated by annual rainfalls exceeding 2500 mm, with snowfall occasionally occurring. Forests of the area were partially eliminated during 1930-1940 by big fires set to open land for grasslands, but the upper 600 ha of the catchment were not affected by these fires. The elimination of the forest cover, the steepness of the terrain and the climate characteristics triggered massive landslides and debris flows, which forced CONAF to initiate re-afforestation with exotic conifers in the early 1970's. The catchment area is part of the Malalcahuello-Nalcas Forest Reserve, managed and protected by CONAF. In-stream large wood is supplied by the fall of trees associated with natural mortality, windthrow, disease or bank erosion, especially in the upper part of the study segments where the main stream flows through old native forests with massive N. dombeyi trees. However, [7] report peaks of LW in the main channel associated with steep debris-flow tributaries and landslides. More information on this study site is provided in [7] and [8].
The fourth study catchment is Vuelta de Zorra (39˚58'12''S; 73˚34'13''W), located in the Coastal mountain range. 75% of the 587 ha study catchment area is covered by a 150-200 year old second-growth evergreen native rainforest, 24% by E. nitens plantations, and the remainder by natural regeneration of native shrubs. Soils are clay-rich on the eastern side of the main channel and sandy-rich on the west, and the geology of the Coastal mountain range has a basement of Paleozoic metamorphic rocks and minor Cretaceous granitoids, one of which appears in the study area. The stream has a pluvial regime dominated by annual rainfalls exceeding 2300 mm. Forests of the area were affected by the wood industry developed since the 1800's, associated with the exploitation of Fitzroya cupressoides and other native tree species, and then by a project begun in 2000 intended to replace native forests with Eucalyptus spp. plantations. This project did not last, and since mid-2003 the catchment has been part of the Valdivian Coastal Reserve owned by The Nature Conservancy. In-stream large wood is supplied to the main channel only by the fall of trees associated with natural mortality, windthrow, disease or bank erosion. Additional information on this study site can be found in [9]-[11].
Characterization of the Channel Segments
Between November 2008 and March 2009, several channel reaches of the Vuelta de Zorra, Tres Arroyos, El Toro and Pichun third-order stream segments were surveyed for large wood and channel morphological characterization. Every reach was defined by uniformity of slope, channel width or LW abundance, as in [9].
Overall, the lengths of the study segments were 1004 m (12 reaches), 2188 m (17 reaches), 2070 m (22 reaches) and 1557 m (16 reaches) for the Pichún, El Toro, Tres Arroyos and Vuelta de Zorra streams, respectively. The study segments represented 31%, 28%, 39% and 45% of the total lengths of the Pichún, El Toro, Tres Arroyos and Vuelta de Zorra main streams.
A laser distance meter with an inclinometer was used to measure reach length and channel slope, and reach mean channel bankfull width (W_bf) and depth (H_bf) were calculated by averaging measurements from cross-sections less than 15 m apart. Unit stream power at bankfull conditions was calculated by means of a simplified version of the traditional equation, as suggested by [13], where ω is the unit stream power (W/m2), γ is the specific weight of water (N/m3), g is the acceleration due to gravity (m/s2), H_bf and W_bf are the mean reach bankfull depth and width (m), and S_c is the mean reach channel slope (m/m).
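A minimal sketch of a bankfull unit stream power computation follows. It assumes the simplified form ω = γ·H_bf·S_c·√(g·H_bf), i.e., boundary shear stress times a celerity-scale velocity; this particular expression is our assumption for illustration, not necessarily the exact formula of [13]:

```python
import math

GAMMA = 9810.0  # specific weight of water, N/m^3
G = 9.81        # gravitational acceleration, m/s^2

def unit_stream_power(h_bf: float, s_c: float) -> float:
    """Unit stream power (W/m^2) at bankfull, assuming
    omega = gamma * H_bf * S_c * sqrt(g * H_bf).
    h_bf: mean reach bankfull depth (m); s_c: channel slope (m/m)."""
    shear_stress = GAMMA * h_bf * s_c     # boundary shear stress, N/m^2
    velocity_scale = math.sqrt(G * h_bf)  # celerity-scale velocity, m/s
    return shear_stress * velocity_scale

# Hypothetical reach: 0.8 m bankfull depth, 5% slope
print(round(unit_stream_power(0.8, 0.05)))  # ~1099 W/m^2
```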
All individual or jam-forming wood pieces with diameter ≥ 0.1 m and length ≥ 1 m located within the bankfull channel were measured for length and mid-diameter. The volume of each wood element was calculated from its mid-diameter and length assuming a solid cylindrical shape; during the field campaigns the geometric dimensions of jams (length, width and height) were measured, and each wood piece was classified according to its type, position in the channel, source, orientation to flow and aggregation (single elements or jam-forming logs), following [7] and [9].
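The cylindrical assumption reduces each piece's volume to V = π (d/2)² L; a minimal sketch (function names are ours):

```python
import math

def piece_volume(mid_diameter_m: float, length_m: float) -> float:
    """Volume (m^3) of a large wood piece modelled as a solid cylinder
    of the measured mid-diameter and length."""
    radius = mid_diameter_m / 2.0
    return math.pi * radius**2 * length_m

def qualifies_as_lw(mid_diameter_m: float, length_m: float) -> bool:
    """Survey threshold for large wood: diameter >= 0.1 m and length >= 1 m."""
    return mid_diameter_m >= 0.1 and length_m >= 1.0

print(round(piece_volume(0.3, 4.0), 3))  # 0.283 m^3 for a 0.3 m x 4 m log
```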
Surface of Riparian Areas That Can Provide Large Wood to Streams
The surface of the riparian areas that can potentially provide wood to streams was calculated considering the large wood recruitment processes within each river segment.
At the Vuelta de Zorra, El Toro and Pichun channel segments, the main recruitment process is the fall of trees into the stream by natural causes (toppling due to mortality or windthrow, disease, or bank cutting). In the Vuelta de Zorra and El Toro channels, the recruitment area is limited on both sides of the channel by the height of the tallest trees (35 m for Vuelta de Zorra and 37 m for El Toro). In Pichun, this area is restricted by the width of the riparian strip (25 m), considering that the rest of this catchment is covered by plantation forests that are clear cut regularly at the end of every rotation. In Tres Arroyos, in addition to the fall of trees from a riparian area limited on both sides of the channel by the height of the tallest trees (37 m), LW is recruited through the transport of wood into the main channel by very active torrential tributaries. In this case, the surface of the recruitment areas of these tributaries (a limit of 37 m on both sides of the channel) is added to that of the main channel. In all cases, the area that can potentially provide wood to streams is expressed in hectares per 100 m of the study segment.
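A worked illustration of the buffer arithmetic just described (the helper below is ours; for Tres Arroyos the tributary recruitment areas would be added on top):

```python
def recruitment_area_ha_per_100m(buffer_width_m: float) -> float:
    """Riparian source area per 100 m of channel, with a wood-supply
    buffer of the given width on both sides of the stream.
    1 ha = 10,000 m^2."""
    return 2 * buffer_width_m * 100.0 / 10_000.0

# Buffer widths taken from the text: tallest-tree height or riparian strip width
print(recruitment_area_ha_per_100m(35))  # Vuelta de Zorra: 0.70 ha/100 m
print(recruitment_area_ha_per_100m(37))  # El Toro (and Tres Arroyos riparian part): 0.74
print(recruitment_area_ha_per_100m(25))  # Pichun: 0.50 ha/100 m
```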
Statistical Analyses
Regressions were used to examine the longitudinal distribution patterns of LW along the channels, using reach mean channel bankfull width and depth, slope and unit stream power as independent variables, and LW reach mean diameter, length, volume (m3 per hectare of bankfull channel) and piece abundance (number of pieces per hectare of bankfull channel) as dependent variables. The potential relationships of total segment LW volume (m3 per hectare of bankfull channel) with catchment size (ha) and with the surface (ha/100 m) that can potentially provide wood to streams were studied using linear regressions. The SAS® package was used, and regressions were considered statistically significant if P ≤ 0.05.
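The regressions were fitted in SAS; an equivalent ordinary least-squares sketch in Python, with illustrative (invented) reach data rather than the study's measurements:

```python
from scipy import stats

# Hypothetical reach data: mean bankfull width (m) vs LW volume (m3/ha)
width = [4.2, 5.1, 6.8, 7.4, 9.0, 10.3, 12.1]
lw_volume = [130.0, 95.0, 210.0, 60.0, 310.0, 150.0, 380.0]

fit = stats.linregress(width, lw_volume)
significant = fit.pvalue <= 0.05  # significance criterion used in the study
print(f"slope={fit.slope:.1f}, r2={fit.rvalue**2:.2f}, "
      f"p={fit.pvalue:.3f}, significant={significant}")
```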
Reach Channel Characteristics
The main characteristics of each stream reach (reach channel length, slope, bankfull depth (H_bf) and width (W_bf), and unit stream power) are provided in Table 1.
Abundance and Characteristics of In-Stream Large Wood
A total of 113, 776, 2636 and 487 large wood pieces were found in the Pichún, El Toro, Tres Arroyos and Vuelta de Zorra study segments, respectively. Considering the channel bankfull area as reference, the abundance of wood pieces (n˚/ha) is relatively similar for Pichún, El Toro and Vuelta de Zorra (235, 269 and 295 n˚/ha, respectively), but much higher in Tres Arroyos, with a total of 1310 n˚/ha. Reach large wood abundance is highly variable (Figure 2): it ranges from 0 to 440 n˚/ha in Pichun, between 82 and 741 n˚/ha at El Toro, from 444 to 2702 n˚/ha in Tres Arroyos, and between 82 and 914 n˚/ha in the Vuelta de Zorra study channel. As reported by [13] for 13 channels in the Italian Alps, large wood abundance can also be highly variable (from 200 to 2500 n˚/ha among reaches), although in that case wood pieces with diameters ≥ 5 cm and lengths ≥ 50 cm were measured. Reach-scale dimensions (mid-diameter and length) of the wood pieces are summarized in Figure 2. Mean (cross within the box) and median (horizontal line within the box) diameters are very similar for Pichun and Vuelta de Zorra, higher for El Toro and higher still for Tres Arroyos. El Toro and Tres Arroyos store the largest wood pieces, with diameters up to 1.3 m. Mean and median wood piece lengths are similar among the different segments, although some 28-m-long pieces were found lying in the Tres Arroyos channel.
The ratio of mean piece diameter to mean segment bankfull depth is approximately 0.23, 0.18, 0.36 and 0.2 for Pichún, El Toro, Tres Arroyos and Vuelta de Zorra, while the ratio of mean piece length to mean segment bankfull width is 0.8, 0.3, 0.4 and 0.4 for the same segments. According to [14]-[16], pieces tend to be stable when piece length is greater than bankfull width in smaller rivers, or when piece diameter is greater than bankfull depth. Given the values of these ratios for the four study segments, LW mobility is likely even under normal peak flows.
Almost all the wood pieces in El Toro and Tres Arroyos (100% and 99.7%, respectively) were logs; this percentage drops to 91% for Pichun and Vuelta de Zorra. In these last two segments, the remaining wood pieces are rootwads, boles with rootwads, full trees and branches.
At El Toro, Tres Arroyos and Vuelta de Zorra, the majority of the LW (94%, 84% and 75%, respectively) was found within the channel or at the bankfull level (Figure 3). Wood pieces on the stream margins (lying on the floodplains but with at least part of the element in contact with the bankfull channel) or spanning the channel are important in Pichun and Vuelta de Zorra, and log-steps were found in all the study segments: one log-step each at the Pichún and El Toro segments, and 78 and 14 at Tres Arroyos and Vuelta de Zorra, respectively. The majority of the LW pieces in the El Toro, Tres Arroyos and Vuelta de Zorra segments were classified as having been transported from upstream locations (see also Figure 3). This trend differs in Pichun, where most of the in-stream wood was classified as residue, recruited as trees falling from the margins by natural causes or bank erosion. Residues from forest activities are important in the Pichun and Vuelta de Zorra segments, reflecting the land use conditions of their catchments.
In the Pichun and Vuelta de Zorra segments, 41% of the wood pieces were found lying parallel to the stream flow. The majority of the LW at El Toro was oblique (42%) and at Tres Arroyos orthogonal (36%) to the flow. The orientation of logs with respect to stream flow is a consequence of the transport dynamics of the wood pieces: [17] and [18] report that wood pieces oriented at 45˚ and 90˚ to the flow are more mobile, while [19] indicates that fluvially deposited wood pieces tend to be oriented parallel to flow.
Most of the LW was found as single elements, with the rest lying in contact with at least one other wood piece, but the proportion differs among segments. Pichun features only a few accumulations, on average one logjam every 251 m, while the mean distance between logjams is very similar for El Toro and Vuelta de Zorra (58 and 63 m, respectively). At Tres Arroyos, one logjam occurs on average every 15 m; this short distance is likely associated with the very high volume of LW stored in this channel and with the dimensions of the wood elements recruited from the margins, which act as key pieces contributing to the creation of stable logjams. Considering logjams and log-steps together, a large wood structure influences the morphology of the Pichun, El Toro, Tres Arroyos and Vuelta de Zorra channels on average every 201, 61, 10 and 38 m, respectively.
Reach Volume and Longitudinal Distribution of In-Stream Large Wood
Considering the channel bankfull area as reference, LW volume is 56, 202, 1057 and 109 m3/ha for Pichun, El Toro, Tres Arroyos and Vuelta de Zorra, respectively (Table 2). The volume stored in Pichun is within the range of mean values from 4 to 127 m3/ha reported by [3] [20] and [21] for streams draining pine-forested catchments in New Zealand, Spain, Russia and the NW USA. LW volumes at El Toro and Vuelta de Zorra (202 and 109 m3/ha) are comparable with the range of 100 to 200 m3/ha reported by [22] for channels bordered by broadleaved mature forests. The 1057 m3/ha of LW stored in Tres Arroyos is extremely high. For this channel, [7] reported ~710 m3/ha of LW, but their data were obtained from a shorter segment and used a sample of wood pieces to calculate volume. The value reported in this research is among the highest in the international literature, and similar to reports from old-growth forests in the Pacific Northwest of the USA where, according to [22], wood storage of up to 1000 m3/ha can be reached.
LW volume in the different segments reflects the current and historical land uses within the catchments. The El Toro and Vuelta de Zorra catchments are covered with similar forest types, but the higher LW volume found in the channel of the former might indicate an increase in wood supply associated with the fall of fire-killed trees after the 2001-2002 wildfires. The lower volume at Pichun is reasonably due to the fact that the original forests were eliminated more than a century ago to create grass and crop lands and the area has been managed since the 1950's under a scheme of successive forest plantation rotations; the recruitment of large wood is therefore limited to a narrow fringe of riparian vegetation composed mainly of small native trees and P. radiata stems left behind from the previous final harvest. LW in Tres Arroyos is extremely high; the upper study segment flows through very old native forests with massive N. dombeyi trees, and downstream it receives pulses of LW from steep tributaries.
LW storage is extremely variable among reaches (see Table 2), which is consistent with other studies [13]. Reach LW volume varies between 0 and 130 m3/ha in Pichún, from 42 to 502 m3/ha at El Toro, from 418 to 2300 m3/ha in Tres Arroyos, and between 3 and 380 m3/ha in the Vuelta de Zorra channel.
The longitudinal distribution of reach LW volume is only explained (statistically significant, P = 0.02) for the Vuelta de Zorra channel, by differences in reach mean bankfull width, while the longitudinal distribution of reach LW abundance is not explained in any of the study segments by any of the independent variables used in the analyses (reach mean channel bankfull width and depth, slope and unit stream power). The longitudinal distribution of reach mean LW diameter is explained only in the El Toro channel, by differences in reach mean channel bankfull width and depth, slope and unit stream power (P ≤ 0.05). Finally, the longitudinal distribution of reach mean LW length is explained for Tres Arroyos and Vuelta de Zorra by differences in reach mean bankfull width (P ≤ 0.05), and additionally for the Tres Arroyos channel by differences in reach mean unit stream power (P ≤ 0.05). When examining the results of the regression analyses between LW reach mean diameter, length, volume and piece abundance as dependent variables and reach mean channel bankfull width and depth, slope and unit stream power, it is not straightforward to identify general longitudinal distribution patterns of LW along the channels. This, however, agrees with the findings of [13] from several channels in the Italian Alps.
Relationships of Total Segment LW Volume with Catchment Size and the Area That Can Provide Wood to Streams
The relationship between total segment LW storage (in m3/ha) and catchment size (ha) is presented in Figure 4.
There is a weak trend of increasing LW storage with increasing catchment size. Previous studies have shown the same trend, in particular the conceptual model proposed by [23], which predicts increasing LW storage with increasing drainage area in Japanese river systems, whereas others, such as [13] and [24] from Italian and North American streams, present a negative correlation. Segment large wood volume (in m3/100 m) is significantly correlated (P ≤ 0.05) with the size of the area (ha/100 m) that can potentially provide wood to streams (Figure 4). Large wood volume increases as the size of this area increases, and the dimension of this area is linked to the wood supply mechanisms within each catchment. Despite the limitations of this linear model, especially those associated with the reduced number of study catchments, it could be used as a first approach to estimate large wood loads in Chilean forested environments.
Conclusions
LW volumes stored in the Pichun, El Toro and Vuelta de Zorra study segments fall within the range reported in other studies of streams draining catchments with similar forest covers, but the 1057 m3/ha of LW stored in Tres Arroyos is extremely high and similar to reports from old-growth forests in the Pacific Northwest of the USA. The size of the area that can potentially provide wood to streams depends on the wood supply mechanisms within each catchment, which are closely associated with differences in the characteristics and history of land use changes. In particular, landslides and debris flows triggered by land use changes can expand the recruitment zone from the nearby riparian sector to the whole basin scale. The simple linear model correlating LW volume with the size of this area could be used as a first approach to estimate large wood loads in Chilean forested environments.
This study aims to contribute to the knowledge on the effects of LW in Chilean mountain channels. However, the abundance of LW in the study catchments, and the dimensions, origin, position in the channel and orientation to stream flow of the wood pieces, suggest that large wood mobility might occur even under normal peak flows, indicating the need for further study of LW mobility.
Figure 1. In white circles, the location of the study sites; in black circles, the position of main cities.

Figure 2. Box plots of reach LW abundance (left), diameter (center) and length (right). The cross and line within each box indicate the mean and median values, box ends are the 25th and 75th percentiles, whiskers are the 10th and 90th percentiles, and dots are outliers.

Figure 3. Position in the channel (above) and origin (below) of large wood pieces. a) Pichun; b) El Toro; c) Tres Arroyos; d) Vuelta de Zorra.

Figure 4. Relationships between total segment LW storage and catchment size (left) and between total segment LW storage and the surface of the area that can potentially provide wood to streams (right).
Table 1. Reach morphologic characteristics in the study segments.

Tres Arroyos and Pichún are the steeper (0.098 and 0.097 m/m, respectively) and Vuelta de Zorra and El Toro the flatter (0.04 and 0.053 m/m, respectively) segments. At the reach level, Tres Arroyos and Pichún also feature the steeper reaches (up to 0.249 m/m), whereas in Vuelta de Zorra and El Toro reach slopes are generally lower than 0.1 m/m. Reach and segment bankfull depths (H_bf) are of the same order of magnitude for Vuelta de Zorra, Tres Arroyos and Pichún, but much higher at El Toro. Reach and segment bankfull widths (W_bf) are similar for Vuelta de Zorra and Tres Arroyos, while Pichún is the narrowest (4.8 m mean segment bankfull width) and El Toro the widest (12.9 m mean segment bankfull width) among the study channels. Mean segment unit stream power (ω in Table 1) is relatively similar for El Toro and Tres Arroyos (3836 and 3551 W/m2, respectively), but lower in Pichún (2215 W/m2) and much lower still in Vuelta de Zorra (1102 W/m2).
Table 2. LW volume stored in the different reaches and segments. | 2018-12-02T01:26:31.938Z | 2014-04-04T00:00:00.000 | {
"year": 2014,
"sha1": "c48ba97825f6eb8ba6fd99ddf014e31294cc100e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=45436",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c48ba97825f6eb8ba6fd99ddf014e31294cc100e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
15832110 | pes2o/s2orc | v3-fos-license | Three‐dimensional post‐glacial expansion and diversification of an exploited oceanic fish
Abstract Vertical divergence in marine organisms is being increasingly documented, yet much remains to be done to understand the role of depth in the context of phylogeographic reconstruction and the identification of management units. An ideal study system to address this issue is the beaked redfish, Sebastes mentella – one of four species of 'redfish' occurring in the North Atlantic – which is known for a widely distributed 'shallow‐pelagic' oceanic type inhabiting waters between 250 and 550 m, and a more localized 'deep‐pelagic' population dwelling between 550 and 800 m, in the oceanic habitat of the Irminger Sea. Here, we investigate the extent of population structure in relation to both depth and geographic spread of oceanic beaked redfish throughout most of its distribution range. By sequencing the mitochondrial control region of 261 redfish collected over a decadal interval, and combining 160 rhodopsin coding nuclear sequences and previously genotyped microsatellite data, we map the existence of two strongly divergent evolutionary lineages with significantly different distribution patterns and historical demography, and whose genetic variance is mostly explained by depth. Combined genetic data, analysed via independent approaches, are consistent with a Late Pleistocene lineage split, where segregation by depth probably resulted from the interplay of climatic and oceanographic processes with life history and behavioural traits. The ongoing process of diversification in North Atlantic S. mentella may serve as an 'hourglass' to understand speciation and adaptive radiation in Sebastes and in other marine taxa distributed across a depth gradient.
Introduction
While recent studies continue to amass evidence for notable patterns of population structure in the oceans (Sala-Bozano et al. 2009; Therkildsen et al. 2013; Silva et al. 2014), the long-standing paradigm of limited geographic barriers, high dispersal potential and large effective population size continues to hold when comparing oceanic with continental environments (Hauser & Carvalho 2008). Despite interpretive frameworks that help explain marine population structure based on geographical distance (Palumbi 1994; Weersing & Toonen 2009), oceanographic features (Gaither et al. 2010; Galindo et al. 2010; Selkoe & Toonen 2011) and life histories (Hare et al. 2005; Riginos et al. 2014), the reconstruction of connectivity patterns and recent evolutionary processes in oceanic species remains a mighty task, due to the aforementioned 'marine paradigm' (see Waples 1998; Avise 2004; Hauser & Carvalho 2008) and the inherent difficulty, compared to terrestrial and freshwater habitats, of even locating populations and collecting representative samples over vast areas.
All of the above is further compounded by a conspicuous, yet seldom investigated, factor: depth. Depth physically multiplies the expanses across which individuals can disperse and populations can spread, over both ecological and evolutionary timescales. Depth is also associated with a wide variety of environmental factors, including temperature, salinity, hydrostatic pressure, food resources and light, which, independently or in combination, influence physiology, behaviour and adaptations (Somero 1992; Hyde et al. 2008; Vonlanthen et al. 2008; Irwin 2012; Roy et al. 2012; Yancey et al. 2014). As more studies document the emergence of depth-related patterns, it becomes apparent that this factor plays a role in the formation of distinct biological units, even in large, near-continuously distributed populations (Doebeli & Dieckmann 2003; Vonlanthen et al. 2008; Knutsen et al. 2009; Stefánsson et al. 2009a; Ingram 2010; Roy et al. 2012; Jennings et al. 2013). Sea level fluctuations during the last glacial cycle have been shown to have a role in shaping aquatic environments (Blanchon & Shaw 1995) and are thought to have contributed to the isolation and divergence of marine organisms (Ludt & Rocha 2015). Furthermore, the distribution of oceanic species over vast geographical areas is impacted by complex, large-scale ocean circulation processes interplaying with bathymetric features along continental margins and mid-ocean ridges (Hemmer-Hansen et al. 2007; Knutsen et al. 2009). The full understanding of the role of depth in the context of physical oceanography, complex life histories and the historical backdrop appears as one main topical challenge in marine population biology, which may also have significant consequences for conservation and management.
One taxonomical group that has played a pivotal role in shaping our understanding of adaptive divergence along depth gradients in marine environments is the genus Sebastes (Alesandrini & Bernardi 1999; Hyde & Vetter 2007). While most species of Sebastes are limited to the North Pacific (n > 100), with two occurring in the Southern Hemisphere, the four 'young' North Atlantic (NA) species are thought to have descended from an ancestor of the North Pacific Sebastes alutus lineage that invaded the NA under 3 Myr ago, when the Bering Strait opened as a result of warming Arctic waters (Raymo et al. 1990; Love et al. 2002; Hyde & Vetter 2007). Sebastes spp. are ovoviviparous (i.e. internal fertilization), long-lived, slow-growing and late-maturing species with low natural mortality; such unique life history characteristics influence complex population structuring. The beaked redfish, S. mentella, widely distributed throughout the boreal waters of the North Atlantic and Arctic Oceans, exhibits complex depth-associated patterns of substructure (Stefánsson et al. 2009a; Shum et al. 2014). While a demersal unit is recognized along the Icelandic continental shelf and slopes, S. mentella exhibits a primarily pelagic, open-ocean behaviour, occurring in assemblages at different depth layers, with a widely distributed shallow-pelagic group inhabiting depths between 200 and 550 m, and a deep-pelagic group between 550 and 800 m, mostly circumscribed to the northeast Irminger Sea (Stefánsson et al. 2009a, b; Planque et al. 2013; Shum et al. 2014).
Almost three decades of research investigating the geographic patterns and genetic structuring of S. mentella have resulted in intense debates (Makhrov et al. 2011) regarding the number of genetically distinct populations, how they are spatially structured and connected, and how and when they have achieved their current distribution. All efforts so far have been based on traditional sampling campaigns, mostly focused on geographical/fishery coverage (Shum et al. 2014), with a lack of adequately standardized sampling and replication across the depth layers where different beaked redfish assemblages occur (Bergstad 2013; Planque et al. 2013). Furthermore, previous studies lacked either the appropriate molecular and analytical toolkit (Stefánsson et al. 2009b) or the sampling coverage (Shum et al. 2014) to robustly test specific phylogeographic hypotheses.
Here, we present the first extensive sampling and phylogeographic investigation of pelagic S. mentella in the North Atlantic, using mitochondrial and nuclear genetic markers to resolve the interplay between longitudinal, latitudinal and vertical gradients in shaping the structure of this oceanic species. Specifically, we aim to (i) examine the phylogeographic history of oceanic S. mentella across the northeast Atlantic, with special attention to the two main clades identified in Shum et al. (2014), (ii) assess the degree to which mitochondrial lineage distribution fits the currently perceived S. mentella stock structure and (iii) reconstruct the most realistic scenario for the evolution of this species in the North Atlantic.
Sample collection
Samples were collected as part of the Icelandic Marine Research Institute 2013 survey, conducted from 11 June to 6 July 2013 throughout the Irminger Sea. Trawling for redfish involved collecting replicated samples along a survey transect from three geographical zones at seven sites, including shallow (above 500 m) and deep (below 500 m) habitats (Fig. 1, Table 1). A MultiSampler unit with three separate codends was attached to the end of the pelagic trawl, enabling the collection of several independent discrete samples at three different depth strata in one trawl haul (Engås et al. 1997). The opening and closing of the nets were programmed to specific depths using a timing system, with approximate sampling depths ranging between 200 and 350 m, 350 and 550 m, and 550 and 900 m. These replicated sampling sites were targeted to explore the genetic variance associated with geography (latitude, longitude) and depth for Sebastes mentella.
Between five and 30 specimens were collected and sampled from each haul section, for a total of 143 dorsal fin clip tissue samples of S. mentella collected at seven sites distributed throughout the Irminger Sea and stored in 100% ethanol. The data set was also complemented by the 50 mtDNA control region and 22 rhodopsin sequences that first revealed the existence of two depth-associated S. mentella clades in the Irminger Sea (Shum et al. 2014; GenBank Accession nos: mtDNA, KM013849-KM013898; rhodopsin, KM013899.1-KM013920.1) and by 68 randomly selected archived tissue samples (previously genotyped at 12 microsatellite loci by Stefánsson et al.; Table 1). Additional archived tissue samples of eight Sebastes norvegicus and seven Sebastes viviparus were also analysed as out-groups for downstream analyses, as well as 36 tissue samples of Sebastes fasciatus collected on board the Irish research vessel RV Celtic Explorer in May 2012 off Newfoundland, Canada. The sampling scheme and sample sizes per site are summarized in Table 1.
Data analysis
Mitochondrial DNA variation and structure. Molecular diversity indices, including nucleotide (p) (Nei 1987) and haplotype (h) (Nei & Tajima 1981) diversities, were estimated using DNASP v5.10 (Librado & Rozas 2009). Haplotype genealogies were constructed in the program HapView, following the method described by Salzburger et al. (2011), based on a maximum-likelihood tree implemented in PHYLIP v3.695 (Felsenstein 1989) for mtDNA. To gauge the level of population differentiation among collections, ARLEQUIN v.3.5.1.2 (Excoffier & Lischer 2010) was used to calculate pairwise population Φ ST and F ST, with the significance of pairwise differences at the 0.05 level assessed with 10 000 permutations. As a correction for multiple tests, P-values were adjusted according to the modified false discovery rate method (Narum 2006). The relationship among all sample collections was visualized by multidimensional scaling (MDS) of pairwise Φ ST and F ST using the MASS package (Venables & Ripley 2002) in the R programming environment (R Development Core Team 2005). Spatial analysis of molecular variance (SAMOVA 1.0) (Dupanloup et al. 2002) was used to identify distinct groups of populations within which (F SC) genetic variance is minimal and among which (F CT) it is greatest. This method partitions the populations into a specified number of groups, and the partition scheme (k) that maximizes differences between F SC and F CT is selected. The optimum number of groups was determined by running SAMOVA with two to eight groups, with 100 annealing steps for each run.
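The ordination was done with nonmetric MDS in R's MASS package; a rough Python sketch of the simpler classical (metric) variant, applied to a toy pairwise F ST matrix (values invented for illustration only):

```python
import numpy as np

def classical_mds(dist: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed a symmetric distance matrix in k dims."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j        # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Toy pairwise F_ST-style distances among four collections
fst = np.array([[0.00, 0.02, 0.15, 0.16],
                [0.02, 0.00, 0.14, 0.15],
                [0.15, 0.14, 0.00, 0.01],
                [0.16, 0.15, 0.01, 0.00]])
print(classical_mds(fst))  # two clusters separate along the first axis
```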
Distribution of geographically restricted alleles (SAShA). We tested the extent to which haplotypes are randomly distributed in the NA, as implemented in the program SAShA (Kelly et al. 2010). Assuming haplotypes are identical by descent, a nonrandom distribution of haplotypes can indicate departures from panmixia, and occurrences of the same haplotypes in different locations can be considered evidence of gene flow. SAShA generates the observed distribution (OM) of geographic distances among instances of each haplotype, which is compared to the null distribution (EM; i.e. the allele distribution under panmixia) generated from the same data. An OM significantly less than EM indicates that haplotypes are underdistributed and that gene flow is somewhat restricted. For mtDNA, we tested for significant deviation of the arithmetic mean of OM from EM (Dg) using 1000 nonparametric permutations of the haplotype-by-location and haplotype-by-depth data sets.
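A stripped-down illustration of this permutation logic in Python (our own toy version using 1-D locations, not the actual SAShA implementation, which works on geographic distances among sampling sites):

```python
import random
from itertools import combinations

def mean_within_haplotype_distance(haplos, coords):
    """Mean geographic distance among pairs of individuals sharing a haplotype."""
    pairs = []
    for h in set(haplos):
        idx = [i for i, x in enumerate(haplos) if x == h]
        pairs += [abs(coords[a] - coords[b]) for a, b in combinations(idx, 2)]
    return sum(pairs) / len(pairs) if pairs else 0.0

def sasha_like_test(haplos, coords, n_perm=1000, seed=1):
    """One-tailed permutation P for haplotype under-distribution (OM < EM)."""
    rng = random.Random(seed)
    observed = mean_within_haplotype_distance(haplos, coords)
    hits = 0
    for _ in range(n_perm):
        shuffled = haplos[:]
        rng.shuffle(shuffled)
        if mean_within_haplotype_distance(shuffled, coords) <= observed:
            hits += 1
    return observed, hits / n_perm

# Toy data: haplotypes clustered at nearby 1-D locations (km)
haps = ["A", "A", "A", "B", "B", "B"]
kms = [0.0, 5.0, 10.0, 500.0, 505.0, 510.0]
print(sasha_like_test(haps, kms))  # small observed mean, small P
```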
Demographic changes. First, Tajima's D T (Tajima 1983) and Fu's Fs (Fu 1997), calculated in DnaSP, were used to test for deviations of the mitochondrial site frequency spectrum from that expected under the neutral expansion model for the major clades (see Results). Significant, negative D T and Fs values indicate population size changes or directional selection (i.e. a selective sweep; Aris-Brosou & Excoffier 1996). Mismatch distribution analysis was also carried out, as implemented in ARLEQUIN, whereby a unimodal frequency distribution of pairwise haplotype differences is expected for populations that have experienced recent demographic expansion, and a multimodal frequency distribution is expected for populations at equilibrium. To estimate divergence time, we used the formula T = Da/2l, where 2l represents a general mtDNA evolutionary rate, commonly assumed to be around 11% per million years for the fish mtDNA control region (Patarnello et al. 2007).
To further assess the demographic changes in effective population size since the time of the most recent common ancestor (TMRCA) of the identified lineages, Bayesian skyline plots (BSPs) were generated using the software package BEAST v. 1.8 (Drummond et al. 2012). Analysis was performed using the best-fit model of nucleotide substitution (GTR+Γ) selected using MODELTEST 3.7 (Hasegawa et al. 1985; Posada & Crandall 1998), and a fixed clock was set at 11%/million years (Myr) (Patarnello et al. 2007). For each analysis, the Monte Carlo Markov chain (MCMC) was set at 150 million steps, which yielded effective sample sizes (ESS) of at least 200. Once appropriate mixing and convergence were met, the first 10% of the posterior was discarded and the remainder combined for parameter inferences. BSPs were estimated in Tracer v. 1.6 (Rambaut et al. 2014) and plotted using the upper 95% highest posterior density.
Marker comparison and approximate Bayesian computation. To gain an exhaustive picture of genetic variation, we also investigated the intron-free nuclear gene coding for the rhodopsin pigment, which Shum et al. (2014) reported to exhibit alternative genotypes associated with depth. Haplotype genealogies for 160 S. mentella rhodopsin sequences were constructed in HapView using a maximum-likelihood tree generated using PHYLIP. MtDNA haplogroup distributions and rhodopsin genotypic frequencies for the shallow- and deep-caught S. mentella groups were compared using χ² tests (2 × 2 contingency tables). Furthermore, we re-examined microsatellite data at 12 loci, previously genotyped by Stefánsson et al., and reanalysed them in combination with the newly generated mtDNA sequences. Population structure was assessed by calculating pairwise genetic differentiation through Slatkin's R ST and Weir & Cockerham's F ST, with 9999 permutations carried out to obtain significance levels, using GENALEX 6.501 (Peakall & Smouse 2006). F ST measures genetic differentiation based on allele identity, whereas its analogue R ST is an allele-size measure of differentiation that assumes a strictly stepwise mutation process. R ST is expected to be larger than F ST if populations have diverged for a sufficiently long time, as in the case of ancient isolation (Hardy et al. 2003). To visualize the relationship among samples, MDS analysis based on F ST and R ST values was carried out in the R environment, and population structure of individual genotypes was visualized by correspondence analysis (CA) using GENETIX 4.05 (Belkhir et al. 1996).
We tested concordance of pairwise genetic distances using a simple Mantel test among redfish collections from available areas, based on mtDNA Φ ST /F ST vs. microsatellite F ST . To further illustrate the distribution pattern of horizontal and vertical genetic variation, partial Mantel tests were calculated between matrices of genetic distances (using mtDNA and microsatellite markers) and depth (m) while removing the influence of geographic distance (km), and vice versa, using the VEGAN (Oksanen et al. 2011) package in R, with statistical significance evaluated with 9999 permutations.
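A simple Mantel test is just a permutation correlation between two distance matrices; the sketch below shows that core (the partial variant used above additionally controls for a third matrix, which is omitted here for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel(d1, d2, n_perm=9999):
    iu = np.triu_indices_from(d1, k=1)       # use the upper triangle only
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)               # permute rows and columns together
        r = np.corrcoef(d1[np.ix_(p, p)][iu], y)[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```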
We then used approximate Bayesian computation (ABC), implemented in DIYABC v2.0 (Cornuet et al. 2008, 2014), to assess the evolutionary relationships among S. mentella lineages in the North Atlantic. We compared six simple scenarios (see Results) to identify the most likely evolutionary events, and the effective population sizes and times of divergence implicated. The ABC approach provides inference about the posterior distribution of the model parameters in order to establish how the summary statistics compare between simulated and observed data sets (Cornuet et al. 2008). We used default priors and generated simulated data sets combining both mtDNA and microsatellite information, with a total of 28 summary statistics, the calculation of which is implemented in DIYABC; for microsatellites: mean number of alleles, genic diversity, size variance and Garza & Williamson's M-ratio per population, plus F ST pairwise divergence; for mtDNA: F ST pairwise divergence only. Additional summary statistics generated for mtDNA failed to provide a satisfactory statistical fit between the observed and simulated data sets (data not shown). Runs consisted of six million simulated data sets (1 000 000 for each scenario) and were evaluated by principal components analysis (PCA), based on the obtained summary statistics of a subset of simulations, to check the threshold distance of parameter estimates between simulated and observed data. The relative posterior probability was then estimated for each scenario via a logistic regression on the 1% of simulated data sets closest to the observed data set. The posterior distribution of parameters for the summary statistics associated with the retained data set was then estimated through locally weighted linear regression.
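DIYABC implements far more than the sketch below, but the core rejection logic (simulate under each scenario, keep the 1% of simulations whose summary statistics are closest to the observed data, and tally scenario support) can be illustrated as follows; the toy simulators are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_scenario_support(observed, simulators, n_sims=50_000, keep_frac=0.01):
    """simulators: dict scenario -> function(rng) returning a summary-stat vector."""
    records = []
    for name, sim in simulators.items():
        for _ in range(n_sims):
            records.append((np.linalg.norm(sim(rng) - observed), name))
    records.sort(key=lambda t: t[0])
    kept = records[: int(keep_frac * len(records))]   # closest 1% of simulations
    support = {name: 0 for name in simulators}
    for _, name in kept:
        support[name] += 1
    return {name: k / len(kept) for name, k in support.items()}

# toy demonstration with two one-statistic 'scenarios'
obs = np.array([0.5])
sims = {"S1": lambda r: r.normal(0.5, 0.1, 1),
        "S2": lambda r: r.normal(1.0, 0.1, 1)}
print(abc_scenario_support(obs, sims, n_sims=5000))
```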
Mitochondrial DNA lineages
MtDNA sequence data of 261 Sebastes mentella individuals from 16 collections stretching across the North Atlantic produced a 444-bp fragment alignment with a total of 44 polymorphic sites, 25 of which were parsimony informative (Accession nos: KP988027-KP988288). These polymorphisms defined 56 haplotypes with an overall haplotype diversity and mean nucleotide diversity of h = 0.897 ± 0.009 and π = 0.005 ± 0.0002, respectively (Table 1). The haplotypes were organized into two main divergent haplogroups differentiated by a mean net sequence divergence percentage (Da) of 2.45 ± 0.47 (Fig. 2a). These haplogroups, 'clade A' and 'clade B', correspond to the 'shallow' and 'deep' groups previously described by Shum et al. (2014; see also Fig. S1, Supporting information).
Among all samples, 91.7% of redfish caught above 500 m (59.5%) were classified as clade A and 8.3% as clade B haplotypes, whereas among redfish caught below 500 m (40.5%), 34% belonged to clade A and 66% to clade B haplotypes. Clade A included 68.3% of the total haplotypes found in 16 collections, with 12 shared haplotypes (43%) and 16 singletons (57%). Clade B haplotypes (31.7%) formed a far more circumscribed starburst pattern, with haplotypes found in only eight of the 16 locations, with six shared haplotypes (21.1%) and 22 singletons (79%), suggestive of a recent expansion. The clades generally show an association with both location (χ² = 180.75, P < 0.0001, df = 15) and depth (χ² = 61.76, P < 0.0001, df = 1), with only a few examples of shared haplotypes between adjacent groups (Fig. 2a). Within the Irminger Sea, clade A haplotypes dominated waters above 500 m (75%) in the northeast, west and southwest, while clade B was prevalent at depths below 500 m (90%) in the northeast and west (Fig. 1).
Overall, estimates of genetic differentiation for all comparisons involving the northeast Irminger Sea, west Faroe and Norwegian Sea samples were significant (Φ ST = 0.055-0.423 and F ST = 0.100-0.635), while the remaining samples showed no significant genetic structure and low heterogeneity (Table 2). The MDS plot (Fig. 3a, b) based on mtDNA (Φ ST and F ST) pairwise genetic distances shows a consistent subdivision of the shallow-pelagic clade A from the deep-pelagic clade B groups along axis 1. Axis 2 shows a separation between the deep-pelagic west Faroe and the NE Irminger Sea groups (>500 m).
The results of SAMOVA indicated significant population genetic structure for each number of k groups assumed. In the SAMOVA analysis, F CT increased with increasing k up to k = 4 and began to progressively decline after that, while F SC dropped significantly at k = 4, which resulted in the maximum variance between the two indices at this point. Further increases in k led to a dissolution of group structure, where groups with the larger proportion of private haplotypes were singled out. Thus, we chose four subpopulations as the most parsimonious partition scheme (Fig. 4). At k = 2, SAMOVA recovered a deep mitochondrial split between the combined 'deep-pelagic' groups from the northeast Irminger Sea (>500 m) and west Faroe Island and everything else. At k = 3, the northeast Irminger Sea (>500 m) group and the west Faroe group became separated. At k = 4, redfish from the majority of clade A caught above 500 m depth formed subpopulation SI (see Table 3); the northeast Irminger Sea and west Faroe groups (deeper than 500 m), representing the core of clade B, formed subpopulations SII and SIII, respectively; finally, some redfish from the northeast Irminger Sea (<500 m; n = 11, clade A: 45%, clade B: 55%) formed subpopulation SIV. [Fig. 2 caption, partial: (b) three rhodopsin haplotypes ordered as shallow and deep groups. Haplotypes are coloured according to SAMOVA groupings, and the size of each circle is proportional to the frequency of haplotypes. The lengths of the connecting lines reflect the number of mutations between them.]
A hierarchical AMOVA attributed 37.54% of the molecular variance to differences among groups (F CT = 0.375, P < 0.001), while differentiation between collections within the same group was not significant (F SC = 0.013, P = 0.123). The SAShA analysis revealed that the average distance between co-occurring haplotypes was 951.9 km (geographic distance) and 178.3 m (depth), with evidence that haplotypes were geographically restricted, as the observed distribution (OM) was significantly different from the expected (EM) under the assumption of panmixia for both geographic distance (Dg = 80, P = 0.018) and depth (Dg = 11.6, P = 0.039).
Demographic analysis on a significantly expanded data set showed similar results to Shum et al. (2014): unimodal mismatch distributions with negative and significant Tajima's D and Fu's F values for clade A (D = −1.67, P < 0.01; F = −20.57, P < 0.001) and clade B (D = −2.14, P < 0.001; F = −27.09, P < 0.001), providing support for recent population expansion (Fig. S2, Supporting information). We find that the 'deep' and 'shallow' lineages split around 22 000 years ago. The coalescent-based Bayesian skyline plot (BSP) provided details on how mtDNA diversity changed through time, indicating rapid population growth for clade A in the past 10-12 kyr before present (BP), with a very recent stall/reduction in effective population size (Fig. 5a). Clade B haplotypes demonstrate a signal of a sustained period of population increase dating back over 15-25 kyr BP (Fig. 5b).
Comparative nuclear and mitochondrial data
Rhodopsin sequence data from 160 S. mentella individuals (Accession nos: KR818563-KR818700) produced a 722-bp fragment alignment with a total of two polymorphic sites, at positions 208 and 228. These polymorphic sites defined three haplotypes structured into well-defined shallow and deep rhodopsin clades (Fig. 2b). The single nucleotide polymorphism (SNP) T228 was only variable among the 'shallow' group; however, the SNP found at position 208 was fixed for alternative alleles between shallow (G) and deep (A) fish, corroborating Shum et al.'s (2014) initial findings. The distribution of the SNP genotypes at position 208 shows a strong association with the mtDNA clade A and clade B haplogroups for the shallow (χ² = 675.17, P < 0.0001) and deep (χ² = 37.28, P < 0.0001) collections, respectively, with 91% of shallow 'G' rhodopsin variants also belonging to mtDNA clade A, and 64% of deep 'A' rhodopsin genotypes being assigned to mtDNA clade B.
Microsatellite pairwise F ST values between the nine geographically defined regions ranged from −0.003 to 0.040 (Table S1, Supporting information) and were significant for most combinations that included the Irminger Sea (>500 m) and Norwegian international water collections, after FDR adjustments. Overall, 13 of the 36 pairwise comparisons among historical samples resulted in a significant phylogeographic signal (R ST significantly larger than F ST; see Table S1, Supporting information, Fig. 3c, d). [Fig. 3 caption, partial: samples (Table 1) coloured according to SAMOVA groupings; (e) correspondence analyses of Sebastes mentella individuals from nine locations, based on microsatellite genotypes. Labelled genotypes showed discordance with mtDNA lineage assignment (see Table S2, Supporting information, for further details).]
North Atlantic structuring of redfish based on microsatellite markers and results from mtDNA (SAMOVA) consistently divided the 'deep-pelagic' groups [Irminger Sea (>500 m) and west Faroe] into two subgroups. We found concordance between pairwise genetic distances among redfish in nine localities based on mitochondrial Φ ST /F ST vs. microsatellite F ST, showing a significant positive correlation between the measures of genetic differentiation (Fig. S3, Supporting information; tested with 9999 permutations). We found no significant signal of isolation by distance (r = 0.12, P = 0.24) or depth (r = 0.12, P = 0.25) based on microsatellite F ST, using a partial Mantel test where the depth/geographic distance matrix was held constant. For mtDNA, partial Mantel tests showed a significant correlation between genetic distance (F ST) and depth (r = 0.329, P = 0.019), but not geographic distance (r = 0.009, P = 0.407). Given the detection of geographically disjunct 'deep' populations in both the Irminger and Faroese seas, we tested six simple historical demographic scenarios involving pairs of 'shallow' and 'deep' populations from these areas (bearing in mind the relatively consistent homogeneity of the 'shallow' populations all across the NA; Figs 3 and 6). The first scenario involved an original split between the 'deep' and 'shallow' lineages followed by subsequent splits between the Irminger and Faroe seas within both deep and shallow lineages, leading to the current geographical disjunction. The second scenario involved a parallel, independent origin of deep-sea groups from their shallow ancestors, respectively, in the Irminger and Faroese seas. The third differed from the second scenario by an independent origin of the shallow groups from their deep-sea ancestors. The fourth, fifth and sixth scenarios involved simple tree-like bifurcations occurring from time ta to t1 in the past. Estimations of the posterior probability based on both direct and logistic-regression approaches for each scenario provided unambiguous support for scenario 1, with a probability of 91% and a 95% CI of 90-92%, not overlapping with any other scenario (Fig. 6). Given a generation time between 12 and 20 years (Stransky et al. 2005a,b), the scenario assumes that the 'deep-pelagic' and 'shallow-pelagic' redfish split ~4500-7500 years ago (ta median = 374 generations, 95% CI: 93.4-419). The subsequent split between the 'deep-pelagic' Irminger Sea (IrNE DP 06: Ne = 8570, 95% CI: 5840-9900) and west Faroe (FW DP 06: Ne = 7040, 95% CI: 3160-9800) shares similar divergence times with the split between the 'shallow-pelagic' Irminger Sea (IrSW SP 06: Ne = 6610, 95% CI: 3310-9640) and east Faroe (FE SP 06: Ne = 5570, 95% CI: 2020-9510), estimated at approximately ~1200-2000 years ago (t1 median = 98.3 generations, 95% CI: 27.9-281).
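The generation-to-year rescaling used for these estimates is simple arithmetic; the check below reproduces the quoted calendar ranges from the median generation counts and the 12-20 year generation time.

```python
def generations_to_years(generations, gen_time_range=(12, 20)):
    return tuple(generations * g for g in gen_time_range)

print(generations_to_years(374))   # (4488, 7480)     -> the ~4500-7500 years quoted
print(generations_to_years(98.3))  # (1179.6, 1966.0) -> the ~1200-2000 years quoted
```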
Discussion
Based on greatly expanded geographical screening, sample sizes and analytical toolkit, this study lends strong support to the existence of at least two highly distinct evolutionary units of oceanic Sebastes mentella in the North Atlantic (NA). Nuclear patterns are mirrored and strengthened by mitochondrial evidence, and, perhaps more strikingly, the pattern of divergence is strongly associated with habitat depth (more so than with geographical distance). Coalescent patterns suggest that the emergence of depth-related structure may have originated in the Late Pleistocene, with the two lineages segregating after the recolonization of the Irminger Sea, as the ice sheets retreated at the end of the last glaciation.
Population structure and connectivity
Analyses of mtDNA sequences revealed substantial population structure among depth-defined habitats between two distinct S. mentella mtDNA clades (A: above and B: below 500 m; Figs 1 and 2). Our results expand on recent findings, indicating that the 'shallow' and 'deep' groups in Shum et al. (2014) correspond to clades A and B identified in this study (Fig. S1, Supporting information). We found a nonrandom association of haplotypes in the NA, with clade A commonly found above 500 m southwest and west of the Irminger Sea, east of the Faroes, and in Norwegian waters, whereas clade B haplotypes are more localized to the central NA, below 500 m, northeast of the Irminger Sea and west of the Faroes (Fig. 1). The overall pattern of mtDNA variation is associated with isolation by depth, mirrored by the rhodopsin SNP distribution, similarly detected using microsatellites (Stefánsson et al. 2009a) and to some degree also supported by subtle phenotypic variation (Magnússon & Magnússon 1995; Stransky 2005; Stefánsson et al. 2009a,b). MtDNA and nuclear data consistently identified the main genetic partitioning (shallow vs. deep) as well as detecting additional population subdivision between the 'deep-pelagic' Irminger Sea and west Faroe groups. According to the SAMOVA and MDS analyses, redfish were further divided into four genetically distinct subclades: (SI) southwest and west Irminger Sea/east Faroe/Norwegian shelf and Norwegian international waters; (SII) northeast Irminger Sea (>500 m); (SIII) west Faroe; and (SIV) northeast Irminger Sea (<500 m). However, it is worth mentioning the potential issue of small sample size for some collections, as is the case for the final SAMOVA group (SIV), which probably represents an area of contact between haplogroups and may not reflect a 'true' distinct biological unit but a mere artefact of low sample size. Support for this group structure is mirrored by close inspection of the nuclear markers. The rhodopsin SNP shows a strong association with the mtDNA clade A and clade B haplotypes, suggesting that the evolutionary independence between shallow and deep lineages is unambiguous. The two deep-pelagic groups were not statistically distinguishable at microsatellite loci, even using greater sample sizes (Stefánsson et al. 2009a). Yet, analysed at mtDNA, they showed significant genetic structure (Figs 2a and 3). Strikingly, while the average mtDNA-based F ST variance among 'shallow' aggregations is nonsignificant (F ST = 0.020, P = 0.074), suggesting significant large-scale connectivity, the two 'deep' groups, in the Irminger Sea and western Faroes, appear significantly differentiated (F ST = 0.176, P < 0.001), with only three haplotypes shared, indicating that habitat segregation may have shaped diverging behaviours and life-history adaptations in these two lineages. Such a segregation of the mitochondrial matriline between deep Irminger and Faroese fish, alongside the more blurred boundary shown by microsatellites, raises the possibility that female S. mentella in the deeper sea layer may exhibit some degree of female philopatry/residency (Petit & Excoffier 2009). The combined use of mitochondrial and nuclear information on a few selected individuals can also shed some further light on the biology and behaviour of these organisms. For example, one fish caught above 500 m east of the Faroes (FE1 SP 07_8.9) exhibits a clade B haplotype, typical of the 'deep' layer, and its microsatellite genotype falls within the 'deep' group.
This highlights the ability of redfish to exhibit a degree of pelagic-demersal mixing between clades from shallow and deep environments (Planque et al. 2013). [Fig. 6 caption: Alternative scenarios tested using an approximate Bayesian computation approach (DIYABC) based on combined microsatellite and mtDNA data for Sebastes mentella. The top section illustrates the six scenarios tested, while the bottom panels indicate the relative likelihoods of the six scenarios compared by (a) the direct approach and (b) logistic regression on the 1% (60 000) and 0.008% (500) of the closest simulated data sets, respectively. The y-axis shows the posterior probabilities and the x-axis the number of simulations used to calculate them. The DIYABC analysis identifies scenario 1 as best supported. See Table 1 for corresponding location codes.]
Most notably, four individuals from the 2013 Irminger Sea 'deep' sampling (I.D. 17, 30, 37 and 40) exhibited fully shallow mtDNA and rhodopsin types (Table S2, Supporting information), which suggests that 'shallow-to-deep' contemporary forays might be more common than those in the opposite direction. The inception of this behaviour may originate during the establishment of juvenile 'site fidelity' during the early stages of redfish development. In the Irminger and Norwegian seas, spawning female redfish are distributed along the larval extrusion areas of the Reykjanes Ridge and the Norwegian continental shelf between March and May (Magnússon & Magnússon 1995; Drevetnyak & Nedreaas 2009). The pelagic fry drift to nursery areas along the coasts of east/west Greenland and the Barents Sea, where juveniles settle to the bottom until they mature and migrate to populations distributed throughout the North Atlantic. Yet, the complex behaviour, areas of copulation and seasonal migration patterns of this species across shallow- and deep-pelagic habitats remain poorly understood (Planque et al. 2013). Hence, this pattern suggests the occurrence of short-term migration by deep-pelagic individuals, supporting the notion of vagrants as individuals moving to shallower and deeper waters during their life cycle (Shum et al. 2014). Several examples from the east/west Faroe groups caught in both shallow and deep layers display discordant molecular markers: two samples (FE1sp07_8.5 & FE1sp07_8.6) caught at approximately 400 m display 'deep' haplotypes but a microsatellite genotype typical of the shallow layer. Similarly, five samples caught in deep waters (FW1dp07_7.6, FW1dp07_7.8, FW2dp07_7.15, FW2dp07_7.21 & IrNEdp06_29) exhibited clade A haplotypes but possess multilocus microsatellite genotypes falling within the 'deep' group. The nuclear-coding rhodopsin gene is a powerful tool in the identification of closely related fish species (Rehbein 2013; Pampoulie et al. 2015), and here, we found that only seven of 160 specimens (4%) analysed at both mtDNA and rhodopsin yielded ambiguous assignment to their lineage of origin (Table S2, Supporting information). Interestingly, two of these individuals possessed double peaks, or heterozygous indels, in the rhodopsin chromatograms (Fig. S4, Supporting information; Sousa-Santos et al. 2005), which, along with the observed mtDNA/microsatellite mismatches, is best explained by the occasional occurrence of introgressive hybridization between the two genetically distinguishable groups. Roques et al. (2002) indicated significant introgressive hybridization between S. mentella and Sebastes fasciatus in the Gulf of St. Lawrence and surrounding waters; Pampoulie and Daníelsdóttir (2008) also detected signatures of hybridization between S. mentella and Sebastes norvegicus. Thus, it is reasonable to expect that intraspecific mating between deep-pelagic clade B males and shallow-pelagic clade A females (and vice versa) would have produced the genotypic ambiguities mentioned above. The shallow- and deep-pelagic groups appear to occupy their preferential depth ranges in near sympatry. However, spatial overlap is more apparent to the west and northeast of the Irminger Sea and in Faroese waters; thus, the opportunity for introgressive hybridization may derive from hybrid zones formed by secondary contact where S. mentella are within their preferred geographic and vertical cruising limits.
Pampoulie & Daníelsdóttir (2008) reported significant levels of hybridization among redfish in the NA, which may indicate that different S. mentella lineages were allopatric before secondarily coming into contact to form their current sympatric distribution. Despite the potential for interbreeding, however, the shallow and deep types maintain a striking overall integrity, which bears resemblance to some notable cases of parapatric speciation (Allender et al. 2003; Berner et al. 2009; Nosil 2012).
Historical reconstruction
The glacial and interglacial periods of the Pleistocene are known to have considerably shaped the current distribution of, and connectivity among, contemporary populations (Bigg et al. 2008; Finnegan et al. 2013). Evidence from mtDNA haplotypes alone suggests that the shallow- and deep-pelagic clades diverged over 22 000 years ago, in correspondence with the last glacial maximum (LGM). The Bayesian skyline plot (BSP) provides information on how mtDNA diversity changed through time, back to the most recent common ancestor. The BSP analysis indicates a gradual signature of expansion 15 000-25 000 years ago for the deep-pelagic clade, with a steady increase in population growth, compared to the shallow-pelagic clade, which shows a much sharper, rapid expansion around 10 000-15 000 years ago. The BSPs thus show contrasting signatures of population growth suggestive of postglacial influence. The expanding sea ice may have driven S. mentella to southerly latitudes, as far as the Grand Banks, and to deeper waters during the LGM, as the global sea level fell by 120-135 m and intense calving of the Northern Hemisphere ice sheets 18-15 kyr BP resulted in massive icebergs advancing into the North Atlantic to 40°N (Grousset et al. 1993). As the sea ice retreated, in a period characterized by intense warming and rising sea levels 14 600-13 800 years BP, S. mentella had the opportunity to advance northward to glacial refugia south of Iceland before sea ice readvanced during the Younger Dryas (YD, 13-11 kyr BP), which forced the Northern Hemisphere into near-glacial conditions (Crucifix & Berger 2002). The onset of the YD may have favoured allopatric conditions between the two clades, as the deep-pelagic group occupied deeper refuges, while the shallow-pelagic group rapidly spread with the retreating sea ice and rising sea level following the YD into the Norwegian and Barents seas and the Irminger Sea (R ST > F ST; Table S1, Supporting information). The microsatellite R ST values provide insights into the relative divergence of S. mentella. The eastern NA collections (FE SP 07, NS SP 06 & NIW SP 06) present a significant phylogeographic signal with respect to the Irminger Sea collections (IrSW SP 06 & IrNE DP 06), suggesting that postglacial range shifts and secondary contact events have played a significant role in shaping S. mentella spatial structure.
Overall, the SAMOVA, MDS and CA analyses revealed that the shallow-pelagic group shows strong homogeneity across the NA, whereas the deep-pelagic groups form rather distinct 'pockets', suggesting that their evolutionary distinction also involves different life histories: shallow-pelagic redfish appear more prone to connectivity, whereas deep-pelagic redfish appear more strictly associated with their local habitats, probably migrating less. We tested which demographic scenario could best explain the genetic patterns among pelagic pairs of shallow and deep populations in the Irminger Sea and west of the Faroes, using an ABC approach. Estimates of the posterior probability for each scenario provided robust support for scenario 1 (Fig. 6). This indicates that the shallow- and deep-pelagic gene pools split approximately 5-7 kyr BP, after the postglacial recolonization of the Irminger Sea. Furthermore, a subsequent split between the deep-pelagic groups emerged approximately 2 kyr BP. Their distributions are found in close proximity to bathymetric features (see Fig. 1) along the west of the Reykjanes Ridge and the Faroe-Shetland Ridge, areas known for complex oceanic conditions at depth (Pedchenko 2005; Olsen et al. 2008). These features may act as bathymetric forcing of ocean currents, as reported for North Atlantic tusk (Knutsen et al. 2009). The combination of these physical obstacles and the presumably complex and largely unexplored reproductive behaviour of S. mentella may have acted in concert, initiating the retention and establishment of reproductive isolation by depth (Shum et al. 2014).
Conclusions
Our study provides the first extensive depth-associated sampling and phylogeographic reconstruction of the North Atlantic oceanic beaked redfish, Sebastes mentella. This species shows a consistent and significant distinction of at least two evolutionary lineages: one widely distributed 'shallow' type, which partially overlaps with local populations of a more sedentary 'deep' type. While evidence exists for localized interbreeding between these two lineages, the rate does not appear notably greater than that of similar introgressive processes still occurring among the four Sebastes species in the North Atlantic. We find that mtDNA reflects redfish evolutionary history throughout the Late Pleistocene, whereas the integration with microsatellite data serves to better reflect postglacial divergence and the patterns of contemporary gene flow among populations, including the potential detection of sex-biased dispersal. The shallow-pelagic clade shows strong homogeneity across the NA, while the deep-pelagic clade is restricted to the central North Atlantic, segregated by complex oceanic conditions shaped by bathymetric features. Population independence was largely upheld by both nuclear and mitochondrial markers, and it is likely that depth-associated adaptive processes are at play (Shum et al. 2014) to counteract the homogenizing effects of gene flow. Given the paucity of well-characterized marine biological systems undergoing diversification processes consistent with speciation, North Atlantic S. mentella should now represent a valuable subject to investigate genomic correlates and mechanisms for the maintenance of bathymetry-associated lineage sorting and, potentially, speciation.
From a practical standpoint, the present results will have cascading impacts on the assessment and management of these commercially valuable fish stocks, which remain hotly debated. Our results are consistent with two genetically distinguishable putative groups separated by depth and following different evolutionary trajectories: this should form the basis to recognize them as distinct evolutionarily significant units. Given the circumscribed local distribution of the 'deep' populations in the Irminger and western Faroese seas, tailored management appears required, if we are to avert the permanent loss of a unique biodiversity component.
S.M. and C.P. conceived the study, with contribution from P.S.; C.P. and K.K. organized the sampling, which was carried out by P.S. and K.K.; P.S. conducted all laboratory work and statistical analyses and drafted the manuscript. S.M. contributed to writing and advised on statistics, with feedback from C.P. and K.K. All authors share interests in the mechanisms of population divergence in open marine habitats and their implications for biodiversity conservation and resource management.
Supporting information
Additional supporting information may be found in the online version of this article.
Table S1
Microsatellite estimates of pairwise genetic differentiation among nine S. mentella collections.
Table S2
Genetic marker discordance among S. mentella samples for mitochondrial control region (mtDNA clades), rhodopsin SNPs and microsatellite genotype.
Table S3
Pairwise F ST genetic differentiation among nine S. mentella collections: mtDNA (below diagonal) and microsatellite (above diagonal), based on 10 000 permutations.
"year": 2015,
"sha1": "b33104b49304598b846bcdfaeeaafa021b7db63d",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mec.13262",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1263aae2a56025fce261849e53d13aa98eb9e240",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Experimental synthesis of size-controlled TiO2 nanofillers and their possible use as composites in restorative dentistry
Graphical abstract: Fabrication of experimental light curing resin from green-synthesized fillers.
Abstract: The aim of this work was to establish an efficient, green, fast and facile protocol to synthesize TiO2 NPs and to apply them as fillers for the enhancement of desired dental properties of light curing dental composites.
A comparative study comprised the fabrication of light curing restorative composite materials incorporating different fillers at varying wt% and with varying resin compositions, to determine the optimal dental restoration by focusing on the physical properties of the dental materials. It was observed that the as-prepared, green-synthesized TiO2 nanohybrid particles contributed to an improvement in physical properties, thus supporting the green and rapid synthesis of nanohybrid fillers. In addition, mechanical values for experimental cured resin materials with bare and surface-modified fillers were obtained. The experimental light curing nanocomposites with 5 wt% nanohybrid surface-modified filler particles in a BisGMA (60 wt%), TEGDMA (20 wt%) and UDMA (20 wt%) resin composition provided increased physical strength and durability, with a higher compressive stress of 195.56 MPa and flexural stress of 83.30 MPa. Furthermore, the polymerization shrinkage (PS), obtained by a volumetric method, was decreased to 3.4% by the addition of nanohybrid fillers. In addition, the biocompatible and antimicrobial nature of TiO2 and its aesthetic properties, such as a tooth-like color, make TiO2 favorable for use as a filler.
Introduction
Tooth decay, or dental caries, has historically been considered a common oral problem and remains a public health concern (Prasai Dixit et al., 2013). It occurs mainly due to acid attack from food and bacterial buildup (Featherstone, 2008). The 'silver filling', i.e., dental amalgam containing Hg, Ag, Sn, and Zn (Bharti et al., 2010), has been used as a remedy for tooth decay for more than 150 years for filling cavities (Rathore et al., 2012). However, because of its harmful effects and poor aesthetic properties, amalgam became an unfavorable choice as a dental filling material: it affects brain and kidney function because of the release of mercury vapors (Molin, 1992). In modern dentistry, amalgam materials are being replaced by composite materials such as light curing dental composites, which are becoming widespread due to their numerous advantages (Wu et al., 2014). They comprise mainly polymerizable resins, filler particles coated with silane coupling agents, and photo-initiators (Karabela and Sideridou, 2011). Continuous research and development in this field aims to improve the chemical and physical, particularly the mechanical, properties (Ashour Ahmed et al., 2016) and aesthetics (Yu et al., 2009) of these composites. This can be done by altering parameters such as the type of resin (Ferracane, 1995; Gajewski et al., 2012) and the type of fillers (Habib et al., 2016; Miao et al., 2012b). Enhancement of mechanical properties mainly depends on filler factors such as the size, shape, and concentration of the fillers. Alternatively, new filler particles can be developed in this field (Chevigny et al., 2011). Filler size is one of several parameters affecting the overall properties of composite resins (Rastelli et al., 2012).
Nanotechnology has been introduced into the dental field through the production of functional structures in the range of 0.1-100 nm by various physical or chemical methods (Rinastiti et al., 2011). Over the past few years, commercial materials have increasingly incorporated nanofillers that are claimed to provide superior mechanical properties (de Oliveira et al., 2012). Various kinds of light curing restorative materials are available on the market comprising different fillers, such as nanofilled composites (Khurshid et al., 2015), macro-sized fillers (Filtek Supreme Ultra™), bioactive fillers (Beautifil II™), hybrid fillers (Venus Pearl™) and nano-sized fillers (Tetric N-Ceram, Ivoclar™). These materials have satisfactory mechanical strength and mostly consist of filler powders of Si (Balos et al., 2013), Al (Arora et al., 2015), Zn (Sevinç and Hanley, 2010), Zr (Guo et al., 2012), etc., at more than 50 wt% content (Torii et al., 1999). However, there is still a need to overcome problems of composite materials such as erosion, brittleness and moisture sensitivity (McCabe and Walls, 2013). Titania nanopowder was chosen for this research as it supplies high strength to the matrix (Awang and Wan Mohd, 2018) and contributes a tooth-like color, good antimicrobial properties (Pişkin et al., 2013), and a hydrophilic and self-cleaning nature (Banerjee et al., 2015) to dental composite materials. Conventional synthesis methods such as sol-gel (Bessekhouad et al., 2003), thermal decomposition (Moravec et al., 2001), sonochemical (Neppolian et al., 2008) and aerosol formation (Huisman et al., 2003) are time-consuming, require harmful chemicals (Tarafdar and Raliya, 2013) and have a higher analysis cost. One alternative is the green synthesis of NPs using renewable sources such as plants (Iravani, 2011). Our work focuses on an efficient method of synthesis of titania NPs with fruit peel extract and a microwave synthesizer, to lower the environmental-economic impact and to promote green chemistry in the field of nanoparticle synthesis.
Citrus aurantifolia, also known as 'Key lime', is a multipurpose fruit and a rich source of phytoconstituents (Gattuso et al., 2007). Phytoconstituents are responsible for the synthesis of nanoparticles (NPs) during reduction reactions (Santhoshkumar et al., 2017). All phytoconstituents synergistically act as reducing agents and capping agents (Madhumitha et al., 2012) that allow the controlled growth, stability and viability of NPs. Citrus peels can be used in the experimental protocols for the synthesis of NPs (Wilson, 1921). The current research work focuses on the synthesis technique of the TiO 2 fillers and their possible use in light curing dental composite materials.
Green and rapid synthesis of TiO 2 NPs
Fruit peel extract of C. aurantifolia was prepared in 10 mL of anhydrous isopropanol from 4 g of dry powder of the peels. It was heated to 50°C under constant stirring for the extraction process in a CEM microwave synthesizer (Zhu and Chen, 2014). The solution was made alkaline by the addition of liquor NH3 until the pH reached 10. The Ti precursor, titanium isopropoxide Ti{OCH(CH3)2}4, was added to the plant extract during stirring. The solution was then treated in a CEM closed-vessel microwave synthesizer at 60°C for 2 min at a power of 40 W. Evenly dispersed solid particles were produced, indicating the formation of NPs. The NPs were washed with acetone three times and calcined at 450°C to obtain the final product, green-synthesized TiO2 NPs (gTiO2 NPs).
Surface modification of gTiO 2 NPs
For surface modification, 1 g of solid gTiO2 NPs was dispersed in an enclosed vessel with anhydrous xylene under sonication for 1 h. The nanoparticle suspension was kept at 40°C under continuous stirring and APTES was added dropwise. The procedure was continued for 5 h. The suspension was then centrifuged, the filtrate was removed, and the residue was washed with xylene and finally with acetone three times. The residue was dried in an oven at 60°C until complete drying. The dried residue obtained was nano-sized (~40 nm) APTES-modified gTiO2. The APTES-modified gTiO2 NPs were then dispersed in anhydrous THF under sonication, and GMA was added dropwise under continuous sonication for 15 min. The suspension was refluxed at 60°C for 2 h, then centrifuged; the filtrate was removed and the residue washed with anhydrous acetone. The process was repeated three times and the residue was then dried at 60°C. The scheme for the preparation of GMA-modified gTiO2 NPs (mTiO2) is shown in Fig. 1. The same procedure was repeated with commercially available titanium dioxide extra pure (microparticles), titanium dioxide nanopowder (~7 nm; nanoparticles) and a mixture of both (microhybrid particles) (SRL, Mumbai, India) for comparison.
Fabrication of light curing nanocomposite resin material
Experimental light curing resin materials were fabricated as shown in Fig. 2 by thoroughly mixing, for about 6 h, different compositions of resins with different types and amounts of fillers, as shown in Tables 1-5. In addition, the CQ photo-initiator (2 wt% of the monomer) was added and mixed with the resin matrix using sonication to remove any air gaps in the resin. The mixture was then hand-mixed rigorously to achieve complete mixing and sonicated again, with the container wrapped in aluminum foil to prevent exposure to light. The materials were prepared in the dimensions required for the specific tests, namely the compressive, flexural and percentage polymerization shrinkage (% PS) tests. The specimens were prepared in Teflon molds of circular and/or rectangular shape. The resin mixture was filled into the molds and sonicated to remove any air bubbles. The surface was smoothed by placing Mylar™ strips on it with a glass chip, and the material was cured with halogen light of 400-500 nm waveband (3M ESPE Elipar™ 2500). The rectangular molds were used for mechanical tests and the circular mold was used for polymerization shrinkage.
The light curing unit was held at a distance of 2 mm from the surface of the resin material, which was cured for 100 s. For samples with a depth of more than 2 mm, the material was cured on both sides for 120 s each to attain complete curing. These samples were prepared in molds of dimensions 25 × 2 × 2 mm. For the % PS tests, samples were prepared in circular molds of 7 mm height and 11 mm diameter. The sample dimensions were measured and all the experimental samples were kept in distilled water for about 7 days (Thakur et al., 2017) before analysis. Different sets were prepared by changing the parameters that affect the physical properties of the composite materials, such as the size of the fillers (Foroutan et al., 2011; Shinkai et al., 2018), the amount of fillers (Rastelli et al., 2012), the resin matrix (Zhang and Matinlinna, 2011) and the surface modification of fillers (Wu et al., 2014). The experimental specimens were divided into 4 sets, as listed in Table 1.
The experimental samples were used for mechanical studies i.e. highest compressive, flexural stress and %PS.
Characterization
Physicochemical characterization of the green-synthesized nanoparticles and nanocomposite materials was performed by various techniques, namely dynamic light scattering (DLS), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR) and electron microscopy. For DLS measurements, samples were diluted in EtOH to 0.1 wt% and data were obtained on a Malvern Nano-ZS DLS instrument (Mastersizer software; Malvern, United Kingdom) automatically at room temperature. XRD data were obtained using a PANalytical X'Pert powder diffractometer (Philips, Almelo, Netherlands) over 2θ values ranging from 20 to 80°, using a Cu-Kα source of wavelength 1.54 Å. The crystallite sizes were calculated from the Scherrer formula applied to the major intense peaks (Mahshid et al., 2007). FTIR spectra were recorded on an IR Prestige 2 (Shimadzu, Kyoto, Japan) in the range of 400-4000 cm−1 to determine the chemical bonding of the organic components present on the surface of the nanoparticles after surface modification. Morphology and surface images were taken on a transmission electron microscope (TEM, CM200) equipped with selected area electron diffraction (SAED) (Philips, Madison, United States), and on a scanning electron microscope (SEM) with energy-dispersive X-ray spectroscopy (EDS) (Zeiss, Stockholm, Sweden) with a focused electron beam, delivering information about the sample's topography and size by SEM and elemental composition by EDS.
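The Scherrer calculation mentioned above follows D = Kλ/(β cos θ); the sketch below applies it with the Cu-Kα wavelength stated in the text and a conventional shape factor K ≈ 0.9 (the peak position and FWHM inputs are hypothetical, not the study's measured values).

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_A=1.54, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), returned in nm."""
    theta = math.radians(two_theta_deg / 2)       # theta is half the 2-theta angle
    beta = math.radians(fwhm_deg)                 # peak FWHM must be in radians
    return (K * wavelength_A / (beta * math.cos(theta))) / 10  # Angstrom -> nm

# hypothetical anatase (101)-like peak: 2-theta = 25.3 deg, FWHM = 0.9 deg
print(f"{scherrer_size_nm(25.3, 0.9):.1f} nm")  # ~9 nm
```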
Experimental light curing specimens were tested for their mechanical properties. The hardness of the particles has a great influence on compressibility. Higher flexural stress that a material can bear was obtained through a three-point bending test in order to figure out the ability of the sample to withstand the bending forces applied (Sfondrini et al., 2014).
Polymerization shrinkage (PS) of resin-composite materials may have a negative impact on the clinical performance (Braga et al., 2005) and thus a reduction in % PS is required (Karaman and Ozgunaltay, 2014). The objective of this study was to obtain an efficient protocol for the green synthesized TiO 2 NPs for the enhancement of physical properties of the light curing resin materials. Compression and flexural tests (Sfondriniet al.,2014) were performed using Universal Testing Machine, Instron 3345 with a capacity of 5000 N having the test speed of 0.5 mm/min on Bluehill 3 software. Compression tests were performed for cylindrical specimens having dimensions of 5 mm diameter and 5 mm height prepared by a teflon mold. The specimen was placed on its end between the plates of the universal testing machine. The compressive load was applied along the long axis of the specimen at a cross-head speed of 0.5 mm/min until the material breaks.
The flexural test was performed with the help of universal testing machine for obtaining maximum flexure load, flexure stress at maximum load, and maximum flexure strain values. The sample was mounted in the testing device using rounded supports at a distance of 20 mm and the beams were loaded until failure using the across-head speed of 0.5 mm/min (Hahnel et al., 2010) and operated until the material breaks. For polymerization shrinkage, % PS values were obtained after weighing the samples and calculated by the following formula: where, V = Volume of resin before (V 1 ) & after (V 2 ) curing M = Mass of resin before (M 1 ) & after (M 2 ) curing D = Density of resin before (D 1 ) & after (D 2 ) curing
DLS
A plot of amplitude vs. Rh (hydrodynamic radius) in the nanometer range gave the average radius of the suspended particles as 133.75 nm, which indicates the presence of micro-sized gTiO2 particles. The polydispersity value of the data obtained was 146.3 nm (Fig. 3).
FTIR
The FTIR spectra of bare gTiO2 NPs (Fig. 5, line a) and APTES-coated gTiO2 NPs (line b) were compared to confirm the APTES coating on the surface of gTiO2. The FTIR spectrum of bare gTiO2 NPs (a) showed a wide broad band at 3200-3500 cm−1 due to the stretching vibration of water molecules adsorbed on the surface of the hydrophilic gTiO2 NPs. The small peak at 1650 cm−1 was attributed to –OH bending vibrations. Many peaks below 700 cm−1 were present because of the numerous Ti–O–Ti bonds in bare gTiO2 NPs. The APTES-coated gTiO2 NPs (b) showed a broad band at 3500-3200 cm−1 due to the presence of –OH groups on the surface of the gTiO2 NPs and N–H symmetrical stretching. A weak band at 2927 cm−1 indicates the presence of alkyl groups [–(CH2)n–].
The absorption band near 1500 cm−1 corresponded to N–H vibrations of the amino group of APTES. A peak at 1024 cm−1, in the range of 1000-1050 cm−1, arose from the stretching vibrations of Ti–O–Si moieties, and the peak at 1348 cm−1 was ascribed to the C–N stretching mode, while the broad band at 1165 cm−1 is ascribed to Si–O–Si stretching. The peak at 800 cm−1 corresponded to Si–C stretching and the NH2 out-of-plane bending mode, confirming the APTES coating on the bare gTiO2 NPs, as shown in Fig. 5.
SEM and EDS
The SEM image showed the spherical nature of the gTiO2 NPs of different sizes; the image contained both small (nano-sized) and micro-sized particles, as shown in Fig. 6(a). The particles were aggregated due to the van der Waals forces existing between small-sized particles. The particle size from SEM was in the range of 3 nm to 1 μm, as marked in Fig. 6(a), indicating the nanohybrid nature of the particles. Impurities that could be present in the TiO2 samples were evaluated with the EDS technique. The EDS data (the contents of Ti, O and impurity atoms on the sample surface) were obtained at different points on the TiO2 surface, and indicated pure TiO2 (Fig. 6(b)). The abscissa of the EDS spectrum indicates the ionization energy and the ordinate indicates the counts: the higher the counts of a particular element, the higher its presence at that point or area of interest, and vice versa.
TEM
The particle sizes estimated from TEM (Fig. 7(a)-(c)) were found to be in the range of 10-100 nm. As shown in Fig. 7(a), bare gTiO2 NPs were highly aggregated and no distinct particles were found, whereas in Fig. 7(b), APTES-coated gTiO2 NPs were well separated from each other and reduced aggregation was observed between nano- and micro-sized particles, indicating the nanohybrid nature of the APTES-coated gTiO2 NPs. Fig. 7(c) presents the SAED pattern of the APTES-coated gTiO2 NPs, in which the small spots form rings, indicating a polynanocrystalline nature.
Physical properties of dental materials
The results for the physical dental properties of the materials, namely the maximum compressive stress, flexural stress and % PS of all four sets of samples listed in Table 1, are shown in Tables 2-5. The samples were analyzed three times; the mean of the three readings (n = 3) with the standard deviation (S.D.) is given in the tables. The effect of different types of fillers is presented in Table 2 (set 1), with the resin material (BisGMA 100 wt%) and the amount of fillers (10 wt%) kept constant.
The type of filler that gave the highest mechanical strength (maximum compressive and flexural stress), i.e. the nanohybrid surface-modified fillers, was used for the subsequent sets.
The effect of different wt% of nanohybrid fillers is presented in Table 3 (set 2), with the resin material (BisGMA 100 wt%) and the type of fillers (nanohybrid) kept constant.
The amount of nanohybrid fillers that gave the highest mechanical strength (maximum compressive and flexural stress), i.e. 5 wt% mTiO2 nanohybrid fillers, was used for the subsequent sets.
The effect of different resin compositions (BisGMA, UDMA and TEGDMA at varying percentage compositions) is presented in Table 4 (set 3), with the type of mTiO2 fillers (nanohybrid) and the amount of fillers (5 wt%) kept constant. The resin matrix composition that gave the highest mechanical strength (maximum compressive and flexural stress) and the lowest % polymerization shrinkage, i.e. BisGMA 60 wt% + TEGDMA 20 wt% + UDMA 20 wt%, was used for further testing.
The effect of surface modification on the mechanical and dental properties of the experimental composite materials, with 5 wt% nanohybrid mTiO2 and a resin composition of BisGMA 60 wt% + TEGDMA 20 wt% + UDMA 20 wt%, is presented in Table 5 (set 4).
The experimental nanocomposite material with 5 wt% surface-modified nanohybrid fillers (mTiO2) gave higher mechanical strength (maximum compressive and flexural stress) and lower % polymerization shrinkage compared with the material containing non-surface-modified nanofillers.
In set 1 (Table 2), the dental composite material with nanohybrid fillers showed higher compressive stress (141.95 MPa) and flexural stress (52.92 MPa), and lower % PS (7.48%), than those with micro-, microhybrid- and nano-sized fillers.
In set 4 (Table 5), the comparison of materials with bare and surface-modified 5 wt% nanohybrid fillers showed that the surface-modified fillers gave a compressive stress of 195.55 MPa, a flexural stress of 83.30 MPa and the lowest % PS of 3.48%, as compared with the nanocomposite material containing bare fillers. The optimized result for the material which gave the highest mechanical strength and the lowest (desired) % PS, considering all 4 sets, is shown in Table 6.
Discussion
Physicochemical characterization showed that the green, microwave-synthesized gTiO2 particles were a mixture of nano- and micro-sized particles, i.e. nanohybrid in nature. The temperature and pressure parameters were controlled in the compact microwave synthesizer, and this rapid process allowed the size-controlled synthesis to take place in a few minutes.
The microwave process has high efficiency (Zhu and Chen, 2014), consumes less energy and reduces time, while enhancing the quality of the product. The average size of the gTiO2 particles, 146.3 nm (i.e. in the sub-micrometre range), was concluded from the DLS system. In general, TiO2 particles possess excellent biocompatibility and decrease the potential for an allergic reaction (Hilger, 2013). The APTES coating of gTiO2 was responsible for the reduction in the aggregation of particles, even upon dispersion into the resin matrix. In the GMA-modified gTiO2, the C=C group of GMA on the surface of gTiO2 could participate in the polymerization of the monomer and form a covalent linkage between the resin matrix and the fillers, thus helping to improve the mechanical strength of the material (Rastelli et al., 2012; Shinkai et al., 2018). In the case of % PS, the volume contraction depends on many factors, including filler concentration (de Melo Monteiro et al., 2011). A low polymerization shrinkage (% PS) value indicates less micro-leakage and a better restoration (Li et al., 2012). The results for the other physical dental properties of the experimental composite materials showed that the nanohybrid fillers were the best among the micro, microhybrid and nano-sized fillers (Table 2). This may be because these fillers combine the properties of both, and the small particles lodged in the gaps between the micro-sized particles are hard to dislodge. The mechanical properties of the final resin material improved with an increase in the wt% of fillers and showed the highest values at 5 wt% (Table 3); beyond this, a further increase in the number of fillers increases agglomeration, which in turn impairs the mechanical strength of the resin (Rastelli et al., 2012). The mixture of the three resins together gave higher strength values than a single resin or a mixture of two (Table 4), and the surface-modified NPs were more apt as dental nanocomposite fillers than the bare fillers (Table 5).
On comparing the mechanical data of the green-synthesized fillers with the similar work carried out by Wu and coworkers, it was observed that the gTiO2 fillers achieved a reduction of the % PS value to 3.48%, which is lower than the reported 5.9% (Wu et al., 2014). This may be due to the stepwise selection of the nanohybrid fillers, the optimum wt% of surface-modified fillers and the optimum resin composition. The final material (Table 6) also showed a considerable increase in mechanical strength compared with conventional amalgam material (Narasimha and Vinod, 2013).
Dental nanocomposites have provided cosmetically acceptable results with excellent mechanical properties for both bulk (Rinastiti et al., 2010) and fiber-reinforced materials (Scribante et al., 2015). Researchers are synthesizing SiO2 microspheres (Miao et al., 2012a), zirconia-silica nanofibres (Guo et al., 2012), alumina (Arora et al., 2015), etc., as fillers, and some have also worked on their synthesis for the purpose of dental composites, e.g. nanosilica by a chemical method (Canché-Escamilla et al., 2014) and ZrO2/Al2O3 fillers by CO2 laser co-vaporization (Bartolomé et al., 2016), which are purely chemical methods. We have applied the 'green nanotechnology' concept to the synthesis of dental fillers. Considering the facts of TiO2 exposure in the human body, 5 wt% of TiO2 has minimal effects on health compared with other composites with higher filler contents (e.g., 65 wt% of silica in Vertise Flow™ (Maas et al., 2017) and 78 wt% filler content in Filtek Supreme™ (França et al., 2014)). Future studies in this field could involve other important aspects of nanofiller characteristics, such as cytotoxicity, color shades, color stability and in vitro studies. Moreover, evaluation of the cytotoxicity of the as-prepared resin composite material and NPs in oral proximity, and of its clinical performance, is needed in the future.
Conclusion
The results indicated that green-synthesized and surface-modified 5 wt% (mTiO2) nanohybrid particles in a resin mixture of bisphenol A glycidyl methacrylate (60 wt%), triethylene glycol dimethacrylate (20 wt%) and urethane dimethacrylate (20 wt%) gave increased mechanical strength and a decreased percentage polymerization shrinkage among the different sets of experimental compositions. In addition to the antimicrobial, hydrophilic and self-cleaning nature of tooth-colored TiO2 nanoparticles, gTiO2 nanohybrids can be used as effective fillers for light curing dental nanohybrid composite materials to improve their physical properties.
Conflict of interest
The authors declared that there is no conflict of interest.
"year": 2019,
"sha1": "775f5dfe4ab878d95d0f142e7da91f63d7a881ef",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sdentj.2019.01.008",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "774edaa7d58259b8d1d1d450dbf2580e428d7fc1",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Metabolic dynamics of human Sertoli cells are differentially modulated by physiological and pharmacological concentrations of GLP-1
ABSTRACT: Obesity incidence has reached pandemic proportions and is expected to increase even further. Glucagon-like peptide-1 (GLP-1) based therapies are well-established pharmacological resources for obesity treatment. GLP-1 regulates energy and glucose homeostasis, which are also crucial for spermatogenesis. Herein, we studied the effects of GLP-1 on human Sertoli cell (hSC) metabolism and mitochondrial function. hSCs were cultured in the absence of, or exposed to, increasing doses of GLP-1, mimicking physiological post-prandial levels (0.01 nM) or levels equivalent to the pharmacological doses (1 and 100 nM) used for obesity treatment. We identified the GLP-1 receptor in hSCs. Consumption/production of extracellular metabolites was assessed, as well as the protein levels or activities of glycolysis-related enzymes and transporters. Mitochondrial membrane potential and oxidative damage were evaluated. Glucose consumption decreased, while lactate production increased, in hSCs exposed to 0.01 and 1 nM GLP-1. Although lactate dehydrogenase (LDH) protein decreased after exposure to 100 nM GLP-1, its activity increased in hSCs exposed to the same concentration of GLP-1. Mitochondrial membrane potential decreased in hSCs exposed to 100 nM GLP-1, while the formation of carbonyl groups was decreased in those cells. These effects were accompanied by an increase in p-mammalian target of rapamycin (mTOR) Ser(2448). Overall, the lowest concentrations of GLP-1 increased the efficiency of glucose conversion to lactate, while a GLP-1 concentration of 100 nM induced mTOR phosphorylation and decreased mitochondrial membrane potential and oxidative damage. GLP-1 regulates testicular energy homeostasis, and the pharmacological use of GLP-1 analogues could be valuable to counteract the negative impact of obesity on male reproductive function.
HIGHLIGHTS
- The GLP-1 receptor is expressed in human Sertoli cells (hSCs).
- The lowest GLP-1 doses increased the efficiency of glucose conversion to lactate in hSCs.
- 100 nM GLP-1 decreased mitochondrial membrane potential and oxidative damage in hSCs.
- GLP-1 regulates testicular energy homeostasis.
- We suggest that GLP-1 analogues may counteract obesity-related male infertility.
Introduction
Obesity has emerged as a major healthcare problem that increases the risk of cardiovascular diseases and decreases lifespan (Fontaine et al., 2003). Reduced fertility is a silent obesity-related complication which, among other co-morbidities, is becoming a matter of concern (Alves et al.). Glucagon-like peptide-1 (GLP-1) is a peptide hormone produced by posttranslational cleavage of the proglucagon precursor protein (Bell et al., 1983). This peptide is mainly produced by intestinal L-cells and secreted post-prandially (Guedes et al., 2015). GLP-1 is known for its role in glucose homeostasis, predominantly mediated by the incretin effect: it potentiates insulin secretion after oral glucose ingestion, as compared to intravenous glucose administration. GLP-1 also acts in the central nervous system to reduce appetite and delays gastric emptying. Glucose equilibrium is crucial for energy homeostasis and the proper functioning of physiological processes, including male fertility. However, due to fast in vivo proteolytic digestion, mainly by dipeptidyl peptidase 4 (DPP4), and renal clearance, GLP-1 has a short half-life (Ruiz-Grande et al., 1993) that limits the use of the native peptide for pharmacological purposes. Thus, several GLP-1 analogues are available as options for the treatment of diabetes and obesity, as these promote weight loss while having a low risk of causing hypoglycaemia (Crane and McGowan, 2016).
GLP-1 exerts its activity via the GLP-1 receptor (GLP-1R), which belongs to a G-protein coupled receptor family (Bullock et al., 1996). GLP-1R expression within the male reproductive system has been detected in mouse Sertoli cells (SCs) (Zhang et al., 2015), suggesting that GLP-1 may have an impact on male fertility. In fact, GLP-1R knockout mice have reduced seminal vesicle and gonadal weights, despite showing normal testicular sex steroid levels and retaining fertility (MacLusky et al., 2000).
While it is known that GLP-1 and its analogues are key regulators of metabolism, the molecular means by which they impact cell metabolism remain unknown. It has been postulated that mitochondria are the main metabolic targets of this hormone (for review, Alves et al., 2016). For instance, GLP-1 improves mitochondrial membrane potential (Ogata et al., 2014) and also mitochondrial mass in a pancreatic cell line (Kang et al., 2015). Indeed, both GLP-1 analogues and DPP4 inhibitors, which increase endogenous GLP-1 levels by preventing its degradation, modulate mitochondrial structure and functioning (for review, Ranganath, 2008). These metabolic processes depend on a network of signalling pathways. The mammalian target of rapamycin (mTOR) kinase has emerged as pivotal for these processes (for review, Oliveira et al., 2017), controlling mitochondrial bioenergetics and glucose metabolism. GLP-1 interferes with the mTOR pathway (Park et al., 2015), and recently it was suggested that mTOR has an essential role in male reproduction (Jesus et al., 2017). In addition, mTOR controls glucose consumption and redox balance of human SCs (hSCs). The SCs are responsible for the nutritional support of spermatogenesis and are the major hormonal target inside the testis (Martins et al., 2016). Herein, we studied the effects of GLP-1 on hSC metabolism and mitochondrial function. We hypothesized that GLP-1 could interfere with hSC function, with potential implications for the nutritional support of spermatogenesis.
Chemicals
All chemicals were obtained from Sigma-Aldrich (St. Louis, MO, USA) unless specified otherwise.
Patient selection and testicular tissue preparation
Testicular tissue was obtained from testicular biopsies (n = 6) performed on men with conserved spermatogenesis, suffering from anejaculation due to previous vasectomy or traumatic section of the vas deferens, with the ultimate aim of recovering gametes for medically assisted procreation. After informed written consent, hSCs were isolated from the testicular tissue left in the culture plate once gamete retrieval was completed. Testicular biopsies and handling of testicular tissue were done at the Centre for Reproductive Genetics Professor Alberto Barros (Porto, Portugal) in accordance with the Guidelines of Local, National and European Ethical Committees and performed in agreement with the Declaration of Helsinki.
Sertoli cell culture
Human SCs were isolated following the protocol optimized by Oliveira and collaborators (Oliveira et al., 2009). Anti-Müllerian hormone and vimentin were used as specific protein markers to assess the purity of the hSC cultures (Steger et al., 1996). Culture purity was determined by phase-contrast microscopy using an ExtrAvidin Peroxidase Staining Kit, and only cultures with a purity above 95% were used.
To establish the various experimental groups, cells were washed with PBS and placed in serum-free culture medium (DMEM: Ham's F12, pH 7.4) supplemented with insulin-transferrin-sodium selenite (final concentrations of 10 mg/L, 5.5 mg/L and 6.7 μg/L, respectively), to which GLP-1 (Bachem AG, Bubendorf, Switzerland) was added (or not). One group of cells was not exposed to GLP-1 (0 nM; no GLP-1) and three other groups were treated with increasing concentrations of GLP-1 (0.01, 1 and 100 nM). The concentration of 0.01 nM was chosen taking into consideration the postprandial GLP-1 levels found in healthy individuals (Vilsboll et al., 2001; Balestrieri et al., 2015). The other GLP-1 concentrations were chosen to mimic the highest plasma concentrations attained after administration of a GLP-1 analogue at the maximum therapeutic dosage recommended for obese individuals (3 mg/day for liraglutide), either after a single administration (1 nM) or at steady state after daily administration for 5 weeks (100 nM) (Elbrond et al., 2002; Jiang et al., 2011; Danne et al., 2017).
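As a worked example of the dosing above, a minimal C1V1 = C2V2 sketch; the 10 μM stock concentration and the 1 ml medium volume are assumptions for illustration, not values from the study.

# Sketch: stock volumes needed to reach the study concentrations
# (0.01, 1 and 100 nM GLP-1) in culture medium. Stock concentration
# and medium volume are illustrative assumptions.
STOCK_NM = 10_000.0  # assumed 10 uM GLP-1 stock
WELL_UL = 1_000.0    # assumed 1 ml of medium per condition

for target_nm in (0.01, 1.0, 100.0):
    stock_ul = target_nm * WELL_UL / STOCK_NM  # C1*V1 = C2*V2
    # sub-microlitre volumes signal that an intermediate dilution is needed
    print(f"{target_nm} nM -> {stock_ul:.3f} ul of stock")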
Cells were exposed to GLP-1 for 6 h; this exposure period was chosen based on the GLP-1 half-life (Ruiz-Grande et al., 1993). After treatment, the cell culture medium was collected. The cells were then washed with PBS and detached using a Trypsin-EDTA solution (0.05%/0.02% (w/v)), counted using a Neubauer chamber and collected for protein, DNA and RNA extraction. Only groups with viability averaging 85-90%, evaluated by the Trypan Blue exclusion test, were considered for analysis.
RNA and DNA extraction
Total RNA (RNAt) was extracted using the E.Z.N.A.® RNAt commercial kit (Omega Bio-Tek, Norcross, USA) and DNA was extracted using the E.Z.N.A.® Tissue DNA commercial kit (Omega Bio-Tek, Norcross, USA), as indicated by the manufacturer. The amounts of DNA and RNA were determined using a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Wilmington, USA). The 260/280 nm absorbance ratios of the samples were used to assess the purity of DNA and RNA; DNA presented a ratio of ≈1.8 and RNA a ratio of ≈2 in the extracted samples. To assess the integrity of RNA and DNA, an aliquot of each sample was run on a denaturing agarose gel stained with GreenSafe (NZYTech, Lisboa, Portugal).
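A minimal sketch of the purity acceptance check implied above; the 0.1 tolerance window is an assumption, not a value stated in the methods.

def passes_purity(a260, a280, kind, tol=0.1):
    # Acceptance targets from the text: ~1.8 for DNA, ~2.0 for RNA.
    target = {"DNA": 1.8, "RNA": 2.0}[kind]
    return abs(a260 / a280 - target) <= tol

print(passes_purity(1.85, 1.0, "DNA"))  # True: 1.85 is within 1.8 +/- 0.1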
cDNA synthesis and Reverse Transcriptase Polymerase Chain Reaction (RT-PCR)
RNAt was reversely transcribed as previously described. The resultant cDNA was used with exon-exon-spanning primer sets designed to amplify specific cDNA fragments (Table 1). PCR was performed at each primer set's optimal annealing temperature, using standard methods (Table 1). The samples were visualized using the software Image Lab (BioRad, Hercules, CA, USA) coupled to a BioRad FX-Pro-plus image acquisition system (BioRad, Hemel Hempstead, UK). The product size was compared to a DNA ladder (NZYTech, Lisboa, Portugal). Human heart and human liver RNAt (AMS Biotechnology, Abingdon, UK) were used as positive controls, and a cDNA-free sample was used as negative control.
The mRNA expression levels of GLP-1R were evaluated in the different experimental groups. qPCR experiments were carried out in triplicate in a CFX 96 qPCR system (BioRad, Hercules, CA, USA). The efficiency of amplification for all primer sets was determined using serial dilutions of cDNA. Amplification conditions were as previously described (Martins et al., 2016). β-2-microglobulin (β2M) transcript levels were used to normalize the mRNA expression of the target genes. The target genes, primer sequences and annealing temperatures are described in Table 1. The fold variation in target gene expression was calculated following the mathematical model proposed by Pfaffl, using the formula 2^(−ΔCt).
Determination of mtDNA copy number
A qPCR analysis was performed to determine the mtDNA copy number as described (Fuke et al., 2011), with small modifications. The efficiency of amplification was determined by serial dilutions of DNA, and the amplification conditions used were as previously described (Alves et al., 2014). The reaction mixture consisted of NZY qPCR Mix (NZYTech, Lisboa, Portugal), primers (Table 1) and 20 ng of mtDNA. Each reaction was carried out in a CFX 96 (BioRad, Hercules, USA). Ct value differences between the NADH dehydrogenase subunit 1 (ND1) gene and the nuclear-encoded beta-2-microglobulin (β2Mnc) gene were used to quantify relative mtDNA copy number with the 2^(−ΔCt) model proposed by Pfaffl.
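Both quantifications above (GLP-1R mRNA normalized to β2M, and mtDNA copy number from the ND1 vs nuclear β2M Ct difference) reduce to the same 2^(−ΔCt) calculation; a minimal sketch with illustrative Ct values, not data from the study:

def relative_quantity(ct_target, ct_reference):
    # Pfaffl-style relative quantity: 2^-(Ct_target - Ct_reference)
    return 2.0 ** (-(ct_target - ct_reference))

# Example: a target amplifying two cycles later than the reference gene
print(relative_quantity(ct_target=24.0, ct_reference=22.0))  # 0.25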
Cytotoxicity assay
A sulforhodamine B (SRB) colorimetric assay was performed to test the cytotoxicity of GLP-1 to hSCs (Skehan et al., 1990). The cells were seeded and treated with the selected concentrations of GLP-1. After 6 h, the assay was performed as previously described. No cytotoxicity was observed for the GLP-1 concentrations used (data not shown).
Western blot
Total proteins isolated from hSCs were extracted using the M-PER Mammalian Protein Extraction Reagent (Thermo Scientific, Rockford, USA). Western blot was performed as previously described (Alves et al., 2014). The membranes were incubated overnight at 4°C with primary and then secondary antibodies (Table 2). Mouse β-tubulin was used as protein loading control. An ECF detection system was used, and the membranes were read in the BioRad FX-Pro-plus (BioRad, Hemel Hempstead, UK). The density of each band was quantified according to standard methods using Image Lab (BioRad, Hemel Hempstead, UK). Whenever possible, the membranes were stripped with Restore Western Blot Stripping Buffer (Thermo Scientific, Rockford, USA) following the manufacturer's instructions, blocked, and reprobed with other primary and secondary antibodies.
Lactate dehydrogenase (LDH) activity
The commercial Pierce LDH kit (Thermo Scientific, Rockford, USA) was used to measure LDH activity. Activity values were calculated using the molar absorptivity of formazan (ε = 19,900 M−1·cm−1).
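A minimal Beer-Lambert sketch for turning the formazan absorbance change into an activity, using the ε cited above; the path length and volumes are assumptions for illustration, not kit parameters.

EPSILON = 19_900.0  # M^-1 cm^-1, molar absorptivity of formazan

def ldh_activity_u_per_l(delta_a_per_min, path_cm=1.0,
                         total_vol_ml=0.1, sample_vol_ml=0.05):
    # Beer-Lambert: rate of formazan formation in the reaction mix (M/min)
    rate_m_per_min = delta_a_per_min / (EPSILON * path_cm)
    # 1 U = 1 umol/min; scale to U per litre of sample
    return rate_m_per_min * 1e6 * (total_vol_ml / sample_vol_ml)

print(ldh_activity_u_per_l(0.05))  # ~5 U/L for dA/dt = 0.05 per min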
Measurement of oxidative damage
The oxidative damage to proteins and lipids was evaluated by slot blot. We determined the levels of protein carbonyl, nitrotyrosine (NT) and 4-hydroxynonenal (4-HNE) groups as previously described. The membranes were incubated with primary and secondary antibodies (Table 2). An ECF detection system was used, the membranes were read in the BioRad FX-Pro-plus, and the density of each band was quantified using Image Lab (BioRad, Hemel Hempstead, UK).
Glutathione content assay
The glutathione content of cells exposed to GLP-1 was measured using a commercial kit for the quantification of total, oxidized and reduced glutathione (Enzo Life Sciences, Lausen, Switzerland), according to the manufacturer's instructions.
Mitochondrial membrane potential
The cationic JC-1 dye (Molecular Probes, Eugene, OR, USA) was used to evaluate mitochondrial membrane potential. Cells were exposed to the chosen concentrations of GLP-1 for 6 h, and mitochondrial membrane potential was determined as previously described (Martins et al., 2016). Fluorescence intensity was measured using a Cytation 3 Imaging Reader (BioTek Instruments, Winooski, USA). The ratio of the fluorescence intensity of JC-1 aggregates to monomers was used as an indicator of mitochondrial membrane potential.
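A sketch of the aggregate/monomer readout, normalized to the untreated group as in the results; the intensity values are illustrative placeholders, not measurements from the study.

def jc1_ratio(aggregate_f, monomer_f):
    # JC-1 aggregate/monomer fluorescence ratio tracks membrane potential
    return aggregate_f / monomer_f

control = jc1_ratio(12000, 8000)
treated = jc1_ratio(7500, 8200)
print(treated / control)  # <1 indicates depolarization relative to control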
Statistical analysis
Statistical significance was assessed by one-way ANOVA. All experiments were performed in triplicate, and data are shown as mean ± SEM (n = 6 for each condition). Statistical analysis was performed using GraphPad Prism 6 (GraphPad Software, San Diego, CA, USA). P < .05 was considered significant.
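An equivalent one-way ANOVA can be reproduced outside GraphPad; a minimal sketch with synthetic placeholder values for the four GLP-1 groups (not study data):

from scipy import stats

no_glp1 = [122, 118, 131, 109, 127, 125]
glp1_0_01 = [45, 52, 39, 48, 41, 47]
glp1_1 = [26, 31, 22, 24, 29, 21]
glp1_100 = [76, 81, 70, 79, 72, 77]

f_stat, p_value = stats.f_oneway(no_glp1, glp1_0_01, glp1_1, glp1_100)
print(f_stat, p_value < 0.05)  # significant if p < .05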
3.1. GLP-1 receptor is expressed in hSCs
GLP-1R was previously identified in the mouse testis only by immunohistochemistry (Zhang et al., 2015). We investigated the presence of GLP-1R in hSCs and detected a 300 bp amplicon, corresponding to the presence of GLP-1R mRNA (Fig. 1A). Our results also show that, when exposed to the different concentrations of GLP-1, hSCs present no alteration in the transcript levels of GLP-1R (Fig. 1B).
3.2. Glucose consumption is decreased in hSCs exposed to GLP-1
Glucose is essential for SC metabolism and spermatogenesis. This metabolite enters SCs mostly through glucose transporters (GLUTs), GLUT1-3 (Martins et al., 2016). After treatment with GLP-1, no alterations were detected in the protein levels of those GLUTs in hSCs (Supplementary Fig. 1A-C). Nevertheless, when hSCs were exposed to 0.01 and 1 nM of GLP-1, glucose consumption decreased to 45.35 ± 18.40 and 25.66 ± 15.60 pmol/cell, respectively, while cells exposed to 100 nM of GLP-1 consumed 75.82 ± 28.40 pmol of glucose/cell and those not exposed to GLP-1 consumed 122.30 ± 30.70 pmol of glucose/cell (Fig. 2A). SCs also produce acetate (Alves et al., 2012), which is exported to the intratubular fluid by monocarboxylate transporters (MCTs), as is lactate. Our results show that exposure to all GLP-1 concentrations did not alter the protein levels of MCT4 in hSCs (Supplementary Fig. 1D). On the other hand, acetate production by hSCs exposed to 1 nM of GLP-1 increased to 2.49 ± 0.38 pmol/cell when compared to non-exposed cells, which produced 1.34 ± 0.31 pmol/cell of acetate (Supplementary Fig. 2C).
3.3. Exposure to GLP-1 increased lactate production by hSCs
Once inside the cell, glucose is metabolized to pyruvate, which stands at a crossroads of several metabolic pathways. Exposure of hSCs to GLP-1 did not alter the consumption of pyruvate (Supplementary Fig. 2A). One of the metabolic fates of pyruvate is conversion to alanine by alanine aminotransferase. When we evaluated alanine production by hSCs exposed to GLP-1, no differences were observed (Supplementary Fig. 2B). In these cells, the majority of pyruvate is converted to lactate by LDH. Our results showed that lactate production by hSCs exposed to GLP-1 increased to 11.49 ± 2.06 and 11.49 ± 1.67 pmol/cell for cells exposed to 0.01 and 1 nM, respectively, when compared to non-exposed hSCs (6.72 ± 2.48 pmol/cell of lactate) (Fig. 2B). We further analysed LDH expression and detected that it slightly decreased in cells exposed to 100 nM of GLP-1, to 0.86 ± 0.03-fold variation relative to non-exposed cells, when compared to cells exposed to 1 nM of GLP-1 (0.94 ± 0.02-fold variation relative to non-exposed cells) (Fig. 2C). However, LDH activity was increased in hSCs exposed to 100 nM of GLP-1, to 1.26 ± 0.16-fold variation relative to non-exposed cells, when compared to hSCs exposed to 1 nM of GLP-1, which presented an activity of 0.78 ± 0.11-fold variation relative to non-exposed cells (Fig. 2D).
3.4. Human SCs exposed to 100 nM of GLP-1 presented decreased mitochondrial membrane potential
Our data showed that mitochondrial membrane potential decreased in SCs exposed to 100 nM (ratio of 0.61 ± 0.02) when compared to non-exposed cells or those exposed to 0.01 and 1 nM of GLP-1 (ratios of 1 ± 0.06, 1.15 ± 0.07 and 1.11 ± 0.10, respectively) (Fig. 3A). Still, when we analysed the relative quantity of mitochondrial DNA, no alteration was observed in hSCs exposed to GLP-1 when compared to non-exposed cells (Fig. 3B).
3.5. Carbonyl group levels decreased in hSCs exposed to 1 and 100 nM when compared to 0.01 nM of GLP-1
The high glycolytic flux, followed by the conversion of pyruvate to lactate, as happens in SCs, promotes a pro-oxidative environment. The products of protein carbonylation (dinitrophenol (DNP) derivatives), nitration (NT) and lipid peroxidation (4-HNE) are biomarkers of oxidative stress. Carbonyl group levels were not altered in any of the GLP-1-exposed groups when compared with the no GLP-1 group. However, hSCs exposed to 1 and 100 nM of GLP-1 presented lower levels of carbonyl groups (0.78 ± 0.07 and 0.86 ± 0.05-fold variation relative to non-exposed cells (no GLP-1), respectively; Fig. 4A) when compared with hSCs exposed to 0.01 nM of GLP-1 (1.10 ± 0.02-fold variation relative to non-exposed cells (no GLP-1)). Yet, the levels of 4-HNE and NT groups were not altered in cells exposed to GLP-1 when compared with non-exposed cells (Fig. 4B and C, respectively). Additionally, we determined the protein levels of antioxidant enzymes such as catalase and the total/reduced glutathione ratio. Our results show that cells exposed to all GLP-1 concentrations did not present alterations in the protein levels of catalase or in the total/reduced glutathione ratio when compared with the levels detected in non-exposed cells (Supplementary Fig. 3A and 3B, respectively).
3.6. Phosphorylated mTOR is increased in hSCs exposed to 100 nM of GLP-1
mTOR has a crucial role in coordinating cellular homeostasis and energy status (Chiang and Abraham, 2005). Recent studies demonstrate that mTOR signalling modulates the glycolytic and oxidative profile of hSCs. GLP-1 had no effect on mTOR phosphorylation in hSCs of any of the exposed groups when compared with the no GLP-1 group. Still, hSCs exposed to 100 nM of GLP-1 presented an increase in the protein levels of phosphorylated mTOR (Ser2448), to 1.41 ± 0.28-fold variation relative to non-exposed cells (no GLP-1), when compared to hSCs exposed to 0.01 nM of GLP-1 (0.77 ± 0.12-fold variation relative to non-exposed cells (no GLP-1)) (Fig. 5).
Discussion
GLP-1 is a peptide hormone with an active role in the regulation of circulating glucose levels (Brubaker, 2006). Glucose equilibrium is crucial for energy homeostasis and the proper functioning of physiological processes, including male fertility (Dunning et al., 2005). Excessive fat accumulation is associated with dysregulation of energy homeostasis signalling systems and with a concurrent impairment of GLP-1-mediated functions (Dirksen et al., 2013). Several GLP-1 analogues are now used as pharmacological agents to promote weight loss (Blundell et al., 2017). Recent studies have consistently associated the increase in obesity rates with decreased male fertility (Rato et al., 2014a). By 2030, it is estimated that over 50% of the world population will be overweight/obese (Smith and Smith, 2016), which will aggravate male infertility trends. Alterations in SC metabolism have been hypothesized to contribute to reduced fertility in obese males (Winters et al., 2006). In fact, the SCs are responsible for the nutritional support of spermatogenesis.
There is evidence that GLP-1 may influence male reproductive function (Jeibmann et al., 2005), although few studies have focused on the molecular mechanisms. Since SCs are major hormonal targets within the testis (Alves et al., 2014; Martins et al., 2015; Martins et al., 2016), we hypothesized that GLP-1 could directly modulate the metabolic functions of these cells, with a possible effect on male fertility. For that, we exposed hSCs to increasing doses of GLP-1 to evaluate: (1) the impact of postprandial levels of GLP-1; (2) the impact of GLP-1 at the levels reached by its analogues (liraglutide) in the plasma of healthy/obese individuals. The maximum plasma concentrations found in overweight and obese subjects treated with GLP-1 analogues are similar to those observed in healthy individuals (Onge et al., 2016). GLP-1 actions are mediated by GLP-1R. So far, GLP-1R had only been identified in mouse SCs by immunohistochemistry of testicular tissue (Zhang et al., 2015). Our results allowed us to observe for the first time the expression of GLP-1R in isolated hSCs. Interestingly, GLP-1 exposure did not change GLP-1R expression in cultured hSCs, illustrating that the mRNA expression of this receptor in these cells is not rapidly responsive to increasing GLP-1 concentrations.
We then evaluated the impact of GLP-1 postprandial levels on hSC metabolism and bioenergetics. These cells take up glucose from the interstitial fluid through the action of GLUTs (GLUT1-3). Glucose is then used to produce essential metabolites (mainly lactate) that serve as energy sources for developing germ cells (Martins et al., 2016). Although GLP-1 exposure did not alter GLUT expression, hSCs decreased their consumption of glucose after exposure to postprandial levels of this hormone, as compared to non-exposed cells. Concordant results have been reported in other mammalian cell lines, with GLP-1 exposure decreasing deoxyglucose uptake (Morales et al., 2014). Our results suggest that this hormone is capable of eliciting an alteration of glucose uptake, likely due to an alteration in its metabolism. Still, GLP-1 postprandial levels were capable of stimulating lactate production by hSCs. Previous studies show that lactate is essential for spermatogenesis, since it is used as a metabolic fuel by developing germ cells (Boussouar and Benahmed, 2004) and has an anti-apoptotic effect on those cells (Rato et al., 2014b). The postprandial levels of GLP-1 increased the production of lactate while glucose consumption was decreased, as compared to non-exposed cells, illustrating the metabolic commitment of these cells to an efficient production of lactate. No effect was detected on mitochondrial functionality in these cells: neither mitochondrial membrane potential nor DNA content was affected by exposure to these levels of GLP-1. Moreover, the oxidative stress markers (protein carbonyl and nitrotyrosine groups, and lipid peroxides) were not altered, which correlates with normal functioning of the mitochondria. Hence, at postprandial levels, GLP-1 seems to be vital for eliciting the production of lactate by hSCs, which in turn consume lower amounts of glucose. These cells have adaptive mechanisms and show a metabolic plasticity (Alves et al., 2014; Martins et al., 2015; Martins et al., 2016; Meneses et al., 2016) that is very important to sustain the metabolic support of spermatogenesis. They utilize a wide range of metabolic substrates to produce lactate, namely triglycerides that accumulate as lipid droplets (Gorga et al., 2017). Depending on substrate availability and on stimuli, SCs can oxidize these cytoplasmic lipid droplets to support their metabolic requirements (Jutte et al., 1985). It has been described that GLP-1 may promote lipid droplet remodelling and lipolysis in human adipocytes when present at concentrations above 10^−11 M (Villanueva-Penacarrillo et al., 2001). A similar event may be occurring in hSCs exposed to the postprandial levels of GLP-1 (Vilsboll et al., 2001; Balestrieri et al., 2015). Similar results were obtained when the hSCs were exposed to GLP-1 concentrations that mimic the levels of GLP-1 analogues observed in the plasma after a single administration of liraglutide at the therapeutic dosage recommended for obese individuals (Elbrond et al., 2002). Both glucose consumption and lactate production were maintained when compared to cells exposed to postprandial levels of GLP-1. The SC is responsible for producing lactate, favouring the glycolytic flux, but it also uses mitochondrial oxidative phosphorylation to sustain its own energetic needs. Human SCs exposed to 1 nM of GLP-1 presented an increase in acetate production; however, this did not lead to an increase in mitochondrial membrane potential, as observed in other cell lines (Morales et al., 2014).
No changes were observed in the markers of nitration and lipid peroxidation, nor in the levels of antioxidant defences, but we found less oxidative damage in proteins of cells exposed to this concentration of GLP-1. This protective effect mediated by GLP-1 may be essential to counteract the pro-oxidant environment promoted by an increased metabolic activity.
Contrasting results were obtained with cells exposed to GLP-1 concentrations that mimic the levels of GLP-1 analogues observed in the plasma after prolonged administration of liraglutide at the therapeutic dosage recommended for obese individuals (Jiang et al., 2011; Danne et al., 2017). In this case, hSCs appeared to consume higher amounts of glucose while producing the same amounts of lactate, as compared to cells exposed to postprandial GLP-1 levels. This increased glucose consumption was associated with stimulation of LDH activity, which suggests an adaptation to sustain lactate production. In fact, hSCs exposed to 100 nM of GLP-1 presented a decrease in mitochondrial membrane potential. As noted, SCs have a distinct metabolic behaviour when compared to most somatic cells, with remarkable metabolic resemblances to cancer cells. Thus, decreased mitochondrial membrane potential may not be a sign of compromised function, but rather a shift to sustain the biosynthetic requirements of developing germ cells, redirecting the metabolism of the cell away from oxidative phosphorylation. Indeed, when we assessed oxidative damage, which is often associated with mitochondrial malfunction, the exposure to 100 nM GLP-1 did not promote protein nitration or lipid peroxidation; rather, these cells showed less oxidative damage in proteins, without any changes in the levels of antioxidant defences. Again, GLP-1 appears to exert a protective effect that is essential to counteract the testicular pro-oxidant environment. Although these results suggest that GLP-1 has an antioxidant effect, the exact mechanism by which GLP-1 decreases oxidative stress in hSCs remains to be elucidated. Our studies further suggest that these alterations in glucose metabolism and mitochondrial function in hSCs exposed to 100 nM GLP-1 are associated with stimulation of the mTOR Complex 1 (mTORC1) pathway, as mTOR phosphorylation at Ser2448 is increased. Although the significance of the mTORC1 pathway for GLP-1-mediated effects is unclear, the involvement of this signalling pathway has been described in several cellular systems (Ravassa et al., 2011).
In conclusion, GLP-1 was able to modulate glucose metabolism and bioenergetics, promoting the production of lactate by hSCs. Moreover, exposure to the highest concentration of GLP-1 decreased oxidative damage in these cells. The absence of toxic effects of GLP-1 at this concentration in hSCs, allied to the decrease in oxidative damage, suggests a possible positive impact on male fertility. Still, further experiments are needed to clarify the effects of GLP-1 on male reproductive health and to determine whether the effects observed in vitro translate to in vivo. Taking into consideration the decline in fertility rates parallel to the increasing prevalence of obesity, it is crucial to understand how GLP-1 affects male fertility. The use of GLP-1 analogues for obesity treatment could also be valuable to counteract the negative impact of adiposity-related metabolic dysregulation on male reproductive function and arises as an additional target for medical intervention. | 2018-10-22T17:25:40.424Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "fb2e481d7ef416bb146256b11a3e9456bb0d1644",
"oa_license": "CCBY",
"oa_url": "https://ubibliorum.ubi.pt/bitstream/10400.6/9277/1/ARTIGO_83.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "fb2e481d7ef416bb146256b11a3e9456bb0d1644",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
25802634 | pes2o/s2orc | v3-fos-license | Characterization of RNA strand displacement synthesis by Moloney murine leukemia virus reverse transcriptase.
The RNase H activity of reverse transcriptase (RT) is presumably required to cleave the RNA genome following minus strand synthesis to free the DNA for use as a template during plus strand synthesis. However, since RNA degradation by RNase H appears to generate RNA fragments too large to spontaneously dissociate from the minus strand, we have investigated the possibility that RNA displacement by RT during plus strand synthesis contributes to the removal of RNA fragments. By using an RNase H- mutant of Moloney murine leukemia virus (M-MuLV) RT, we demonstrate that the polymerase can displace long regions of RNA in hybrid duplex with DNA but that this activity is approximately 5-fold slower than DNA displacement and 20-fold slower than non-displacement synthesis. Furthermore, we find that although certain hybrid sequences seem nearly refractory to the initiation of RNA displacement, the same sequences may not significantly impede synthesis when preceded by a single-stranded gap. We find that the rate of RNA displacement synthesis by wild-type M-MuLV RT is significantly greater than that of the RNase H- RT but remains less than the rate of non-displacement synthesis. M-MuLV nucleocapsid protein increases the rates of RNA and DNA displacement synthesis approximately 2-fold, and this activity appears to require the zinc finger domain.
Retroviral replication requires the single-stranded RNA genome of the virus to be converted into double-stranded DNA through a complex series of reactions termed reverse transcription. This process appears to be catalyzed solely by the viral reverse transcriptase (RT), which possesses the following two distinct enzymatic activities: a polymerase activity that synthesizes DNA using either RNA or DNA templates, and an RNase H activity that cleaves RNA in hybrid duplex with DNA (1, 2).
The current model of reverse transcription proposes that the RNase H activity of RT is critical for several steps, including degradation of the 5′ end of the RNA genome following minus-strand strong-stop DNA synthesis to facilitate the first jump, specific cleavage at the polypurine tract to create the plus strand primer, and removal of the plus and minus strand primers (3). Additionally, it is presumed that the RNase H activity is required to degrade the RNA genome following minus strand synthesis to free the minus strand DNA for use as a template during plus strand synthesis (reviewed in Ref. 3). In vitro studies, however, suggest that RNA fragments that are too large to spontaneously dissociate from the minus strand remain following cleavage of the genome (4-11). Furthermore, evidence that plus strand synthesis in several retroviral systems is discontinuous demonstrates that stably annealed RNA fragments persist in vivo (3, 12-14). This raises the interesting possibility that reverse transcription requires a mechanism for the displacement of genomic RNA fragments during plus strand synthesis.
Most replicative polymerases require accessory proteins such as helicases and single-strand binding proteins (SSBs) to unpair the duplex region in front of the primer terminus during DNA synthesis (15). In contrast, studies from our laboratory2 and others (16-20) demonstrate that RTs from several retroviral systems possess the capacity to catalyze displacement of the non-template DNA strand in the absence of accessory proteins, although the rate of synthesis appears to be roughly 3-12-fold slower than that found during non-displacement synthesis on a single-stranded template. Similarly, both human immunodeficiency virus type 1 (HIV-1) and Moloney murine leukemia virus (M-MuLV) RTs appear to possess at least a limited capacity to displace non-template RNA during synthesis on RNA-DNA hybrid templates (21), but this process has not been characterized in detail.
Since reverse transcription to yield full-length viral DNA in vitro has only been achieved in permeabilized virions or ribonucleoprotein complexes (22-24), it seems possible that one or more virion-associated accessory proteins are required for the complete reaction. A leading candidate for the role of an accessory factor is the viral nucleocapsid (NC) protein. NC is a small, basic protein that possesses either one or two zinc finger motifs in conventional retroviruses. NC binds nucleic acids with some apparent cooperativity, shows a higher binding affinity for RNA over DNA with a preference for single strands, and promotes renaturation between complementary nucleic acid chains (25-31). These properties are reminiscent of those associated with SSBs (32-34), thus leading to the proposal that NC may serve to facilitate reverse transcription. Many studies have indicated that NC promotes the first and second template switches and is important during the initiation of reverse transcription from the tRNA primer (35-40), while other reports have suggested that NC improves the efficiency of synthesis during reverse transcription (39, 41-45).
In this study we have tested the ability of M-MuLV RT to catalyze RNA displacement synthesis in the absence or presence of the RNase H activity, and we have investigated the effects of NC on both DNA and RNA displacement synthesis.
Our results indicate that RT has the capacity to displace RNA, but the rate is slower than that of DNA displacement synthesis and much slower than non-displacement synthesis. We find that M-MuLV NC facilitates displacement and that this activity is dependent on the zinc finger motif of the protein.
Nucleic Acids
Plasmids-The recombinant phagemid pBSMOLTR(−) was generated as described previously for pBSMOLTR(+)2 except the long terminal repeat (LTR) insert was cloned into pBluescript II KS(−) (Stratagene) such that minus sense single-stranded DNA would be isolated by infection with M13KO7 helper phage. The recombinant phagemid pGEMLTR2 contains a 645-bp fragment of the M-MuLV LTR isolated from pBSMOLTR(−) (genome position 7851 to 231) and cloned into pGEM3Zf(+) (Promega Corp.). Single-stranded pGEMLTR2 DNA produced by helper phage rescue contained minus strand viral sequences. Phagemid pGEMLTR3 was generated by cloning a 744-bp EcoRI-BamHI LTR insert from pBSMOLTR(−) into pGEM3Zf(−) such that phage-rescued single-stranded DNA contained plus strand viral sequences. For brevity, nucleic acids derived from pGEMLTR2 and pGEMLTR3 will be referred to as LTR2 and LTR3, respectively. M13LTR2 was constructed by polymerase chain reaction amplification and cloning of a 697-bp region of pGEMLTR2 (including the insert) into M13mp7 at the HincII restriction enzyme sites.
Single-stranded DNAs-Various single-stranded recombinant phagemid and phage DNAs were isolated by established procedures (46). Where indicated, the single-stranded phagemid DNA was linearized by restriction enzyme digestion after annealing an oligonucleotide that generated the restriction enzyme recognition site, followed by phenol/chloroform extraction of the product. To recover the single-stranded insert (ssLTR2i) from M13LTR2, the DNA was heated to 90°C, slow-cooled in EcoRI buffer to anneal the complementary regions flanking the insert, and digested with 500 units of EcoRI in a final volume of 0.5 ml for 1 h at 37°C. Following phenol/chloroform extraction, the 687-nt single-stranded EcoRI product was gel-isolated with QIAEX II (QIAGEN) on a 0.7% agarose gel following the manufacturer's protocol. Several gel-isolated DNA fragment preparations were combined and further purified over an anion column (QIAGEN-tip 100) as specified by the manufacturer.
Preparation of RNA-For LTR2 RNA, BamHI-linearized LTR2 DNA was transcribed by T7 RNA polymerase as specified in the RiboMAX kit (Promega) except that, prior to DNase I treatment, the reaction was treated with Escherichia coli alkaline phosphatase (~0.4 units/μg plasmid DNA) for 20 min at 37°C to remove 5′-triphosphates (46). Full-length RNA transcripts were purified by electrophoresis on a 6% denaturing polyacrylamide gel and elution into 10 mM Tris-HCl, pH 8.0, 1 mM EDTA (TE) for 14 h at 25°C. The eluate was filtered through a 0.2-μm syringe filter (Corning Glass Works), ethanol-precipitated in the presence of 0.3 M sodium acetate, and resuspended in TE. LTR3 RNA was generated using SP6 RNA polymerase (New England Biolabs, Inc.) following established procedures (46). Following phenol/chloroform extraction and ethanol precipitation, the RNA was treated for 20 min at 37°C with E. coli alkaline phosphatase (2 units/μg RNA). SDS was added to 0.05%, and the RNA was extracted twice each with phenol and chloroform, ethanol-precipitated in the presence of 0.3 M sodium acetate, and resuspended in TE. To establish the efficiency of full-length RNA production, parallel transcription reactions were carried out containing 36 μM UTP and 50 μCi of [α-32P]UTP. Although a detectable fraction of the LTR3 RNA appeared to be truncated (perhaps due to premature termination by SP6 RNA polymerase), >80% of the transcripts were 500 nt or larger. Therefore, all calculations involving RNA displacement on pGEMLTR3 used 500 bases as the effective end of the displacement region. All RNAs were stored at −80°C.
Preparation of Primer-Templates
Short Oligonucleotide Primer-Templates-To ensure that only the downstream oligonucleotides varied between reactions, the oligonucleotides were annealed to the single-stranded template DNA in two stages. In the first stage, the end-labeled primer (oligo II) was annealed to the single-stranded DNA template (pBSMOLTR) at a molar ratio of 1:1.5 in 1.4× RT buffer (see below) by heating to 63°C and then slow-cooling to 14°C. In the second stage, the annealed sample was divided into three parts to which were added either DNA oligo IV (DNA displacement), RNA oligo IV (RNA displacement), or TE (non-displacement), resulting in a ratio of primer:template:downstream oligonucleotide of 1:1.5:7.5 (see also Fig. 1A). The second stage annealing was performed under the same conditions as the first.
Extended Primer-Templates-The DNA displacement templates were prepared as follows (see also Fig. 2A): the DD oligo (corresponding to the 5′ end of the downstream non-template strand) was annealed to linear single-stranded DNA (or ssLTR2i for NC assays) at a 1:2 molar ratio (single-stranded DNA:oligonucleotide) in 2.3× Sequenase buffer (Amersham Pharmacia Biotech) by heating for 3 min at 65°C and then incubating at 42°C for 45 min. Reaction conditions were adjusted to 40 mM Tris-HCl, pH 7.5, 20 mM MgCl2, 50 mM NaCl, 6 mM dithiothreitol (DTT), 0.2 mM dNTPs, and the DD oligo was extended with Sequenase at 37°C for 30 min. The reaction was terminated by the addition of EDTA to 20 mM, extracted twice with phenol/chloroform and chloroform, and then ethanol-precipitated in the presence of 0.3 M sodium acetate. As a control for the efficiency of the DD oligo extension, a parallel reaction was carried out except the DD oligo:linear DNA ratio was ~1:12, and the DD oligo was 32P-end-labeled.
For the DNA displacement assay, the 5′-32P-labeled oligonucleotide primer was annealed to the DNA displacement template at a molar ratio of 1:2 (primer:template) in annealing buffer (200 mM KCl, 10 mM Tris-HCl, pH 7.5) by heating for 45 min at 67°C and then for 30 min at 41°C. The RNA displacement and non-displacement templates were prepared by combining the end-labeled primer with single-stranded linear DNA (or ssLTR2i for NC assays) and in vitro transcribed RNA (RNA displacement template) or TE (non-displacement template) at a molar ratio of 1:1.6:2.7 (primer:template:RNA) and annealing as described above (see also Fig. 2A).
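A small sketch converting the molar ratios above into per-reaction amounts, taking the 10 nM primer in a ~30 μl reaction from the assay conditions below (≈300 fmol of primer); the function name and the example ratios shown are illustrative.

def annealing_amounts(primer_fmol, ratios):
    # ratios are relative to primer = 1, e.g. {"template": 1.5, "oligo IV": 7.5}
    amounts = {"primer": primer_fmol}
    amounts.update({name: r * primer_fmol for name, r in ratios.items()})
    return amounts

# 10 nM primer in 30 ul = 300 fmol; short-oligo annealing ratio 1:1.5:7.5
print(annealing_amounts(300.0, {"template": 1.5, "oligo IV": 7.5}))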
Displacement Synthesis Assays
Displacement Assays Using Oligonucleotide Primer-Templates-Synthesis assays contained 10 nM primer in a 30-μl reaction volume. The annealed short oligonucleotide primer-templates (described above) were preincubated with 200 units of SuperScript II (SSII) in 1× reaction buffer (1× RT buffer (50 mM Tris-HCl, pH 8.3, 50 mM KCl, 6 mM MgCl2) containing 1 mM DTT and 2% glycerol) for 2 min at 37°C. Programmed synthesis was initiated by the addition of dATP, dGTP, and dCTP (final concentrations, 200 μM each) in 1× reaction buffer equilibrated to 37°C. Omission of dTTP directed the termination of synthesis to the first dA residue in the template, found 13 bases beyond the point of initiation. Reactions were incubated at 37°C, and 5-μl aliquots were removed and mixed with an equal volume of 95% formamide, 20 mM EDTA, 0.05% bromphenol blue, 0.05% xylene cyanole at the indicated times. As a negative control for each assay, T4 DNA polymerase was used in place of SSII. To prevent degradation of the oligonucleotides by the 3′-exonuclease activity of T4 DNA polymerase, the dNTPs were added prior to preincubation, and the reactions were initiated by the addition of enzyme. For analysis, the samples were heated to ~95°C for 3 min, electrophoresed on a 20% denaturing polyacrylamide gel, and analyzed by autoradiography and PhosphorImager analysis. Calculations were performed using the area integration feature of the ImageQuant software. Full-length products were defined as those fragments ≥29 bases in length.
Displacement Assays Using LTR2 and LTR3 Primer-Templates-Synthesis assays contained 10 nM primer, 50 mM Tris, pH 8.3, 50 mM KCl, 6 mM MgCl2, 5 mM DTT, 0.1 μg/μl bovine serum albumin, and 200 μM dNTPs (final volume 27-40 μl). For each reaction, the annealed primer-template combination was warmed to 37°C in the presence of MgCl2, DTT, and bovine serum albumin and, after the addition of 300-400 units of SSII, preincubated for 30 s at 37°C. Synthesis was initiated by the addition of dNTPs to a final concentration of 200 μM. At the indicated times, 5-μl aliquots were added to 15 μl of 98% formamide, 6 mM EDTA, 0.04% xylene cyanole and analyzed by denaturing polyacrylamide gel electrophoresis. To compare directly M-MuLV RT versus SSII, or for control reactions using T4 DNA polymerase, the dNTPs were added prior to preincubation at 37°C for 2 min, and synthesis was initiated by the addition of the enzyme. Median and maximum extension lengths were determined essentially as described elsewhere,2 except the unextendable primer (defined as the lowest amount of primer-length radioactivity at any time point in the series) was subtracted from the total unextended primer in each sample. The median extension length was defined as the length at which half the products were longer and half were shorter; the maximum length was arbitrarily taken as the length that exceeded 99% of the total products. Extension rates were based on multiple independent experiments and determined by least squares analysis of extension lengths plotted as a function of time.
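The median/maximum bookkeeping and the rate fit described here translate directly into a short script; a minimal sketch with synthetic lane profiles (the numbers are placeholders, not the paper's data):

import numpy as np

def extension_stats(lengths, signal, unextendable):
    # Subtract unextendable primer from the primer-length band, then take
    # the signal-weighted 50th (median) and 99th ("maximum") percentiles.
    signal = np.asarray(signal, dtype=float)
    signal[0] = max(signal[0] - unextendable, 0.0)
    cdf = np.cumsum(signal) / signal.sum()
    return lengths[np.searchsorted(cdf, 0.5)], lengths[np.searchsorted(cdf, 0.99)]

def rate_nt_per_s(times_s, median_lengths):
    slope, _ = np.polyfit(times_s, median_lengths, 1)  # least squares line
    return slope

lengths = [0, 5, 10, 50, 100, 300, 655]   # nt past the primer
signal = [40, 10, 10, 25, 30, 20, 15]     # placeholder band intensities
print(extension_stats(lengths, signal, unextendable=30))  # (median, maximum)

times = [30, 60, 120, 300]                # s
medians = [20, 40, 78, 190]               # nt, synthetic series
print(rate_nt_per_s(times, medians))      # ~0.63 nt/s, cf. RNA displacement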
RT Assays Involving NC-NC assays were carried out using the ssLTR2i template (described above) to avoid adding excess DNA to which NC would potentially bind non-productively outside of the template region. The 5′-32P-labeled primers used were as follows: T730 (1 nM), which anneals immediately upstream of the non-template RNA to create a nicked primer-template, and T7M13 (2.7 nM), which anneals 17 nt upstream of the non-template RNA to form a gapped template. The experiments were carried out as described above except that 1-10 pmol of NCp10 or NCdd were added per fmol of primer to the preincubation mixture. For the "no NC control," an equivalent volume of NC resuspension buffer (below) was added. When necessary, SSII was diluted in RT dilution buffer (20 mM Tris-HCl, pH 8.0, 1 mg/ml bovine serum albumin, 2 mM DTT, and 20% glycerol). Extension lengths were plotted as a function of time, and the rates were calculated from the steepest portion of each curve.
Annealing Assay-The NC annealing assay was carried out essentially as described previously (48). In a 10-μl reaction, 5 fmol of 5′-32P-labeled R− and 20 fmol of R+ oligonucleotides were incubated in 20 mM Tris-HCl, pH 7.5, 50 mM NaCl, and 1 μM ZnCl2 in the absence of NC or in the presence of 0.15-59 pmol of NC-(19-53), or 0.04-19 pmol of NCdd or NCp10, for 5 min at 37°C. When required, NC peptides were diluted in NC resuspension buffer. The reactions were stopped by the addition of SDS to 1.5%, extracted with an equal volume of phenol, 0.2% SDS, and added to 3× loading buffer (19.5% Ficoll, 30 mM EDTA, 0.15% SDS, 0.025% xylene cyanole). The samples were electrophoresed on an 8% native polyacrylamide gel at 4°C and visualized by autoradiography after exposure of the wet gel to film at −80°C.
Capacity of RT to Catalyze RNA Strand Displacement Synthesis through Short Regions of Hybrid Duplex-To determine
whether RT possesses the capacity to displace RNA during DNA synthesis, we examined the efficiency of primer extension by RT on oligonucleotide primer-templates in vitro. In this assay we used the recombinant RNase H− point mutant of M-MuLV RT, SuperScript II (SSII), to avoid the complication of RNase H cleavage of the RNA. RNA displacement primer-templates were created by annealing a 13-mer RNA oligonucleotide (oligo IV) to single-stranded DNA immediately downstream of a 5′-32P-labeled DNA primer (oligo II) (Fig. 1A). For comparison, corresponding DNA displacement and non-displacement templates were generated by replacing RNA oligo IV with a DNA oligonucleotide or omitting the downstream oligonucleotide. Programmed synthesis by SSII was limited to 13 bases of extension by omitting dTTP from the reactions, thus providing a defined end point to facilitate analysis. It should be noted that after the addition of ~6 nt during displacement synthesis, the remainder of the downstream strand will melt off the template, and the reaction will revert to the non-displacement mode. The appearance of full-length products with the RNA-DNA hybrid template (Fig. 1B, lanes 7-11, arrow) demonstrated that, in the absence of RNase H activity, RT had the capacity to displace RNA. Comparison of the pattern of stalling and the accumulation of full-length products on the three templates, however, indicated that RNA displacement synthesis was less efficient than DNA displacement or non-displacement synthesis (Fig. 1B, compare lanes 7-11 with lanes 2-6 and 12-16). Not unexpectedly, synthesis resulting from misincorporation by RT (1, 2) was observed beyond the directed end point; these products were included with the full-length products for quantitative purposes. Based on a comparison of the rates of accumulation of full-length products (Fig. 1C), RT carried out RNA displacement synthesis approximately 5-fold slower than DNA displacement and at least 16-fold slower than non-displacement synthesis. As a control for these experiments, T4 DNA polymerase, which lacks strand displacement activity, was used to verify the structures of the primer-templates. T4 DNA polymerase rapidly extended the primer on single-stranded templates but not on displacement templates (data not shown), thus demonstrating that the downstream oligonucleotides were stably annealed in the displacement assays.
Comparison of Displacement and Non-displacement Synthesis on Extended Primer-Templates-To characterize further RNA displacement synthesis by SSII and to compare the rate of RNA displacement to DNA and non-displacement synthesis, extended primer-templates derived from the M-MuLV LTR were generated (Fig. 2A). Kinetic analysis of synthesis by SSII on the LTR2 primer-templates supported our previous observation that RNA displacement synthesis by RT was slower than DNA or non-displacement synthesis. Most primers were extended to the end of the linear template (655 nt) within the first 2 min during non-displacement synthesis (Fig. 2B, lane 5), whereas full-length products accumulated less rapidly on the DNA and RNA displacement templates; significant 655-nt product was not observed until the 5-min time point during DNA displacement (Fig. 2B, lane 11) or the 20-min time point during RNA displacement synthesis (Fig. 2B, lane 17). The calculated maximum rates of non-displacement, DNA displacement, and RNA displacement synthesis were 12.9, 2.4, and 0.63 nt/s, respectively. Surprisingly, we observed that during RNA displacement synthesis the majority of the extension products remained stalled after only 1 to 4 bases had been added (Fig. 2B, lanes 13-17), compared with the rapid extension of primers through this region on the non-displacement and DNA displacement templates (Fig. 2B, lanes 3-7 and 8-12). Therefore, the resulting distribution of RNA displacement synthesis products was bimodal; at the 20-min time point, for example (Fig. 2B, lane 17), 50% of the products were stalled after 1 to 3 bases had been added, while the other 50% of the terminations were distributed over the remaining 652 bases of the template. Due to this early stalling, the median RNA displacement synthesis rate was calculated to be roughly 150 times less than that of DNA displacement synthesis. A summary of the average median and maximum extension rates calculated from multiple independent experiments is shown in Table I.
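The bimodal product distribution can be summarized as a stalled fraction: signal terminating within the first few bases of displacement over all extended products. A minimal sketch with placeholder band intensities (not the paper's data):

import numpy as np

def stalled_fraction(extension_nt, intensity, stall_window_nt=3):
    extension_nt = np.asarray(extension_nt)
    intensity = np.asarray(intensity, dtype=float)
    extended = extension_nt >= 1                     # exclude unextended primer
    stalled = extended & (extension_nt <= stall_window_nt)
    return intensity[stalled].sum() / intensity[extended].sum()

ext = [0, 1, 2, 3, 10, 100, 400, 655]    # bases added past the primer
sig = [50, 120, 90, 40, 30, 60, 70, 90]  # placeholder band intensities
print(stalled_fraction(ext, sig))        # 0.5, cf. the bimodal LTR2 profile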
To test whether the stalled intermediates remained extendable or were dead-end synthesis products, we added either additional SSII or the displacing polymerase, Sequenase, to the RNA displacement reactions after the initial 20 min incubation. Time points up to 2 h after the second addition of enzyme showed that the stalled products were extendable by Sequenase, but little if any additional extension by SSII was observed (data not shown).
As described above for the oligonucleotide primer-template reactions, T4 DNA polymerase was used as a control to confirm the structure of the templates. As expected, on the non-displacement template T4 DNA polymerase efficiently extended the primers to full-length product (Fig. 2B, lane 18) but failed to extend significantly the primers on the DNA and RNA displacement templates (Fig. 2B, lanes 19 and 20).
Efficiency of RNA Displacement Synthesis Initiation Is Sequence-dependent-To test whether the bimodal distribution of products observed with the LTR2 primer-template was characteristic of RNA displacement synthesis in general, a second set of extended primer-templates (LTR3) was generated. These templates were similar to those shown in Fig. 2A, except that the sequence of the primer and the template downstream from the nick differed from the LTR2 templates. Fig. 3 shows a time course of synthesis by SSII over the first ~75 nt on the LTR3 templates. As with the LTR2 templates, the primers were efficiently extended during non-displacement and DNA displacement synthesis (Fig. 3, lanes 2-4 and lanes 5-10); by the 15-s time point for non-displacement (Fig. 3, lane 2) or the 5-min time point for DNA displacement synthesis (Fig. 3, lane 8), no significant stalled products remained within the first 10 bases downstream from the primer. The accumulation of stalled products in the same region at the 5-min time point was greater during RNA displacement (Fig. 3, lane 14), but they did not persist to later time points (lanes 15 and 16). The reduced stalling yielded a median rate for RNA displacement on the LTR3 template that was 5-fold greater (0.01 nt/s) than that with the LTR2 template. The maximum rates of non-displacement, DNA displacement, and RNA displacement synthesis on the LTR3 templates were 15.1, 3.2, and 0.88 nt/s, respectively, and thus similar to the maximum rates determined for the LTR2 templates.
FIG. 1. Comparison of strand displacement with non-displacement synthesis by SSII on oligonucleotide primer-templates. A, RNA or DNA forms of the 13-base oligo IV were annealed immediately downstream of 5′-32P-labeled DNA oligo II to create a nicked primer-template for displacement assays, or oligo IV was omitted to generate a non-displacement template. Programmed synthesis allowed primer extension to proceed until the first template-directed dTTP was required, 13 bases beyond the point of initiation. B, non-displacement (lanes 2-6), RNA displacement (lanes 7-11), or DNA displacement (lanes 12-16) primer-templates were used in time course assays in which programmed synthesis was catalyzed by SSII. Aliquots of the reaction were terminated at the time points indicated above each lane, and the products were separated on a 20% denaturing polyacrylamide gel.
Initiation of RNA Displacement on a Gapped Template-To determine whether the stalling observed on the LTR2 RNA displacement template was affected by the position of the primer relative to the RNA non-template strand, a gapped RNA displacement template was created (Fig. 4A). Synthesis from the priming oligonucleotide (T7primer) used in the LTR2 studies (above) was compared with synthesis from an alternate primer (gap-primer) that annealed 40 bases upstream of the non-template RNA. This configuration does not change the sequence at which RNA displacement is initiated and thus allowed us to compare directly synthesis initiating at a nick with that initiating at a gap.
When RNA displacement was preceded by non-displacement synthesis (gap configuration), stalling during the initiation of RNA displacement was reduced significantly. The arrows in Fig. 4B indicate the positions of the first base of RNA displacement (+1 position) on the gapped template (Fig. 4B, lanes 4-8) and on the nicked template (Fig. 4B, lanes 13-17); products migrating at or above the arrows reflect synthesis requiring RNA displacement. To minimize the contribution of products resulting from non-displacement synthesis through the gapped portion of the template, the 5- and 20-min time points were used to analyze stalling at the +1 position with the two templates. At the 5-min time point, 9.9% of the RNA displacement product was stalled at +1 on the gapped template, whereas 42.8% was stalled in the analogous position on the nicked template; the corresponding values for the 20-min time point were 3.7 and 16.5%, respectively.
Unexpectedly, the pausing pattern in the single-stranded region of the gapped template differed dramatically from the pausing over the same sequence with the non-displacement template. Significant pauses were observed at positions −7 to −1 (relative to the start of RNA displacement) on the gapped template (Fig. 4B, lanes 4-7, vertical line on left) that were absent on the non-displacement template (Fig. 4B, lanes 1 and 2). Similar differences were observed when pauses in the same region on a gapped DNA displacement template were compared with non-displacement products (data not shown). The basis for these pauses on what should be identical stretches of single-stranded DNA is not clear (see "Discussion").
RNA Displacement Synthesis by Wild-type RT-Since the RNase H activity of RT is predicted to cleave the genomic RNA prior to plus strand synthesis in vivo, it was of interest to investigate the extent to which limited RNase H activity might affect the rate of RNA displacement synthesis by RT in our in vitro assay. DNA synthesis catalyzed by M-MuLV RT on the LTR2 primer-templates was compared with that catalyzed by SSII under identical conditions. On the RNA displacement template (Fig. 5A), accumulation of full-length products was observed as early as the 2-min time point when synthesis was catalyzed by M-MuLV RT (Fig. 5A, lane 10) but not until the 20-min time point with SSII (Fig. 5A, lane 7). For both SSII and M-MuLV RT, significant pausing was observed during the first 2 bases of addition. At the earliest time point, the amount of product stalled from the primer up to the +2 position was nearly identical for the two enzymes (Fig. 5A, lanes 3 and 8), but at later time points these pauses were more effectively resolved in reactions containing RNase H activity (lanes 10-12) compared with those lacking it (lanes 5-7). A plot of the maximum extension rate by M-MuLV RT on the RNA displacement template shows an increasing slope up to the 5-min time point, after which the end of the linear template was approached (Fig. 5B). The initial extension rate by M-MuLV RT of 0.8 nt/s increased to a maximum of 2.5 nt/s between the 2- and 5-min time points. Synthesis by SSII on an identical template and under the same conditions yielded a linear extension rate of 0.6 nt/s. As expected, the rates of synthesis by SSII and M-MuLV RT on the DNA displacement or non-displacement templates were very similar (Fig. 5B).
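The accelerating rate reported here corresponds to interval slopes between successive time points rather than one global fit; a minimal sketch, with placeholder lengths chosen only to be consistent with the quoted 0.8 and 2.5 nt/s:

import numpy as np

times_s = np.array([0, 30, 120, 300])     # sampling times
median_len = np.array([0, 24, 140, 590])  # placeholder median extensions (nt)

interval_rates = np.diff(median_len) / np.diff(times_s)  # nt/s per interval
print(interval_rates)  # [0.8, ~1.3, 2.5]: slope rises as early pauses resolve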
Effect of M-MuLV NC on RNA Displacement Synthesis-We investigated the effect of M-MuLV NC on displacement synthesis by RT using chemically synthesized NC and mutant NC proteins. To test the functional activity of our NC preparations, we performed a standard annealing assay (48) in which the capacity of the protein to promote hybridization between complementary DNA strands was monitored. The 56-residue M-MuLV NC protein, NCp10, contains a single zinc coordination site (zinc finger) flanked by basic regions important in nucleic acid annealing (51). The NCdd mutant contains a Gly-Gly linker in place of the deleted zinc finger, and the NC-(19-53) mutant lacks the zinc finger as well as residues from the N and C termini. NCp10 and NCdd have been reported to promote nucleic acid annealing in vitro, whereas NC-(19-53) lacks annealing activity (52). In the control reaction without added protein, a low level of background annealing was observed after incubation of the complementary 68-mer oligonucleotides at 37°C for 5 min (Fig. 6, lane 2); heating and slow cooling of the oligonucleotides promoted nearly 100% duplex formation (Fig. 6, lane 3). Annealing was promoted by NCp10 in a dose-dependent manner (Fig. 6, lanes 7-9), with ~100% of the product migrating as duplex at the highest NCp10 concentration tested (Fig. 6, lane 9). Likewise, NCdd promoted annealing to a similar extent at equivalent molar concentrations (Fig. 6, lanes 10-12). Consistent with previously reported results, NC-(19-53) appeared to have little or no effect on the rate of annealing of the oligonucleotides (Fig. 6, lanes 4-6).

FIG. 3. Initiation of displacement and non-displacement synthesis on extended LTR3 primer-templates. The LTR3 primer-templates were prepared as shown in Fig. 2A except linearized single-stranded LTR3 DNA was the template strand, and the priming oligonucleotide was 5′-32P-labeled SP6 primer. The non-template strand RNA in the RNA displacement assay was transcribed from the SP6 promoter of double-stranded LTR3 DNA. SSII synthesis on non-displacement (lanes 2-4), DNA displacement (lanes 5-10), and RNA displacement (lanes 11-16) templates was terminated at the time points (in minutes) indicated above each lane. Only the lower third of the 6% denaturing polyacrylamide gel is shown. Size markers (determined by counting bands from the primer up) are shown at left, and lane 1 shows the position of the unextended primer.
NCp10 and the NC mutants were tested for their effect on RNA displacement synthesis by RNase H− RT. Titration assays were carried out by adding increasing concentrations of NC to the LTR2 RNA displacement assay prior to the addition of SSII and dNTPs. Synthesis products from reactions containing either NCdd (Fig. 7, lanes 8-11) or NC-(19-53) (data not shown) appeared identical to mock reactions in which no NC protein was added (Fig. 7, lane 3). The extension products from reactions in which low concentrations of NCp10 were added appeared the same as the no-NC control (compare Fig. 7, lanes 4 and 5 to lane 3), whereas the proportion of longer products increased in reactions with higher NCp10 concentrations (Fig. 7, lanes 6 and 7). The NC:nt ratio required to effect this shift was approximately equivalent to that found to promote duplex formation in the NC annealing reactions (Fig. 6 and data not shown). Of particular note, it appeared that the stalling consistently observed within the first 4 bases of RNA displacement synthesis on LTR2 was reduced in the presence of NCp10, and the amount of radioactivity migrating at ~100-150 nt increased (compare Fig. 7, lanes 6 and 7 with lane 3).
To characterize the extent to which NC facilitates RNA displacement synthesis by SSII and to test the effect of NC on DNA displacement and non-displacement synthesis, time course assays were carried out in the absence of NC or in the presence of equivalent concentrations of NCp10 or NCdd. The presence of NCp10 or NCdd had no effect on the rate of extension during non-displacement synthesis (Fig. 8A), and analysis of the gel from which the rates were calculated revealed no qualitative difference when NC was added (data not shown). As was found for RNA displacement, NCp10 facilitated DNA displacement synthesis by SSII, whereas NCdd did not (Fig. 8B); the maximum rate of DNA displacement synthesis in the presence of NCp10 was 1.7-fold greater than in the absence of NC and 1.9-fold greater than when NCdd was added. During RNA displacement synthesis (Fig. 8C), NCp10 improved the maximum extension rate of SSII by 1.7-and 1.8-fold over that observed in the absence of NC or with NCdd, respectively. NCp10 increased the median rate of RNA displacement synthesis by 2-fold, while no significant change in the median rate of DNA or non-displacement synthesis was observed (data not shown).
DISCUSSION
In the present study, we have analyzed the capacity of M-MuLV RT to displace RNA during synthesis on RNA-DNA hybrid duplexes. In the absence of RNase H activity, RT carried out RNA displacement synthesis on either short oligonucleotide or extended hybrid primer-templates, but the rate of synthesis was lower than during either DNA displacement or non-displacement synthesis. These findings are consistent with those of Fuentes et al. (21), who showed that both HIV-1 and M-MuLV RTs lacking RNase H activity could displace a short RNA oligonucleotide during DNA synthesis.

FIG. 4. Comparison of RNA displacement synthesis on gapped versus nicked primer-templates. A, shown schematically are the primer-templates used to compare synthesis on the gapped and nicked templates using linearized single-stranded LTR2 DNA as the template strand. The nicked template is identical to the LTR2 RNA displacement template (see Fig. 2A). The gap-primer anneals to the single-stranded DNA template 40 bases upstream from the 5′ end of the RNA, leaving a 40-base single-stranded "gap" between the end-labeled primer and the non-template RNA strand. B, synthesis on non-displacement (lanes 1-3), gapped (lanes 4-8), and nicked (lanes 13-17) templates by SSII was terminated at the time points (in minutes) indicated above each lane, and the products were analyzed on a 6% denaturing polyacrylamide gel. The positions of the first base of RNA displacement are indicated by the arrows and, for the gapped template, were determined by reading the adjacent sequencing ladder (lanes 9-12) generated using 5′-32P-labeled gap-primer. The vertical line (at left) marks the position of a series of pauses on single-stranded DNA unique to the gapped template.
We estimated the maximum rate of synthesis by SSII (RNase H− RT) to be 0.6 nt/s during RNA displacement, a rate 5-fold slower than that of DNA displacement synthesis and 19-fold slower than non-displacement synthesis (Table I). However, the median rate of RNA displacement synthesis was approximately 750 times slower than non-displacement synthesis due to the substantial stalling that occurred during the first 4 bases of synthesis beyond the initial primer. Qualitatively it appears that synthesis was strongly inhibited during the first several bases of extension but that once beyond this point, the nascent chains were readily elongated.
We were interested in determining whether the stalling observed during initiation on the LTR2 template was a general property of RNA displacement synthesis by RT. If so, it could indicate that RT has a very limited capacity to displace RNA. Alternatively, we considered the possibility that factors such as sequence context or the initiation of displacement synthesis at a nick may have contributed to the stalling. The former possibility was addressed with the LTR3 primer-template pair. Alteration of both the primer and the downstream hybrid sequence led to a significant decrease in the amount of product stalled at initiation, results which concurred with the pattern of pausing observed with the oligonucleotide primer-templates. Thus initiation on the LTR2 template appears to be unusually inefficient. The similarity between the maximum rates of synthesis on the two extended primer-templates, however, provided strong evidence that displacement of RNA by RT is significantly slower than displacement of DNA under otherwise identical conditions. This conclusion is supported by the relative rates estimated from assays using the oligonucleotide primer-templates.
Why is RNA displacement more difficult than DNA displacement? If displacement synthesis is merely a passive process relying on duplex breathing to allow synthesis through double-stranded regions, then it would be expected that differences in the thermostability of RNA-DNA hybrids as compared with duplex DNA would be reflected in the rate of synthesis on the two templates. However, the thermostability of duplex DNA is predicted to be slightly higher than that of hybrid duplex on sequences of random base composition (53-55). Notably, the predicted thermostability of the RNA-DNA hybrid immediately downstream from the primer terminus on the LTR2 template is less than that for duplex DNA of the same sequence, yet on the RNA displacement template RT extends through this region with one-tenth the efficiency observed for DNA displacement synthesis. Thus the data here support and extend the conclusions of Whiting and Champoux 2 that while a passive mechanism remains a formal possibility, RT most likely displaces DNA and RNA actively using either an SSB-like or helicase-like mechanism. If RT actively participates in strand separation, differences in how RT interacts with the RNA or DNA non-template strands may be responsible for the observed differences between the rates of RNA and DNA displacement synthesis.

FIG. 5. A, synthesis by M-MuLV RT and SSII on the LTR2 RNA displacement template (see Fig. 2A) was terminated at the time points (in minutes) indicated above each lane. The products were resolved on a 6% denaturing polyacrylamide gel. Size markers are shown in lane 1 and the unextended primer in lane 2. B, the maximum length extension product produced by M-MuLV RT (wt) and RNase H− RT (ssII) were determined as described under "Experimental Procedures" and graphed as a function of time. Data were taken from A (RNA) or otherwise identical reactions performed using the LTR2 non-displacement template (NON) or LTR2 DNA displacement template (DNA) (see Fig. 2A for schematic of templates).
In addition to finding that the early stalling observed during RNA displacement on the LTR2 template was sequence-dependent, we found that the initiation of displacement synthesis was also more efficient if a gap rather than a nick preceded the region to be displaced. This finding was surprising since we did not expect the process of initiating displacement to be affected by synthesis occurring upstream of the RNA 5′ end. One possible explanation for this finding is that some minimal length of synthesis, whether it be non-displacement or displacement synthesis, is required to effect a change in the properties of the polymerase that facilitate displacement of the non-template strand. For example, a shift from a distributive to processive mode of synthesis could account for the relatively rapid extension rates observed after an initial stalling, or for the ease of initiating displacement synthesis after extending through a gap. Consistent with this possibility, DeStefano et al. (6) found that the dissociation of RT from a primer-template occurs in a biphasic manner, suggestive of two binding modes for the polymerase with its substrate. Moreover, Whiting and Champoux 2 recently found that a distributive to processive synthesis transition occurs after the addition of ~10 bases during DNA displacement synthesis.
Enzymatic footprinting of M-MuLV RT by Wohrl et al. (56) suggests that contacts between RT and the template may extend up to 6 bases downstream of the primer terminus (position +6). Thus it is plausible that RT may be required to melt up to 6 bases of the non-template strand before necessary downstream contacts with the template strand can be made. The observation that RT stalls ~6 bases upstream of the 5′ end of the non-template strand on the gapped displacement template may provide tentative support for this hypothesis, although such stalling has not been observed on the other sequences we have tested.
Given that the biologically relevant form of RT contains RNase H activity, it was of interest to measure displacement synthesis by M-MuLV RT under the same conditions used for SSII. As expected, the rate of synthesis by M-MuLV RT on the RNA displacement template was significantly greater than that catalyzed by SSII. Since the rates of non-displacement and DNA displacement synthesis were nearly identical for the two enzymes, it seems very probable that RNase H-directed cleavage of the non-template RNA strand was responsible for the increased rate of synthesis by M-MuLV RT. However, despite the ~100-fold molar excess of enzyme over the RNA-DNA template, the rate of synthesis by M-MuLV RT remained less than that on a DNA displacement template; thus the cleavage that occurred over the course of the 20-min incubation was not sufficient to offset the intrinsic difficulty RT appears to have in displacing RNA. Additionally, the median rate of synthesis by M-MuLV RT on the hybrid template showed little increase over that observed with SSII, despite the slight reduction in the amount of stalling observed during initiation. Extrapolation using the maximum rates (Fig. 5B), however, predicts that on longer hybrid templates the rate of synthesis by M-MuLV RT would surpass that of DNA displacement synthesis. During retroviral replication, there may be sufficient time between minus strand synthesis and the onset of plus strand synthesis for the RNase H activity to reduce the RNA genome to relatively small fragments. Many of these fragments may be short enough to readily dissociate from the DNA; however, if longer fragments remain, then the relatively weak RNA displacement activity of RT could become rate-limiting for the overall process. Alternatively, Miller et al. (57) suggest that RNA displacement synthesis need only progress efficiently through the LTR (to allow for the second jump) as preintegration complexes composed of discontinuous plus strands appear competent to integrate.
Like many SSBs (33), retroviral NC promotes both nucleic acid helix destabilization and strand renaturation (28,30,58,59); thus, it seems reasonable that NC might play the role of an SSB-like accessory factor in reverse transcription. Studies looking at the effects of NC on extension rates, enzyme pausing, and processivity by RT during non-displacement synthesis have yielded contradictory results (39,41,44,45,60,61). Similarly, DNA displacement synthesis by HIV-1 RT appears either unaffected (16) or slightly stimulated (42) by NC. To our knowledge, the effect of NC on RNA displacement synthesis has not previously been characterized.
Our titration of NC levels during RNA displacement synthesis shows that a discrete and reproducible transition in the distribution of products occurs when RNA displacement is carried out in the presence of sufficient NC. Such a transition is consistent with previous observations that NC acts stoichiometrically rather than catalytically (27-29, 62). This transition occurred at roughly the same NC:nt ratio required to promote strand annealing in the standard NC annealing assay. The ratio of NC:nt required to promote rapid duplex formation in the annealing assay was several times greater than expected based on the work of Lapadat-Tapolsky et al. (48) using HIV-1 NC, which might reflect either differences in the annealing capacity of the different NC types or decreased activity of our preparation.
Surprisingly, we find that stimulation of displacement synthesis by NC is dependent on the presence of the zinc finger domain. Numerous studies looking at the importance of the NC zinc finger motifs have generally concluded that this domain is required for the proper selection and packaging of genomic RNA but that the motif is dispensable for such activities as RNA dimerization, annealing of the tRNA to genomic RNA, strand renaturation, and nonspecific RNA binding (25, 51, 52, 63-65). On the other hand, several studies have provided evidence that the zinc finger domain is critical for viral infectivity beyond the requirement for proper RNA packaging. Point mutations within the zinc finger domains of HIV-1 or M-MuLV NC have been identified that produce virus with only a somewhat reduced RNA content but with a greatly reduced minus strand synthesis capability and no infectivity (66-68). Recent characterization of one such mutant demonstrated that the mutant successfully completes the first template switch but fails to synthesize full-length minus strand DNA (69). The failure of this mutant to complete reverse transcription cannot be explained by the loss of NC annealing or RNA dimerization activities since these activities have been shown to be independent of the zinc finger (51, 63). It is possible that this mutant, like the NCdd zinc finger mutant in results presented here, fails to facilitate RNA and DNA displacement synthesis. | 2018-04-03T01:24:05.881Z | 1998-04-17T00:00:00.000 | {
"year": 1998,
"sha1": "24efc308c2bffab65b08a33dd5b14286eb6f5b6f",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/273/16/9976.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1e9980dd8de98d8fbea1a12f7dfd42496fe50c31",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
132957643 | pes2o/s2orc | v3-fos-license | The complete chloroplast genome sequence of Camellia mingii (Theaceae), a critically endangered yellow camellia species endemic to China
Abstract Camellia mingii is a recently described and critically endangered new species from Southeast Yunnan Province, China. Genetic information on C. mingii would provide guidance for the conservation of this wild yellow camellia. Here, we report and characterize its complete chloroplast (cp) genome using Illumina paired-end sequencing data. The total chloroplast genome size was 156,806 bp, including inverted repeats (IRs) of 26,020 bp, separated by a large single copy (LSC) and a small single copy (SSC) of 86,536 and 18,230 bp, respectively. A total of 132 genes, including 37 tRNA, eight rRNA, and 87 protein-coding genes, were identified. Phylogenetic analysis showed that C. mingii is closely related to the clade containing C. huana and C. impressinervis.
Camellia, with ca. 120 species (Ming 2000; Min and Bartholomew 2007), is mainly distributed in East and Southeast Asia. The diversity center of the genus lies south of the Yangtze River in China (Ming and Zhang 1996). Yellow camellias are a group of Camellia species with yellow, shiny petals; they are valuable ornamental plants and are also used in traditional Chinese medicine and commercial teas (Chang and Ren 1998; Min and Bartholomew 2007). Camellia mingii S. X. Yang is a local endemic, currently known only from Funing County, Yunnan Province, China. It grows on limestone in evergreen broad-leaved forests at elevations of 800-1300 m. Only three populations of C. mingii in Funing County of Yunnan have been found to date, so the category of Critically Endangered (CR) was recommended. Both morphological comparison and molecular analyses support C. mingii as a new member of the yellow camellias (Liu et al. 2019). A number of chloroplast genomes of Camellia have been published (Yang et al. 2013; Huang et al. 2014); however, only a few of the endangered camellias have been reported (Wang et al. 2017; Liu et al. 2018; Xu et al. 2018). In this study, we present the complete chloroplast (cp) genome sequence of C. mingii using Illumina sequencing technology.
Fresh leaves were collected from an individual of C. mingii in Funing County of Yunnan Province; the voucher specimen (S. X. Yang 5610) was deposited in the Herbarium of Kunming Institute of Botany, Chinese Academy of Sciences (KUN). Genomic DNA was isolated using a modified CTAB approach (Doyle and Doyle 1987). The 150 bp paired-end reads were generated using the Illumina Hi-Seq 2500 platform. In total, 5,692,455 reads (2.82 Gb) were obtained across both ends. The chloroplast genome was de novo assembled using the GetOrganelle script (Jin et al. 2018), with SPAdes 3.10.1 as the assembler (Bankevich et al. 2012), and then visualized with Bandage 0.8.1 (Wick et al. 2015) to determine the paths of the cp genome. The resulting cp genome was annotated by aligning it with those of its congeners (e.g. Camellia taliensis, NC022264; Camellia sinensis, KC143082) in Geneious 8.0.2 (Kearse et al. 2012). Phylogenetic analysis of 25 Camellia species and two outgroups followed our previous study (Yu et al. 2017).
The chloroplast genome of C. mingii was 156,806 bp, with a mean coverage of 80.0×. The GenBank accession number of the chloroplast genome of C. mingii is MK473913. The GC content of the genome was 39.6%. The lengths of the inverted repeats (IR), large single copy (LSC), and small single copy (SSC) regions were 26,020, 86,536, and 18,230 bp, respectively. The chloroplast genome of C. mingii contained 132 genes, including 8 rRNA genes, 37 tRNA genes, and 87 protein-coding genes. Annotation revealed that four rRNA genes, seven tRNA genes, and seven protein-coding genes were duplicated in the IR region.
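As a quick sanity check, statistics such as genome length and GC content can be recomputed directly from the assembled sequence. Below is a minimal, illustrative Python sketch; the FASTA file name is a hypothetical placeholder, not a file released with this study:

```python
# Minimal sketch: recompute basic plastome statistics from a FASTA file.
# "c_mingii_cp.fasta" is a hypothetical placeholder file name.

def read_fasta(path: str) -> str:
    """Concatenate the sequence lines of a single-record FASTA file."""
    seq = []
    with open(path) as fh:
        for line in fh:
            if not line.startswith(">"):
                seq.append(line.strip().upper())
    return "".join(seq)

genome = read_fasta("c_mingii_cp.fasta")
length = len(genome)
gc = genome.count("G") + genome.count("C")
print(f"genome length: {length} bp")               # reported: 156,806 bp
print(f"GC content: {100.0 * gc / length:.1f}%")   # reported: 39.6%
```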
The phylogenetic relationship between C. mingii and other congeneric species was reconstructed by comparison with 24 other chloroplast genomes of Camellia. The maximum likelihood phylogenetic tree revealed that C. mingii was closely related to C. huana and C. impressinervis, both of which belong to Camellia subgenus Thea sect. Archecamellia (Figure 1). However, the phylogenetic position of C. mingii needs further investigation because of the current limited sampling and the conflict between our results and those of a previous study based on nuclear GBSSI sequences (Liu et al. 2019). The complete cp genome of C. mingii will be useful for genetic diversity studies of this endangered yellow camellia species and will facilitate the conservation and restoration of C. mingii.
Disclosure statement
No potential conflict of interest was reported by the authors. | 2019-04-26T13:49:52.386Z | 2019-01-02T00:00:00.000 | {
"year": 2019,
"sha1": "2f95b884c06c9c5e90cc50c813c4881d96ae8386",
"oa_license": null,
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23802359.2019.1596765?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "8407e7aa0731ee42c26a12fce0ed56d4bb8197ee",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
251018166 | pes2o/s2orc | v3-fos-license | Point Cloud Quality Assessment: Dataset Construction and Learning-based No-Reference Metric
Full-reference (FR) point cloud quality assessment (PCQA) has achieved impressive progress in recent years. However, in many cases obtaining the reference point cloud is difficult, so no-reference (NR) metrics have become a research hotspot. Little research on NR-PCQA has been carried out due to the lack of a large-scale PCQA dataset. In this paper, we first build a large-scale PCQA dataset named LS-PCQA, which includes 104 reference point clouds and more than 22,000 distorted samples. In the dataset, each reference point cloud is augmented with 31 types of impairments (e.g., Gaussian noise, contrast distortion, local missing, and compression loss) at 7 distortion levels. Besides, each distorted point cloud is assigned a pseudo quality score as a substitute for the Mean Opinion Score (MOS). Inspired by the hierarchical perception system and considering the intrinsic attributes of point clouds, we propose a NR metric, ResSCNN, based on a sparse convolutional neural network (CNN), to accurately estimate the subjective quality of point clouds. We conduct several experiments to evaluate the performance of the proposed NR metric. The results demonstrate that ResSCNN exhibits state-of-the-art (SOTA) performance among all the existing NR-PCQA metrics and even outperforms some FR metrics. The dataset presented in this work will be made publicly accessible at http://smt.sjtu.edu.cn. The source code for the proposed ResSCNN can be found at https://github.com/lyp22/ResSCNN.
INTRODUCTION
Recently, thanks to the increasing capability of 3D acquisition devices, the point cloud has emerged as the most popular format for immersive media. A point cloud consists of a collection of points, each of which has geometric coordinates but may also contain a number of other attributes such as color, reflectance and surface normals. Point clouds have been used in many applications such as augmented reality (AR), autonomous driving, industrial robotics, documentation and facial landmarking [17,58]. In practice, a variety of distortions could be involved and affect human perception. Developing point cloud quality assessment (PCQA) can help to understand the distortions and carry out quality optimization for distorted point clouds. Generally, PCQA can be performed using subjective experiments or objective metrics. Because the subjective experiment is expensive and time-consuming, studying robust and effective objective PCQA metrics is important. However, different from 2D images, point clouds have irregular and unevenly distributed structures, and their distortions may lie in the geometry component, the attribute component, or both, which makes objective quality assessment particularly challenging.
Our Approach
In this paper, we first build a large-scale PCQA dataset in which the reference samples have rich geometry and color information, and the distortions lie in both geometry and color domains; secondly, we annotate the built dataset using the pseudo MOS which has been successfully applied in images [69]; thirdly, we propose a sparse convolutional neural network (CNN) based NR-PCQA metric to extract the hierarchical features directly from 3D point clouds, considering both the geometry and texture information.
The distortion of point clouds is more complex than that of images. The photometric attributes of point clouds may be subject to distortions similar to those in 2D images, because dense point clouds are also produced by optical devices, such as depth cameras and light field cameras. Thus, the distortions induced during image production, such as Gaussian noise, may also appear in point clouds. Besides, the data structure of point clouds leads to some unique distortions, such as local geometry missing. As a result, the point cloud distortions may lie in their geometry components, or attribute components, or both. To better understand and study point cloud distortions, we build a PCQA dataset with the largest size so far, which contains 104 reference point clouds with 31 types of distortions (such as Gaussian noise, contrast distortion, local missing and compression loss) at 7 different levels, leading to a total of more than 22,000 distorted point clouds. Some reference point cloud samples are chosen from [3-5, 7, 50], which have already been used in the MPEG PCC standards. Some reference point cloud samples are crafted from the mesh format data in [1,8].
To annotate the newly built dataset, we conduct a large-scale subjective experiment to obtain the subjective MOS for some samples. The experiment is conducted with 1,240 distorted point cloud samples selected from the database covering all distortion types. Then, inspired by [69], we compute Spearman rank order correlation coefficient (SROCC) of FR-PCQA metrics for each distortion type based on the subjective MOS and their predicted scores. The scores from the best FR-PCQA metric for each distortion type are selected and normalized to obtain pseudo MOS and annotate the whole dataset. Since the performance of some existing FR-PCQA algorithms has been shown to be consistent with the HVS under certain types of distortions, the pseudo MOS can be considered accurate.
Finally, considering the uneven distribution of 3D point clouds, the sparse convolution is introduced into NR-PCQA in this work using Minkowski Engine [15]. We employ a sparse tensor representation and attempt to develop a sparse CNN based NR-PCQA metric called ResSCNN. It extracts the hierarchical features from 3D point clouds using the sparse convolution instead of the traditional dense convolution to avoid the massive increase in the elements of feature map. Besides, the dimensionality reduction techniques, such as down-sampling, are not included in the proposed metric because the dimensionality reduction itself introduces extra distortions.
We evaluate the proposed ResSCNN in this paper on the newly established LS-PCQA dataset, SJTU-PCQA dataset [71] and WPC2.0 dataset [37,57]. ResSCNN presents robust and competitive performance on all datasets compared with other NR metrics and even some FR metrics.
Contributions
The main contributions of this paper are as follows.
• We establish a large-scale PCQA dataset called LS-PCQA with 22,568 distorted point clouds derived from 104 original reference point clouds, each with 31 types of distortions at 7 distortion levels. The new dataset covers a wide range of impairments during point cloud production, compression, transmission and presentation. To the best of our knowledge, it is the largest PCQA dataset at present.
• Based on the LS-PCQA dataset, we conduct a fairly large subjective experiment to collect MOS. In the experiment, we recruit 224 candidates to score 1,240 distorted point clouds, and ensure that at least 16 valid subjective scores are collected for each distorted point cloud according to ITU-R BT.500 [25]. Based on the subjective MOS, pseudo quality scores are calculated to annotate the whole LS-PCQA dataset.
• We develop a NR-PCQA metric using a sparse CNN with only 1.2M parameters. The experiment results show that our proposed metric offers robust and competitive performance compared with other NR metrics and even some FR metrics over three datasets.
The rest of this paper is organized as follows. The related work is surveyed in Section 2. The proposed large-scale PCQA dataset, LS-PCQA, is introduced in Section 3, and Section 4 presents the new NR-PCQA metric, ResSCNN, with its performance evaluation given in Section 5. Finally, the conclusion is drawn in Section 6.
RELATED WORK
This section surveys the development of PCQA metrics and 3D feature description.
Quality Assessment Metrics
Quality assessment is widely used in images [31,32] and various 3D media formats [33,49,78,79]. For point clouds, some existing quality assessment metrics evaluate the distortion based on geometrical attributes only. Specifically, p2point [16] quantifies the distances between corresponding points to measure the degree of distortion. P2plane [41] improves over p2point by projecting the obtained p2point distances along the surface normal direction. The point-to-mesh (p2mesh) [61] reconstructs the surface and then measures the distance from a point to the surface, but the efficiency of p2mesh is heavily dependent on the accuracy of the surface reconstruction. Both p2point and p2plane have already been applied in the standardized MPEG PCC technology [6].
On the other hand, Alexiou et al. [9] propose to measure the geometrical distortion based on the angular difference of point normals. Javaheri et al. [28] propose a generalized Hausdorff distance by employing the th lowest distance instead of the biggest distance to address that Hausdorff distances are over-sensitive to noise.
The aforementioned point-wise metrics ignore the fact that HVS is more sensitive to structural features. Besides, color information also plays an important role in PCQA. Considering the huge success of SSIM [68] in IQA, researchers start to consider spatial structural features as the quality index. Some of them take geometry and color into consideration simultaneously. Meynet et al. [42] propose to use the local curvature statistics to reflect the point cloud surface distortion, and further pool curvature and color lightness together via optimally-weighted linear combination [43]. Viola et al. [67] propose a quality assessment metric based on the color histogram. Alexiou et al. [10] incorporate four types of point cloud attributes, namely, geometry, normal vectors, curvature values and colors, into the form of SSIM [68]. Yang et al. [74] propose to extract point cloud color gradient using graph signal processing to estimate the point cloud quality. Zhang et al. [77] improve [74] using a HVS-based multi-scale method. Javaheri et al. [27] propose a point-to-distribution quality assessment metric considering both the geometry and texture.
Another idea for PCQA is to project the 3D point cloud into a number of 2D planes, and then the 2D IQA metrics can be used, such as the ones in [9,29,63,71]. However, the selection of projection directions may significantly influence metric performance. Besides, projection can cause information loss, limiting overall performance [74]. Therefore, the performance of these projection-based metrics is not satisfactory under multiple types of distortions.
For RR-PCQA, Viola et al. [66] use the statistical information of the geometry, color and normal vector to evaluate the point cloud quality. Liu et al. [38] build the connection between the quality and compression parameters (e.g., quantification step) of point clouds, which can be used to guide point cloud compression strategy with certain rate constraints.
The point cloud quality metrics surveyed above are FR and RR metrics, which means that the input of all these metrics requires the whole reference samples or some of its features. However, in many practical scenarios, obtaining the reference is difficult. For example, some point clouds are captured in the wild, these samples do not have high-quality references naturally. Thus, NR metrics deserve serious treatment. Tao et al. [60] propose a NR-PCQA metric based on point cloud projection and multi-scale feature fusion. Liu et al. [39] propose to predict the quality scores by utilizing the distortion classification information. Yang et al. [73] propose a NR-PCQA framework by leveraging the rich subjective scores of natural images through the domain adaptation.
The existing NR-PCQA metrics are all projection-based, which will introduce information loss [74]. In this work, we propose a NR-PCQA metric operating directly on 3D point clouds.
3D Feature Description
The first step of learning-based PCQA is to extract the representative features. Early work in 3D applications uses the hand-crafted feature descriptors to discriminate the local geometry characteristic. Johnson et al. [34] propose to project adjacent points onto the tangent plane to describe the geometry characteristics. Tombari et al. [62] propose to use covariance matrices of point pairs. Salti et al. [55] propose to create a 3D histogram of normal vectors. Rusu et al. [53,54] propose to build an oriented histogram using pairwise geometric properties. Guo et al. [24] provide a comprehensive review of such hand-crafted descriptors.
The powerful representation ability of learning-based methods has attracted more and more attention, and current research has shifted toward learning-based 3D feature representation. Zeng et al. [75] propose to learn 3D patch descriptors by leveraging a Siamese CNN. Khoury et al. [36] propose to adopt multi-layer perceptrons to map the 3D oriented histogram to a low-dimensional feature space. Deng et al. [18,19] propose to adopt PointNet [14] for geometric feature description.
However, the representative features needed for NR-PCQA are quite different from those used in other 3D applications such as 3D object classification, detection, and segmentation: NR-PCQA requires perception of the whole sample and higher efficiency. All the above work extracts a small patch or a set of key points to solve the visual tasks in a lower-dimensional space, whereas NR-PCQA requires both local details and global understanding. Specifically, in subjective perception, local stimuli are perceived first (such as gradients), and then global stimuli are integrated (e.g., structural contours of the object/scene). Therefore, a NR-PCQA metric should generate the final score by taking the whole point cloud into consideration.
Exploiting the sparsity of 3D point clouds, Graham et al. [23] propose sub-manifold sparse CNN, and illustrate that sub-manifold sparse convolutions offer reliable performance in terms of sparsity invariance and reduced computing load in point cloud processing tasks [22]. Choy et al. [15] propose the Minkowski Engine, an extension of the sub-manifold sparse network to higher dimensions. In this work, the proposed NR-PCQA metric extracts the hierarchical features based on sub-manifold sparse CNN without adopting the dimensionality reduction techniques. In this way, the above-mentioned limitations can be addressed for NR-PCQA.
However, for 3D point clouds, only a few datasets of small scale have been built. They include PointXR [11], IRPC [30], SJTU-PCQA [71] and WPC [37,57]; the largest among them contains only a few hundred distorted samples. The limited amount of data can easily lead to overfitting for learning-based metrics. Besides, the existing datasets have obvious weaknesses. On one hand, the number of reference point clouds and included distortion types is insufficient. On the other hand, the quality of some reference point clouds is not good enough, which can impact the results of subjective experiments. These drawbacks greatly hinder the development of NR-PCQA metrics.
The difficulty of building a large-scale PCQA dataset comes from two aspects. The first is to obtain enough original point clouds. To solve this problem, we collect mesh format data and convert them to point clouds to enlarge the availability of point clouds. The second lies in the annotation of built datasets which requires well-conducted subjective experiments under strict control conditions. Since subjective experiments are time-consuming and expensive, it is quite difficult to collect MOS for a large number of point clouds. To address this problem, the pseudo MOS is adopted here. By establishing the annotation criteria, the distorted samples can be automatically labeled by the computer algorithms (i.e., using the high-performance FR metrics).
In this paper, 104 original reference point clouds are selected or crafted from [1, 3-5, 7, 8, 50]. Each reference point cloud is distorted by 31 types of impairments under 7 distortion levels. In total, more than 22,000 distorted point clouds are generated. Table 1 compares the newly built dataset and 4 existing datasets, which clearly shows that our newly established dataset has a larger size and covers more distortion types. To annotate the new large-scale dataset, we split the whole dataset into three parts. Part I was labeled via the subjective experiment and is used to screen FR metrics for multiple distortion types to generate the pseudo MOS for Part II and Part III. Part II is used to evaluate the accuracy of the pseudo MOS. Part III is labeled using the pseudo MOS only, which is utilized to increase the dataset scale. The details are as follows:
• Part I contains 930 distorted samples randomly selected from the whole dataset, annotated through the subjective experiment. In the sample selection, we pick 6 original point clouds with 5 distortion levels for each of the 31 considered distortion types. The obtained MOS is used to select the best FR-PCQA metric and the nonlinear mapping function to compute the pseudo MOS (see Section 3.4).
• Part II contains 310 distorted samples, consisting of 2 original point clouds with 5 distortion levels for each of the 31 distortion types. These samples are also annotated through the subjective experiment. We use both the subjective MOS and pseudo MOS to label Part II, and illustrate the accuracy of the pseudo MOS (see Section 3.5).
• Part III contains the remaining distorted samples, which are labeled by the pseudo MOS only. Considering the effectiveness of the pseudo MOS, Part III can be extended to arbitrary size to facilitate the construction of a large-scale PCQA dataset.
Reference Point Clouds
The reference point clouds in the built dataset come from the MPEG and JPEG point cloud datasets [3-5, 7], as well as the 3D mesh data [1, 8, 50]. For point clouds from the MPEG and JPEG datasets, manual examination is conducted to make sure we obtain point clouds of high quality. We define the point clouds which can score 5 (1 is the lowest and 5 is the highest) in the subjective experiment as high-quality samples. For 3D mesh samples, we apply uniformly distributed random sampling to take sample points from the mesh surfaces, as shown in Fig. 1. Specifically, the mesh surfaces are randomly sampled to obtain the Cartesian coordinates of the crafted point clouds. For texture, the sampled points are colored by examining the texture material at the same positions. In total, 104 different point clouds, including 28 human models, 48 animal models and 28 inanimate objects, are chosen as the reference point cloud samples. All these reference point clouds are carefully screened to ensure high quality; all of them score a MOS of 5 in the subjective experiment and show no holes or other distortions when presented.
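This uniform surface sampling is a standard procedure: faces are drawn with probability proportional to their area, and points are placed inside each chosen face via uniform barycentric coordinates. The sketch below illustrates the idea in Python; it is a generic implementation under these assumptions, not the authors' released code, and covers geometry only (the color look-up would index the texture at the same sampled positions):

```python
import numpy as np

def sample_mesh_surface(vertices, faces, n_points, rng=None):
    """Uniformly sample n_points from a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_points, 3) array of sampled Cartesian coordinates.
    """
    if rng is None:
        rng = np.random.default_rng()
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas via the cross product; used as sampling weights.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates (the square-root trick avoids
    # biasing samples toward one corner of the triangle).
    u, v = rng.random(n_points), rng.random(n_points)
    su = np.sqrt(u)
    b0, b1, b2 = 1.0 - su, su * (1.0 - v), su * v
    return (b0[:, None] * v0[face_idx] +
            b1[:, None] * v1[face_idx] +
            b2[:, None] * v2[face_idx])
```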
Distorted Point Clouds
Each reference point cloud is degraded by 31 types of impairments under 7 distortion levels, covering a wide range of impairments during point cloud production, compression, transmission and presentation. The distortion types are listed in Table 4, and more details are given in the Appendix. In total, 22,568 distorted point clouds are generated.
Obtaining the Subjective MOS
To select the best FR-PCQA metric for each distortion type and verify the accuracy of the pseudo MOS, we annotate Part I and Part II of the built dataset using the subjective experiment. The double stimulus method is adopted for subjective rating. We strictly follow the steps proposed by ITU-R Recommendation BT.500 [25]. The configuration of the subjective experiment is shown in Table 2. In the subjective experiment, the participants sit in a controlled environment. Specifically, the zoom rate is set as 1:1. The presentation device used in the subjective experiments is a 21.5-inch Dell SE2216H monitor with a resolution of 1920×1080 pixels. Inspired by [30,71], point-based rendering is adopted with square primitives because of their similarity to the smallest element of 2D images (the pixel), and with a primitive size of 2 to ensure no holes in the reference point clouds. The sitting posture of the participants is adjusted to ensure that their eyes are at the same height as the center of the screen. The viewing distance is about three times the height of the rendered point cloud (≈ 0.75 meters). The subjective experiment is conducted indoors, under a normal lighting condition.
Only rotation operation is allowed to emulate the free-view navigation in the subjective experiment. This is because the distance to the 3D object will influence the subjective perception significantly. Even for a point cloud with good quality, it will show some holes if we zoom in too much. For participants who are lack of enough prior knowledge on point clouds, this phenomenon will bias their judgment. Therefore, we use the function 'zoom rate' provided by CloudCompare to fix the viewing distance, which maintains the consistency of viewing experience across participants.
Each pair of point clouds is presented in temporal sequence with the reference always shown first, and takes about 20 seconds for each participant to examine, leaving the next 5 seconds for the rating before the next pair is shown. The given scores are in the range of [1, 5], corresponding to the five quality levels shown in Table 3.

Table 3. Five-grade impairment scale.

MOS | Meaning
5 | Almost no distortion is perceived
4 | Distortion can be perceived but does not hinder the viewing
3 | Distortion slightly obstructs the viewing
2 | Distortion definitely obstructs the viewing
1 | Distortion seriously hinders the viewing

The screening method described in BT.500 [25] is applied to remove the outliers whose scores are inconsistent with the others. Specifically, the β₂ test (calculating the kurtosis coefficient of the score distribution) is adopted to ascertain whether a subject needs to be removed or not; the scores from a subject are rejected when β₂ is not between 2 and 4. As a result, 14 participants are identified as outliers and removed, which may be because they were not serious during the experiment pre-training and the subjective experiment. The scores from the remaining 210 participants are kept for the following analysis. At least 16 reliable subjective scores are collected for each distorted point cloud after outlier removal.
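The β₂ normality check at the core of this screening can be sketched in a few lines. The snippet below is illustrative only (the full BT.500 procedure additionally counts per-stimulus outlier votes before rejecting a subject, which is omitted here); note that Pearson's kurtosis, not the excess kurtosis, is the quantity that lies between 2 and 4 for near-normal data:

```python
import numpy as np
from scipy.stats import kurtosis

def beta2_per_stimulus(scores):
    """scores: (n_subjects, n_stimuli) matrix of raw ratings.

    Returns the Pearson kurtosis coefficient beta_2 for each stimulus.
    BT.500 treats the score distribution as approximately normal when
    2 <= beta_2 <= 4.
    """
    # fisher=False yields Pearson's kurtosis (normal distribution -> 3.0),
    # matching the 2..4 interval used in BT.500.
    return kurtosis(scores, axis=0, fisher=False, bias=True)

# Illustrative usage on random ratings in the 1..5 range.
scores = np.random.default_rng(0).integers(1, 6, size=(16, 10)).astype(float)
b2 = beta2_per_stimulus(scores)
print(np.where((b2 >= 2) & (b2 <= 4), "normal", "non-normal"))
```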
We present the MOS distribution in Fig. 2 and Fig. 3 to demonstrate the validity of the subjective data. Fig. 2 shows the MOS distribution for distorted point clouds. It can be seen from Fig. 2 that the subjective scores are spread across various MOS levels. Fig. 3 shows the average MOS for different distortion levels under each distortion type, which demonstrates the correlation between the average MOS and the distortion levels. To generate the pseudo MOS, several FR-PCQA metrics are evaluated as candidates, including PCQM [43], GraphSIM [74] and MPED [72]. The Pearson linear correlation coefficient (PLCC) and SROCC are usually used to quantify the performance of quality assessment metrics (see Table 4).
PLCC measures the linear correlation between MOS and predicted quality scores via

$$\mathrm{PLCC} = \frac{\sum_{i=1}^{N} (s_i - \bar{s})(\hat{s}_i - \bar{\hat{s}})}{\sqrt{\sum_{i=1}^{N} (s_i - \bar{s})^2}\, \sqrt{\sum_{i=1}^{N} (\hat{s}_i - \bar{\hat{s}})^2}},$$

where $s_i$ is the true MOS, $\hat{s}_i$ is the predicted quality score, and $\bar{s}$ and $\bar{\hat{s}}$ are their arithmetic means. SROCC assesses the monotonicity between MOS and predicted quality scores. It is defined as

$$\mathrm{SROCC} = 1 - \frac{6 \sum_{i=1}^{N} (v_i - p_i)^2}{N (N^2 - 1)},$$

where $N$ is the number of distorted point clouds, $v_i$ is the rank of $s_i$ in the MOS, and $p_i$ is the rank of $\hat{s}_i$ in the predicted quality scores. SROCC can be reformulated as

$$\mathrm{SROCC} = \mathrm{PLCC}\big(R(s),\, R(\hat{s})\big),$$

where $R(\cdot)$ denotes the ranking operation. SROCC is considered to be the best nonlinear correlation indicator, because SROCC is only concerned with the order of elements in a sequence. Therefore, even if $s$ or $\hat{s}$ is affected by any monotone nonlinear transformation (such as a logarithmic or exponential transformation), SROCC will not be affected because the order of the elements is not changed. Therefore, we use SROCC as the index to select the best FR metrics. Table 4 lists the SROCC of these FR-PCQA metrics for each distortion type based on the subjective MOS of Part I, and the best results are highlighted in bold. Table 4 indicates that adopting a single quality assessment metric to label the whole dataset would be insufficient and inaccurate. Each quality assessment metric has its own limitations. Although some FR metrics achieve the best results for certain distortion types, they may perform poorly on, or even fail to respond to, some other distortion types. For example, p2point achieves the best performance for some geometrical distortions, but it is insensitive to photometric attribute distortions. PCQM achieves a SROCC of 0.950823 for Gaussian noise distortion, but for down-sampling distortion its SROCC is only 0.524864. Thus, we keep the best one among these FR-PCQA metrics for each distortion type to annotate the distorted point clouds.
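Both correlation criteria are available off the shelf in scipy, and the per-distortion-type selection of the best FR metric then reduces to an argmax over SROCC. A minimal sketch, with the dictionary layout being an illustrative assumption:

```python
from scipy.stats import spearmanr

def select_best_fr_metric(mos_by_type, scores_by_type):
    """Pick, per distortion type, the FR metric whose predictions
    correlate best in rank order with the subjective MOS.

    mos_by_type:    {distortion_type: list of subjective MOS}
    scores_by_type: {distortion_type: {metric_name: list of raw scores}}
    Returns:        {distortion_type: (best_metric_name, best_srocc)}
    """
    best = {}
    for dtype, mos in mos_by_type.items():
        srocc = {}
        for name, pred in scores_by_type[dtype].items():
            rho, _ = spearmanr(mos, pred)
            # abs() because error-type metrics (e.g. point-to-point
            # distances) correlate negatively with perceived quality.
            srocc[name] = abs(rho)
        best[dtype] = max(srocc.items(), key=lambda kv: kv[1])
    return best
```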
Nonlinear Mapping. The range of each FR-PCQA metric differs from the others. To label the distorted point clouds, a nonlinear regression function is adopted to re-scale the results into the common range of [1, 5]. In this process, the nonlinear regression function should not change the monotonicity of the scores from each FR metric, but the overall monotonicity may differ across nonlinear regression functions due to the splicing of the quality scores from different metrics. The commonly used nonlinear regression functions include four-parameter logistic regression (Logistic-4) [64], five-parameter logistic regression (Logistic-5) [56], and four-parameter polynomial regression (Cubic-4) [26,65].
Logistic-4 applies

$$y = \beta_2 + \frac{\beta_1 - \beta_2}{1 + e^{-(x - \beta_3)/|\beta_4|}},$$

where $y$ is the normalized score, $x$ is the quality score predicted by the best quality assessment metric, and $\beta_1, \beta_2, \beta_3, \beta_4$ are the fitting parameters.
Logistic-5 achieves normalization via

$$y = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + e^{\beta_2 (x - \beta_3)}} \right) + \beta_4 x + \beta_5,$$

where $\beta_1, \ldots, \beta_5$ are the parameters to be determined. Cubic-4 uses the following for normalization:

$$y = a x^3 + b x^2 + c x + d,$$

where $a, b, c, d$ are the model parameters.
In order to select the most appropriate mapping function, a validation experiment is conducted using Part I of the newly built dataset. The three nonlinear regression functions under consideration are adopted for MOS normalization, mapping the scores of the best-performing FR metric for each distortion type to the range of the subjective MOS (i.e., 1-5). Then, the SROCCs between the normalized scores and the subjective MOS are computed for each nonlinear regression function. The results are listed in Table 5, with the best results highlighted in bold. It can be seen from Table 5 that Logistic-5 offers the best results. Thus, Logistic-5 is adopted as the nonlinear normalization function for MOS mapping in this work. The normalized pseudo MOS is used as the label for the built large-scale dataset.
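Fitting Logistic-5 for one metric and one distortion type is a standard nonlinear least-squares problem. A minimal sketch with scipy follows; the initial parameter guesses and the clipping to [1, 5] are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(x, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping from raw metric scores to MOS scale."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def fit_pseudo_mos_mapper(raw_scores, subjective_mos):
    """Fit Logistic-5 on the subjectively rated subset, then return a
    function that re-scales any raw score of the same metric to [1, 5]."""
    raw = np.asarray(raw_scores, dtype=float)
    mos = np.asarray(subjective_mos, dtype=float)
    p0 = [mos.max(), 1.0, raw.mean(), 0.0, mos.mean()]  # rough initial guess
    params, _ = curve_fit(logistic5, raw, mos, p0=p0, maxfev=20000)
    return lambda x: np.clip(logistic5(np.asarray(x, dtype=float), *params),
                             1.0, 5.0)

# Usage: mapper = fit_pseudo_mos_mapper(part1_scores, part1_mos)
#        pseudo_mos = mapper(all_raw_scores_of_this_metric)
```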
Accuracy Analysis of Pseudo MOS
We verify the reliability of the pseudo MOS by conducting a validation experiment using samples from Part I and Part II of the partitioned dataset. Note that Part II of the dataset is not used in the selection of the FR-PCQA metrics that generate the pseudo MOS. The SROCCs between the subjective MOS and the generated pseudo MOS in Part I and Part II are computed and summarized in Table 6. We can see from Table 6 that the SROCCs between the subjective and pseudo MOS of Part I and Part II are 0.902697 and 0.878517, respectively, which justifies the accuracy of the computed pseudo MOS and the validity of the proposed annotation method.
Statistical analysis of the annotation errors for samples in Part I and Part II is also conducted. The annotation error is defined as the difference between the pseudo MOS and the subjective MOS, $(s_{\mathrm{pseudo}} - s_{\mathrm{MOS}})$. The histograms of annotation errors for Part I and Part II are shown in Fig. 4. The mean, standard deviation and 95% quantile of the annotation errors for Part I and Part II are summarized in Table 7. Fig. 4 shows that most annotation errors have small magnitudes, even for the samples in Part II that are not used in the selection of FR metrics. These results corroborate that the pseudo MOS can be considered an accurate approximation of the costly subjective MOS. To gain more insights, we present the annotation error statistics as a function of the distortion levels in Table 8. We can see that the pseudo MOS exhibits improved accuracy under more severe distortions. This is in fact expected, as the HVS is more sensitive to obvious distortions, which can be better captured by FR metrics.
RESSCNN: A SPARSE CNN BASED METRIC FOR NR-PCQA
An end-to-end learning-based NR-PCQA metric ResSCNN is proposed in this section.
Network Architecture
As shown in Fig. 6, the proposed ResSCNN consists of three modules: a hierarchical feature extraction module Ψ, a pooling and concatenation module Φ, and a quality prediction module Γ. Ψ takes point clouds with an arbitrary number of points as input and uses a stack of sparse convolutional layers and residual blocks to extract the hierarchical features. Φ applies the global pooling and concatenation operations to generate feature vectors of consistent shape. Γ uses a cascade of fully connected layers to map the feature vectors to the predicted quality scores.
Hierarchical Feature Extraction
The hierarchical feature extraction module is composed of four blocks. Each block consists of three sparse convolutional layers, in which the second and third sparse layers are connected in a residual pattern. The input of the feature extraction module is the sparse tensor of the point cloud with an arbitrary number of points. Mathematically, a sparse tensor for a point cloud $P \in \mathbb{R}^{N \times 6}$ is represented as a set of coordinates $\mathcal{C}$ and associated features $\mathcal{F}$:

$$\mathcal{C} = \{(x_i, y_i, z_i, b_i)\}_{i=1}^{N}, \qquad \mathcal{F} = \{\mathbf{f}_i\}_{i=1}^{N},$$

where $\mathcal{C}$ indicates the geometry attributes of the point cloud and $x_i, y_i, z_i \in \mathbb{Z}$ are the 3D coordinates of point $i$. $b_i$ is the occupation index to distinguish points occupying the same coordinates. $\mathcal{F}$ in the input sparse tensor has 3 feature dimensionalities referring to the color attributes of the point cloud, with $\mathbf{f}_i \in \mathbb{R}^{3 \times 1}$, $\mathbf{f}_i \in [0, 255]^3$ holding the R, G, B attributes associated with the $i$-th point. The convolution used in the proposed network is the sparse convolution, which can be denoted generically as

$$\mathbf{f}_u^{\mathrm{out}} = \sum_{i \in \mathcal{N}^3(u,\, \mathcal{C}^{\mathrm{in}})} W_i\, \mathbf{f}_{u+i}^{\mathrm{in}} \quad \text{for } u \in \mathcal{C}^{\mathrm{out}},$$

where $\mathcal{N}^3(u, \mathcal{C}^{\mathrm{in}}) = \{i \mid u + i \in \mathcal{C}^{\mathrm{in}},\ i \in \mathcal{N}^3\}$, and $\mathcal{N}^3$ defines the shape of a 3D convolutional kernel, covering the set of offsets from the current center $u$. $\mathbf{f}_u^{\mathrm{out}}$ and $\mathbf{f}_{u+i}^{\mathrm{in}}$ denote the feature vectors at the corresponding coordinates. $\mathcal{C}^{\mathrm{in}}$ and $\mathcal{C}^{\mathrm{out}}$ are the predefined input and output coordinates of the sparse tensors. $W_i$ denotes the kernel value at offset $i$. In this work, we set $\mathcal{C}^{\mathrm{out}} = \mathcal{C}^{\mathrm{in}} = \mathcal{C}$ and set $\mathcal{N}^3$ as the list of offsets in the 3-dimensional hypercube centered at the origin to achieve the sub-manifold sparse convolution [23].
The sparse convolution, instead of the dense convolution, is adopted to extract the features of the input point clouds. The sparse convolution will not reduce the sparsity of point clouds. On the contrary, due to the density heterogeneity, the conventional dense convolution would result in a massive increase in the number of feature-map elements for point clouds.
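To make the input representation concrete, the following sketch builds such a sparse tensor with MinkowskiEngine (assuming a recent 0.5-style API); the voxel size is an illustrative assumption, and duplicate coordinates after quantization are resolved by the engine's default quantization mode rather than by the occupation index described above:

```python
import numpy as np
import torch
import MinkowskiEngine as ME

def to_sparse_tensor(points, voxel_size=1.0, device="cpu"):
    """Build a MinkowskiEngine sparse tensor from an (N, 6) XYZ+RGB array.

    The engine requires integer coordinates, hence the quantization step;
    voxel_size=1.0 is an illustrative choice.
    """
    coords = np.floor(points[:, :3] / voxel_size).astype(np.int32)
    feats = torch.as_tensor(points[:, 3:], dtype=torch.float32)
    # Prepend the batch index column expected by MinkowskiEngine.
    batched = ME.utils.batched_coordinates([coords])
    return ME.SparseTensor(features=feats, coordinates=batched,
                           device=device)
```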
Each sparse convolutional layer in Fig. 6 is characterized by the kernel size, stride and dimensionality, and is followed by batch normalization before being fed to the nonlinear activation function (ReLU).
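A rough illustration of one such block follows, again using MinkowskiEngine; the channel widths and kernel size are illustrative assumptions, since the exact configuration is fixed in Fig. 6, and the residual addition assumes both operands share the same coordinate map (which holds for stride-1 convolutions):

```python
import torch.nn as nn
import MinkowskiEngine as ME

class SparseResidualBlock(nn.Module):
    """One feature-extraction block: a sparse conv layer followed by two
    sparse conv layers wired as a residual pair."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()

        def conv(cin, cout):
            # Conv -> BatchNorm -> ReLU, as described in the text.
            return nn.Sequential(
                ME.MinkowskiConvolution(cin, cout, kernel_size=3,
                                        stride=1, dimension=3),
                ME.MinkowskiBatchNorm(cout),
                ME.MinkowskiReLU(),
            )

        self.head = conv(in_ch, out_ch)
        self.body1 = conv(out_ch, out_ch)
        self.body2 = conv(out_ch, out_ch)

    def forward(self, x: ME.SparseTensor) -> ME.SparseTensor:
        y = self.head(x)
        # Residual connection over the second and third conv layers.
        return y + self.body2(self.body1(y))
```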
Pooling and Concatenation
The branches of pooling and concatenation module come from the output of each block in the hierarchical feature extraction module. To deal with the output of each block with different shapes, the features extracted from the sparse CNN are globally pooled into 64 × 1 feature vectors. Then the obtained four 64 × 1 feature vectors are concatenated to form the 256 × 1 representative hierarchical feature vector.
Quality Prediction
The quality prediction module is composed of two fully connected layers, denoted as FC-1 and FC-2, to map the hierarchical feature vector to the predicted quality score. The number of input channels of FC-1 is 256, which is equal to the length of the hierarchical features. The number of output channels of FC-1 and the number of input channels of FC-2 are 32. The output of the quality prediction module is a predicted quality score of 1 channel with the same scale of training labels.
In the pooling and concatenation module, global pooling is applied to obtain the feature vector from different depths of the sparse CNN, which can be denoted as

$$\mathbf{g}_k = \Phi\big(\Psi_k(P;\, \theta_\Psi)\big), \quad k = 1, 2, 3, 4,$$

where $\mathbf{g}_k \in \mathbb{R}^{64 \times 1}$ represents the normalized feature vector from the $k$-th block of the hierarchical feature extraction module, $\theta_\Psi$ denotes the parameters of the hierarchical feature extraction module, and the operation $\Phi$ denotes the global pooling applied to the original deep convolutional features with irregular shapes. The hierarchical feature vector obtained from the pooling and concatenation module is

$$\mathbf{h} = \mathbf{g}_1 \oplus \mathbf{g}_2 \oplus \mathbf{g}_3 \oplus \mathbf{g}_4,$$

where $\oplus$ denotes the concatenation operation. The final predicted quality score is found via

$$\hat{s} = \Gamma(\mathbf{h};\, \theta_\Gamma),$$

where $\hat{s}$ is the predicted quality score, $\Gamma$ represents the fully connected layers, and $\theta_\Gamma$ denotes the parameters of the quality prediction module.
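Continuing the sketches above, the pooling, concatenation, and prediction stages could be assembled as follows; ME.MinkowskiGlobalAvgPooling stands in for the global pooling Φ, and the channel widths remain illustrative:

```python
import torch
import torch.nn as nn
import MinkowskiEngine as ME

class ResSCNNSketch(nn.Module):
    """Illustrative assembly: four sparse blocks, global pooling of each
    block's output, concatenation into a 256-dim vector, and the
    two-layer FC head (256 -> 32 -> 1) described in the text."""

    def __init__(self):
        super().__init__()
        widths = [3, 64, 64, 64, 64]  # illustrative channel widths
        self.blocks = nn.ModuleList(
            SparseResidualBlock(widths[k], widths[k + 1]) for k in range(4))
        self.pool = ME.MinkowskiGlobalAvgPooling()
        self.head = nn.Sequential(nn.Linear(256, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, x: ME.SparseTensor) -> torch.Tensor:
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(self.pool(x).F)  # (batch, 64) pooled features
        h = torch.cat(feats, dim=1)       # (batch, 256) hierarchical vector
        return self.head(h)               # predicted quality score
```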
Loss Function and Training Strategy
$\mathcal{L}_1$ loss, $\mathcal{L}_1(x) = |x|$, will fluctuate around the stable value, and converging to a higher accuracy is difficult. The gradient of the $\mathcal{L}_2$ loss, $\mathcal{L}_2(x) = x^2$, with respect to the predicted value at the beginning of training is large and the training is unstable. Since the smooth $\mathcal{L}_1$ loss combines the advantages of the $\mathcal{L}_1$ loss and the $\mathcal{L}_2$ loss, and avoids their disadvantages, the smooth $\mathcal{L}_1$ loss is adopted to improve the robustness of the network:

$$\mathrm{smooth}_{\mathcal{L}_1}(x) = \begin{cases} 0.5\, x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise,} \end{cases}$$

where $x = \hat{s} - s$, $\hat{s}$ is the predicted quality score and $s$ is the ground truth. The gradient of the loss function with respect to $x$ is formulated as

$$\frac{\partial\, \mathrm{smooth}_{\mathcal{L}_1}(x)}{\partial x} = \begin{cases} x, & |x| < 1 \\ \pm 1, & \text{otherwise,} \end{cases}$$

where we can see that when $x = \hat{s} - s$ is small, the gradient with respect to $x$ becomes smaller, and when $x$ is large, the magnitude of the gradient with respect to $x$ is capped at 1, so the derivative remains bounded and continuous. The training of ResSCNN for point clouds consumes much more time than training with 2D images. To accelerate the training, Stochastic Gradient Descent (SGD) is adopted with a learning rate $\eta = 1 \times 10^{-3}$ and an exponential learning rate schedule with decay factor $\gamma = 0.99$.
As the shapes of input point clouds are different from one another for end-to-end learning, the batch size is set to 1 when training the proposed network. We accumulate several losses and gradients to emulate the process of batch optimization. Besides, data augmentation is invoked during training with random scaling of [0.8, 1.2] and random rotation of [0°, 360°) to make sure that the proposed network is robust to the transformation of viewpoint.
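A compressed sketch of this training strategy (batch size 1 with gradient accumulation, smooth L1 loss, and SGD with exponential decay) is given below; the accumulation count is an illustrative choice, and data loading/collation of sparse tensors is omitted:

```python
import torch

def train_epoch(model, loader, opt, accum=8):
    """Emulate batch optimization with batch size 1 by accumulating
    gradients over `accum` samples before each optimizer step.

    `loader` is assumed to yield (ME.SparseTensor, mos) pairs already
    placed on the training device."""
    criterion = torch.nn.SmoothL1Loss()
    model.train()
    opt.zero_grad()
    for step, (sparse_input, mos) in enumerate(loader, start=1):
        pred = model(sparse_input).view(-1)
        loss = criterion(pred, mos.view(-1)) / accum  # scale for the virtual batch
        loss.backward()
        if step % accum == 0:
            opt.step()
            opt.zero_grad()

# SGD with exponential decay, as described in the text:
# model = ResSCNNSketch()
# opt = torch.optim.SGD(model.parameters(), lr=1e-3)
# sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)
# for epoch in range(num_epochs):
#     train_epoch(model, train_loader, opt)
#     sched.step()
```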
Datasets.
To evaluate the performance of the proposed ResSCNN, we conduct evaluation experiments using Part III of the proposed LS-PCQA dataset, where the distorted samples are all labeled by pseudo MOS. Besides, the performance of the proposed metric is also evaluated over SJTU-PCQA [71] and WPC2.0 [37,57] datasets.
Prediction Performance
We use PLCC and SROCC to quantify the performance of the quality assessment metrics. In particular, we compare the proposed ResSCNN with FR metrics, including PCQM [43], GraphSIM [74] and MPED [72], as well as the existing NR metrics, including PQA-Net [39] and IT-PCQA [73]. All the FR-PCQA and NR-PCQA results are obtained using the source code released by the authors. The results are summarized in Table 9, where the rows represent the quality assessment metrics and the columns give the overall SROCC and PLCC. From Table 9, we can see that: i) The proposed ResSCNN achieves SOTA performance among the existing NR metrics and even outperforms some of the FR metrics. ii) The FR metrics outperform the NR metrics on the whole, which is in fact expected, because the lack of reference information in NR metrics increases the difficulty of the PCQA task; nevertheless, the performance of the proposed ResSCNN is not distant from that of the best-performing FR metrics on the three datasets. iii) The large-scale dataset brings more challenges to the PCQA task, and both FR and NR metrics for PCQA have potential for further improvement.
In summary, the proposed ResSCNN offers robust and competitive performance over three datasets, compared with the existing NR and even some FR metrics.
Effectiveness of the Proposed Dataset
The built large-scale dataset can benefit the training of learning-based NR-PCQA metrics. We conduct an experiment to verify the improvement due to pre-training on the established LS-PCQA. The proposed ResSCNN is first pre-trained on the training set of LS-PCQA, then trained using the training set of SJTU-PCQA and tested on the testing set of WPC2.0. Next, the training and testing databases are switched and the experiment is repeated. The experiment results are listed in Table 10 and Table 11, which show that the generalization capability is improved by pre-training on LS-PCQA. After being pre-trained on LS-PCQA, ResSCNN trained on SJTU-PCQA and tested on WPC2.0 achieves up to about 40% and 33% increases in PLCC and SROCC, respectively, and ResSCNN trained on WPC2.0 and tested on SJTU-PCQA achieves up to about 12% and 15% increases in PLCC and SROCC.
In summary, the established LS-PCQA can benefit learning-based NR-PCQA metrics and enhance their generalization ability for the NR-PCQA task.
Effect of Sampling
Many 3D applications adopt dimensionality reduction techniques, such as key point extraction and down-sampling, as a pre-processing method for input normalization. However, these techniques introduce extra geometrical distortions into the point clouds and thus should not be used in the NR-PCQA task.
In this subsection, we conduct experiments to demonstrate that dimensionality reduction decreases the performance of NR-PCQA metrics. Specifically, we down-sample the point clouds to 400,000, 100,000, 50,000, 10,000 and 2,500 points, respectively; a sketch of this pre-processing step is given below. The overall performance of ResSCNN and its performance on the down-sampling distortion are shown in Table 12.
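A minimal sketch of the down-sampling step follows; the uniform-random strategy and the xyz-plus-color array layout are assumptions for illustration.

```python
import numpy as np

def random_downsample(points: np.ndarray, n: int) -> np.ndarray:
    """Keep a uniform random subset of n points (without replacement)."""
    idx = np.random.choice(len(points), size=n, replace=False)
    return points[idx]

cloud = np.random.rand(1_000_000, 6)  # placeholder: xyz + rgb per point
for n in (400_000, 100_000, 50_000, 10_000, 2_500):
    subset = random_downsample(cloud, n)  # input variant for the experiment
```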
We can see from Table 12 that: i) Generally, the proposed ResSCNN tends to provide better performance with more points. ii) Dimensionality reduction techniques that remove part of the points affect the results of NR-PCQA, which corroborates our observation that NR-PCQA differs from other vision tasks and that taking the complete samples into consideration is necessary.
Thus, the proposed ResSCNN should take entire point clouds as input to avoid introducing additional distortions in the pre-processing stage.
Effectiveness of Hierarchical Features
Hierarchical features have been applied in many image applications. The reason is that the semantic information of a CNN from the shallow to the deep layers has specific characteristics [12]. The shallow features provide rich details of color and texture, while deep features contain more conceptual and semantic information. Thus, it can be expected that hierarchical features can improve the robustness of the PCQA task under various distortions.
In this subsection, we conduct experiments on LS-PCQA to verify the rationality of the proposed hierarchical features. Specifically, we use the features from the first block in Fig. 6 as shallow features, and the features from the last block as deep features. The results are shown in Table 13. We can see from Table 13 that: i) The hierarchical feature outperforms either the shallow or the deep features alone. ii) Most of the distortion types in the built dataset are detailed distortions, and the shallow features provide rich details of color and texture; thus, the shallow features can better handle detailed distortions. However, conceptual and semantic distortions, such as contrast distortion and luminance distortion, are also included in the PCQA dataset, so deep features also need to be considered in NR-PCQA metrics.
In other words, the proposed hierarchical feature exhibits more robust performance.
Effect of Network Depth
In this part, we conduct experiments to investigate the impact of network depth on quality prediction over the LS-PCQA dataset. The results are shown in Table 14. We can see from Table 14 that: i) The shallow features respond well to the large number of samples with detailed visual distortions in the dataset, but the shallow features alone cannot describe the dataset adequately. ii) The model handles the dataset better with increasing network depth; however, the growing number of parameters may result in over-fitting. That is why the network with 4 blocks yields the best performance.
Effectiveness of the Residual Module
The proposed ResSCNN aims to handle point clouds with an arbitrary number of points, and the pooling operation is adopted to normalize the obtained features. Part of the information in the features is lost in this process. To enhance the expressiveness of the output features, we compensate for the information loss using residual connections.
In this part, we conduct experiments to illustrate the effectiveness of the residual module. Specifically, we compare the prediction accuracy of ResSCNN on LS-PCQA with several alternatives of residual connections. The testing networks are composed of four identical blocks as shown in Fig. 7. For dimensionality matching, the first blocks in residual networks B-D have residual connections spanning 2 and 3 layers. The results are shown in Table 15, which shows that the use of residual connections improves the accuracy of quality prediction. As a result, the final design of our proposed network shown in Fig. 6 adopts residual connections; a sketch of such a block is given below.
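For concreteness, the following is a minimal sketch of a sparse residual block in the spirit of Fig. 6, written against the MinkowskiEngine sparse-convolution library; the channel width, normalization choice, and two-convolution layout are illustrative assumptions rather than the exact published architecture.

```python
import torch.nn as nn
import MinkowskiEngine as ME

class SparseResidualBlock(nn.Module):
    """Two stride-1 sparse 3D convolutions with a skip connection."""

    def __init__(self, channels: int = 64, dimension: int = 3):
        super().__init__()
        self.conv1 = ME.MinkowskiConvolution(channels, channels, kernel_size=3, dimension=dimension)
        self.norm1 = ME.MinkowskiBatchNorm(channels)
        self.conv2 = ME.MinkowskiConvolution(channels, channels, kernel_size=3, dimension=dimension)
        self.norm2 = ME.MinkowskiBatchNorm(channels)
        self.relu = ME.MinkowskiReLU()

    def forward(self, x):
        out = self.relu(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.relu(out + x)  # skip connection compensates the information loss
```

Because both convolutions use stride 1, the input and output share the same sparse coordinates, which keeps the element-wise addition of the skip connection well defined.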
Performance for Different Distortions
In this subsection, we evaluate the performance of several FR-PCQA metrics, including GraphSIM and PCQM, together with NR-PCQA metrics, including PQA-Net, IT-PCQA and ResSCNN, under different distortions. Specifically, on the basis of the experiment results in Section 5.2, the PLCC and SROCC of these PCQA metrics are calculated for each distortion type. The results are listed in Table 16, and the best results for FR and NR metrics are highlighted in bold respectively.
We can see from Table 16 that: i) Compared with the existing projection-based NR-PCQA metrics, the proposed ResSCNN exhibits the best performance for almost all distortion types. Specifically, ResSCNN performs better on some geometrical distortions, such as the Local Missing distortion, because projection renders some geometrical loss difficult to detect. ii) All the NR-PCQA metrics exhibit poor performance on some color distortions, such as Contrast Distortion and Poisson Noise. Compared with geometrical distortions, these color distortions are hard to perceive without the assistance of a reference. We hypothesize that extra semantic features in terms of color distributions need to be accounted for in the design of NR-PCQA. iii) The FR-PCQA metrics are more robust than the NR metrics: with the original point clouds as references, it is easier for the FR-PCQA metrics to quantify the influence of distortions on human perception. Besides, the sensitivities of the FR metrics differ; for example, GraphSIM is more sensitive to GPCC distortions, while MPED is more sensitive to the reconstruction distortion. On the whole, the proposed ResSCNN exhibits the best performance for almost all distortion types among the existing NR-PCQA metrics. Moreover, PCQA becomes more challenging without references, and the learning-based NR-PCQA metrics have potential for further improvement.
CONCLUSION
In this work, we proposed an NR-PCQA metric, ResSCNN. To meet the requirement of data scale for training learning-based metrics, we first built a PCQA dataset of the largest size at present. The built LS-PCQA dataset contains more than 22,000 distorted samples derived from 104 original reference point clouds with 31 impairment types at 7 distortion levels. Leveraging the newly built dataset, an NR-PCQA metric based on sparse CNN was proposed. Experiment results have demonstrated the effectiveness of our proposed ResSCNN, which achieves the SOTA performance among the existing NR-PCQA metrics and is even competitive compared with the FR-PCQA metrics. Besides, we have shown that the proposed large-scale dataset can help improve the generalization ability of the learning-based NR-PCQA metrics. | 2020-12-23T02:15:57.663Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "07bcb51461b4b2a5871eb30d1bb54a744fa98495",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "07bcb51461b4b2a5871eb30d1bb54a744fa98495",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
257857486 | pes2o/s2orc | v3-fos-license | Transplacental passage of hyperforin, hypericin, and valerenic acid
Safe medications for mild mental diseases in pregnancy are needed. Phytomedicines from St. John’s wort and valerian are valid candidates, but safety data in pregnancy are lacking. The transplacental transport of hyperforin and hypericin (from St. John’s wort), and valerenic acid (from valerian) was evaluated using the ex vivo cotyledon perfusion model (4 h perfusions, term placentae) and, in part, the in vitro Transwell assay with BeWo b30 cells. Antipyrine was used for comparison in both models. U(H)PLC-MS/MS bioanalytical methods were developed to quantify the compounds. Perfusion data obtained with term placentae showed that only minor amounts of hyperforin passed into the fetal circuit, while hypericin did not cross the placental barrier and valerenic acid equilibrated between the maternal and fetal compartments. None of the investigated compounds affected metabolic, functional, and histopathological parameters of the placenta during the perfusion experiments. Data from the Transwell model suggested that valerenic acid does not cross the placental cell layer. Taken together, our data suggest that throughout the pregnancy the potential fetal exposure to hypericin and hyperforin – but not to valerenic acid – is likely to be minimal.
Introduction
Pregnancy is a vulnerable period for mental disorders and/or symptoms. Studies from various countries (e.g., low- and middle-income countries, the United States, Sweden) reported a prevalence of 14%-20% for psychiatric diseases in pregnancy (Andersson et al., 2003;Marcus et al., 2003;Fisher et al., 2012). In Switzerland, 16.7% of perinatal women used mental health services annually (Berger et al., 2017). Moreover, a Swiss cross-sectional survey revealed that more than 52.0% of the participants suffered from mental disorders and/or symptoms during pregnancy, but only a few (1.6%) took synthetic psychoactive medications (Gantner et al., 2021). Mental disorders like mild depression, sleep disorders, and anxiety can lead to complications like preterm birth if left untreated (Dunkel Schetter, 2011). However, most synthetic drugs may not only cause side effects in the mother but also cross the placental barrier and reach the fetus. Concerns about tolerability, teratogenicity and impact on neonatal outcomes exist and are, in part, supported by various studies (Sivojelezova et al., 2005;Rahimi et al., 2006;Grigoriadis et al., 2014;Yonkers et al., 2014;Gao et al., 2018). Pregnant women in need of medications such as selective serotonin reuptake inhibitors (SSRIs) and benzodiazepines therefore face the dilemma of either using or refraining from using them.
Phytomedicines are popular alternatives to synthetic medications. Many pregnant women use herbal medicines, in addition to or rather than synthetic drugs, probably as they perceive those alternatives to be the safer choice for their unborn child (Sarecka-Hujar and Szulc-Musiol, 2022). A recent Swiss survey revealed that 89.9% of pregnant women are using some type of herbal medicine (Gantner et al., 2021). Some healthcare professionals also tend to recommend phytomedicines and, hence, contribute to the trust in these products (Stewart et al., 2014). Most phytomedicines are available without prescription. In general, safety data for use during pregnancy are lacking for phytomedicines (Bernstein et al., 2021;Morehead and McInnis, 2021). For example, it is not known whether pharmacologically active compounds in these products can cross the placental barrier. Given the lack of data, the agencies responsible for approval of drugs require a warning label in the patient information of these products.
In the treatment of mild to moderate depression, St. John's wort (Hypericum perforatum L., Hypericaceae) is an alternative to SSRIs, as the clinical efficacy has been documented in several clinical trials (Linde et al., 2008). Hyperforin and hypericin are two characteristic compounds in St. John's wort. They have been shown to possess various CNS-related pharmacological activities but are not solely responsible for the antidepressant properties. As for most other phytomedicines, the entire extract has to be considered as the active ingredient (Butterweck and Schmidt, 2007;Nicolussi et al., 2020). A recent study based on data from Germany found that pregnant women mainly used St. John's wort in the first trimester, but simultaneous dispensation of other drugs that favour interactions and the observation of a relatively high rate of non-live births call for a thorough further safety investigation (Schäfer et al., 2021). Valerian (Valeriana officinalis L., Caprifoliaceae) is known for its sleep promoting and anxiolytic properties (Hattesohl et al., 2008). Valerenic acid, a characteristic compound in valerian, is an allosteric modulator of GABA A receptors (Becker et al., 2014), but again is not solely responsible for the clinical efficacy of phytomedicines containing valerian extracts. Valerian is favoured by many expecting mothers, as shown in a multinational study where valerian was among the five most frequently used herbal medicines (Kennedy et al., 2013). St. John's wort and valerian have been used for decades in Europe and have been labelled with "well established use" and "traditional use", respectively, by the Committee on Herbal Medicinal Products (HMPC) of the European Medicines Agency (EMA). However, they recommend neither of the herbs to be used during pregnancy due to insufficient toxicological data (European Medicines Agency, 2015). Relevant toxicological aspects in the context of pregnancy include effects on placental function and on transplacental transfer. The human placenta develops during pregnancy, at each gestational stage supplying the developing fetus with blood, nutrients, and oxygen, while also regulating the removal of waste products and carbon dioxide. In addition, it metabolises substances and releases hormones that influence the course of pregnancy, fetal metabolism and growth, and labour itself (Gude et al., 2004). The placenta also protects the fetus from infections, maternal diseases, and some xenobiotics including drugs. Most drugs pass through the placenta via passive or simple diffusion that is influenced by factors such as molecular weight (MW), degree of ionization, lipid solubility, protein binding, concentration gradient of the drug across the placenta, placental surface area/thickness, pH of maternal and fetal blood, and placental metabolism (van der Aa et al., 1998;Syme et al., 2004;Tetro et al., 2018). Other processes of drug transfer across the placenta include facilitated diffusion, active transport, and endocytosis. Once formed, the placental syncytiotrophoblast is the rate-limiting barrier separating the maternal and fetal circulation, with various transporters and enzymes located at the apical and basolateral membranes (Desforges and Sibley, 2010;Al-Enazy et al., 2017). Prior to its formation, a monolayer of precursor cells (cytotrophoblasts) held together by tight junctions exerts the barrier function (Prouillac and Lecoeur, 2010).
Data on transplacental passage of drugs can be obtained using the ex vivo human placental perfusion model (with term placentae) representative of the late stage of pregnancy (Myllynen and Vahakangas, 2013). It is considered to be the gold-standard among placental transfer models (Panigel et al., 1967;Malek et al., 2009;Grafmuller et al., 2013), and we have recently shown its usefulness for studying the transplacental transfer of phytochemicals (Spiess et al., 2022a). In vitro Transwell models utilising monolayers of confluent, human, nondifferentiated placental cells, on the other hand, reflect the transfer through a continuous layer of cytotrophoblasts (Bode et al., 2006;Vähäkangas and Myllynen, 2006;Prouillac and Lecoeur, 2010).
In the present study, we investigated the transplacental passage of hyperforin, hypericin, and valerenic acid (Figure 1) using the ex vivo term placental perfusion model. In this model, we also investigated their effects on metabolic, functional, and histopathological properties of placental tissue. In case of valerenic acid, an in vitro Transwell model based on the human placental BeWo b30 cell line (Li et al., 2013) was used in addition.
Materials and methods
Chemicals, reagents, and study compounds

All solvents were of UPLC grade. Acetonitrile (MeCN) was purchased from Merck. Methanol was from Avantor Performance Materials Poland. Purified water was obtained from a Milli-Q integral water purification system. Dimethyl sulfoxide (DMSO) was supplied by Scharlau, and formic acid was from BioSolve. Antipyrine, hyperforin dicyclohexylammonium salt and bovine serum albumin (BSA) were obtained from Sigma-Aldrich, and antipyrine-d3 from HPC Standards. Valerenic acid was purchased from Extrasynthese and PhytoLab, and hypericin was from Carbosynth. Warfarin was purchased from Toronto Research Chemicals, and digoxin from Sigma-Aldrich.
Ex vivo human placental perfusion

Placentae collection
Placentae were collected with informed written consent from women with uncomplicated term pregnancies immediately after undergoing primary caesarean section. This procedure was approved by the Ethics Committee of the Canton of Zurich (KEK-StV73 Nr. 07/07; 21 March 2007). All research was performed in accordance with the Declaration of Helsinki and other relevant guidelines/regulations. All placentae were verified to be negative for HIV, HBV, and SARS-CoV-2, and donors with twin pregnancies were not included in the study. Placentae with a ragged maternal surface (visible disruptions; macroscopic tissue trauma), evidence of basal plate fibrin deposition, suspected placental infarction or too little fetal membrane (on the disk of the placenta) were not considered for perfusion. Supplementary Table S2 gives an overview of the experimental conditions and characteristics of all placentae used: a total of 11 placentae (donated by 11 women) were suitable for perfusion, and 3-4 individual placentae were used for each test substance.
Equipment and experimental procedure of perfusion
The ex vivo human placental perfusion model was adapted from the models of Schneider (Schneider, 1972) and Grafmüller (Grafmuller et al., 2013), and has been described in detail (Spiess et al., 2022a). In brief, one villous tree of a cotyledon (placenta lobule) was perfused by cannulation of the chorionic artery and corresponding vein. Three blunt cannulae were inserted into the intervillous space to reconstruct the maternal circuit and to allow the transplacental exchange through the perfusion medium (PM). The PM consisted of Earle's buffer (1 part), cell culture medium 199 (2 parts; Sigma-Aldrich), and supplements (BSA, dextran, glucose, sodium bicarbonate, amoxicillin, and heparin). The fetal perfusate was gassed with 95% N2/5% CO2, and the maternal perfusate with 95% air/5% CO2 instead. Two heating magnetic stirrers ensured a constant distribution of study compounds in both reservoirs. A physiological temperature of 37°C was maintained with flow heaters (heating columns) and a water bath. Digitally controlled peristaltic pumps (Ismatec) transported the fetal and maternal perfusates at a rate of 3 and 12 mL/min, respectively. The perfusion experiment, including a 20 min non-recirculating (open) and a 20 min recirculating (closed) preparatory phase, started with equal volumes of 100 mL fresh PM. The study compound and antipyrine were added to the perfusate of the maternal circuit, both at a final concentration of 5 μM (corresponding to 941 ng/mL antipyrine, 2′684 ng/mL hyperforin, 2′522 ng/mL hypericin, and 1′172 ng/mL valerenic acid).
The dissolution and adherence of the compounds to the perfusion equipment and to the tubing system were assessed before starting perfusions with human placentae. All study compounds were pumped over a period of 240 min through an empty (i.e., in the absence of placental tissue) perfusion chamber comprising only the maternal circuit. The compounds were directly dissolved in PM at a final concentration of 5 μM (100 mL reservoir) and individually tested in three independent experiments (n = 3).
Sample preparation and quality controls
Antipyrine served as connectivity (positive) control in all placental perfusions to verify the overlap of the maternal and the fetal circulation. The stability of volumes in each reservoir ensured the integrity of the circuits and served for the detection of fetal-maternal (FM) leaks (≤4 mL/h). Throughout the perfusion, additional quality control measures included a physiological pH range (7.2 ± 0.1) and a controlled fetal perfusion pressure (≤70 mmHg). Samples were taken at defined timepoints over a 240 min period, immediately centrifuged, and stored in glass micro-inserts (VWR) at −80°C for bioanalytical analysis. A blood gas analysis (pH, pO2, pCO2, glucose, lactate; ABL800 FLEX) of fetal and maternal samples was performed to ensure viability and metabolic activity of placental tissue during perfusion. The production of the placental hormones beta-human chorionic gonadotropin (β-hCG) and leptin was monitored by standard ELISA to assess tissue functionality ex vivo [see (Spiess et al., 2022a) and references therein]. For β-hCG, flat-bottom microplates were coated with rabbit polyclonal anti-hCG antibody (Agilent Dako) at a 1:1′000 dilution, and the mouse monoclonal anti-hCG (abcam; 1:5′000) served as secondary antibody. The peptide hormone hCG (Lucerna-Chem) was used as reference standard; standard final concentrations of between 100 mU/mL and 2.5 mU/mL were prepared by serial dilution in seven steps and used in each plate. The goat anti-mouse-IgG-horseradish peroxidase conjugate (abcam) antibody was used at a 1:5′000 dilution. The substrate consisted of O-phenylenediamine dihydrochloride. Intra-assay CV% was ≤10%, inter-assay CV% was <6% at 100 mU/mL and <15% at 10 mU/mL. For leptin, microtiter plates were coated with mouse monoclonal anti-human leptin/OB (R&D Systems; 1:250). The second antibody was biotinylated monoclonal mouse anti-human leptin/OB (R&D Systems), used at a dilution of 1:1′000. The standard was recombinant human leptin (R&D Systems); standard final concentrations of between 2′000 pg/mL and 15.6 pg/mL were prepared by serial 1:2 dilution in seven steps and used in each plate. Intra-assay CV% was ≤10%, inter-assay CV% was <3% at 2′000 pg/mL and <15% at 250 pg/mL. The conjugate streptavidin-horseradish peroxidase (Southern Biotechnology Associates, 1:4′000) was added and the plate incubated for 60 min. For the development, ready-to-use tetramethylbenzidine substrate solution (Thermo Fischer) was used.
Histopathological evaluation
Immediately after perfusion the placentae were fixed in 4% paraformaldehyde for at least 24 h. Tissue samples from representative placental tissue sections, each from perfused, nonperfused, and transitional area were prepared to perform a pathological examination. For this purpose, the tissue was embedded in paraffin, cut (∅ 2-3 µm), and stained (standard hematoxylin-eosin stain, Braun-Brenn modified Gram stain) according to the standards of routine histopathological diagnosis of the Department of Pathology and Molecular Pathology (University Hospital of Zurich). The tissue of the non-perfused specimens was examined with regard to general placental pathologies as described in routine diagnostics (Turowski et al., 2012;Khong et al., 2016). The blood void and width of the intervillous space and fetal blood vessels in the chorionic villi provided information about the quality of perfusion, and particular attention was paid to the presence of intravascular thrombi. General signs of degeneration, such as vacuolization of the cytotrophoblast, villous vascular endothelium viability, and formation of hydropic villous changes compared with nonperfused tissue indicated whether tissue damage might have occurred during perfusion. Microscopic effects and placental tissue damage in the perfused area were expressed in relative amounts (%) to the non-perfused tissue.
In vitro permeability assay

Cell culture
BeWo b30 choriocarcinoma cells were obtained from Dr. Tina Buerki-Thurnherr (Empa, St. Gallen, Switzerland), with permission from Dr. Alan L. Schwartz (Washington University School of Medicine, St. Louis MO, USA), and were cultivated in F-12 K Nut Mix supplemented with 10% heat-inactivated FBS, antibiotics (100 U/mL penicillin, 100 µg/mL streptomycin), and 2 mM L-glutamine (all from Gibco). The cells were cultivated in a humidified incubator at 37°C and 5% CO2 atmosphere.
Monolayer formation on cell culture inserts
BeWo b30 cells were cultured on Transwell® polycarbonate membrane inserts (24-well format; 0.4 μm pore size, 0.33 cm2 cell growth area, 200 μL apical volume, 1′000 μL basolateral volume; Corning, Sigma-Aldrich) at a density of 60′000 cells/well. These inserts were then cultivated in a cellZscope (nanoAnalytics) at 37°C/5% CO2. Cell culture medium was replaced every 2 days for up to 11 days to find the best possible conditions.
Evaluation of monolayer formation
Measurement of transepithelial electrical resistance (TER) was used to assess the tightness of a cell-to-cell barrier, and the electrical capacitance (Ccl) provided additional information about the properties of the cell layer (e.g., presence of microvilli and other membrane extrusions). TER and Ccl values were recorded at 15 min intervals using a cellZscope®. TER values were corrected for the surface area (Ωcm2) and the reference resistance (a well with the same filter insert and medium but without cells). Moreover, a permeability assay with sodium fluorescein (NaF) was performed on days 7-11 (n = 3). The basolateral compartment of a transparent 24-well plate consisted of PBS only (1′000 μL), while NaF (5 μM in PBS; 200 μL) was added to the apical compartment for 60 min. The control consisted of cell-free inserts (n = 3). Basolateral samples (50 μL) were directly added to black Nunc MaxiSorp microtiter plates, and concentrations were determined using a Cytation 3 fluorescence microplate reader (BioTek Instruments; excitation wavelength 460 nm; emission wavelength 515 nm).
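A small helper expressing the TER correction described above (blank-insert subtraction and normalization to the growth area); the function name and argument units are illustrative.

```python
def corrected_ter(r_total_ohm: float, r_blank_ohm: float, area_cm2: float = 0.33) -> float:
    """Blank-corrected TER normalized to the insert growth area (Ohm*cm2).

    r_total_ohm: resistance measured with cells grown on the insert
    r_blank_ohm: resistance of the same insert and medium without cells
    """
    return (r_total_ohm - r_blank_ohm) * area_cm2

# e.g. corrected_ter(150.0, 30.0) -> 39.6 Ohm*cm2 (above the 30 Ohm*cm2 threshold)
```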
Immunocytochemical staining of cells on inserts
BeWo b30 cells were stained with fluorescent probes for nuclei and cytoplasm as follows. Cells were fixed in 4% paraformaldehyde (Artechemis) for 20-30 min. Afterwards they were permeabilized and blocked with 0.3% Triton-X-100 (Sigma) in 1% BSA (Thermo Scientific) in PBS (Gibco) for another 20-30 min at room temperature on a shaker (50 rpm). Then, cells were incubated at room temperature on a shaker (50 rpm) for 4-6 h, wrapped in tinfoil, with a 1:10′000 solution of 4′,6-diamidino-2-phenylindole (DAPI; Sigma) and a 1:400 phalloidin-rhodamine (Invitrogen) solution diluted with 0.1% Triton X-100 in 1% BSA/PBS. Cells were then extensively washed with PBS, and the insert membranes were embedded between glass cover slides using Mowiol 4-88 (Sigma-Aldrich) to obtain flat membranes. The images were acquired with a Leica CTR 6000 microscope (Leica Microsystems) and the corresponding Leica Application Software X.
Permeability assay
BeWo b30 cells were cultured in transparent 24-well plates under the same conditions as mentioned above and transferred to the cellZscope ® at day 8 to record TER and C cl values. After 24 h (day 9) and when a TER of 30-60 Ωcm 2 was reached and C cl was between 0.5 and 5.0 μF/cm 2 , the permeability assay was initiated by adding valerenic acid (5 μM; in HBSS with 4% BSA) to the apical compartment, while the basolateral compartment contained 1′000 μL of HBSS only. In addition, antipyrine (5 μM) was added along with the test substance as a control. Samples (150 μL apical, 800 μL basolateral) were collected at each time point (0, 15, 30, 60 min; one insert per time point). After 60 min, a part of the inserts was transferred back to the cellZscope ® to monitor TER and C cl values during another 24 h. The cells grown on the other part of the inserts were quickly washed with 200 μL of cold HBSS and then lysed with 700 μL acetonitrile (100 μL apical, 600 μL basolateral) for 40 min on an orbital shaker (450 rpm, room temperature) to determine the test substance cell contents.
Calculation of permeability coefficients and recovery
Apparent permeability coefficients (P app) were calculated according to the following Eq. 1:

P app = (ΔQ/Δt) / (A × C A0) (1)

where ΔQ/Δt is the rate of amount transported to the receiver compartment, A is the membrane surface area (0.33 cm2), and C A0 is the initial concentration in the apical compartment.
The clearance values were calculated according to the following Eq. 2 (Neuhaus et al., 2008):

Cl n = (C Bn × V B + C Bn-1) / C A (2)

where C Bn and V B are the concentration and volume in the basolateral compartment at a specific timepoint (n), respectively; C A and V A are the concentration and volume in the apical compartment, respectively, and C Bn-1 is the total amount of substance found in the basolateral compartment up to the previous timepoint (n-1).
Recovery (mass balance) of each compound was calculated according to the following Eq. 3:

Recovery (%) = 100 × (C Af × V A + C Bf × V B + C Cf × V C) / (C A0 × V A) (3)

where C Af, C Bf and C Cf are the final compound concentrations in the apical, basolateral, and cellular compartments, respectively; C A0 is the initial concentration in the apical compartment, and V A, V B and V C are the volumes in the respective compartments.
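The sketch below implements Eqs. 1 and 3 as reconstructed above (Eq. 2 is omitted because its exact form is less certain from the text). The default compartment volumes follow the assay description (200 µL apical, 1,000 µL basolateral, 700 µL lysate); everything else is illustrative.

```python
def papp(dq_dt: float, c_a0: float, area_cm2: float = 0.33) -> float:
    """Eq. 1: apparent permeability coefficient (cm/s).

    dq_dt: transport rate into the receiver compartment (amount per second)
    c_a0:  initial apical concentration (amount per cm^3)
    """
    return dq_dt / (area_cm2 * c_a0)

def recovery_percent(c_af: float, c_bf: float, c_cf: float, c_a0: float,
                     v_a: float = 0.2, v_b: float = 1.0, v_c: float = 0.7) -> float:
    """Eq. 3: mass balance as % of the initially applied amount.

    All concentrations must share one unit; volumes are in mL.
    """
    recovered = c_af * v_a + c_bf * v_b + c_cf * v_c
    return 100.0 * recovered / (c_a0 * v_a)
```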
LC-MS/MS analysis
Instrument and chromatographic conditions

U(H)PLC-MS/MS measurements were performed on a 6460 Triple Quadrupole MS system with a 1290 Infinity LC system equipped with a binary capillary pump G4220A, column oven G1316C, and multisampler G7167B (all Agilent). Quantitative analysis by MS/MS was performed with electrospray ionization (ESI) in MRM (multiple reaction monitoring) mode. Desolvation and nebulization gas was nitrogen. MS/MS data were analyzed with Agilent MassHunter Workstation software version B.07.00. The temperature of the autosampler was 10°C. An Acquity UPLC HSS T3 column (1.8 μm; 100 mm × 2.1 mm) (Waters) was used for separation of the analyte and the internal standard (IS), except for hyperforin and its IS warfarin, where an Acquity UPLC CSH Phenyl-Hexyl column (1.7 μm; 50 mm × 2.1 mm) was used. The mobile phase consisted of purified water with 5% MeCN containing 0.1% formic acid (A1) and MeCN containing 0.1% formic acid (B1). Analyses of hypericin and valerenic acid including the IS digoxin were performed on an Acquity UPLC system containing a binary pump, autosampler, and column heater, connected to an Acquity TQD (all Waters). Desolvation and nebulization gas was nitrogen, and collision gas was argon. Flow rate for analysis of all compounds was 0.4 mL/min. The column used was an Acquity UPLC BEH C18 (1.7 μm; 50 mm × 2.1 mm) (Waters). The autosampler temperature was set at 10°C, and the column temperature at 55°C. The mobile phase consisted of purified water containing 0.1% NH4OH at pH 10.7 (A2) and MeCN/purified water (9:1) containing 0.1% NH4OH (B2).
Standards and stock solutions
Stock solutions of analytes and IS (Table 1) were prepared as previously described (Spiess et al., 2022a).
Stability assay in placental homogenate
The stability of hyperforin, hypericin, and valerenic acid was assessed in PBS, PM, and placental homogenates (prepared according to Spiess et al., 2022a) over a period of 360 min, as previously published (Riccardi et al., 2020). In short, after spiking the various matrices with the study compounds, samples were either processed immediately for U(H)PLC-MS/MS analysis (C0) or kept at 4°C/37°C on an orbital shaker (600 rpm) for 360 min before processing for U(H)PLC-MS/MS analysis. Samples were processed via solid phase extraction or protein precipitation prior to analysis.
Data processing and calculations
Concentrations in the placental perfusion profiles (Figure 2) and system adherence tests (Figure 4) were expressed as a percentage (%) of the maternal concentration at the beginning of the perfusion, whereby the maternal concentration, measured in the maternal reservoir, was adjusted to the total volume of the full maternal circuit (maternal reservoir and dead volume of the system). The FM concentration ratio (FM ratio; Figure 3) was calculated for each timepoint and plotted against the perfusion time (min). The final recovery (%) is the sum of the amounts of study compound present in both perfusates at the end of a perfusion and in the samples removed during the experiment. Glucose consumption and lactate production are presented as the sum of changes (from the beginning to the end of perfusion) in total content (μmol) in both circuits, normalized by total perfusion time (min) and weight (g) of the perfused cotyledon. The net release rate of the placental hormones β-hCG (U) and leptin (ng) during the placental perfusion was also normalized by total perfusion time (min) (see Spiess et al., 2022b for equations).
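A minimal computational sketch of these normalizations follows; the direction of the volume adjustment for the initial maternal concentration is our reading of the text, and the dead-volume value is a placeholder.

```python
def percent_of_initial(c_t: float, c_m0_reservoir: float,
                       v_reservoir: float = 100.0, v_dead: float = 20.0) -> float:
    """Concentration at time t as % of the starting maternal concentration,
    after diluting the measured reservoir concentration over the full circuit
    (reservoir + dead volume); adjustment direction and the 20 mL dead volume
    are assumptions."""
    c_initial = c_m0_reservoir * v_reservoir / (v_reservoir + v_dead)
    return 100.0 * c_t / c_initial

def fm_ratio(c_fetal: float, c_maternal: float) -> float:
    """Fetal-to-maternal concentration ratio at one timepoint."""
    return c_fetal / c_maternal

def metabolic_rate(delta_umol: float, minutes: float, cotyledon_g: float) -> float:
    """Glucose consumption or lactate production in umol/g/min."""
    return delta_umol / (minutes * cotyledon_g)
```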
Statistical data analysis
Multiple group comparisons were performed for the glucose consumption, lactate production, and β-hCG and leptin production using the non-parametric Mann-Whitney U test with GraphPad Prism (version 9.3.1 for macOS; GraphPad Software). Probability values of p ≤ 0.05 were considered statistically significant. Data are expressed as mean ± SD of three to four independent experiments.
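For illustration, the group comparison described above can be run as follows; the numerical values are placeholders, not study data.

```python
from scipy.stats import mannwhitneyu

control = [0.39, 0.41, 0.36]        # e.g. glucose consumption, control runs
treated = [0.35, 0.44, 0.38, 0.40]  # e.g. runs with a study compound

stat, p = mannwhitneyu(control, treated, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}, significant = {p <= 0.05}")
```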
Results
In silico predictions of physicochemical properties of the test substances

Hyperforin, hypericin and valerenic acid (Figure 1) exhibit rather differing physicochemical properties (Table 2), as determined with the software packages QikProp (Schrodinger LLC) and ACD/Percepta (ACD/Labs release 2020.1.1). Hyperforin and hypericin both have MWs that are markedly higher than that of valerenic acid (and the positive control antipyrine). Hyperforin and hypericin are predicted to be substantially more lipophilic than valerenic acid, which in turn is more lipophilic than the positive control antipyrine, as reflected by the cLogD pH7.2 values (Table 2). Hypericin has a markedly higher number of hydrogen bond donors than all other compounds. Finally, the pKa values of hyperforin and valerenic acid differ as well. At the pH of 7.2 used in the perfusion experiments, hypericin is present in charge states between 0 and −2, while hyperforin and valerenic acid occur essentially in the single charge state of −1. In contrast, antipyrine is fully uncharged (Table 2).
Ex vivo characterisation of transplacental transfer
The transplacental transfer of hyperforin, hypericin and valerenic acid resulted in three distinctly different profiles (Figure 2). Hyperforin showed little transfer to the fetal circuit, despite a decrease in the maternal circuit. After 240 min of perfusion, only 7.2% of the initial concentration appeared in the fetal compartment, while 68.6% of hyperforin remained on the maternal side (Figure 2A). Hypericin did not cross the human placental barrier within 240 min, while the concentration in the maternal compartment decreased (Figure 2C). At the same time, antipyrine as a connectivity control reached an equilibrium after 120 min (Figures 2B, D). For valerenic acid, a gradual increase in the fetal compartment and a concomitant decrease in the maternal compartment was observed, reaching an equilibrium after 240 min (44.0% [fetal] vs. 45.0% [maternal] of initial concentration; Figure 2E). The integrity of maternal and fetal circuits was again confirmed with antipyrine (Figure 2F). Perfusion profiles with absolute concentrations can be found in Supplementary Figure S1.
The FM ratio (Figure 3) of hyperforin reached a maximum of 0.18 after 180 min, thereby confirming that only minor amounts crossed the placental barrier. In contrast, the FM ratio of hypericin was zero, as the compound could not be detected in the fetal circuit. For valerenic acid, the FM ratio was 1.01 after 240 min, reflecting the identical concentrations in the fetal and maternal compartments. Antipyrine as a positive control reached an equilibrium between fetal and maternal concentrations after 90 min (FM ratio of 1.03), which remained unchanged during the course of the experiment (FM ratios of 0.98, 1.05, 0.92 at 120, 180, and 240 min, respectively).
Recovery and stability of the test substances
The system adherence tests (empty perfusions; Figure 4) revealed that negligible proportions of hypericin, valerenic acid and antipyrine were lost over a period of 240 min (calculated values of 3.8%, 5.6%, and −2.2% of initial concentration, respectively). The relative amount of hyperforin which adhered to the perfusion equipment after 240 min was significantly higher (65.4%). Apart from the system adherence test, several aspects must be considered for the recovery calculations of study compounds during placental perfusions (Figure 5). After 240 min of perfusion, the compounds were distributed in the two compartments (fetal vs. maternal) in the following proportions: antipyrine (27.8% vs. 29.5%), hyperforin (5.3% vs. 52.2%), hypericin (0.1% vs. 36.7%), and valerenic acid (33.1% vs. 34.4%). As shown in Supplementary Table S1 and in Figure 5, 17.2%-24.1% of the test compounds were removed by sampling during the perfusion, corresponding to one-fourth of the final recovery. When assessing the final recovery in the placenta perfusions (without considering the results of the independent system adherence test), the following values were obtained: 79.2% ± 2.6% for antipyrine, 78.4% ± 23.0% for hyperforin, 54.0% ± 13.7% for hypericin, and 91.5% ± 19.1% for valerenic acid.
Limited stability of study compounds in the various matrices could lead to misleading results. Therefore, their stability was assessed over 360 min in three different matrices that were relevant for our experiments (PBS, PM, and three placental homogenates [Donors 1-3]) and at two temperatures (4°C, 37°C) (Figure 6). Hyperforin and hypericin were less stable over 360 min at 4°C and 37°C in PBS compared to PM, while the stability data of valerenic acid were very comparable in PBS and PM (approx. 100%). In addition, the solubility of hyperforin was higher in PM than in PBS (121% vs. 100%, Supplementary Material).

FIGURE 4
Adherence of study compounds in a 240 min system adherence test (circulation of study compounds through an empty perfusion chamber comprising only the maternal circuit). All compounds were tested individually in three independent experiments, and data are expressed as mean ± SD. Displayed is the percentage (%) of compound (of the initially analyzed concentration in the maternal sample) that adheres to the equipment: antipyrine (−2.2% ± 7.0%), hyperforin (65.4% ± 0.7%), hypericin (3.8% ± 13.1%), and valerenic acid (5.6% ± 1.9%) at 240 min.
FIGURE 5
Recovery of study compounds in the human ex vivo placental perfusion system, expressed as percentage (%) of amount analyzed in the maternal sample at the beginning of the perfusion. The final recovery was calculated as the sum of compound present in fetal and maternal perfusates at the end of a perfusion, and the amounts sampled during the perfusion from fetal and maternal perfusates (Supplementary Table S1). All data are represented as mean ± SD of three to four independent experiments.
Effects on placental function and histology
Possible effects of the study compounds on placental metabolic activity and hormonal production were also investigated. The metabolic activity of all perfused placental tissues was similar, as neither glucose consumption nor concomitant lactate production were affected by the study compounds (Figure 7A). With antipyrine (from control perfusions), the total glucose consumption and lactate production during the perfusion were 0.39 and 0.27 µmol/g/min, respectively.
FIGURE 6
Stability data for hyperforin (A), hypericin (B), and valerenic acid (C) expressed as percentage (%) of the initial concentration (C0) in PBS. The stability test was performed for 360 min at two different temperatures (4°C and 37°C) in three different matrices (PBS, perfusion medium (PM), and placental homogenates from three different donors). Differences due to matrix effects were excluded in a separate experiment (Supplementary Figure S2). All data are represented as mean ± SD.
Beta-human chorionic gonadotropin (β-hCG) and leptin production were determined as an additional measure for placental function and found to be somewhat lower in the presence of all compounds (Figure 7B). However, neither hyperforin, hypericin, nor valerenic acid inhibited their production in a statistically significant manner. This implied that the tissue of all placentae retained its functionality throughout the ex vivo perfusion period. A β-hCG production of 1.44 U/min and a leptin production of 2.26 ng/min were observed in control perfusions with antipyrine only. Detailed histopathological examination of the perfused tissue revealed that microscopic effects of perfusion were seen in addition to the macroscopically apparent pale tissue (Table 3): i) villous vessels of the perfused side were ≥80% empty (the non-perfused area was ≤20% empty of blood), ii) the intervillous space of perfused tissue was ≥70% empty of blood (vs. 30%-80% in the non-perfused area) and equally or more dilated in contrast to the non-perfused side, iii) formation of hydropic villous changes was found more frequently in the perfused (5%-40%) than in the non-perfused areas (0%-5%), and iv) a clear transition between perfused and non-perfused tissue was observed in most of the cases. The endothelium in the perfused tissue was still viable after 360 min of perfusion. Other histopathological observations that argue against damage to placental tissue after perfusions with the test substances (hyperforin, hypericin, valerenic acid) were i) a low percentage of thrombi in villous vessels (up to 5%) of perfused tissue, ii) no thrombi detectable in vessels of stem villi (perfused and non-perfused), iii) trophoblast vacuolization in perfused areas occurred in a proportion of 0%-30% and was substance-independent, although some but not all (two out of four) cotyledons perfused with valerenic acid showed higher proportions (80%-90%), iv) no ruptures of villous vessels, and v) no extravasations into villous stroma. No signs of inflammation were found in any of the examined perfused tissue areas, as neither bacteria nor neutrophils were present in the villous vessels and intervillous spaces. In addition, the assessment of global placental pathology was unremarkable, with no evidence of fetal/maternal vascular malperfusion, villous immaturity, chronic/acute villitis, chronic deciduitis, chorioamnionitis, or bacteria in the non-perfused areas of the placenta.
In vitro permeability assays
All results shown so far were obtained with term placentae. To better evaluate the transplacental passage of valerenic acid, the in vitro BeWo b30 Transwell model was used. Hyperforin and hypericin were not suited for these experiments as they did not cross the cell-free inserts to a sufficient extent (data not shown). In our hands, a dense BeWo b30 cell layer was obtained 9 days after cell seeding on semipermeable insert membranes. This timepoint was chosen because i) translocation of NaF (a marker of paracellular passive diffusion) was minimal, with a basolateral amount of 2.8% of the initial NaF concentration (Supplementary Figure S3), ii) TER values reached a value of ≥30 Ωcm2 on day 9 after cultivation, which was markedly higher than on previous days (Supplementary Figure S4A), and the Ccl was below the expected 5 μF/cm2 (Supplementary Figure S4B), and iii) staining of nuclei (blue) and actin (red) showed gapless growth of BeWo b30 cells on cell culture inserts (Supplementary Figure S5). In this Transwell model, valerenic acid did not cross the placental cell layer within 60 min to reach detectable concentrations. In contrast, valerenic acid could pass the semipermeable cell-free insert to the same extent as the positive control antipyrine (Figure 8A; 28.3 μL within 60 min). Antipyrine crossed from the apical to the basolateral compartment through the placental cell layer and the semi-permeable cell-free insert membrane at the same clearance rate (Figure 8B; 35.5 μL and 33.5 μL, respectively). The Papp was zero for valerenic acid, and 28.0 × 10−6 cm/s for antipyrine. The recoveries after 60 min were 73.7% ± 11.9% for valerenic acid and 81.9% ± 14.2% for antipyrine, and included the final amounts in the apical, basolateral, and cellular compartments (Supplementary Figure S6). Mean TER and Ccl values were similar before and after the permeability experiment with valerenic acid and antipyrine (Supplementary Figure S7).
FIGURE 7
Assessment of tissue viability and functionality during the ex vivo human placental perfusion. (A) Glucose consumption and lactate production under exposure to study compounds and antipyrine (from control perfusions). Displayed are the changes between beginning and end of the perfusion in fetal and maternal circuits. Data are normalized by the total perfusion time (min) and perfused cotyledon weight (g). All data are represented as mean ± SD of three to four independent experiments. (B) Beta-human choriogonadotropin (β-hCG) and leptin tissue production of perfusions with study compounds and antipyrine (from control perfusions). Displayed is the net release rate of placental hormones during the placental perfusion, normalized by the total perfusion time (min). All data are represented as mean ± SD of three to four independent experiments (except for the leptin value of antipyrine, where only two values are included). No statistically significant differences were found between the groups (p > 0.05 in all cases).
TABLE 3
Detailed histopathological evaluation assessing the microscopic effects of human ex vivo placental perfusions with hyperforin (n = 4), hypericin (n = 3) and valerenic acid (n = 4), and the damage of placental tissue in perfused areas compared to non-perfused areas.

Discussion

Our results from the ex vivo human term placental perfusion model showed that only minor amounts of hyperforin were transported to the fetal circulation, resulting in a very low FM ratio. Hypericin did not cross the placental barrier, while valerenic acid equilibrated between the maternal and fetal compartments. In addition, metabolic, functional, and histopathological properties of placentae during perfusions were not significantly altered by the test substances. Observations performed with the in vitro Transwell model with human placental cells indicated that valerenic acid was unable to cross the cell monolayer, thereby suggesting that the compound may not cross cytotrophoblast layers.
Our observations from the ex vivo human term placental perfusion model fit well with the predicted physicochemical properties of the investigated molecules and an expected transport through the placental barrier by passive diffusion. It is likely that the relatively high MW (>500) of hyperforin and hypericin, together with their ionization states at pH 7.2, hindered their transfer [compare with (Syme et al., 2004;Tetro et al., 2018)]. In contrast, the markedly smaller valerenic acid could equilibrate between maternal and fetal compartments almost as quickly as antipyrine (MWs of 234.3 and 188.2, respectively). Although our results are in line with a transfer by passive diffusion, a possible involvement of additional mechanisms in the transplacental transfer of hyperforin, hypericin and valerenic acid requires further investigation. The results of our stability experiments, revealing some degradation of hyperforin, and to a smaller extent hypericin, at 37°C but not at 4°C, suggest some metabolization of these compounds by placental enzymes.
Strengths and limitations
Plant extracts consist of a variety of different compounds, some of which are present in low amounts only. Therefore, it was crucial to develop sensitive U(H)PLC-MS/MS methods capable of detecting the very low concentrations of analytes that one would expect in vivo upon oral ingestion of phytomedicines. However, it should be noted that confident statements can only be made within the calibration range (see Materials and methods). A limitation of the study is that concentrations below the limit of quantification had to be assumed to be zero. Ex vivo placental perfusion is to date the only experimental model preserving the structural integrity and cell-cell organisation of the organ. It most closely mimics the in vivo situation and, therefore, provides good predictions for placental transfer in vivo. A disadvantage of the model is that it represents the situation at term, when transplacental transfer is known to be maximal. For compounds that were not transferred in this model (hyperforin and hypericin), one can reasonably assume that they are also not transferred at earlier stages of pregnancy. The in vitro Transwell model that mimics the cytotrophoblast monolayer (Bode et al., 2006;Vähäkangas and Myllynen, 2006) provided valuable information for valerenic acid. We opted for not testing hyperforin and hypericin in the Transwell model, since under our experimental conditions these molecules did not cross the membranes of the inserts (in the absence of a cell layer) in a measurable way. The transfer across the membranes was not significantly increased when using a high protein concentration in the medium (up to 4% BSA; data not shown). High protein concentrations have been used to improve solubility and, hence, transfer of poorly soluble compounds (Füller et al., 2018). Similar limitations of such permeability experiments with hypericin have been previously described in the Caco-2 cell model (Verjee et al., 2015). A limitation common to both models is that they cannot fully represent the in vivo situation, as they do not take into account aspects such as dissolution, absorption, distribution, metabolism, and excretion of the compounds (Poulsen et al., 2009;Hutson et al., 2011;Myllynen and Vahakangas, 2013), which strongly influence transplacental transfer. Finally, the two methods only allow a study of short-term toxic effects on placental tissue and cells, while in pregnancy the placentae are exposed to the substances for extended periods. Especially with compounds that accumulate in cell membranes, such as hypericin [(Verjee et al., 2015); own unpublished observations], this might lead to an underestimation of possible undesired effects.
Recovery of hyperforin, hypericin and valerenic acid
Experiments with hyperforin and hypericin required special attention due to their high lipophilicity and poor solubility. In the absence of biological material, hyperforin showed a significant loss in the empty perfusions (65.4% over a 4-h period), which could be due to adsorption to tubing/equipment or precipitation. The higher recovery in the presence of placental tissue could be explained by the higher protein content in the system. The presence of protein is needed to stabilize and solubilize this compound, as described in the literature (Füller et al., 2018), and this was reflected by the high stability in PM but not in the presence of PBS (Figure 6). The amounts in the fetal and maternal compartments, and the amounts removed by sampling, were comparable for hyperforin (78.4%) and for antipyrine (79.2%). In addition, the loss of hyperforin in the incubations with placental homogenates may indicate a possible metabolisation. The lowest final recovery was found for hypericin (54.0%), although there was only a small loss due to system adsorption (3.8% over a 4-h period). Again, stability was good in PM but not in PBS (Figure 6). However, the recovery data did not take into account the percentage in placental tissue. Interestingly, fluorescence microscopic images showed a considerable accumulation of hypericin in placental cells (data not shown), which is similar to previous observations with Caco-2 cells (Verjee et al., 2015). In the placental perfusion model, valerenic acid showed good stability and high recovery, thereby facilitating data interpretation.
St. John's wort: Comparison with previous in vitro studies
Previous in vitro studies showed that extracts from St. John's wort had no negative impact on placental cells at concentrations up to 30 µg/mL (cytotoxicity, apoptosis) or 100 µg/mL (genotoxicity, metabolic activity, and influence on placental cell differentiation). Our present data with term placentae suggest that a possible fetal exposure to hypericin and hyperforin is likely minimal. This is particularly important, as transplacental transport is maximal at term due to a decrease of cell layer thickness and number of cell layers towards the end of pregnancy (van der Aa et al., 1998;Vähäkangas and Myllynen, 2006). In vitro, hyperforin showed no effects on viability, metabolic activity, and induction of placental cell differentiation at concentrations up to 30 μM. However, hyperforin led to increased apoptosis and genotoxic effects starting at concentrations of 3 and 10 μM, respectively, and inhibited FSK-induced placental cell differentiation at concentrations of ≥1 µM (Spiess et al., 2022b). It should be noted that these test concentrations were significantly higher than reported plasma concentrations in humans. Upon oral administration of a single dose of 300 mg St. John's wort extract containing 14.8 mg hyperforin, a maximum plasma concentration of 150 ng/mL (approx. 0.28 µM) was reached (Biber et al., 1998). Cmax values of 83.5 ng/mL (≈0.16 µM) and 122 ng/mL (≈0.23 µM) hyperforin were determined after single dose administration of 612 mg and 900 mg dry extract, respectively (Schulz et al., 2005a; 2005b). However, the hyperforin content in commercially available products can vary considerably (Schäfer et al., 2019). Given that low amounts of hyperforin can possibly cross the transplacental barrier, and our recent data on inhibition of cell differentiation at ≥1 µM concentrations in BeWo cells (Spiess et al., 2022b), it may be prudent to resort during pregnancy to products with a low hyperforin content. Hypericin lowered the viability of placental cells already at 1 μM concentrations, and apoptotic and genotoxic effects were seen at concentrations of 1 and 10 μM, respectively (Spiess et al., 2022b). The amount of hypericin in commercially available drugs varies. For products marketed in Switzerland and Germany, amounts ranging from 0.08 mg to 0.21 mg per 100 mg of tablet have been reported (Schäfer et al., 2019). In human volunteers, plasma concentrations of 2.2 ng/mL hypericin (≈4.36 nM) have been found (Jackson et al., 2014). The plasma levels of hypericin thus are significantly lower than the concentrations found toxic in BeWo cells (Spiess et al., 2022b). However, plasma levels of hypericin may increase upon coadministration of certain other drugs (Jackson et al., 2014).
Valerian: Comparison with previous in vitro studies
At concentrations up to 30 µg/mL, no signs of cytotoxicity or apoptosis, and at concentrations up to 100 µg/mL, no signs of genotoxicity, alteration of metabolic activity or placental cell differentiation were observed in BeWo cells for the valerian extract. Valerenic acid was not permeable in the in vitro Transwell model mimicking the continuous cytotrophoblast layers that are likely to play a role in the placental barrier at early stages of pregnancy. However, it reached an equilibrium between the fetal and the maternal circulation in the ex vivo placental perfusion model representative for late stages of pregnancy. With respect to valerenic acid, concentrations up to 30 μM did not lower the viability of BeWo b30 cells, and no increase in apoptosis or genotoxicity, and no negative effect on metabolic activity and cell differentiation were observed (Spiess et al., 2022b). Valerenic acid contents ranging from 1.21 mg/g to 2.46 mg/g product have been reported in products marketed in Australia (Shohet et al., 2001), while 0.57 mg-2.20 mg valerenic acid per tablet/capsule have been found in products marketed in Switzerland (Winker et al., manuscript under review). For valerenic acid, maximal plasma concentrations of 2.3 ng/mL (≈9.82 nM) to 3.3 ng/mL (≈14.08 nM) have been reported (Anderson et al., 2005;Anderson et al., 2010). Considering that valerenic acid did not affect cell viability in BeWo b30 cells at concentrations up to 30 μM (Spiess et al., 2022b), the valerenic acid content in products, and the reported plasma concentrations, there appears to be a large safety margin.
Final statement
Hyperforin could only cross the complex placental barrier to a very small extent, while hypericin appeared to be non-permeable. Valerenic acid crossed the placental barrier at term, when permeability is higher, but not in the in vitro BeWo transfer model representative of a cytotrophoblast monolayer. Taken together, our data suggest that when treating mild mental disorders with St. John's wort and valerian extracts, fetal exposure to hypericin and hyperforin and, at early stages of pregnancy, to valerenic acid is likely to be low. So far, our study included only single compounds that are considered relevant for the pharmacological properties of St. John's wort and valerian. Given that the entire extracts, and not just single compounds, are considered the active ingredient of phytomedicines, the possible influence of the extract matrix on placental permeability of these compounds should be evaluated. Moreover, recent in vitro data in BeWo cells with St. John's wort and valerian extracts, and with hyperforin, hypericin, and valerenic acid (Spiess et al., 2022b), suggest no toxicity at concentrations to be expected in humans at the recommended extract doses, but caution against using products containing high amounts of hyperforin.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the Canton of Zurich (KEK-StV73 Nr. 07/07; 21 March 2007). The patients/participants provided their written informed consent to participate in this study.
Author contributions
DS performed the ex vivo placental perfusion and in vitro Transwell experiments, performed data analysis, interpretation, and visualisation, and wrote parts of the draft manuscript. VA developed bioanalytical methods, performed all bioanalyses, and wrote parts of the manuscript. AC developed bioanalytical methods.
JR was involved in the establishment of the in vitro model and helped with all permeability experiments. AT conducted stability testing, performed data analysis, and supervised the bioanalytical method development together with MO; SK helped DS with data analysis and paper editing. MR was responsible for the histopathological examinations. APS-W, MH, and OP designed the study and supervised DS, VFA, and AC, respectively. All authors were involved in data interpretation and reviewing of the manuscript and agreed with the final version.
Funding
This work has been financially supported by the Swiss National Science Foundation (Sinergia project CRSII5_177260; Herbal Safety in Pregnancy). | 2023-04-01T13:05:36.900Z | 2023-03-31T00:00:00.000 | {
"year": 2023,
"sha1": "2342eecfa4e127cda7ad886d6ae70185893a9cf6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2342eecfa4e127cda7ad886d6ae70185893a9cf6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
253100057 | pes2o/s2orc | v3-fos-license | Bifunctional anti-PD-L1/TGF-βRII agent SHR-1701 in advanced solid tumors: a dose-escalation, dose-expansion, and clinical-expansion phase 1 trial
Background Dual inhibition of PD-1/PD-L1 and TGF-β pathways is a rational therapeutic strategy for malignancies. SHR-1701 is a new bifunctional fusion protein composed of a monoclonal antibody against PD-L1 fused with the extracellular domain of TGF-β receptor II. This first-in-human trial aimed to assess SHR-1701 in pretreated advanced solid tumors and find the population who could benefit from SHR-1701. Methods This was a dose-escalation, dose-expansion, and clinical-expansion phase 1 study. Dose escalation was initiated by accelerated titration (1 mg/kg q3w; intravenous infusion) and then switched to a 3+3 scheme (3, 10, 20, and 30 mg/kg q3w and 30 mg/kg q2w), followed by dose expansion at 10, 20, and 30 mg/kg q3w and 30 mg/kg q2w. The primary endpoints of the dose-escalation and dose-expansion parts were the maximum tolerated dose and recommended phase 2 dose. In the clinical-expansion part, selected tumors were enrolled to receive SHR-1701 at the recommended dose, with a primary endpoint of confirmed objective response rate (ORR). Results In total, 171 patients were enrolled (dose-escalation: n=17; dose-expansion, n=33; clinical-expansion, n=121). In the dose-escalation part, no dose-limiting toxicity was observed, and the maximum tolerated dose was not reached. SHR-1701 showed a linear dose-exposure relationship and the highest ORR at 30 mg/kg every 3 weeks, without obviously aggravated toxicities across doses in the dose-escalation and dose-expansion parts. Combined, 30 mg/kg every 3 weeks was determined as the recommended phase 2 dose. In the clinical-expansion part, SHR-1701 showed the most favorable efficacy in the gastric cancer cohort, with an ORR of 20.0% (7/35; 95% CI, 8.4–36.9) and a 12-month overall survival rate of 54.5% (95% CI, 29.5–73.9). Grade ≥3 treatment-related adverse events occurred in 37 of 171 patients (22%), mainly including increased gamma-glutamyltransferase (4%), increased aspartate aminotransferase (3%), anemia (3%), hyponatremia (3%), and rash (2%). Generally, patients with PD-L1 CPS ≥1 or pSMAD2 histochemical score ≥235 had numerically higher ORR. Conclusions SHR-1701 showed an acceptable safety profile and encouraging antitumor activity in pretreated advanced solid tumors, especially in gastric cancer, establishing the foundation for further exploration. Trial registration ClinicalTrials.gov, NCT03710265 Supplementary Information The online version contains supplementary material available at 10.1186/s12916-022-02605-9.
Background
Immune checkpoint inhibitors targeting programmed death receptor 1 (PD-1) or its ligand (PD-L1) have been approved for treating multiple advanced or metastatic tumors. However, the objective response rate (ORR) in the all-comer population is less than 20% in most tumor types [1]. Local immunosuppressive factors within the tumor microenvironment can induce resistance to PD-1/PD-L1 blockade [2]. Combinations with chemotherapy, antiangiogenic inhibitors, or other immunotherapies are effective strategies to overcome this resistance, but carry risks of additive toxicities. Bifunctional antibodies have the potential to resolve these issues.
Transforming growth factor-β (TGF-β)-mediated signaling promotes tumor cell invasiveness, migration, and metastasis [3]. In addition, TGF-β is crucial to creating an immunosuppressive tumor microenvironment: it inhibits T lymphocyte proliferation, induces naïve T-cell differentiation into Tregs and Treg expansion, reduces the production of natural killer cells, and promotes the differentiation and expansion of myeloid-derived suppressor cells, consequently enhancing immune suppression [4][5][6]. The independent but complementary immunosuppressive functions of the PD-1/PD-L1 and TGF-β pathways make dual inhibition of the two pathways a potent therapeutic strategy. Even in immune-excluded tumors, blocking TGF-β can enable therapeutic responses to immune checkpoint inhibitors [7].
SHR-1701 is a new bifunctional fusion protein composed of a monoclonal antibody against PD-L1 fused with the extracellular domain of TGF-β receptor II. In vitro and preclinical studies showed that SHR-1701 had a high affinity for PD-L1, TGF-β1, and TGF-β3 and exhibited high PD-L1 target occupancy. We initiated this first-in-human study to assess the safety, tolerability, pharmacokinetics, pharmacodynamics, and preliminary antitumor activity of SHR-1701 in multiple advanced solid tumors.
Study design and participants
This was a multicenter, first-in-human, 3-part, phase 1 trial of SHR-1701 done at 19 hospitals in China. The study was composed of dose-escalation and dose-expansion parts in advanced solid tumors, followed by a clinical-expansion part in selected tumors including biliary tract cancer (BTC), head and neck squamous cell carcinoma (HNSCC), gastric cancer (GC), hepatocellular carcinoma (HCC), pancreatic cancer, renal cell carcinoma (RCC), urothelial carcinoma (UC), and esophageal cancer (ClinicalTrials.gov, NCT03710265; Additional file 1: Fig. S1).
Patients were eligible for the dose-escalation and dose-expansion parts if they had pathologically confirmed advanced or metastatic solid tumors that had progressed on or were intolerant to standard therapies, or for which no standard therapies were available. Patients enrolled in the clinical-expansion cohort of BTC should (1) have progressed on or be intolerant to at least one line of systemic treatment for advanced or metastatic disease, or have progressed on or within 6 months after completion of adjuvant therapy; and (2) have received gemcitabine combined with platinum- or fluoropyrimidine-based agents as the last regimen before study entry. For patients enrolled in other clinical-expansion cohorts, no more than two lines of prior systemic treatment for advanced or metastatic disease were allowed.
Additional inclusion criteria were age between 18 and 75 years, Eastern Cooperative Oncology Group performance status of 0 or 1, at least one measurable lesion according to Response Evaluation Criteria in Solid Tumors (RECIST; v1.1), life expectancy of at least 12 weeks, and adequate hematological, hepatic, and renal function. Patients in clinical-expansion cohorts should provide fresh tumor tissues or archival samples that were obtained within 12 months before study treatment. Key exclusion criteria included prior exposure to any inhibitor against PD-1, PD-L1, CTLA-4, and/or TGF-β; any anti-cancer treatment within 28 days prior to the first study dose; uncontrolled or symptomatic central nervous system metastases; active or a history of autoimmune disease that was expected to relapse; and immunosuppressive therapy within 7 days prior to the first study dose.
The study was approved by the Ethics Committee of each study center and conducted in accordance with the Good Clinical Practice and Declaration of Helsinki. All patients provided written informed consent. All authors had access to the study data and reviewed and approved the final manuscript.
Procedures
The dose-escalation part was initiated by an accelerated titration design at 1 mg/kg every 3 weeks, in which only one patient was required if none of the following events occurred during the 21-day tolerability observation: (1) grade ≥2 rash, nausea, vomiting, diarrhea, or fatigue lasting ≥3 days after symptomatic treatment, and any other grade ≥2 non-hematological toxicities; (2) grade ≥3 anemia, grade ≥2 decreased platelet count, grade ≥2 decreased neutrophil count, and any other grade ≥3 hematological toxicities. Otherwise, an additional two to five patients were needed at the 1 mg/kg every 3 weeks dose. A standard 3+3 escalation scheme was adopted thereafter, at sequential dose levels of 3, 10, 20, and 30 mg/kg every 3 weeks and 30 mg/kg every 2 weeks. After completion of the dose-escalation part, three or four selected dose regimens would be expanded to collect more data. Subsequently, multiple clinical-expansion cohorts were enrolled to further assess the efficacy of SHR-1701 in selected tumors.
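As an illustration of the escalation logic just described, the following sketch (our own illustration in Python, not the study software; the accelerated-titration criteria are omitted) encodes the standard 3+3 decision rule:

    def three_plus_three(n_patients, n_dlt):
        """Standard 3+3 decision at one dose level (illustrative sketch).
        n_patients: evaluable patients at this level (3, then 6);
        n_dlt: dose-limiting toxicities observed among them."""
        if n_patients == 3:
            if n_dlt == 0:
                return "escalate"       # 0/3 DLTs: open the next dose level
            if n_dlt == 1:
                return "expand_to_6"    # 1/3 DLTs: enroll 3 more patients
            return "stop"               # >=2/3 DLTs: dose exceeds the MTD
        if n_patients == 6:
            return "escalate" if n_dlt <= 1 else "stop"  # MTD rule: <=1/6 with DLT
        raise ValueError("cohort size must be 3 or 6")

    print(three_plus_three(3, 1))  # -> expand_to_6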
SHR-1701 was given as a 0.5-to 1-h intravenous infusion until disease progression, intolerable toxicity, withdrawal by investigator or patient, or study completion. In the absence of intolerable toxicity or cancer-related clinical deterioration, treatment continuation beyond the initial RECIST v1.1-defined progression was permitted. Treatment interruptions were allowed to manage adverse events. If the toxicity had been reduced to grade ≤1 or baseline level, SHR-1701 could be resumed.
Adverse events were evaluated until 90 days after the last dose and graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events v4.03. Tumor response was assessed by investigators according to RECIST v1.1 and modified RECIST 1.1 for immune-based therapeutics (iRECIST) criteria at screening, every 6 weeks during 24 weeks after first administration, and every 9 weeks thereafter. Complete response (CR) or partial response (PR) should be confirmed at a subsequent assessment after at least 4 weeks.
Serum SHR-1701 concentrations were determined by using a validated enzyme-linked immunosorbent assay with a limit of quantitation of 0.100 μg/mL. Pharmacokinetic parameters were determined by non-compartmental analysis.
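The non-compartmental quantities referred to throughout (Cmax, AUC by the trapezoidal rule, terminal half-life from a log-linear fit) reduce to a few lines of arithmetic; the sketch below is a generic illustration, not the validated Phoenix WinNonlin analysis, and the profile shown is hypothetical:

    import numpy as np

    def nca(t, c, n_terminal=3):
        """Basic non-compartmental PK metrics from sampling times t (h)
        and serum concentrations c (ug/mL). Illustrative only."""
        t, c = np.asarray(t, float), np.asarray(c, float)
        cmax, tmax = c.max(), t[c.argmax()]
        auc_last = np.trapz(c, t)                # linear trapezoidal rule
        slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
        lam_z = -slope                           # terminal elimination rate
        return dict(Cmax=cmax, Tmax=tmax, AUClast=auc_last,
                    AUCinf=auc_last + c[-1] / lam_z,
                    t_half=np.log(2) / lam_z)

    print(nca([1, 3, 24, 72, 168, 336], [180, 150, 120, 80, 40, 20]))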
The PD-L1 target occupancy in peripheral blood mononuclear cells was assessed by flow cytometry. CD3+ T lymphocytes were identified using Alexa Fluor 488 mouse anti-human CD3 (BD Biosciences, San Jose, CA, USA) and their forward scatter and side scatter features. AF647-SHR-1316 was used to detect the PD-L1 targets not bound by SHR-1701. The cell population was gated on the CD3+ T cells, and target occupancy was calculated based on the pre-dose samples for each patient. The free TGF-β1 level in plasma was determined by electrochemiluminescence assay (Meso Scale Discovery, MD, USA).
PD-L1 tumor expression was determined by immunohistochemistry carried out at a central laboratory (E1L3N clone, Cell Signaling Technology) and calculated as combined positive score (CPS, defined as the number of PD-L1 staining cells [tumor cells, lymphocytes, and macrophages] out of the total number of tumor cells, multiplied by 100).
Phosphorylation of SMAD2 was centrally detected by immunohistochemistry (138D4 clone, Cell Signaling Technology) and presented as histochemical score (H-score, defined and calculated as the product of the intensity score and proportion) in tumor and immune cells.
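Both immunohistochemistry scores defined above are simple arithmetic; a minimal sketch with made-up counts (note that, by convention, CPS is capped at 100):

    def cps(pd_l1_positive_cells, total_tumor_cells):
        """Combined positive score: PD-L1-staining cells (tumor cells,
        lymphocytes, macrophages) per total tumor cells, times 100."""
        return min(100.0, 100.0 * pd_l1_positive_cells / total_tumor_cells)

    def h_score(percent_by_intensity):
        """H-score as intensity grade (0-3) times percent of cells at that
        grade, summed; range 0-300 under this common convention."""
        return sum(g * p for g, p in percent_by_intensity.items())

    print(cps(120, 1000))                         # CPS = 12
    print(h_score({0: 10, 1: 20, 2: 30, 3: 40}))  # H-score = 200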
Outcomes
The primary endpoints of the dose-escalation and dose-expansion parts were the maximum tolerated dose and the recommended phase 2 dose. Dose-limiting toxicities were defined as a treatment-related adverse event that occurred during the first treatment cycle (21 days for the every 3 weeks schedule and 28 days for the every 2 weeks schedule) and met any of the following criteria: (1) grade ≥3 non-hematological toxicities, excluding grade ≥3 nausea, vomiting, diarrhea, or fatigue that recovered to grade ≤2 within 3 days after symptomatic treatment, transient grade 3 infusion reaction or fever (<6 h), and grade 3 increased alanine aminotransferase, increased aspartate aminotransferase, or skin toxicities that recovered to grade ≤2 within 7 days after appropriate treatment; (2) grade 3 decreased platelet count lasting for ≥7 days or with bleeding symptoms, grade 3 anemia that could not recover to ≥9 g/dL within 14 days without blood transfusion or use of erythroid growth factor, grade 3 neutropenic infection or febrile neutropenia (≥38.5°C), grade 4 decreased neutrophil count lasting for ≥4 days, and any other grade ≥4 hematological toxicities; (3) other unexpected, durable, and intolerable grade ≥2 toxicities requiring discontinuation of SHR-1701 as judged by the Safety Monitoring Committee. The maximum tolerated dose was defined as the maximum dose level at which ≤1 of 6 patients experienced a dose-limiting toxicity. The recommended dose was determined by the Safety Monitoring Committee based on all results from the dose-escalation and dose-expansion parts. Secondary endpoints were the pharmacokinetic profile, pharmacodynamic activity, and preliminary antitumor activity.
The primary endpoint of the clinical-expansion part was confirmed ORR assessed by RECIST v1.1, defined as the percentage of patients whose best overall response was confirmed CR or PR. Secondary endpoints included disease control rate (DCR), clinical benefit rate (CBR, defined as CR, PR, or stable disease lasting at least 24 weeks), duration of response (DoR), and progression-free survival (PFS) per RECIST v1.1, as well as overall survival (OS).
Exploratory endpoints included efficacy outcomes assessed according to iRECIST and correlations of baseline PD-L1 expression and pSMAD2 activity with tumor response to SHR-1701.
Statistical analysis
The total number of patients required for dose escalation depended on the toxicities observed, with 3 to 6 patients per dose level except the initial dose. For the dose levels expanded, a total of 10 to 12 patients per dose level were required. For the clinical-expansion cohorts of selected tumors, 20 to 30 patients per cohort were planned.
Efficacy and safety were analyzed in all patients who received at least one dose of study treatment. The population for pharmacokinetic or pharmacodynamic analysis included all patients who received study treatment and had at least one corresponding post-treatment variable.
ORR, DCR, and CBR were reported with the corresponding 95% CI calculated via the Clopper-Pearson method. The Kaplan-Meier method was used to estimate the median DoR, PFS, and OS, and the 95% CIs were estimated by the Brookmeyer-Crowley method. The areas under the curve (AUCs) were generated by plotting receiver operating characteristic curves that illustrated sensitivity and 1-specificity for pSMAD2 level. Fisher's exact test was used to assess the independence of ORR and pSMAD2 level. Statistical analyses were done using SAS v9.4 and pharmacokinetic analyses were done using Phoenix WinNonlin v8.1.
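For instance, the exact (Clopper-Pearson) interval reported later for the gastric cancer ORR of 7/35 can be reproduced in a few lines; this is a sketch using scipy, whereas the trial's analyses were done in SAS:

    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        """Exact binomial confidence interval via the beta distribution."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    lo, hi = clopper_pearson(7, 35)
    print(f"ORR 20.0%, 95% CI {100*lo:.1f}-{100*hi:.1f}")  # ~8.4-36.9, as reported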
Determination of recommended phase 2 dose
In the dose-escalation part, no dose-limiting toxicity was observed in the 17 patients, and the maximum tolerated dose was not reached. Subsequently, the 10, 20, and 30 mg/kg every 3 weeks and 30 mg/kg every 2 weeks doses were expanded, with another 33 patients enrolled. A linear dose-exposure relationship with SHR-1701 dosing from 1 to 30 mg/kg was observed (Fig. 1A). Pharmacokinetic parameters of SHR-1701 following a single infusion are listed in Additional file 1: Table S1. The concentration of SHR-1701 peaked at 1.68 to 2.98 h after infusion. The geometric mean half-life ranged from 4.6 to 8.1 days. Parameters reflecting exposure (including Cmax, AUClast, and AUCinf) increased and clearance decreased slowly over the dose ranges examined. PD-L1 target occupancy rate at the surface of peripheral blood CD3-positive T cells exceeded 90% at 72 h after the first dose (Fig. 1B). At 72 h, the free TGF-β1 levels in peripheral blood sharply decreased. Nearly complete TGF-β1 trapping was detected in all dose groups (Fig. 1C). SHR-1701 at 30 mg/kg every 3 weeks exerted the best antitumor activity, without obviously aggravated toxicities compared with other dose levels (Additional file 1: Fig. S2 and Table S2 show the data at the cutoff date). Combined, 30 mg/kg every 3 weeks was determined as the recommended phase 2 dose.
[Fig. 1 legend, panels B and C. (B) PD-L1 target occupancy: one patient in the 1 mg/kg q3w group showed a sharp decrease on C5D1 before administration, possibly caused by delayed treatment (27-day interval between C4D1 and C5D1); this patient withdrew from the study after C5D1. In the 10 mg/kg q3w group, occupancy in one patient decreased to 25% on C5D1 but returned to saturation before administration on C9D1; all remaining patients had sustained, saturated PD-L1 target occupancy throughout the study. (C) Free TGF-β1 concentrations following SHR-1701 treatment: the free TGF-β1 level in the 1 mg/kg q3w patient sharply increased on C5D1 before administration; in addition to the treatment delay, the low dose level might be the reason. PD-L1, programmed death ligand 1; TGF-β1, transforming growth factor-β1; EOT, end of treatment; EOS, end of study.]
Efficacy in select tumors
In the clinical-expansion part, 121 patients were enrolled in eight cohorts and received SHR-1701 at the recommended dose, including 35 with GC, 21 with HCC, 13 with BTC, 12 with UC, 10 with HNSCC, 10 with RCC, 10 with pancreatic cancer, and 10 with esophageal cancer (Additional file 1: Table S3 showing patient characteristics by tumor type).
In total, 15 of the 121 patients achieved confirmed objective responses according to RECIST v1.1 (Table 2), including two CRs (one patient with GC and one with UC) and 13 PRs (six patients with GC, two with HNSCC, two with RCC, one with UC, one with BTC, and one with HCC). Tumor shrinkage in target lesions was observed in 38 of 102 (37%) evaluable patients (Fig. 2A), and a durable response was clearly observed in patients who had a reduction of 30% or more in the target lesion (Additional file 1: Fig. S3). Moreover, as assessed by iRECIST, another two GC patients (2%) achieved an objective response from continued SHR-1701 treatment after immune unconfirmed progressive disease (iUPD).
Encouraging antitumor activity was also observed in HNSCC, RCC, and UC cohorts, with an ORR of 20.0%, 20.0%, and 16.7% (Table 2), respectively, and the median DoR in these cohorts had not been reached yet.
Study treatment was temporarily stopped due to grade ≥3 TRAEs in 19 (11%) patients. Among them, seven patients were rechallenged with SHR-1701, and only one TRAE recurred in one patient (hyponatremia, grade 4) and eventually resulted in permanent discontinuation of SHR-1701. Any grade immune-related adverse events assessed by the investigator occurred in 62 (36%) patients, and grade 3 or worse ones occurred in 16 (9%) patients. The most common immune-related adverse events with an incidence of more than 5% were hypothyroidism and rash (17 patients, 10% for each).
Discussion
The present study reported the clinical outcomes of SHR-1701 in pretreated patients with advanced solid tumors, aiming to assess its safety and tolerability and identify the population who could benefit from SHR-1701.
SHR-1701 showed a manageable safety profile, with 22% of patients having grade 3 or worse TRAEs, similar to bintrafusp alfa (another bifunctional conjugate targeting TGF-β and PD-L1 under investigation) [8]. Squamous cell carcinoma (SCC) of the skin and keratoacanthoma were potentially TGF-β-mediated cutaneous events [9][10][11] and occurred in 4% and 8% of patients treated with bintrafusp alfa, with 2% and <1% being grade 3 or worse in severity [12]. In our study, SHR-1701 treatment did not result in the occurrence of skin SCC or keratoacanthoma. Considering that skin color and exposure to UV radiation were reported to be significant risk factors for keratoacanthoma [13,14] or skin SCC [15], differences in the enrolled populations and their sunbathing habits were more likely to be the main explanations for the two skin toxicities, in addition to possible distinct actions of the two drugs. Besides, some patients suffered bleeding events following bintrafusp alfa, with the most common being epistaxis (12%), hemoptysis (7%), and gingival bleeding (5%) [12]. In this study of SHR-1701, the only bleeding event that occurred in at least 5% of patients was gingival bleeding, and most bleeding events were grade 1 or 2 in severity. Only one (<1%) patient suffered grade 3 or worse gastrointestinal hemorrhage. More bleeding events and anemia were found in patients with cervical cancer following SHR-1701 therapy [16], which might be attributed to different tumor types, prior treatments (such as radiotherapy), and/or complications. It has been reported that TGF-β signaling is involved in vascular development and stability [17,18], but whether the occurrence of these bleeding events is caused by TGF-β inhibition, and whether anemia is secondary to the bleeding events, still needs to be investigated. Three deaths were judged possibly related to study treatment by the investigators. Two of these patients simultaneously suffered disease progression and associated complications, which might also have contributed to their deaths. The remaining patient died of an unexplained cause after dropping out of the study because of disease progression, and the death was therefore conservatively judged as possibly related to study treatment.
[Table 3. Treatment-related adverse events, all patients (N=171). Data are presented as n (%); treatment-related adverse events that occurred in at least 5% of all treated patients are listed. Three (2%) grade 5 events were considered treatment related by the investigators, including one (<1%) caused by pneumonia and two (1%) unknown deaths.]
The most favorable response with SHR-1701 was observed in the GC cohort, with an ORR of 20.0% and a DoR of 7.0 months. Two more patients had a delayed response after iUPD, resulting in an ORR of 25.7% as determined by iRECIST. The OS rate at 12 months was as high as 54.5%. For patients treated with bintrafusp alfa, the ORR was 16%, the DoR was 8.7 months, and the 12-month OS rate was 41% [19]. With regard to the approved 2nd- or 3rd-line therapies, including chemotherapy (taxanes or irinotecan), targeted therapy (apatinib or ramucirumab), and immune checkpoint inhibitor (PD-1/PD-L1 blockade) monotherapy, the ORR was about 20%, or even less with targeted therapy and immune checkpoint inhibitor monotherapy, with a median OS of 5 to 9 months [20][21][22][23][24][25][26]. Overall, SHR-1701 showed quite encouraging efficacy, which might provide a new choice for pretreated GC.
SHR-1701 also showed clinical activity in pretreated HNSCC, RCC, and UC cohorts, with an ORR of 20.0%, 20.0%, and 16.7%, which were comparable to immune checkpoint inhibitor monotherapy (13-17% [27][28][29], 25% [30], and 17-26% [31][32][33][34], respectively). Low or no response was seen in other cohorts. As some patients were still on treatment, the ORR might change with extended follow-up, and delayed response after iUPD might occur. For the above tumor types, further studies are warranted to screen the potential benefit population of SHR-1701 monotherapy or improve the efficacy by combination strategies.
It has been reported that bintrafusp alfa had a higher ORR of 30.5% in human papillomavirus (HPV)-related cancers, mainly including cervical cancer and HPV-positive HNSCC [35]. There were 10 HNSCC patients in this study. Most of these patients were diagnosed with nasopharyngeal carcinoma (7/10), for which HPV infection is not a predominant cause, and the HPV status of the remaining three HNSCC patients was unknown. Whether HPV infection is associated with the response to SHR-1701 needs further investigation. Currently, a randomized, double-blind phase 3 study in patients with persistent, recurrent, or metastatic cervical cancer comparing SHR-1701 versus placebo in combination with platinum-based chemotherapy with or without bevacizumab biosimilar (NCT05179239) is underway.
We also aimed to identify biomarkers to determine which patients could benefit most from SHR-1701. Regardless of tumor type, patients with high PD-L1 CPS showed higher ORR, suggesting a certain predictive utility of PD-L1 expression. It has been reported that among gastric and gastroesophageal junction cancer patients, immune checkpoint inhibitor monotherapy provided a numerically higher ORR in those with PD-L1 CPS ≥1 than in those with PD-L1 CPS <1 [19]. In our study, SHR-1701 achieved a numerically higher ORR in patients with PD-L1 CPS ≥1 compared with <1 (21.7% vs 12.5%), and the rate was even higher in patients with CPS ≥5 (45.5%) and ≥10 (55.6%). This indicated that PD-L1 expression could partly predict the efficacy of SHR-1701 in GC. Its value will be further confirmed in follow-up studies.
SMAD2 and SMAD3 are downstream transcription factors critical in the TGF-β pathway. Upon phosphorylation, SMAD2 and SMAD3 accumulate in the nucleus, form trimeric SMAD complexes with SMAD4, and further interact with varied cofactors to control downstream gene expression [36,37]. We found, for the first time, a correlation of baseline pSMAD2 level in tumor cells with a trend towards better ORR (H-score ≥235 vs <235: 36.4% vs 6.3%). Our results suggest that SHR-1701 might inhibit the SMAD2-dependent TGF-β pathway that contributes to tumor progression and an immunosuppressive microenvironment. Due to the exploratory nature and small number of patients, these preliminary findings must be interpreted cautiously but highlight the need for further investigation.
The main limitations of this study are typical of early-phase clinical trials. Most clinical-expansion cohorts had a small sample size and require further investigation. The lack of a control arm makes it difficult to contextualize findings in the GC cohort relative to historical comparators. In addition, the effects of SHR-1701 on TGF-β2 and TGF-β3 trapping need to be investigated. Based on the early signal demonstrated in this study, we have initiated a randomized, double-blind, placebo-controlled phase 3 study in GC patients assessing the addition of SHR-1701 to first-line chemotherapy (ClinicalTrials.gov, NCT04950322).
Conclusions
Overall, SHR-1701 showed an acceptable safety profile and encouraging antitumor activity in advanced malignancies. Albeit early, the data showed promising efficacy signals of SHR-1701 in advanced or metastatic GC. The PD-L1 expression and tumor cell pSMAD2 level might contribute to better patient selection, which needs future validation.
Additional file 1: Figure S1. Study design. Figure S2. Tumor response of patients in the dose-escalation and dose-expansion phase. Figure S3. Percentage change from baseline in target lesion tumour burden over time in patients with select tumors at the recommended dose (30 mg/ kg q3w). Figure S4. Receiver operating characteristic curve analysis of pSmad2 level in tumor cells for ORR per RECIST v1.1. Table S1. Pharmacokinetic parameters following a single infusion. Table S2. Summary of treatment-related adverse events and tumor response by dose in the dose-escalation and dose-expansion phase. Table S3. Characteristics of patients in clinical expansion cohorts by tumor types. Table S4. Serious treatment-related adverse events. Table S5. Tumor response by PD-L1 expression in all clinical expansion cohorts and in gastric cancer cohort. Table S6. Associations between tumor response and pSMAD2 level in clinical expansion cohorts. | 2022-10-25T13:14:21.112Z | 2022-10-25T00:00:00.000 | {
"year": 2022,
"sha1": "3d4f3f59cba73bc1b7ab91331df098d24e948407",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "6e6ca1fd0a6bdf57a067f20b7716cc0188b12be1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244528713 | pes2o/s2orc | v3-fos-license | Application of periostin peptide-decorated self-assembled protein cage nanoparticles for therapeutic angiogenesis
Peptides are gaining substantial attention as therapeutics for human diseases. However, they have limitations such as low bioavailability and poor pharmacokinetics. Periostin, a matricellular protein, can stimulate the repair of ischemic tissues by promoting angiogenesis. We have previously reported that a novel angiogenic peptide (amino acids 142-151) is responsible for the pro-angiogenic activity of periostin. To improve the in vivo delivery efficiency of periostin peptide (PP), we used proteins self-assembled into a hollow cage-like structure as a drug delivery nanoplatform in the present study. The periostin peptide was genetically inserted into lumazine synthase (isolated from Aquifex aeolicus) consisting of 60 identical subunits with an icosahedral capsid architecture. The periostin peptide-bearing lumazine synthase protein cage nanoparticle with 60 periostin peptides multivalently displayed was expressed in Escherichia coli and purified to homogeneity. Next, we examined angiogenic activities of this periostin peptide-bearing lumazine synthase protein cage nanoparticle. AaLS-periostin peptide (AaLS-PP), but not AaLS, promoted migration, proliferation, and tube formation of human endothelial colony-forming cells in vitro. Intramuscular injection of PP and AaLS-PP increased blood perfusion and attenuated severe limb loss in the ischemic hindlimb. However, AaLS did not increase blood perfusion or alleviate tissue necrosis. Moreover, in vivo administration of AaLS-PP, but not AaLS, stimulated angiogenesis in the ischemic hindlimb. These results suggest that AaLS is a highly useful nanoplatform for delivering pro-angiogenic peptides such as PP.
Mass spectrometry
The molecular mass of the AaLS-PP subunit was analyzed using an ESI-TOF mass spectrometer (Xevo G2 TOF, Waters) interfaced with a Waters UPLC and autosampler.
Samples were loaded onto a MassPREP Micro-desalting column (Waters) and eluted with a gradient of 5-95 % (v/v) acetonitrile containing 0.1 % formic acid at a flow rate of 500 μL/min. ESI generally produces a series of variously charged ions, and the charges are distributed as a continuous series with a Gaussian intensity distribution. The molecular mass was determined from the charges and observed mass-to-charge (m/z) ratio values. Mass spectra were acquired in the range of m/z 500-3000 and deconvoluted using MaxEnt1 from MassLynx to obtain the average mass from multiple charge-state distributions. For clarity, only deconvoluted masses are presented.
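Mass determination from a multiply charged ESI series amounts to solving m/z = (M + z·m_H)/z across consecutive charge states; the sketch below illustrates the idea with hypothetical peak values, not the actual AaLS-PP spectrum (deconvolution software such as MaxEnt1 performs this over the full charge-state envelope):

    PROTON = 1.00728  # Da

    def mass_from_adjacent_peaks(mz1, mz2):
        """Infer charge state and neutral mass from two adjacent peaks of
        the same species, with mz1 > mz2 (mz2 carries one extra charge)."""
        z = round((mz2 - PROTON) / (mz1 - mz2))  # from (M + z*H)/z = mz1, etc.
        return z, z * (mz1 - PROTON)

    z, M = mass_from_adjacent_peaks(1781.0, 1619.2)  # hypothetical peaks
    print(z, round(M))  # charge state and deconvoluted mass (Da)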
Characterization of the protein cage nanoparticles
The hydrodynamic diameter of AaLS-PP was measured using dynamic light scattering (DLS, Malvern Zetasizer) with a disposable rectangular polystyrene cuvette. Each sample solution was prepared in phosphate buffer (pH 7.4, 50 mM Na₂PO₄, 100 mM NaCl) and adjusted to 25 °C before introduction into the instrument. The system was operated at 25 °C and equilibrated for 2 min, and the scattered light was measured at a 90° angle to the projected light. The samples were further analyzed by size exclusion chromatography (SEC, Superose® 6 column, GE Healthcare). The system was operated at a flow rate of 0.5 mL/min with FPLC. TEM experiments were conducted on a JEOL-1400 Bio-TEM operated at an acceleration voltage of 120 kV. TEM samples were prepared by placing 10 μL of the samples on carbon-coated copper grids (Electron Microscopy Sciences). The samples were incubated on the grid for 1 min, and the residual solutions were removed with filter paper. The samples were negatively stained by applying 5 μL of uranyl acetate (1 % w/v) onto the grid and incubating for 1 min. The excess uranyl acetate solution was removed with filter paper, and the samples were allowed to dry overnight before imaging.
Cell migration assay
ECFC migration was assayed using a disposable 96-well chemotaxis chamber (ChemoTx, Neuro Probe). To coat the membrane filter of the upper chamber, 50 μL of 20 μg/mL collagen I (BC-354236, Corning) was placed on the lower side and dried overnight at RT. ECFCs were harvested with 0.05 % trypsin-EDTA, washed once, and suspended in EBM-2 at a concentration of 1×10⁵ cells/mL. EBM-2 with the supplements of each experimental group was then placed in the lower chamber, and suspended cells were loaded onto the upper chamber at a density of 5×10³ cells/well. After incubation at 37 °C for 12 h, the filters in the upper chamber were disassembled, and the upper side of the filter was wiped with a cotton swab to remove non-migrated cells. The cells that had migrated to the lower side were stained with 5 μM Hoechst 33342 dye (H3570, Thermo Fisher Scientific) for 30 min in a 37 °C incubator, and the number of cells on each filter was determined by counting cells in four locations under a fluorescence microscope at ×100 magnification.
Tube formation assay
GFR-Matrigel (BC356230, BD Biosciences) was added at 50 μL/well to a 96-well plate, maintained at 4 °C, and polymerized for 30 min in a 37 °C incubator. ECFCs were suspended in EBM-2 medium containing 1 % FBS as the basal medium, and supplements were added according to the experimental groups.
ECFCs were seeded at 1×10⁴ cells/well on the polymerized Matrigel and incubated at 37 °C in a 5 % CO₂ incubator for 12 h. The capillary-like tube structures were stained with 2 μM calcein AM (C1430, Thermo Fisher Scientific) at 37 °C in a 5 % CO₂ incubator for 30 min, and then photographed with a fluorescence microscope (Leica, Germany). Tube length was quantified using ImageJ software (version 1.50i).
Cell proliferation assay
To allow ECFCs to adhere, coverslips were placed in each well of a 24-well plate and coated with 0.1 % gelatin (G9391, Sigma-Aldrich) in a 37 °C incubator for 1 h.
Subsequently, 5×10⁴ EPCs suspended in EBM-2 containing 0.1 % FBS were seeded in each well, followed by treatment with the supplements of the experimental groups.
After incubation of the cells at 37 °C and 5 % CO₂ for 24 h, the cells were fixed with 4 % paraformaldehyde at RT for 30 min. The fixed cells were permeabilized with PBS containing 0.2 % Tween 20 for 15 min and blocked with 5 % BSA (A6003). The specimens were incubated with anti-Ki67 antibody (NCL-Ki67p, Leica Biosystems) for 2 h, and then with Alexa 488 goat anti-rabbit secondary antibodies for 1 h.
Antibodies were diluted in 5 % BSA, and after the incubation was completed, the specimens were washed three times with PBS for 15 min. Finally, the specimens were mounted in Vectashield medium containing 4′,6-diamidino-2-phenylindole (DAPI) (H1200, Vector Laboratories, Burlingame, CA). Images were collected with a confocal microscope (Olympus, Tokyo, Japan) and measured using ImageJ software (version 1.50i).
Immunocytochemistry analysis
For histological analysis of tissue specimens, the animals were sacrificed, and hindlimb muscles were excised. The specimens were fixed in 4 % paraformaldehyde (HP2031, Biosesang) and embedded in paraffin. The paraffin-embedded specimens were sectioned into three 6-μm sections at 150-μm intervals. For analysis of angiogenesis in the hindlimb, sectioned specimens were stained with anti-CD31 and anti-αSMA antibodies. Subsequently, the sections were incubated with Alexa Fluor 568 goat anti-mouse and Alexa Fluor 488 goat anti-mouse antibodies, then washed and mounted in Vectashield medium containing DAPI to stain the nuclei. The stained sections were visualized under a laser confocal microscope (Olympus FluoView FV1000). Twelve randomly chosen microscopic fields from three serial sections of each tissue were examined for CD31-positive capillary density and the number of αSMA-positive arteries in each mouse. The numbers of CD31+ and αSMA+ structures were quantified using ImageJ software.
"year": 2022,
"sha1": "2610cf37a363750788b75ba8f089ebd90e8074a3",
"oa_license": "CCBYNC",
"oa_url": "https://www.bmbreports.org/journal/download_pdf.php?doi=10.5483/BMBRep.2022.55.4.137",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8c139c3b8b0d0060774508e0b777113195df36c",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119279756 | pes2o/s2orc | v3-fos-license | Long String Dynamics in Pure Gravity on AdS$_3$
We study the classical dynamics of a completion of pure AdS gravity in 3D, whose only degrees of freedom are boundary gravitons and long strings. We argue that the best regime for describing pure gravity is that of "heavy" strings, for which back-reaction effects on the metric must be taken into account. We show that once back-reaction is properly accounted for, regular finite-energy states are produced by heavy strings even in the infinite-tension limit. Such a process is similar to, but different from, nucleation of space out of a "bubble of nothing."
Introduction and summary
This paper is dedicated to Valery Rubakov on the occasion of his 60th birthday. Valery has been a pioneer and a master in understanding the role of non-perturbative solutions to field equations in quantum field theory. This paper is devoted to a particular case of soliton dynamics. Though limited in scope, we believe that it contains some results worth reporting. We hope that its readers will also consider it a worthy tribute to Valery's work.
Pure gravity in three dimensions does not propagate local degrees of freedom, as a simple counting argument shows. Six of the twelve Hamiltonian degrees of freedom of the 3D graviton, g_µν, are removed by gauge invariances, and the others are removed by the 3+3 constraints that follow from Einstein's equations. So, 3D gravity does not propagate gravitational waves. In the presence of a negative cosmological constant, pure gravity still exhibits a nontrivial dynamics, because there exist boundary gravitons [1] and black hole solutions [2]. The Einstein-Hilbert action of pure gravity with negative cosmological constant, −1/l², is

S = (1/16πG) ∫ d³x √(−g) (R + 2/l²). (1)

Boundary gravitons exist because the asymptotic metric of 3D Anti de Sitter space (AdS) is preserved by a set of diffeomorphisms that act non-trivially on the boundary. Specifically, the condition of being asymptotically AdS₃ means that the metric has the form [1]

g_tt = −r²/l² + O(1), g_tφ = O(1), g_tr = O(r⁻³), g_φφ = r² + O(1), g_rφ = O(r⁻³), g_rr = l²/r² + O(r⁻⁴). (2)

These boundary conditions are preserved by diffeomorphisms with the asymptotic form (3). The allowed diffeomorphisms are parametrized by two arbitrary functions f(x⁺), g(x⁻), each depending on only one of the two boundary light-cone coordinates (x± = t/l ± φ). The time t and the angular coordinate φ ∼ φ + 2π parametrize the AdS₃ boundary, while r is its radial coordinate. The boundary is at r = ∞ and 2∂± = l ∂/∂t ± ∂/∂φ. The classical Poisson brackets associated to the asymptotic diffeomorphisms (3) define two Virasoro algebras with equal central charge c = 3l/2G [1]; therefore, after quantization, the Hilbert space of any quantum gravity with the same asymptotics, whether pure or with matter, must fall into unitary representations of the Virasoro algebras. This purely kinematical fact has a deep consequence if one further assumes that quantum gravity on AdS₃ is dual to a 2D conformal field theory (CFT) [3]. Modular invariance of the CFT, discreteness of the spectrum, and the existence of an Sl(2, C) invariant state with conformal weights ∆ = ∆̄ = 0 then imply that the asymptotic density of states at levels (∆, ∆̄) is [4]

d(∆, ∆̄) ≡ e^S = exp[2π√(c∆/6) + 2π√(c∆̄/6)]. (4)
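For reference, the asymptotic Killing vectors (3) are usually written in the Brown-Henneaux literature as follows (quoted here from the literature under the standard conventions, so the subleading terms may differ slightly from the equation of the original paper):

    \xi^{+} = f(x^{+}) + \frac{l^{2}}{2r^{2}}\,\partial_{-}^{2}g(x^{-}) + O(r^{-4}),
    \xi^{-} = g(x^{-}) + \frac{l^{2}}{2r^{2}}\,\partial_{+}^{2}f(x^{+}) + O(r^{-4}),
    \xi^{r} = -\frac{r}{2}\left(\partial_{+}f + \partial_{-}g\right) + O(r^{-1}).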
Rotating black hole solutions for pure 3D AdS gravity (2) do exist [2]. Their metric depends on two parameters: mass M and angular momentum J. The metric is [2]

ds² = −N² dt² + N⁻² dr² + r² (dφ + N^φ dt)², N² = −8GM + r²/l² + 16G²J²/r², N^φ = −4GJ/r². (5)

After the identification ∆ + c/24 = (Ml + J)/2, ∆̄ + c/24 = (Ml − J)/2, the Cardy formula (4) matches the Bekenstein-Hawking formula for the entropy of rotating black holes [5],

S = S_BH = 2πr_h/4G, r_h² = 4GMl² [1 + √(1 − J²/(Ml)²)]. (6)

The result of ref. [5] is general. In particular, it does not depend on the matter content of the AdS₃ bulk theory. Amusingly, pure gravity seems to defy the general formulas (4,6). Indeed, as noticed in [6], the asymptotic dynamics of eq. (1) is described by a Liouville action. Upon quantization, Liouville theory becomes an unusual conformal field theory because of two features. The first is that its spectrum does not include an Sl(2, C) invariant state; physical states obey instead the "Seiberg bound" [7], ∆, ∆̄ > (c − 1)/24. The second is that physical states are only plane-wave normalizable, because the spectrum of Liouville theory is continuous. These properties are well established in consistent quantizations of Liouville theory at c > 1 [8].
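As a quick check of this matching (our own algebra, using c = 3l/2G and the two horizon radii r_± that follow from (5), with r_h = r_+):

    r_{\pm}^{2} = 4GMl^{2}\left[1 \pm \sqrt{1 - J^{2}/(Ml)^{2}}\right]
    \;\Rightarrow\; (r_{+}+r_{-})^{2} = 8Gl\,(Ml+J), \qquad (r_{+}-r_{-})^{2} = 8Gl\,(Ml-J),

so that, at large weights (where the c/24 shift is negligible),

    S_{\text{Cardy}} = 2\pi\sqrt{\frac{c}{6}\,\frac{Ml+J}{2}} + 2\pi\sqrt{\frac{c}{6}\,\frac{Ml-J}{2}}
    = \frac{2\pi}{8G}\left[(r_{+}+r_{-}) + (r_{+}-r_{-})\right] = \frac{2\pi r_{+}}{4G} = S_{BH}.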
The reduction of pure gravity to a boundary Liouville theory is most easily proven by writing the Einstein-Hilbert action (1) in terms of two Sl(2, R) Chern-Simons theories [9],

S = S_CS[A] − S_CS[Ã], k = l/4G. (7)

Denoting by t_a the three Sl(2, R) generators in the fundamental representation, the Chern-Simons action is

S_CS[A] = (k/4π) ∫ Tr(A ∧ dA + (2/3) A ∧ A ∧ A).

The gauge potentials A, Ã are related to the dreibein e^a and spin connection ω^a by

A^a = ω^a + e^a/l, Ã^a = ω^a − e^a/l.

Some of the equations of motion derived from (7) are constraints. In the gauge A₋ = Ã₊ = 0, when the 3D space is topologically the product of a 2D disc D and the real line R, they imply that the remaining components of A and Ã are pure gauge on the disc. Substituting the solution of the constraints into the Chern-Simons action, bulk terms disappear and the action reduces to a boundary term. This term is the 2D chiral Wess-Zumino action [10,11]. Further constraints, following from the requirement that A, Ã give an asymptotically AdS metric, reduce the Wess-Zumino action to a Liouville action [6]. An attentive reader should have noticed an unwarranted assumption here. We assumed that the 3D space was topologically global AdS₃ to arrive at a Liouville action. In the presence of black holes, i.e. horizons, or of time-like singularities associated with point-like particles in the bulk, the action at the r = ∞ boundary must be supplemented with other terms at the inner boundary/horizon. A possible interpretation of these terms is that they describe the states of the AdS₃ quantum gravity; more precisely, the primary states in each irreducible representation (irrep) of the Virasoro × Virasoro algebra acting on the Hilbert space of quantum AdS₃ gravity. The role of the boundary Liouville theory would then be simply to describe, in each irrep, the Virasoro descendants (cfr. [12]). In this interpretation, other information is needed to determine the spectrum of primary operators.
One hint that pure gravity could nevertheless have the same spectrum of primaries as Liouville theory comes from the canonical quantization of pure gravity. Already in the 1990's, it was shown that the wave functions obtained by quantization of Sl(2, R) Chern-Simons theory are Virasoro conformal blocks [13]. Two Sl(2, R) Chern-Simons actions combine into the action of pure gravity, so the Hilbert space of pure gravity must be (a subspace of) the product of the two Chern-Simons Hilbert spaces. In a forthcoming publication we will argue that the pure-gravity Hilbert space is the target space of conformal field theories with continuous spectrum obeying the Seiberg bound [14] (cfr. [15]). Assuming from now on that this result holds, we conclude that pure gravity in AdS₃ should contain states that can reach the boundary at a finite cost in energy, since states confined to the interior of the AdS space have a discrete spectrum. So one natural question to ask is: what are those states?
The mass of such states must be large in AdS units, Ml ≫ 1; otherwise gravity could not be called "pure" in any sense. The states cannot be massive particles, which cannot reach the AdS boundary. Indeed, there is only one natural candidate for such states: they must be long strings. These states were already invoked as a possible solution to certain problems of the partition function of Euclidean pure gravity in [16].
The rest of this paper is devoted to studying the effect of long strings in AdS₃ gravity. Section 2 will summarize known features of long strings in the probe approximation, which holds when back-reaction on the metric and quantum string-dynamics effects can both be neglected. This happens when the string tension T is in the range l⁻² ≪ T ≪ G⁻¹l⁻¹.
Section 3 describes the case of "light" strings, which were studied in detail in [17]: T ≲ l⁻². It is a regime where back-reaction can be neglected but quantum effects cannot. This is an interesting case, but far from pure gravity, as we will argue using some results of ref. [17].
Section 4 studies the "heavy" string case, T ≳ G⁻¹l⁻¹, when back-reaction cannot be neglected. We argue that this regime is the best suited to describe a pure gravity theory containing BTZ black holes and no state below the Seiberg bound. We further show that in order to recover the mass gap predicted by the Seiberg bound, the string tension must be Planckian, TG² ∼ 1. This is the limit T → ∞, which is nonsingular thanks to back-reaction effects. Finite-mass BTZ states arise through a process similar to, but different from, nucleation of space out of a "bubble of nothing."
Long Strings in Probe Approximation
If short-string dynamics and back-reaction are negligible, as happens when the string tension is in the intermediate range l⁻² ≪ T ≪ G⁻¹l⁻¹, the effects of long strings can be described in the probe approximation. The long string probe is located at radial position r = R(φ) and its classical action is made of two terms [20]. One is proportional to the area spanned by the string world-sheet Σ, the other is proportional to the volume enclosed by the world-sheet.
The second term requires coupling the string to an antisymmetric two-form. The world-sheet action of the string thus acquires a term

q ∫_Σ B.

The two-form B is analogous to the Kalb-Ramond form of fundamental strings. It possesses the gauge invariance B → B + dΛ, and its bulk action is the standard kinetic action for the field strength H = dB. This action does not propagate any degree of freedom in 3D. So, the bulk theory in the presence of the form B is still pure gravity, but with a cosmological constant that depends on the value of the field strength H. The field strength is quantized in units of q, the two-form charge of the string [21,22].
The asymptotic value of the string action (10) is best written in terms of a redefined radial coordinate ϕ, the induced world-sheet metric h, and the world-sheet scalar curvature R [20]. To reach the boundary with finite energy, one must set q = 1. At q = 1, the asymptotic action (13) becomes the Liouville action. Its central charge is c_L = 1 + 12πTl². Quantum effects can be neglected in the semiclassical regime for Liouville theory, that is when c_L ≫ 1, hence when T ≫ l⁻². Crossing the brane, the cosmological constant changes and so does the central charge c = 3l/2G. If we call l₊ the AdS radius outside the brane and l₋ the radius inside, the central-charge change is

∆c = (3/2G)(l₊ − l₋).

Back-reaction effects can be neglected when ∆c/c ≪ 1, hence when T ≪ G⁻¹l⁻¹. This inequality, on the other hand, implies that the energy gap between the vacuum and the long string states, given by the Seiberg bound with c = c_L, is E = (c_L − 1)/12 = πTl² ≪ (c − 1)/12. So, the theory contains states with energy well below the BTZ black hole threshold. It is therefore doubtful whether we can call gravity plus strings in the regime l⁻² ≪ T ≪ G⁻¹l⁻¹ "pure." The most obvious method for increasing the gap is to make T ≳ G⁻¹l⁻¹ and take full account of the back-reaction. This will be done in section 4. In the next section we examine a more exotic possibility. Namely, we study the dynamics of light strings with tension T ≪ l⁻². Though a theory with strings of tension smaller than the AdS scale contains a large number of light states, maybe it could still bear resemblance to pure gravity if these states decouple in the limit that the string coupling constant goes to zero. In the next section we will use the results of [17] to argue against this possibility.
Light Strings and the Absence of BTZ States
Strings in AdS₃ with background NS forms can be studied to all orders in α′ = lₛ² = 1/2πT. One can find, in particular, exact expressions for the generators of the target-space Virasoro algebras [23]. The low-tension region lₛ ≳ l may seem quite the opposite of pure gravity, since it contains an abundance of light degrees of freedom. One exotic possibility is to decouple all the unwanted states by sending the string coupling constant gₛ to zero. Since gₛ² = G/lₛ [23], decoupling means that we are sending lₛ → ∞ while keeping G, l finite and l/G ≫ 1. The last condition guarantees that the AdS space is still macroscopic and concepts such as black hole, metric, etc. are meaningful. The first condition may decouple stringy excitations, leaving only BTZ states.
To check whether decoupling is actually possible, we must parametrize our theory in terms of quantities that remain valid beyond the point-particle limit. So, instead of the ratio l/lₛ we should use the level k of the Sl(2, R) world-sheet current algebra, and instead of l/G the target-space central charge c. In the semiclassical, point-particle limit, k = l²/lₛ² and c = 3l/2G. Ref. [17] argues that a sharp phase transition occurs at k = 1. For k > 1, the asymptotic density of states at high energy is dominated by BTZ black holes, and the target-space theory has an Sl(2, R) × Sl(2, R) invariant vacuum. For k < 1, neither the vacuum nor the BTZ black hole states are normalizable. The asymptotic density of states is dominated by weakly-coupled long strings. The first property agrees with expectations from canonical quantization of pure gravity and its similarities with Liouville theory. The second property seems to contradict the fact that BTZ black holes are the only primary states in pure gravity. Nevertheless, it could be that at k < 1 weakly-coupled long strings are just BTZ states in disguise.
The last possibility seems unlikely, and in any case a better argument exists against the decoupling limit. The problem arises because in a conformal field theory where the lowest conformal weight of a physical primary operator is not ∆ = 0, but some ∆_m > 0, the effective central charge appearing in Cardy's formula (4) is c_eff = c − 24∆_m [4]. The Seiberg bound ∆ ≥ (c − 1)/24 then tells us that in "Liouville-like" pure gravity c_eff = 1.
On the other hand, ref. [17] found an expression for the effective target-space central charge of the long string gas. Setting c_eff = 1, we have

k = 1/2 + O(gₛ²) for type II superstrings, k = 1/4 + O(gₛ²) for bosonic strings.
Meanwhile, the target-space central charge is [17]

c = 6gₛ⁻² k for type II superstrings, c = 6gₛ⁻² (k + 2) for bosonic strings.
So, in both cases gₛ → 0 implies c → ∞, while we want to keep c ≫ 1 but finite.
If we had tried to keep c finite in the limit gₛ → 0, we would also have run into a contradiction, because c_eff would have become either negative or larger than c. Both possibilities are forbidden in unitary CFTs.
Heavy Long Strings
In the regime T ≳ G⁻¹l⁻¹ the metric is deformed by the back-reaction of the string. The process that can lead to the formation of massive point particles or BTZ black holes is the collapse of a long string located arbitrarily close to the boundary of AdS₃ in the far past. This is what we examine at the classical level in this section. The collapse of shells of matter with various equations of state was considered in [24].
Consider first the collapse of a shell of matter with rotational symmetry (which is a closed string in two space dimensions), arriving from a radial position arbitrarily close to the boundary of an asymptotically AdS space in the far past. Inside the shell the metric is pure AdS₃ and outside it is a non-rotating BTZ black hole. The metrics inside and outside a shell with world-sheet Σ are

ds±² = −(r²/l±² − 8GM±) dt±² + (r²/l±² − 8GM±)⁻¹ dr² + r² dφ².

Here the subscript (−) is used for variables defined inside the shell and (+) for those outside it.
If the string has no angular motion, we can define "proper time" by moving on the world-sheet at fixed φ, and parametrize Σ as r = R(τ), t± = t±(τ), where R(τ) is the radius of the string and τ is the proper time measured on the shell. The discontinuity of the extrinsic curvature K_ij± across the shell is related to the stress-energy tensor of the string S_ij on Σ by the so-called Israel boundary conditions; precisely,

K_ij⁺ − K_ij⁻ − h_ij (K⁺ − K⁻) = −8πG S_ij.

It is convenient to study the string dynamics using a comoving frame, spanned by proper time and (1/R)∂_φ. In such a frame S_ij = diag(T, −T), and the discontinuity in K_ij±, which we call γ_ij, gives the junction equations

β₋ − β₊ = 8πGTR, d(β₋ − β₊)/dR = 8πGT. (21, 22)

Here β± = √(Ṙ² − 8GM± + R²/l±²), while R(τ) is the position of the string and M₋ = −1/8G. Although we can easily solve exactly the single equation obtained from (21, 22), examining its asymptotic behavior is sufficient for our purpose.
The leading-order term tells us that the string tension should be

8πGT = 1/l₋ − 1/l₊. (24)

If the tension differs from this value, the string either cannot reach the boundary or reaches it with infinite radial speed. Eq. (24) is the generalization to heavy strings of the condition q = 1 of section 2. We call a T obeying eq. (24) the critical tension and a string with such tension a critical string. For a critical string to exist, we must have l₊ > l₋, since T is positive. Then the subleading-order term in the asymptotic behavior (23) gives us an interesting bound on the mass of the collapsing string:

8GM l₊ ≥ −l₋. (25)
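Here is a sketch of the large-R expansion behind (23)-(25), reconstructed from the definitions of β± above (signs and subleading coefficients should be read with that caveat):

    \beta_{-} - \beta_{+} = \left(\frac{1}{l_{-}} - \frac{1}{l_{+}}\right) R
    + \frac{1}{R}\left[\frac{\dot R^{2}}{2}\,(l_{-} - l_{+}) - 4G\,(M_{-}l_{-} - M_{+}l_{+})\right] + O(R^{-3}).

Matching the leading term of the junction equation β₋ − β₊ = 8πGTR gives the critical tension (24). At critical tension, the O(1/R) term fixes the asymptotic radial speed,

    \dot R^{2}\Big|_{R\to\infty} = \frac{8G\,(M_{-}l_{-} - M_{+}l_{+})}{l_{-} - l_{+}}
    = \frac{l_{-} + 8GM\,l_{+}}{l_{+} - l_{-}} \qquad \left(M_{-} = -\tfrac{1}{8G},\ M_{+} = M\right),

and requiring Ṙ² ≥ 0 with l₊ > l₋ yields the bound (25).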
To compare with CFT and with section 2 it is convenient to redefine the AdS 3 energy as E ′ = E + 1/8G. The vacuum energy then vanishes, all masses are positive and the mass bound becomes The mass bound approaches zero as we send 8GT l + to zero, so the tensionless limit cannot be related with the boundary Liouville theory obtained in section 2.
If $8GM' l_+ \gg 1$, on the other hand, we have a finite mass gap controlled by $c_+ = 3l_+/2G$. This agrees with the Seiberg bound in a theory with a large central charge $c \gg 1$, as needed for classical geometry to make sense. If we insist that this mass bound equals the Seiberg bound, we find $8\pi GT l_+ = c_+ - 1$. This implies that the tension is of order unity in Planck units: $TG^2 \sim 1$.
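The numbers can be checked directly from eq. (24) and the relation $8\pi GT l_+ = c_+ - 1$ (a back-of-the-envelope consistency check, not part of the original derivation):
$$T = \frac{c_+ - 1}{8\pi G\, l_+} \simeq \frac{3 l_+/2G}{8\pi G\, l_+} = \frac{3}{16\pi G^2} \quad\Rightarrow\quad TG^2 \sim 1,$$
$$\frac{1}{l_-} = 8\pi G\, T + \frac{1}{l_+} \simeq \frac{3}{2G} + \frac{1}{l_+} \quad\Rightarrow\quad l_- \sim G .$$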
We also have $l_- \sim G$ from the critical tension condition (24). Therefore we are considering a process in which a long string with large tension is nucleated at the boundary of an AdS$_3$ with Planckian curvature. Though similar to the nucleation of an AdS "bubble of nothing" [19,18], the process is different: it is not a quantum transition but a classical one, namely the collapse of a long string located at the boundary at past infinity. It is only thanks to back-reaction effects that the two "infinities" involved in the process, $1/l_- \sim 1/G$ and $T \sim 1/G^2$, cancel to give a finite result.
So, for $TG^2 \sim 1$, long strings can produce the right mass spectrum, consisting only of BTZ black holes. Moreover, the large tension ensures that no unwanted low-energy states are added to "pure" gravity.
We argued that long strings could account for BTZ black holes, but our attention was limited to non-rotating ones. We conclude this section with some comments on the rotating BTZ case. Inspired by the previous considerations, it is tempting to use long strings to explain rotating BTZ black holes through the collapse of a shell formed by a rotating long string. We show next that this is impossible, as long as the world-sheet stress-energy tensor $S_{ij}$ is diagonal. The simplest case to analyze is a rotating BTZ with a string rotating at constant angular velocity and fixed radius $R$; it suffices to exhibit the problem one encounters even in more general settings.
Suppose that inside the shell we have pure AdS$_3$ as before, but that the outside metric is the rotating BTZ one,
$$ds_+^2 = -N^2(r)\, dt_+^2 + N^{-2}(r)\, dr^2 + r^2 \Big(d\phi_+ - \frac{4GJ}{r^2}\, dt_+\Big)^2, \qquad N^2(r) = -8GM + \frac{r^2}{l_+^2} + \frac{16 G^2 J^2}{r^2}\,.$$
Notice that $ds_-^2$ is diagonal while $ds_+^2$ is not. To compute $\gamma_{ij}$, however, the induced metrics on the world-sheet $\Sigma$ must be the same, i.e. $(ds_-^2)|_\Sigma = (ds_+^2)|_\Sigma$. One way to accomplish this is to use on each side a basis adapted to the string, where $R$ is the radius of the string and $\dot x$ denotes the derivative of $x$ with respect to the proper time $\tau$. This means that outside the world-sheet we use a rotating frame with constant angular velocity $\omega = \dot\phi_+/\dot t_+$, which, in general, may differ from that of the string. Both bases given in eq. (29) are orthonormal provided $\dot t_\pm$ are chosen appropriately; here
$$\tilde\beta_\pm = \sqrt{-8GM_\pm + R^2/l_\pm^2 + 16 G^2 J_\pm^2/R^2}\,,$$
with $M_- = -1/8G$, $J_- = 0$ and $J_+ = J$. In these coordinate systems one can compute the extrinsic curvatures and finds, in particular, $\gamma_{\tau\theta} = -4GJ/R^2$. Since the origin of this term is the angular momentum of the BTZ black hole, we cannot make it vanish by giving radial dynamics to the string. As long as we consider physical configurations with angular symmetry, radial dynamics and rotations are the only motions we can introduce at the classical level. This suggests, therefore, that we have to relax the string equation of state $\rho = -p = T$ to explain rotating BTZ states: when the equation of state is $p = -\rho$, the string stress-energy tensor $S^i_{\ j}$ is diagonal in any coordinate frame, whether rotating or not.
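The last statement follows in one line (a short worked step, using the standard perfect-fluid form of the surface stress tensor with velocity $u^i$, $u_i u^i = -1$):
$$S_{ij} = \rho\, u_i u_j + p\,\big(h_{ij} + u_i u_j\big) \;\xrightarrow{\;p = -\rho\;}\; S_{ij} = -\rho\, h_{ij} \quad\Rightarrow\quad S^i_{\ j} = -\rho\, \delta^i_{\ j},$$
which is proportional to the identity and hence diagonal in every frame, so it cannot match the off-diagonal discontinuity $\gamma_{\tau\theta} \neq 0$.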
One way to obtain $p \neq -\rho$ is by exciting degrees of freedom on the string. One such degree of freedom, the radial coordinate $R$, is always present, but others may exist, as they do in fundamental strings.
One amusing agreement between long strings with equation of state $p = -\rho$ and Liouville theory is that the latter contains only primaries with equal left and right conformal weights $\Delta = \bar\Delta$ [8]. Since BTZ states must be primaries of the would-be CFT dual, such equality implies the vanishing of the BTZ angular momentum.
At this point the relation between long strings and rotating black holes is still unclear. It is possible that a completely different description is needed for the states that give rise to rotating black holes by gravitational collapse. However, nothing so far seems to forbid excited strings from producing rotating BTZ black holes. In any case, the production of BTZ black holes by long string collapse already shows intriguing features and is well worth further study.
"year": 2015,
"sha1": "f5b88caeeaa8b27778e80f02892595d0e1733320",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1410.3424",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f5b88caeeaa8b27778e80f02892595d0e1733320",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Wnt/β-catenin/TCF/Sp5/Zic4 Gene Network That Regulates Head Organizer Activity in Hydra Is Differentially Regulated in Epidermis and Gastrodermis
Hydra head formation depends on an organizing center in which Wnt/β-catenin signaling, which plays an inductive role, positively regulates Sp5 and Zic4, with Sp5 limiting Wnt3/β-catenin expression and Zic4 triggering tentacle formation. Using transgenic lines in which the HySp5 promoter drives eGFP expression in either the epidermis or gastrodermis, we show that Sp5 promoter activity is differentially regulated in each epithelial layer. In intact animals, epidermal HySp5:GFP activity is strong apically and weak along the body column, while in the gastrodermis, it is maximal in the tentacle ring region and maintained at a high level along the upper body column. During apical regeneration, HySp5:GFP is activated early in the gastrodermis and later in the epidermis. Alsterpaullone treatment induces a shift in apical HySp5:GFP expression towards the body column, where it forms transient circular figures in the epidermis. Upon β-catenin(RNAi), HySp5:GFP activity is down-regulated in the epidermis while bud-like structures expressing HySp5:GFP in the gastrodermis develop. Sp5(RNAi) reveals a negative Sp5 autoregulation in the epidermis, but not in the gastrodermis. These differential regulations in the epidermis and gastrodermis highlight the distinct architectures of the Wnt/β-catenin/TCF/Sp5/Zic4 network in the hypostome, tentacle base and body column of intact animals, as well as in the buds and apical and basal regenerating tips.
Introduction
Hydra is a freshwater hydrozoan polyp known for its exceptional regenerative capacities, including its capacity to regrow any missing part of its body, such as a new fully functional head in three to four days after a mid-gastric bisection (reviewed in [1]). Its anatomy is simple; it is a gastric tube composed of two myoepithelial layers known as the epidermis and gastrodermis along a single oral-aboral axis. This bilayered gastric tube connects the apical or head region at the oral side to the basal disc at the aboral side. The regenerative process relies on the rapid establishment of a head organizer in the regenerating tip, initially identified by Ethel Browne through transplantation experiments [2]. Indeed, she showed that tissues isolated from the head of intact animals, from the head-regenerating tip or from the presumptive head of the developing bud can instruct and recruit cells from the body column of the host to induce the formation of an ectopic head, a property later named organizer activity by Spemann and Mangold [3]. Additional transplantation experiments confirmed that the head organizer is actively involved in developmental processes in Hydra, such as the 3D reconstruction of the missing head after decapitation at any level along the body column or the formation of a new head during budding. In addition, the head organizer is also required in a homeostatic context, actively maintaining head patterning in intact animals [4][5][6][7][8][9]. Hence, in Hydra, two types of head organizer activity take place, one in homeostasis and the other in developmental contexts, the latter ones giving rise to the former.
The principle of organizer activity was later shown to be also at work during embryonic development in vertebrates, initially in gastrulae [3,10] and later on during appendage and hindbrain development [11][12][13][14]. These organizers are transient developmental structures with evolutionarily conserved patterning properties [15]. Similarly, the regenerating blastemas that form after amputation can be considered organizing centers, which exhibit patterning properties to reconstruct the missing structure through the molecular instructions they deliver to the surrounding cells to modify their behavior [8,[16][17][18]. Indeed, these recruitment and patterning properties can be observed by transplanting regenerative blastemas.
The Hydra polyp, an animal easily maintained in the laboratory, provides a model to decipher the cellular and molecular basis of regeneration. Transplantation experiments identified two distinct activities for the head organizer, named head activation and head inhibition, both with maximal activity apically and a theoretical parallel apical-to-basal graded distribution along the body axis [5,7,19,20]. In 1972, Gierer and Meinhardt proposed a reaction-diffusion model close to Turing's to predict how the organizer acts and how it is restored after bisection, with both processes relying on the cross-talk between an auto-catalytic short-range activator and a longer-range inhibitor, interacting in a positive-negative feedback loop [21]. According to this model, the activator positively acts on its own production as well as on that of the inhibitor, whereas the inhibitor negatively acts on the production/activity/stability of the activator. At any position along the animal length, the equilibrium between these two components is tightly controlled under homeostatic conditions, but is immediately disrupted upon amputation, resulting in the rapid restoration of the activity of the activating component, i.e., head activation, and the delayed restoration of the activity of the inhibitory component, i.e., head inhibition, given their respective rates of diffusion, self-regulatory capacity and cross-regulation.
Three decades later, Wnt/β-catenin signaling was proposed to actually act as the head activator, required to initiate apical morphogenesis and maintain apical differentiation in Hydra [22][23][24][25][26][27]. More recently, the transcription factor Sp5, whose expression is regulated by Wnt/β-catenin signaling in many species including Hydra [28][29][30][31][32][33], was shown to restrict the activity of Wnt/β-catenin signaling, thus fulfilling the expected positive-negative feedback loop of the head inhibitor [33]. Indeed, a transient knock-down of Sp5 suffices to induce a multiheaded phenotype characterized by ectopic head formation along the body column of intact animals and the regeneration or budding of animals with multiple heads [33]. As anticipated, after a mid-gastric bisection, Sp5 and Wnt3 are up-regulated in the apical-regenerating tips, within two to three hours for Wnt3 and after eight hours for Sp5, and both of them remain expressed at high levels throughout the entire head regenerative process but not the foot one [33]. Also, the transcription factor Zic4, whose gene expression is positively regulated by Sp5, is responsible for the maintenance of tentacle differentiation and for their formation during apical development [34].
Hydra is populated by a dozen distinct cell types that are derived from three populations of adult stem cells, i.e., epithelial-epidermal, epithelial-gastrodermal and interstitial, which constantly self-renew in the body column to maintain Hydra homeostasis. In intact animals, Wnt3, Sp5 and Zic4 are predominantly expressed in epithelial cells of both the gastrodermis and epidermis [25,33,34], a finding confirmed by single-cell sequencing [35] (Figure S1). However, while Wnt3, Sp5 and Zic4 are expressed at their highest levels apically, their respective profiles in the apical region are very different: Wnt3 is detected at a maximum level at the tip of the head, around the mouth opening, where the organizing activity is located [25,27]. In this region, Sp5 and Zic4 are in fact not detected, their expression being maximal at the base of the head where the tentacles are implanted and in the proximal region of the tentacles (Figure 1A). Also, Sp5 is the only one of the three to be detected along the body column.
Pharmacological and genetic manipulations have shown that dynamic interactions between Wnt/β-catenin, Sp5 and Zic4 play a crucial role in apical development and the maintenance of apical patterning [33,34]. However, although Wnt3 is predominantly expressed in the gastrodermis, the specific role of each epithelial layer in the formation and maintenance of the head organizer remains unknown. The aim of this study is to uncover the dynamics of the Wnt/β-catenin/Sp5/Zic4 gene regulatory network in the epidermis and gastrodermis. To test these regulations in homeostatic and developmental contexts, we generated two transgenic lines that constitutively express the HyAct-1388:mCherry_HySp5-3169:GFP reporter construct in either epithelial layer. We monitored mCherry and GFP fluorescence in parallel with the detection of GFP, Sp5 and Wnt3 expression in intact, budding or regenerating animals, as well as in animals where Wnt/β-catenin signaling is either stimulated or knocked-down. In each of these contexts, we recorded distinct regulations of Sp5 in the epidermis and gastrodermis, notably an epidermal-specific negative autoregulation of Sp5 in the body column. These results point to distinct architectures of the Wnt/β-catenin/Sp5/Zic4 gene regulatory networks active in the epidermis and the gastrodermis, and we discuss their respective roles in the regulation and activity of the head organizer.
Mapping of the Transcriptional Start Sites (TSS)
Sp5 and Zic4 cDNA sequences obtained by high-throughput sequencing, available on HydrAtlas [37], UniProt or NCBI, were aligned to the corresponding Hm-105 genomic sequences (Table S1) with the MUSCLE alignment program (ebi.ac.uk/Tools/msa/muscle/), selecting a ClustalW output format. Next, the alignment was visualized with the MView tool [38] (ebi.ac.uk/Tools/msa/mview/) and the putative TSS was deduced from the 5′ end of the cDNAs.
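Deducing a putative TSS from such an alignment amounts to reading off the genomic coordinate aligned to the 5′-most cDNA base. The sketch below is a minimal illustration with a hypothetical helper and toy sequences, not the authors' pipeline; gaps are assumed to be '-' characters as in ClustalW output.

```python
def putative_tss(genomic_aln: str, cdna_aln: str, genomic_start: int = 1) -> int:
    """Return the genomic coordinate aligned to the 5' end of the cDNA row."""
    genome_pos = genomic_start - 1
    for g, c in zip(genomic_aln, cdna_aln):
        if g != '-':
            genome_pos += 1          # advance along the genomic sequence
        if c != '-':                 # first cDNA base reached: putative TSS
            return genome_pos
    raise ValueError("cDNA row is all gaps")

# Toy alignment: the cDNA starts at genomic position 5.
print(putative_tss("ACGTACGTAC", "----ACGTAC"))  # -> 5
```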
Generation of the Hydra Transgenic Lines
To generate the Sp5 transgenic lines, gametogenesis was induced in the Hv_AEP2 strain by switching the feeding rhythm from four times per week to once a week. The HySp5-3169:GFP construct was injected into one- or two-cell-stage Hv_AEP2 embryos [39]. Out of 330 injected eggs, 27 embryos hatched and 3/27 embryos exhibited GFP and mCherry fluorescence. The epidermal and gastrodermal HySp5-3169:GFP lines analyzed in this work were each obtained through clonal propagation from a single embryo in which only a few cells were positive after hatching. By asexual reproduction of the original animals, i.e., budding, we obtained two transgenic animals with a complete set of mCherry-eGFP-positive epithelial cells, either epidermal or gastrodermal. The generation of the epidermal and gastrodermal HyWnt3FL:eGFP-HyAct:dsRED transgenic lines, renamed here epidermal HyWnt3-2149:GFP and gastrodermal HyWnt3-2149:GFP, is described in [33].
RNA Interference
For the gene silencing experiments, we applied the procedure reported in [33]. Briefly, four-day-starved budless animals were selected from the Hv_AEP2 culture, rinsed 3× in water, incubated for 45-60 min in Milli-Q water and electroporated with 4 µM siRNAs, either targeting Sp5 or β-catenin, or scramble siRNAs as the negative control. For Sp5 and β-catenin, an equimolecular mixture of three siRNAs was used (siRNA1+siRNA2+siRNA3; see sequences in Table S2). Animals were electroporated once, twice or three times (EP1, EP2 and EP3) every other day, as indicated.
Quantitative RT-PCR
At the indicated time points after electroporation, 20 animals per condition were amputated either at the 80% level to obtain the apical region (100-80%) and the body column (80-0%), or at the 80% and 30% levels to obtain the apical region as above, the central body column (80-30%) and the basal region (30-0%). Besides electroporated transgenic animals, non-electroporated wild-type Hv_AEP2 animals were used to provide the reference expression levels. The different parts of the animals were transferred to RNAlater (Sigma-Aldrich R0901) immediately after amputation and kept at 4 °C prior to RNA extraction. RNA extraction was performed using the E.Z.N.A. Total RNA kit (Omega, Norcross, GA, USA) and cDNA was synthesized with the qScript cDNA SuperMix (Quanta Biosciences, Beverly Hills, CA, USA). The cDNA samples were diluted to 1.6 ng/µL and the primer sequences used to amplify the Sp5, Wnt3, β-catenin, GFP and TBP genes were designed with Primer3-OligoPerfect (Thermo Fisher, Waltham, MA, USA) (Table S2). Quantitative RT-PCR was performed using the SYBR Select Master Mix for CFX (Applied Biosystems, Waltham, MA, USA) and a Bio-Rad CFX96 Real-Time System. Relative gene expression levels were calculated as described in [40], using TBP to normalize all data. Fold change (FC) values at each time point or condition were calculated over the values obtained in non-electroporated animals. Finally, in each condition, the FC values measured in β-catenin(RNAi) or Sp5(RNAi) animals were divided by those measured in animals of the same condition exposed to scramble siRNAs.
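The normalization chain described above (reference gene TBP, then non-electroporated animals, then the scramble control) can be written compactly. The sketch below is a minimal illustration of the 2^-ΔΔCt scheme with hypothetical Ct values, not the authors' analysis script.

```python
def fold_change(ct_target, ct_tbp, ct_target_ref, ct_tbp_ref):
    """Relative expression by the 2^-ddCt method, normalized to TBP
    and to non-electroporated reference animals."""
    dct = ct_target - ct_tbp              # normalize to the reference gene
    dct_ref = ct_target_ref - ct_tbp_ref  # same, in non-electroporated animals
    return 2.0 ** -(dct - dct_ref)

# Hypothetical Ct values for Sp5 in one body-column sample:
fc_rnai = fold_change(24.1, 20.0, 23.0, 20.2)      # beta-catenin(RNAi) animals
fc_scramble = fold_change(22.8, 20.1, 23.0, 20.2)  # scramble siRNA animals
print(fc_rnai / fc_scramble)  # final FC of RNAi over scramble, as in the text
```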
Whole-Mount In Situ Hybridization (WM-ISH)
The animals were relaxed in 2% urethane/HM for 1 min, fixed in 4% PFA prepared in HM for 4 h at room temperature (RT), then washed several times with MeOH before being stored in MeOH at −20 °C. WM-ISH was performed as described in [33]. For double WM-ISH, the Wnt3 riboprobe was labeled with DIG (Sigma, Roche-11277073910) and the Sp5 and GFP riboprobes were labeled with fluorescein (Sigma, Roche-11685619910); the Wnt3-DIG riboprobe was co-incubated with either the Sp5-FLUO or the GFP-FLUO riboprobe during the hybridization step. The Wnt3-DIG riboprobe was first detected with NBT/BCIP (Sigma, Roche-11383213001) and the FLUO-labeled riboprobe was subsequently detected with Fast Red. To stop the NBT/BCIP reaction, the samples were washed several times in NTMT, then incubated in 100 mM glycine, 0.1% Tween (pH 2.2) for 10 min, washed in Buffer I (1× MAB, 0.1% Tween), then incubated in Buffer I supplemented with 10% sheep serum (Buffer I-SS) for 30 min at RT and for a further 1 h in fresh Buffer I-SS at 4 °C. Incubation with the anti-FLUO-AP antibody (1:4000, Roche-1142638910) was carried out at 4 °C overnight. Next, the samples were briefly washed in Buffer I, then in 0.1 M Tris/HCl (pH 8.2) 3× 10 min, and developed with Fast Red (SigmaFAST, F4648, St. Louis, MO, USA). To stop the reaction, the samples were washed several times in 0.1 M Tris/HCl (pH 8.2), fixed in 3.7% formaldehyde for 10 min at RT, rinsed in water and mounted in Mowiol. The co-detection of two riboprobes is technically challenging, as the NBT/BCIP detection of the DIG-labeled riboprobe, normally far more sensitive than the Fast Red detection of the fluorescein-labeled riboprobe, is much less efficient when tissues are treated for Fast Red. Consequently, we first analyzed the expression pattern of each gene separately and subsequently co-detected Wnt3 and Sp5, or Wnt3 and GFP. In these conditions, we found the co-detection highly informative to record context-specific regulations. The plasmids used to produce the riboprobes are listed in Table S3.
Nuclear Extracts (NEs) and Electro-Mobility Shift Assay (EMSA)
NEs were prepared according to [41]. Briefly, 100 Hm-105 or Hv_AEP2 animals were washed rapidly in HM and once in Hypotonic Buffer (HB: 10 mM Hepes pH 7.9, 2 mM MgCl2, 5 mM KCl, 0.5 mM spermidine, 0.15 mM spermine), then placed in a 1 mL glass dounce with 1 mL HB, and 20 strokes were given. After slowly adding (drop by drop) 210 µL of 2 M sucrose, 15 more strokes were given. The extract was centrifuged for 10 min at 3200 rpm at 4 °C; the pellet was washed twice with 800 µL Sucrose Buffer (0.3 M sucrose in HB), resuspended in 50 µL Elution Buffer (10% glycerol, 400 mM NaCl, 10 mM Hepes pH 7.9, 0.1 mM EDTA, 0.1 mM EGTA, 0.5 mM spermidine, 0.15 mM spermine) and incubated for 45 min. The eluate was centrifuged at 4 °C for 20 min at 13,000 rpm and the supernatant was aliquoted and stored at −80 °C. All manipulations were carried out on ice and all buffers contained a protease inhibitor cocktail (Bimake B14012).
Production of Anti-Sp5 Antibodies
Two anti-Sp5 antibodies were generated. A rabbit polyclonal antibody was produced by Covalab (Bron, France) against three peptides: P1 (178-191), NEHHIKEYSEHSQA; P2 (398-411), CDENVMELEVNVEN; and P3 (155-175), PASPISWLFPQNIIQSHPSKV. After four immunizations, the sera were collected from a single rabbit and an ELISA test was performed to check the immunoreactivity. Next, the sera were purified by Covalab with the peptides P1 and P2 to remove any P3 cross-reactivity. The mouse monoclonal antibody was produced by Proteogenix (Schiltigheim, France) against a 6His-tag (MGSHHHHHHSG) coupled to a 218 AA-long Hydra Sp5 fragment (ISPLEQT---YSMSTSI) produced chemically. The Sp5-218 protein (24.5 kDa) was expressed in E. coli and injected into the animals. After four immunizations, spleen cells collected from two mice were fused to myeloma cells. The antibody, produced from one selected clone, was validated by IP analysis.
Cell Culture, Whole Cell Extracts (WCEs) and Western Blotting
The immortalized human embryonic kidney HEK293T cells were cultured in DMEM High Glucose, 10% fetal bovine serum (FBS), 6 mM L-glutamine and 1 mM sodium pyruvate in 10 cm-diameter cell culture dishes (CellStar, Greiner Bio-One 664160, Kremsmünster, Austria). After two days of growth, the cells were collected by scraping, counted, and 15 × 10^4 cells per well were seeded in 6-well plates and grown for 19 h. Next, the cells were transfected with 2 µg of pCS2+empty or pCS2+HySp5 plasmid using the X-tremeGENE HP DNA transfection reagent (Sigma, 6366546001, St. Louis, MO, USA). To prepare the cell extracts 24 h later, the cells were resuspended in 1× PBS before being centrifuged for 3 min at 3000 rpm at 4 °C. After discarding the supernatant, the pellet was resuspended in fresh Lysis Buffer (LB: 50 mM Hepes pH 7.6, 150 mM NaCl, 2.5 mM MgCl2, 0.5 mM DTT, 10% glycerol, 1% Triton X-100, 0.1 mg/mL PMSF, 10% protease inhibitor cocktail (Bimake, B14012, Houston, TX, USA) and a lab-made phosphatase inhibitor cocktail (8 mM NaF, 20 mM β-glycerophosphate, 10 mM Na3VO4)). After a 30 min incubation on ice, the extract was centrifuged at 14,000 rpm for 10 min at 4 °C and the supernatant was aliquoted and stored at −80 °C. Next, 20 µg of extract, either WCE or NE, was diluted with Laemmli loading buffer, boiled for 5 min at 95 °C, loaded onto a 10% SDS-PAGE gel, electrophoresed and transferred onto a PVDF membrane (Bio-Rad 162-0177). The membrane was blocked for 1 h at RT with 5% dry milk in 1× TBS, 0.1% Tween (TBS-T). Anti-Sp5 antibodies were added at a 1:500 dilution and incubated overnight at 4 °C. The membranes were washed 3× 10 min in TBS-T before being incubated for 2 h with the secondary anti-mouse-HRP or anti-rabbit-HRP antibody (1:5000; Promega anti-mouse W4021, anti-rabbit W4011). The membranes were washed in TBS-T 3× 10 min and developed with Western Lightning Plus-ECL reagent (Perkin Elmer NEL104). To produce the Sp5 protein in vitro, the pCS2+empty and pCS2+HySp5 plasmids were incubated using the TNT Quick Coupled Transcription/Translation System (Promega L2080, Madison, WI, USA) and 1 µL was loaded on a 10% SDS-PAGE gel.
Chromatin Immuno-Precipitation and Quantitative PCR (ChIP-qPCR)
ChIP was performed with 300 Hm-105 or Hv_AEP2 animals fixed in 1% Formaldehyde Solution (Thermo Scientific 28906) for 15 min, then transferred into a Stop Solution (Active Motif 103922) for 3 min and briefly washed in cold HM before being resuspended in 5 mL Chromatin Prep Buffer containing 0.1 mM PMSF and 0.1% protease inhibitor cocktail (Active Motif 103923). The samples were transferred to pre-cooled 15 mL glass dounces and crushed with 30 strokes. The samples were incubated on ice for 10 min before being centrifuged at 4 °C for 5 min at 1250 rcf. Each pellet was resuspended in 1 mL Sonication Buffer (SB: 1% SDS, 50 mM Tris-HCl pH 8.0, 10 mM EDTA pH 8.0, 1 mM PMSF, 1% protease inhibitor cocktail) and incubated on ice for 10 min. The chromatin was then sonicated with a Diagenode Bioruptor Cooler (sonication conditions: Amp 25%, 20 s on, 30 s off, 2 cycles). The samples were centrifuged at 14,000 rpm for 10 min at 4 °C, the supernatant was sonicated again (conditions as above, but 3 cycles) and centrifuged at 14,000 rpm for 10 min at 4 °C, and the supernatant was recovered. After measuring the DNA with Qubit, 10 µg of the sonicated chromatin was diluted (1:5) in ChIP Dilution Buffer (DB: 0.1% NP-40, 0.02 M Hepes pH 7.3, 1 mM EDTA pH 8.0, 0.15 M NaCl, 1 mM PMSF, 1% protease inhibitor cocktail) and incubated with 1 µg of either the monoclonal or the polyclonal α-Sp5 antibody, or with pre-immune serum, overnight at 4 °C on a rotating wheel. The sample was then loaded onto a ChIP-IT Protein G Agarose Column (Active Motif 53039, Carlsbad, CA, USA), incubated on a rotating wheel for 3 h at 4 °C and washed six times with 1 mL Buffer AM1 before being eluted with 180 µL Buffer AM4. Then, 1 M NaCl and 3× TE buffer were added to perform de-crosslinking overnight at 65 °C. Next, RNase A (10 µg/µL) was added for 30 min at 37 °C, followed by Proteinase K (10 µg/µL) for 2 h at 55 °C. Finally, the MinElute PCR purification kit (Qiagen, 28004, Hilden, Germany) was used to purify the samples. DNA was eluted in 30 µL, and 1 µL per condition was used for qPCR.
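For the ChIP-qPCR readout, enrichment of each promoter region is commonly expressed relative to the pre-immune serum control. The sketch below is a minimal illustration with hypothetical Ct values; the authors do not specify their exact quantification formula, and equal input chromatin and ~100% PCR efficiency are assumed.

```python
def chip_enrichment(ct_ab, ct_preimmune):
    """Fold enrichment of an anti-Sp5 ChIP over the pre-immune serum ChIP,
    assuming equal input chromatin and ~100% PCR efficiency."""
    return 2.0 ** (ct_preimmune - ct_ab)

# Hypothetical Cts for two promoter regions amplified after ChIP:
print(chip_enrichment(26.5, 30.2))  # proximal region: ~13-fold enriched
print(chip_enrichment(31.0, 31.1))  # distal region: ~1-fold, not enriched
```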
Imaging
Live imaging to analyze the dynamics of mCherry and GFP fluorescence, as well as the imaging of immunofluorescence on whole animals, was performed on a Leica DM5500 microscope (Wetzlar, Germany). To quantify GFP fluorescence, the acquired data were analyzed with the Fiji ImageJ2 software. Optical sections were acquired using a spinning-disc confocal CSU unit (Yokogawa, Japan) mounted on an inverted Nikon Ti microscope (Tokyo, Japan), with the bright-field and GFP channels merged. A confocal LSM780 Zeiss microscope (Oberkochen, Germany) was used to image the immunostained hypostome region of transgenic animals, as well as the budding region of live transgenic animals incubated in 1 mM linalool in HM for 10 min prior to imaging, then kept in the linalool solution between two coverslips separated by a 0.025 mm spacer. WM-ISH pictures were acquired with an Olympus SZX10 microscope.
Differential Sp5 Regulation in the Epidermal and Gastrodermal Layers along the Body Axis
To monitor the regulation of Sp5 expression in the epidermal and gastrodermal epithelial layers, we produced a tandem reporter construct, HyAct-1388:mCherry_HySp5-3169:GFP, where the Hydra Actin promoter (HyAct, 1388 bp) drives the ubiquitous expression of mCherry and the Hydra Sp5 promoter (HySp5, 3169 bp) drives eGFP expression (Figures 1B and S2). After injecting the reporter construct into Hv_AEP2 embryos, two transgenic lines were obtained by clonal amplification, one expressing the reporter in the epidermis (epidermal HySp5-3169:GFP) and the other in the gastrodermis (gastrodermal HySp5-3169:GFP) (Figures 1C,D and S3). Next, by qPCR analysis we compared the expression levels of Sp5, GFP and Wnt3 in the apical, central body column and basal regions of each transgenic line (Figure 1E,F). As expected, in both lines we found Wnt3 expressed exclusively apically and Sp5 expressed in all regions but at maximal levels apically. By contrast, GFP appears differentially regulated along the two layers of the body axis, its expression rapidly declining from the apical region to the upper gastric column in epidermal HySp5-3169:GFP animals, while in gastrodermal HySp5-3169:GFP animals it is similarly high in the apical and upper body column regions and low in the basal region (Figure 1E,F). These results indicate that the Sp5-3169 promoter is differentially regulated along the body column in the epidermal and gastrodermal layers.
This result was confirmed at the protein level by recording live GFP fluorescence (Figures 1G,H, S3 and S4) or by immunodetecting GFP (Figures 1I,J and S3). In epidermal transgenic animals, GFP fluorescence and GFP protein are detected in the epidermal layer over the whole hypostome, the tentacle ring, the proximal part of the tentacles and the upper body column (Figures 1G,I, S3A,C and S4A,C). In gastrodermal transgenic animals, GFP fluorescence and GFP protein extend over a broad domain in the gastrodermal layer, from the apical region throughout the body column (Figures 1H,J, S3B,D and S4B,D). However, the tip of the hypostome is free of gastrodermal GFP fluorescence and GFP protein (see enlarged head in Figure 1H, arrows in Figure 1J), an area where Wnt3 expression is maximal and endogenous Sp5 expression is minimal [27,33].
Together with the colocalization of endogenous Sp5 transcripts and immunodetected HySp5-3169:GFP (Figure S3C,D), these analyses show that 3169 bp of the Sp5 promoter are sufficient to recapitulate the previously identified endogenous Sp5 expression pattern in the apical region and along the body column [33], and they highlight previously unrecognized differences in expression between the epidermis and gastrodermis.
From the GFP and mCherry fluorescence profiles, we produced a relative GFP intensity profile for each animal that corresponds to the GFP/mCherry ratio at any point along the body axis (Figures 1K,L and S4C-F). By superimposing the profiles of 10 animals, we concluded that GFP fluorescence in the epidermis of live animals is graded apical-to-basal, from 100% to 70% of the body length, then maintained at low levels between the 70% and 10% positions (Figures 1K and S4E). In contrast, in gastrodermal HySp5-3169:GFP animals, the GFP levels are low at the most apical end (position 100-90%), reach a high plateau from the 90% position down to 40% of the body length and then decrease towards the basal end (Figures 1K and S4F). For each transgenic line, we noted that the fluorescence intensity profiles of GFP in live animals corresponded to the profile of GFP immunodetected in the corresponding fixed samples (Figure 1L). In conclusion, a comparative analysis of GFP transcripts, GFP fluorescence and GFP protein converges to identify distinct patterns in the epidermis and gastrodermis, both apically and along the body axis, indicating that Sp5 is differentially regulated in these two layers.
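The relative GFP intensity profile is simply the pointwise GFP/mCherry ratio resampled along the oral-aboral axis. The sketch below is a minimal stand-in for the Fiji line-scan quantification, with hypothetical intensity arrays; it is an illustration, not the authors' measurement code.

```python
import numpy as np

def relative_gfp_profile(gfp, mcherry, n_points=100):
    """GFP/mCherry ratio resampled to percent body length (100% = apical end)."""
    ratio = np.asarray(gfp, float) / np.asarray(mcherry, float)
    axis = np.linspace(100, 0, len(ratio))   # apical-to-basal positions
    grid = np.linspace(100, 0, n_points)
    # np.interp needs increasing x, so flip, interpolate, flip back
    return grid, np.interp(grid[::-1], axis[::-1], ratio[::-1])[::-1]

# Hypothetical line-scan intensities for one epidermal transgenic animal:
gfp = [900, 820, 500, 210, 150, 140, 130, 120]
mcherry = [1000, 990, 1010, 1000, 980, 1000, 990, 1000]
pos, profile = relative_gfp_profile(gfp, mcherry)
```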
Sp5 Regulation after Bisection Is Systemic in Gastrodermis but Localized in Epidermis
Next, we analyzed how HySp5-3169:GFP is regulated in developmental contexts. During regeneration, epidermal HySp5-3169:GFP animals show no or low GFP expression in their apical-regenerating (AR) tips fixed at 8 and 12 h post-amputation (hpa) (Figures 2A and S5A). Then, at 24 hpa, GFP expression is detected. At 48 hpa, the tentacle rudiments that emerge do not express GFP, whereas the tip of the developing hypostome strongly expresses GFP; at 72 hpa, the epidermal GFP pattern is typical, with maximal expression at the root of the tentacles. In contrast, in gastrodermal HySp5-3169:GFP animals, GFP is detected immediately after bisection, presumably artifactually in injured tissue, then at high levels at 8 and 12 hpa in a broad domain encompassing the AR tips (Figures 2B and S5B). At 24 hpa, the gastrodermal GFP expression becomes restricted to the apical area; at 48 hpa, the emerging tentacles and the tip of the future hypostome are free of GFP expression. At 72 hpa, apical GFP expression is mainly present in the tentacle ring, absent from the tentacles and hypostome, and present at a low level in the peduncle region. Along the body column of these animals, the gastrodermal GFP expression either forms stripes alternating higher and lower levels or is diffuse throughout the animal.
[Figure 2E-H: Live imaging of budding HySp5-3169:GFP transgenic animals, either epidermal (E) or gastrodermal (F), pictured at the indicated stages with the Olympus SZX10 microscope ((E,G), GFP fluorescence only) or the Zeiss LSM780 microscope ((F,H), GFP and mCherry fluorescence). On the parental polyp, yellow arrowheads point to the "budding belt" that forms in the budding zone; on the developing buds, red arrows point to the developing apical region, red arrowheads to the differentiating basal region and white arrowheads outlined in red to fully differentiated basal discs. Scale bar: 250 µm.]
During basal regeneration, GFP expression is excluded from the basal-regenerating (BR) half in epidermal HySp5-3169:GFP animals at any time point (Figures 2A and S5C). At 48 hpa, most animals have differentiated a new basal disc and GFP expression is slightly up-regulated in the peduncle region. In gastrodermal HySp5-3169:GFP animals, the immediate GFP signal observed in the BR tips is presumably artifactual, linked to injury as in the AR tips (Figures 2B and S5D). At subsequent stages, GFP expression is quite strong in the body column, in continuity with the apical domain, but becomes weaker at 24 hpa. In the BR tips, gastrodermal GFP expression is transient, becoming low or undetectable in most animals at 24 hpa. A new basal disc, free of GFP, is usually formed at 48 hpa, whereas some GFP expression remains in the adjacent peduncle region.
Regarding GFP fluorescence, it is detected during apical regeneration at high levels in both epithelial layers at 8, 12 and 24 hpa, extending along the AR half except in the peduncle region (Figures 2C,D and S6A,C). Subsequently, as the tentacle rudiments appear, epidermal GFP fluorescence becomes maximal in the apical region, while gastrodermal GFP fluorescence disappears from the tip of the forming head and becomes predominant in the tentacle ring and upper body column, resembling the homeostatic pattern. As expected, in epidermal HySp5-3169:GFP animals regenerating their basal half, no GFP fluorescence is observed except in the apical region of origin (Figures 2C and S6B); in gastrodermal HySp5-3169:GFP animals, GFP fluorescence is widely distributed along the body axis but excluded from the differentiating basal disc, as observed at 48 hpa (Figures 2D and S6D).
During budding, GFP fluorescence is detected throughout the process in the epidermal and gastrodermal layers of HySp5-3169:GFP animals, but with different patterns (Figure 2E-H). In the budless parental polyp, epidermal GFP fluorescence is first visible as a patch preceding bud formation (stage 1, Figure 2E), then in the budding belt, where GFP fluorescence persists until stage 6, gradually forming well-defined boundaries (Figure 2E,F). In the gastrodermis, GFP fluorescence is also detected in the budding belt, but with diffuse boundaries, in continuity on the apical side with the GFP expression domain of the body column (Figure 2G,H). In the growing bud, GFP fluorescence is ubiquitous in both layers, becoming predominantly apical in the epidermis from stage 6 onwards. At stages 9 and 10, when the bud is mature and ready to detach, the epidermal and gastrodermal GFP fluorescence patterns correspond to those observed in adult polyps, apically restricted in the epidermis and diffuse along the axis in the gastrodermis (Figure 2E,G).
In conclusion, the HySp5-3169:GFP transgenic lines highlight the temporal and spatial layer-specific regulations of Sp5 linked to regeneration and budding. During apical regeneration, Sp5 is up-regulated in the epidermis at an early-late phase (24 hpa) and not at all during basal regeneration. In contrast, gastrodermal GFP expression is broadly enhanced at an early stage, whatever the type of regeneration, apical or basal. This widespread increase in GFP expression in the gastrodermis reflects a systemic Sp5 response to amputation, specifically driven by the Sp5 promoter, as such an increase is not observed with mCherry driven by the Actin promoter. Similarly, during budding, Sp5 expression is tightly regulated in the epidermis, but rather diffuse and systemic in the gastrodermis.
Layer-Specific Modulations of Sp5 Expression upon Alsterpaullone (ALP) Treatment
We then compared the phenotypic changes induced by the GSK3β inhibitor Alsterpaullone (ALP), which in the H. vulgaris Zürich L2 strain (Hv_ZüL2) leads to an increase in the level of nuclear β-catenin in the body column and the subsequent activation of Wnt3/β-catenin signaling [23] (Figure 3A). As a result, a two-day ALP treatment induces the formation of multiple ectopic tentacles along the body column of Hv_ZüL2 or Hv_Basel animals [23,33]. However, in animals from the Hv_AEP2 strain, a two-, four- or even seven-day ALP treatment only leads to the transient and partial development of a few ectopic tentacles along the body column, likely as a result of the lower sensitivity of Hv_AEP2 animals to drug treatments [42]. Nevertheless, after a four-day treatment, we noticed additional morphogenetic changes such as a striking reduction in the size of both the original tentacles and the hypostome at the apical pole, together with an enlargement of the upper body column that appears globally "swollen", the progressive disappearance of the basal disc and a narrowing of the basal extremity (Figures 3B and S7-S9). Given the positive feedback loop that operates between Wnt3/β-catenin, Sp5 and Zic4 and the negative one between Sp5 and Wnt3, we investigated how Sp5 expression is modulated in each epithelial layer when β-catenin signaling is constitutively activated. We thus exposed non-transgenic and HySp5-3169:GFP transgenic animals to ALP for two or four days and analyzed the concomitant modulations of Sp5 and Wnt3, and of GFP and Wnt3 (Figures 3B, S7 and S8).
After two days, we observed in all conditions a transient extension of the apical Sp5 and HySp5-3169:GFP expression domain (i.e., positive for both Sp5 and GFP) below the head, forming a second tentacle ring from which, in some animals, ectopic tentacles transiently emerge. In the body column, Sp5 is globally up-regulated along the gastrodermis, while about half of the epidermal HySp5-3169:GFP animals form circular figures along the upper body column, possibly outlining regions where ectopic structures are transiently induced (Figures 3B and S8). We also noted, in both epidermal and gastrodermal HySp5-3169:GFP animals, a high level of GFP expression close to the basal extremity, including a GFP+ ring just above the basal disc.
After four days, most of the two-day ALP-induced changes had vanished: Sp5 and GFP were no longer detected apically, neither in the epidermis nor in the gastrodermis; the Sp5/GFP epidermal figures along the body column had disappeared and the global epidermal GFP expression was dramatically reduced. In the gastrodermis, HySp5-3169:GFP expression remained present in the central part of the body column in most animals (Figures 3B and S8). In summary, this analysis shows a similar silencing of epidermal and gastrodermal HySp5-3169:GFP in the apical region, but striking differences along the body column, with HySp5-3169:GFP transiently enhanced and forming circular figures in the epidermis, while remaining diffuse and long-lasting in the gastrodermis.
Layer-Specific Modulations of Wnt3 Expression Induced by ALP Treatment
In parallel, we tested the putative layer-specific modulations of Wnt3 by exposing to ALP animals of the epidermal and gastrodermal HyAct-1388:mCherry_HyWnt3-2149:GFP transgenic lines (named HyWnt3-2149:GFP), in which GFP expression is under the control of 2149 bp of the Wnt3 promoter [27,33] (Figure S9). With regard to Wnt3 regulation after a two-day ALP treatment, epidermal HyWnt3-2149:GFP expression persists at the tip of the hypostome while being strongly up-regulated in the tentacle ring and in tentacle roots, whereas small dots expressing Wnt3 and HyWnt3-2149:GFP become visible along the body column. After a four-day ALP treatment, as expected, Sp5 is strongly down-regulated, while a dense network of Wnt3 or HyWnt3-2149:GFP dots is established along the entire body column in both layers (Figures 3B and S7-S9). We also noted a strong overall increase in gastrodermal HyWnt3-2149:GFP expression along the body column, indicating that the transactivation driven by the HyWnt3-2149 promoter in the gastrodermis is much greater than that driven by the full set of regulatory sequences of the endogenous HyWnt3 gene.
In summary, the analysis of GFP expression in these four transgenic lines helps identify the layer-specific regulation of Sp5 and Wnt3 in response to the ALP-induced activation of Wnt/β-catenin signaling (Figure 3C). The monitoring of GFP fluorescence in HySp5-3169:GFP animals confirmed these layer-specific differences, i.e., a transient Sp5 up-regulation in the body column after two or four days of ALP exposure, followed by a down-regulation when Wnt3/β-catenin signaling is highly active, mimicking the situation at the tip of the hypostome. Indeed, after a seven-day ALP treatment, HySp5-3169:GFP fluorescence is restricted to the modified apical and basal extremities in epidermal transgenic animals and shifted to the lower half of the body in gastrodermal ones (Figure 3D).
β-Catenin Knock-Down Differentially Impacts Sp5 Expression in Epidermis and Gastrodermis
To test a possible layer-specific regulation of Sp5 when β-catenin signaling is reduced, we knocked down β-catenin in HySp5-3169:GFP transgenic animals (Figure 4A). As early as 24 h after the first electroporation (EP), we found the normalized levels of β-catenin transcripts significantly decreased, by about two-fold in each layer of the apical region and by two-fold along the gastrodermis of the body column. Surprisingly, in gastrodermal HySp5-3169:GFP animals exposed to scramble siRNAs, β-catenin transcript levels increase steadily after each EP in the apical region and body column. This observation suggests that the EP procedure leads to an unspecific stress-induced response that either activates β-catenin regulatory sequences and/or stabilizes β-catenin transcripts (Figures 4B and S10).
With regard to the Sp5 levels, we did not detect any specific modulation, with the exception of a transient decrease one day post-EP2 along the body column of gastrodermal HySp5-3169:GFP animals (Figures 4B and S11). Concerning the GFP levels, we recorded in the apical and body column regions of epidermal HySp5-3169:GFP animals a progressive decrease, below 25% in the apical region one day after EP3, a result that likely reflects the weaker activation of the HySp5-3169 promoter in the epidermis when β-catenin expression is decreased. We also noted in animals exposed to scramble siRNAs a two-fold increase in the level of epidermal GFP transcripts along the body column, again pointing to an EP-induced stress response. The consequences of β-catenin knock-down are different in the gastrodermis and are actually quite limited, with a transient increase in the HySp5-3169:GFP transcript level in the apical region one day after EP2 and no significant modulation in the body column. It should be noted that the non-specific EP-induced increase in GFP observed in the epidermis is not observed in the gastrodermis. In summary, the β-catenin RNAi procedure efficiently reduces the level of β-catenin transcripts, up to two-fold in both the epidermis (mainly apically) and the gastrodermis (apically and in the body column). This β-catenin reduction does not push the levels of Sp5 transcripts outside the −1.41/+1.41-fold range (except at one time point in the gastrodermal body column). Notably, it strongly affects the HySp5-3169:GFP transcript levels in the epidermis, where a two-to-four-fold reduction (apical and body column) is noted. Such an effect is not observed in the gastrodermis. Finally, we noted that the RNAi procedure produces an EP-induced stress response in the body column, leading to an unspecific increase in β-catenin levels in the gastrodermis and in HySp5-3169:GFP levels in the epidermis.
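The −1.41/+1.41-fold window used above corresponds to |log2 FC| ≤ 0.5 (since 2^0.5 ≈ 1.414); a one-liner makes the flagging explicit. This is a sketch with the cutoff inferred from the quoted bounds, not a statement of the authors' statistical criterion.

```python
import math

def is_modulated(fc, log2_cutoff=0.5):
    """True if a fold change falls outside the -1.41/+1.41 window (2**0.5 = 1.414...)."""
    return abs(math.log2(fc)) > log2_cutoff

print(is_modulated(1.3))   # False: within the window, not called
print(is_modulated(0.45))  # True: >2-fold down-regulation
```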
β-catenin Knock-Down Leads to Formation of Bud-like Structures Expressing Gastrodermal Sp5
We previously showed that a transient knock-down of β-catenin triggers a size reduction in Hv_Basel animals, as well as the formation of "bud-like structures", which grow from the body column similarly to buds but without forming a complete head with a fully differentiated ring of tentacles [33]. These bud-like structures are present in 100% of Hv_Basel animals one day after EP3 (Figure S11). Remarkably, as early as two days after EP1, well-defined regions along the parental polyp already strongly express Sp5, even though the bud-like structures are not yet morphologically visible (Figures 4C and S11).
Concerning GFP and mCherry fluorescence in epidermal HySp5-3169:GFP animals, the control animals exposed to scramble siRNAs show the expected pattern of high GFP fluorescence in the apical region and low fluorescence along the body column. Meanwhile, the β-catenin(RNAi) animals exhibit a loss of GFP fluorescence two days post-EP2 in well-defined areas of the apical region, together with a globally reduced GFP fluorescence along the body column, except for some GFP-positive patches (Figures 4D and S12A). Three days post-EP3, newly formed bud-like structures become visible in 30% to 60% of the animals, and they never show any epidermal GFP fluorescence. We confirmed these findings by immunodetecting GFP and mCherry in epidermal HySp5-3169:GFP animals knocked down for β-catenin.
We noted the loss of epidermal GFP expression in large parts of the apical region, the presence of GFP-positive patches along the body column and the lack of GFP protein in the bud-like structures (Figures 4E and S13). In gastrodermal HySp5-3169:GFP animals knocked down for β-catenin, the gastrodermal layer remains GFP-fluorescent in the tentacle ring and along the body column. The bud-like structures are all GFP-fluorescent, with a positive signal in the presumptive apical region and often a patchy pattern (Figures 4D and S11B). We confirmed these findings by immunodetecting GFP and mCherry six days post-EP2 in these animals (Figures 4E and S13).
In summary, β-catenin(RNAi) leads to the formation of bud-like structures in both Hv_Basel and Hv_AEP2 animals, a phenotype observed with a higher penetrance in the former (100% post-EP3) than in the latter (27% to 60%). These bud-like structures induced by β-catenin(RNAi) show low HySp5-3169:GFP expression in the epidermis but high expression in the gastrodermis, in contrast to what is observed in natural buds (Figure 2E-H). This localized high level of Sp5 might explain why bud-like structures do not differentiate a hypostome or tentacle ring. These results again point to a differential regulation of HySp5-3169:GFP by β-catenin signaling in the epidermis and gastrodermis.
Negative Auto-Regulation of Sp5 in the Epidermis
To determine whether the transcription factor Sp5 regulates its own expression, we knocked down Sp5 in HySp5-3169:GFP transgenic animals and monitored in each layer the changes in GFP, Wnt3 and Sp5 expression, as well as the changes in GFP fluorescence, at different time points after EP1 and EP2. We anticipated that after Sp5(RNAi), GFP expression would be increased if Sp5 autoregulation was negative and decreased if it was positive. As previously reported, we noticed some unspecific EP-induced increases in transcript levels in animals exposed to scramble siRNAs; here, this corresponds to an unspecific increase in Wnt3 levels in both layers, maximal in the body column at 8 h post-EP1 and post-EP2. Upon Sp5(RNAi), we found in epidermal HySp5-3169:GFP animals GFP transcripts more abundant 16 and 24 h after EP1 and EP2, two-to-four-fold in the apical region and above four-fold in the body column (Figures 5A and S14). This GFP modulation was not observed in gastrodermal HySp5-3169:GFP animals. Regarding Wnt3 and Sp5 transcript levels, we did not detect any significant modulation by qPCR analysis, neither in the epidermal nor in the gastrodermal line. The analysis of the GFP expression pattern in Sp5(RNAi) epidermal HySp5-3169:GFP transgenic animals confirmed the above results, with an increase in GFP levels at the same time points (Figures 5B and S15A). At the protein level, we first noted at 8 h post-EP1 some weak epidermal GFP fluorescence along the body column of some control and Sp5(RNAi) animals, possibly linked to the EP-induced activation of the HySp5-3169 promoter. At 16 h post-EP1, we recorded a marked and specific increase in epidermal GFP fluorescence along the body column of Sp5(RNAi) animals, which remained high up to 24 h post-EP2 (Figures 5C and S16A). However, at two days post-EP2, when the ectopic epidermal GFP fluorescence is still detected, we found the GFP transcript levels in Sp5(RNAi) epidermal HySp5-3169:GFP transgenic animals significantly reduced in the apical and body column regions, implying that the Sp5(RNAi)-induced up-regulation of HySp5-3169:GFP is transient (Figure 5D). In gastrodermal HySp5-3169:GFP animals, we did not detect any global or localized modulation of GFP expression after Sp5(RNAi) (Figures 5A,B, S14B and S15B). We also did not record any significant change in GFP fluorescence after EP1 (Figures 5C and S16B). However, after EP2, we noted in about half of the animals ectopic spots of GFP fluorescence in the lowest part of the body column or in the tentacles, indicating that Sp5-negative autoregulation might also take place in the gastrodermis, albeit in a more spatially restricted manner than in the epidermis.
Up-regulation of β-catenin after Sp5(RNAi) in Epidermis and Gastrodermis
We also performed a concomitant qPCR analysis of Sp5, Wnt3 and β-catenin transcript levels in HySp5-3169:GFP animals knocked down for Sp5, at two days post-EP2. In epidermal HySp5-3169:GFP animals, we detected a significant increase in β-catenin transcripts in the body column and basal regions, in the absence of significant modulations of Sp5 and Wnt3 levels (Figure 5D). In gastrodermal HySp5-3169:GFP animals, we likewise noted a significant increase in the levels of β-catenin transcripts in these two regions, alongside a slight reduction in the Sp5 and GFP transcript levels (Figure 5D). This up-regulation of β-catenin upon Sp5(RNAi), detected along the body column to a similar extent in both layers, is expected, since the negative regulation exerted by Sp5 on Wnt/β-catenin expression is reduced when Sp5 is down-regulated, even transiently. Such regulation is, however, not detected in the apical region, consistent with previous results [33].
Despite the lack of a sustained down-regulation of Sp5 transcripts after Sp5(RNAi), we conclude that the two-step RNAi procedure we applied is effective, highlighting the dynamic regulation of Sp5 in each epithelial layer, with a 24 h-long up-regulation of HySp5-3169:GFP along the epidermis after each exposure to Sp5 siRNAs. We did not record such a modulation of HySp5-3169:GFP in the gastrodermis. We interpret the up-regulation of the epidermal GFP transcripts after Sp5(RNAi) as the consequence of knocking down the negative autoregulation exerted by the Sp5 transcription factor on its own expression. This up-regulation is, however, only transient, as one day later the GFP expression levels decreased by about two-fold, probably as a consequence of the up-regulation of β-catenin expression that takes place in both layers of the body column, leading to a transient up-regulation of Sp5 between 24 and 48 h after EP2 and producing the Sp5 protein at a level where it represses the Sp5 promoter, hence decreasing the GFP transcript levels. However, two days after EP2, an ectopic GFP fluorescence was still visible in the epidermis (Figures 5E and S17), in keeping with the long lifespan of the GFP protein [43].
Identification of Five Active Sp5-Binding Sites within the Proximal Hydra Sp5 Promoter
The Hydra Sp5 transcription factor belongs to the Sp/KLF family, a class of DNA-binding proteins that bind GC-rich boxes or GT/CACC elements through their three zinc finger (ZF) domains [32,44]. We previously identified, by ChIP-seq analysis performed with extracts from HEK293T cells, five Sp5-binding sites (Sp5-BS) and five TCF-binding sites (TCF-BS) within 2966 bp of the Sp5 promoter, clustered in two adjacent regions named PPA and PPB in the vicinity of the Sp5 transcriptional start site [33]. To identify active Sp5-binding sites in the Hydra Sp5 promoter sequences, we raised antibodies against the Hydra Sp5 protein with the aim of performing a ChIP-qPCR analysis of the HySp5 promoter using Hydra extracts and comparing the Sp5-binding sites with those previously identified in human cells expressing HySp5. We raised two antibodies against HySp5, one monoclonal and the other polyclonal, designed to target regions of HySp5 that do not contain evolutionarily conserved domains, such as the Sp box, the Buttonhead box and the ZF DNA-binding domain (Figures 6A and S18A). The two Sp5 antibodies specifically recognize the HySp5 protein, whether as the HySp5-218 recombinant protein (24.5 kDa) used to raise the Sp5 monoclonal antibody, as the full-length protein produced in vitro with the TNT reticulocyte transcription-coupled-translation system, or as the protein expressed in transfected HEK293T cells (Figures 6B and S18B,C).
In Hv_AEP2 extracts, the monoclonal anti-Sp5 antibody detects the Sp5 protein at higher levels in the apical region than in the body column, as expected. In contrast, in three independent experiments, the polyclonal anti-Sp5 antibody recognizes a band of the appropriate size, but exclusively in nuclear extracts (NEs) prepared from the lower body column and not from the apical region (Figure S17C). To test a possible cross-reactivity with the closely related Sp4 protein, we assayed the polyclonal α-Sp5 antibody on the TNT-produced Hydra Sp4 protein but did not detect any band. We suspect that the polyclonal α-Sp5 antibody detects the Sp5 protein but also cross-reacts with an unidentified Sp/KLF protein predominantly expressed in the body column and basal half of Hydra.
Next, we tested both the monoclonal and the polyclonal α-Sp5 antibodies in a ChIP analysis of the Sp5-bound regions. We first used Hm-105 extracts to assay the amplification of 15 regions along the 3169 bp of the Sp5 promoter and 5′ UTR sequences after ChIP (Figures 6C and S18D). Of these 15 regions, we found only two 100 bp-long regions specifically enriched with either antibody, but not with the pre-immune serum, namely the overlapping PP4 (−135/−36) and PP5 (−71/+29) regions located in the proximal promoter of Sp5. Interestingly, the enrichment of the PP4 and PP5 regions by ChIP-qPCR is similar when extracts from Hv_AEP2 animals are used (Figures 6D and S18D). Moreover, these two regions were also identified when the ChIP-qPCR analysis was performed with extracts from human cells expressing HySp5 [33]. Each region contains three putative Sp5-BS, one of them (Sp5-BS3) being present in both PP4 and PP5 (Figure 6E,F).
To test whether these putative Sp5-BS are functional in Hydra, we designed two double-stranded oligonucleotides (ds-DNAs) for an Electro-Mobility Shift Assay (EMSA): (1) PPA (−135 to −67), which encompasses the identical Sp5-BS1 and Sp5-BS2 motifs, and (2) PPB (−71 to +2), which contains the distinct Sp5-BS3 and Sp5-BS4 motifs (Figure 6E-G). When Hydra NEs were incubated with biotin-labeled Sp5 ds-DNAs, we recorded a mobility shift, with two retarded bands for PPA and two distinct bands for PPB, no longer visible in the presence of a 200-fold excess of unlabeled oligonucleotides (Figure 6G). The mutation of Sp5-BS1 and Sp5-BS2 in the PPA region (CCGCCT -> CTTCCT) did not cancel the shift, but rather accentuated it. In contrast, when Sp5-BS3 (GCGCCA -> GTTCCA) and Sp5-BS4 (AGGCGT -> ATTCGT) in the PPB region were mutated, the shift almost disappeared. We therefore concluded that HySp5 likely binds the putative Sp5-binding sites in the PPA and PPB regions, with higher specificity in the latter; these results support the hypothesis that in Hydra, the Sp5 transcription factor is involved in Sp5 autoregulation.
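A simple way to locate candidate Sp5-binding sites like those tested above is to scan a promoter sequence for the GC-box variants on both strands. The sketch below uses the three hexamers quoted in the text (CCGCCT, GCGCCA, AGGCGT) and a toy sequence, not the actual HySp5 promoter; treating these hexamers as exact-match motifs is a simplifying assumption.

```python
SP5_MOTIFS = ["CCGCCT", "GCGCCA", "AGGCGT"]  # hexamers quoted in the text

def revcomp(s: str) -> str:
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan_promoter(seq: str, motifs=SP5_MOTIFS):
    """Yield (position, strand, motif) for each exact hit on either strand."""
    seq = seq.upper()
    for m in motifs:
        for probe, strand in ((m, "+"), (revcomp(m), "-")):
            start = seq.find(probe)
            while start != -1:
                yield start, strand, m
                start = seq.find(probe, start + 1)

# Toy sequence (not the real promoter): one forward and one reverse-strand hit.
print(list(scan_promoter("TTCCGCCTAAATGGCGCAA")))
```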
The Sp5 Proximal Promoter Is Involved in Sp5-Negative Autoregulation
To determine whether some of these five proximal Sp5-binding sites are indeed involved in Sp5 auto-regulation, we tested these sequences in an ex vivo transactivation assay system (Figure 7A). We prepared seven reporter constructs where the expression of luciferase is driven either by the full HySp5 promoter (HySp5-2992:luciferase), by a shorter version where 164 bp of the proximal sequences are deleted (HySp5-2828:luciferase), or by the full HySp5 promoter where one of the five HySp5-BS is mutated (HySp5-2992-mBS1:luciferase, -mBS2, -mBS3, -mBS4 and -mBS5). Each of these reporter constructs was co-expressed in HEK293T cells either with the full Sp5 protein under the control of the CMV promoter (CMV:HySp5-420) or with a truncated version of the Sp5 protein lacking the DNA-binding domain (CMV:HySp5-∆DBD). In conditions where Sp5 is either not expressed or expressed only in its truncated version (HySp5-∆DBD), we recorded low luciferase activity, consistent with the fact that the HySp5-2992 promoter is poorly active in HEK293T cells [33]. In contrast, in the presence of the HySp5-420 protein, we measured a 7-fold higher level of activity of the HySp5-2992 promoter and obtained similar levels when the HySp5-BS1, HySp5-BS2 and HySp5-BS4 sites were mutated (Figure 7A). Surprisingly, however, when the proximal sequences are completely deleted, the transactivation levels are more than doubled, indicating that these sequences actively repress the activity of the HySp5-2992 promoter in the presence of the HySp5-420 protein. When the HySp5-BS3 and HySp5-BS5 sites are mutated, the activity is lower than that recorded with the complete HySp5-2992 promoter, indicating that either these two sites play a positive role in the full activity of the HySp5 promoter or they restrict the repressive activity of the proximal sequences. These results show that the HySp5 promoter is subject to complex regulation, with a clear Sp5-dependent repressive role of the proximal sequences and an enhancing role of the more upstream ones.
The Zic4 Transcription Factor Positively Regulates Sp5 Expression
We recently showed that the Hydra transcription factor Zic4 (HyZic4), which is involved in the differentiation of tentacles and the maintenance of their identity, is a downstream target gene of HySp5 [34]. We considered the possibility that Zic4 also regulates Sp5 expression in a feedback loop. We first searched for the presence of Zic-binding sequences (Zic-BS) as deduced from those identified in vertebrate or non-vertebrate gene promoters [45] (Table 1). We identified two putative Zic-BS in the HySp5-2992 upstream sequences at positions −670/−659 and −391/−369 (Table 1, Figure S2), as well as two putative Zic-BS in the HyWnt3-2149 upstream sequences and five in the HyZic4 ones (Table 1, Figure S20). To test whether HyZic4 regulates HySp5, we expressed in HEK293T cells the HySp5-2992:luciferase construct together with either the full HyZic4 protein (CMV:Zic4-431) or a truncated form lacking its DNA-binding domain (CMV:HyZic4-∆DBD) (Figure 7B). In the presence of HyZic4, the HySp5-2992:luciferase activity increases almost 50-fold when human β-catenin is co-expressed and over 50-fold when human β-catenin is not co-expressed. Remarkably, the luciferase activity becomes basal when the Zic4 DNA-binding domain is deleted, indicating that HyZic4 can enhance HySp5 expression by directly binding to its promoter, independently of β-catenin.
Table 1. Zic-binding sites present in chordate genes and in the upstream sequences of the Sp5, Wnt3 and Zic4 Hydra genes. CI: Ciona intestinalis (ascidian); HR: Halocynthia roretzi (ascidian); HS: Homo sapiens; HV: Hydra vulgaris; MM: Mus musculus.

Next, we tested the level of HyZic4 promoter activity in HEK293T cells when Sp5 is co-expressed; the HyZic4 promoter contains two putative Sp5-binding sites [34]. We measured a 10-fold increase in the HyZic4-3505:luciferase activity when the full Sp5 protein (CMV:HySp5-420) is co-expressed, an increase no longer detected when the Sp5 protein lacks its DNA-binding domain (CMV:HySp5-∆DBD) (Figure 7C). As stated previously, this increase is observed at similar levels in the presence or absence of human β-catenin co-expression. This result indicates that Sp5 can significantly enhance HyZic4 expression through direct DNA binding.
Similarly, we also measured a strong Zic4 auto-activation (~50-fold), which requires the DNA-binding domain (Figure 7D). By contrast, when we used the TOPFlash assay, in which six tandem TCF-binding sites can enhance luciferase expression [61], we found that the TOPFlash luciferase activity decreased when Zic4 is co-expressed. This Zic4-dependent repression requires the Zic4 DNA-binding domain, is Zic4 dose-dependent and is enhanced when human β-catenin is co-expressed (Figure 7E,F). However, when we tested the Zic4 activity on the Wnt3 promoter (2142 bp), we found a 10-fold increase in the HyWnt3-2142:luciferase activity, likely through a direct interaction, as this increase is no longer observed when the Zic4 DNA-binding domain is deleted (Figure 7G).
We have summarized these interactions, together with those previously identified with Sp5 or Zic4 in HEK293T cells [28,29], in a scheme that presents a series of positive loops where Zic4 appears very potent, including in its own autoregulatory loop, and two negative regulations, with Sp5 repressing Wnt3 expression and Zic4 repressing TCF-regulated promoters (Figure 7H). In HEK293T cells overexpressing HyZic4, we also show by co-immunoprecipitation that the two transcription factors HyZic4 and human TCF1 can physically interact (Figure 7I), similarly to what we previously showed for HySp5 and TCF1 [33]. These results suggest that in Hydra cells where Sp5 and Zic4 are co-expressed, they enhance each other's expression while repressing TCF/β-catenin transcriptional activity.
We tested this gene regulatory network (GRN) in Hydra and indeed found that in animals knocked-down for Zic4 or β-catenin, the Zic4 levels are down-regulated at least two-fold; they are also reduced after Sp5 knock-down, but to a lower extent (Figure 7J). We also found the levels of Sp5 moderately down-regulated in animals knocked-down for Zic4 or for β-catenin (Figure 7K). We concluded that the regulatory events recorded in HEK293T cells likely take place in Hydra cells where all components of this GRN are expressed, namely in the gastrodermal epithelial cells of the head region where Sp5 and Wnt3 are highly expressed and in the epithelial battery cells located in the epidermis of the tentacles (Figure 8A).
Epithelial Layer-Specific Regulations of Sp5 in Intact Animals
To better decipher the dynamics of the interactions between the activating and inhibiting components of the head organizer in intact and regenerating Hydra, we compared the in vivo regulation of Sp5 in each epithelial layer, epidermal and gastrodermal, under varying developmental, pharmacological or genetic conditions (see Table A1). For this purpose, we used transgenic lines expressing the reporter construct HySp5-3169:GFP either in the epidermis or gastrodermis and analyzed GFP expression in intact or developing animals, either regenerating or budding. In intact animals, we found several differences between epidermal and gastrodermal GFP/GFP expression; in the epidermis, GFP/GFP is expressed throughout the hypostome, while the tip of the hypostome is free of gastrodermal GFP/GFP, in agreement with the fact that Sp5 transcripts are not detected in this area [33]. In addition, epidermal GFP/GFP is maximal in the tentacle ring and uppermost body column, while the gastrodermal one extends along the body axis. In animals regenerating their heads, we noted a differential temporal regulation of Sp5 in each layer, with GFP up-regulated during the early phase in the gastrodermis but one day later in the epidermis.
These layer-specific regulations of HySp5-3169:GFP were identified through different approaches: at the protein level, by monitoring in vivo GFP fluorescence and immunodetecting GFP protein expression, and at the transcript level, by performing qPCR and in situ hybridization to quantify and map endogenous Sp5 and GFP expression along the body axis. In addition, these layer-specific regulations of Sp5 in intact animals are supported by the single-cell analysis of Sp5 expression [35], which shows a predominant expression of Sp5 in epithelial cells, with maximal levels observed in tentacle battery cells located in the epidermis, a lower level in epithelial cells of the hypostome and no expression in stem cells along the body column. In the gastrodermis, single-cell analysis detects the highest levels of Sp5 in apical cells and in one sub-population of epithelial stem cells along the body column (Figure 8A). Therefore, we concluded that the HySp5-3169:GFP transgenic lines provide suitable and reliable tools to monitor endogenous Sp5 regulation, with the 3 kb of Sp5 upstream sequences inserted in this construct being sufficient to direct GFP expression in a way that mimics endogenous Sp5 regulation in each layer.
Three Architectures of the Wnt3/β-Catenin/TCF/Sp5/Zic4 GRN in Intact Hydra
This study adds a new level of understanding of how the head organizer works in Hydra. The parallel analysis of the Wnt3/β-catenin/TCF/Sp5/Zic4 GRN in human HEK293T cells and in Hydra epithelial layers reveals complex cross-regulatory interactions. We could first confirm several positive regulations, those of Wnt3/β-catenin/TCF on Sp5 and Zic4 expression and that of Sp5 on Zic4, and identify some new ones, such as Zic4 on Sp5 and Zic4 on Wnt3 (at least in HEK293T cells). We also confirmed the down-regulation of Wnt3 by Sp5 and reported, as a new finding, the down-regulation of β-catenin by Sp5, as well as the down-regulation of β-catenin/TCF activity by Zic4 (at least in HEK293T cells). Finally, we identified autoregulatory loops, positive for Sp5/Sp5 and Zic4/Zic4 in HEK293T cells, positive for Wnt3/Wnt3 and β-catenin/β-catenin via TCF in Hydra, and negative for Sp5/Sp5 in the Hydra epidermis (see Table A1).
By analyzing these interactions along the body axis of intact animals, we could characterize three distinct organizations of the Wnt3/β-catenin/TCF/Sp5/Zic4 GRN that correspond to three distinct patterning functions in three anatomical contexts (Figure 8). Indeed, the analysis of Wnt3, β-catenin, Sp5 and GFP regulation in epidermal and gastrodermal HySp5-3169:eGFP and HyWnt3-2149:eGFP transgenic lines after ALP treatment, β-catenin(RNAi) or Sp5(RNAi) shows that Sp5 is differentially regulated in the epidermis and gastrodermis (1) at the apex, (2) in the tentacle zone and (3) along the body column (Figure 8B-E). At the apex of intact animals, the tightly spatially restricted expression of Sp5 in the gastrodermis is crucial for maintaining maximal levels of Wnt3 expression at the tip of the head, where Wnt3/β-catenin/TCF signaling drives the constitutive head organizer activity leading to head maintenance. In the tentacle zone, the positive co-regulation of Sp5 and Zic4 in the epidermis is critical for tentacle formation, with an unclear role for the gastrodermis. Along the body column, the high levels of Sp5 expression and Sp5 activity in the gastrodermis, possibly maintained through positive autoregulation, are critical for keeping Wnt3/β-catenin/TCF signaling low and preventing the formation of ectopic tentacles, bud-like structures or ectopic heads.
Sp5-Negative Autoregulation in the Epidermis
Sp5 was identified as a target and a regulator of the Wnt transcriptional program in vertebrates [28][29][30][31][32], notably as a repressor [30,62]. We also showed that Hydra or zebrafish Sp5 expressed in human cells acts as an evolutionarily conserved transcriptional repressor, including on the transcriptional machinery and on Sp genes [33]. This new study confirms this finding, as in Hydra, Sp5 negatively regulates its own expression in the epidermis, as evidenced by the transient up-regulation of Sp5 after each exposure to Sp5 siRNAs. However, the response to Sp5(RNAi) in HySp5-3169:GFP transgenic animals is highly asymmetrical between the epithelial layers: in the epidermis, there is a transient but massive increase in GFP expression and GFP fluorescence along the body axis, together with a limited increase in Sp5 and Wnt3 levels along the body column. In contrast, in the gastrodermis, we recorded no overall increase in the GFP transcript level, but only a few ectopic spots of GFP overexpression or GFP fluorescence in the peduncle region and tentacles.
We interpret this transient GFP/GFP up-regulation in the epidermis, together with the more limited up-regulation of Sp5, as a release of the negative auto-regulation exerted by Sp5 in this layer. This epidermis-specific Sp5 negative autoregulation suffices to explain the lower constitutive Sp5 expression recorded in this layer. As such, this constitutive asymmetry in Sp5 expression also explains the more effective Sp5(RNAi) knock-down in the epidermis, where Sp5 levels are low and epithelial cells are highly accessible to electroporated siRNAs, whereas in the gastrodermis, Sp5 levels are higher and epithelial cells are less accessible.
Among the 2992 bp Sp5 upstream sequences, the ChIP-seq analysis could identify only two areas in the Sp5 proximal promoter, PPA and PPB, which are enriched in the Sp5 protein when Hm-105 or Hv_AEP2 nuclear extracts are used. In addition, these two areas, which each contain two Sp5-binding sites previously identified by ChIP-seq analysis in human cells expressing HySp5, are necessary for Sp5-negative autoregulation. The role of Sp5-negative autoregulation in the GRN is further supported by these new findings, showing that Sp5 can directly regulate its own promoter.
Sp5 regulation is quite dynamic, as observed in HySp5-3169:GFP animals exposed to Sp5(RNAi), where GFP levels are first up-regulated in the epidermis, as discussed above, and then, one day later, significantly down-regulated in the apical region and body column, providing evidence for a return to Sp5-repressive activity. In parallel, the Sp5 and Wnt3 transcript levels remain unaffected, possibly as the result of the highly dynamic cross-regulations that take place between these genes. In the gastrodermis, the Sp5 and GFP levels are only mildly decreased in response to Sp5(RNAi), whereas β-catenin is up-regulated in both layers along the body column, indicating that in intact animals, Sp5 acts as a head inhibitor by repressing not only Wnt3 but also β-catenin expression, thus reinforcing the feedback loop that reduces head activation, i.e., Wnt3/β-catenin signaling activity.
Distinct Configurations of the Wnt3/β-Catenin/Sp5 GRN in the Homeostatic and Developmental Head Organizers
This study shows that the temporal and spatial regulation of Sp5 and Wnt3 in each epithelial layer is in fact different in the homeostatic and developmental organizers. In intact animals, the absence of HySp5-3169:GFP expression in the gastrodermal epithelial cells of the perioral region, where Wnt3 expression is maximal, indicates that a high level of Wnt/β-catenin signaling can only be stably maintained if Sp5 expression is repressed. This equilibrium is necessary to maintain homeostatic organizer activity. How Sp5 expression is kept repressed within gastrodermal cells of the homeostatic head organizer has not yet been identified. Since in each context where Wnt3 expression is highest, Sp5 is repressed, we infer that the highest levels of Wnt/β-catenin signaling induce the transcriptional repression of Sp5 and/or degradation of Sp5 transcripts.
In contrast, in the apical-regenerating tip, the Sp5 expression domain is broad in the gastrodermis, which is likely necessary for restricting head organizer activity, i.e., limiting Wnt3 and β-catenin up-regulation as well as the activation of Wnt3/β-catenin/TCF signaling, such that a single head develops instead of multiple ones. The Wnt3 and Sp5 gastrodermal domains overlap at least for the first 24 h after a mid-gastric bisection, with Sp5 expressed at a high level in these cells. Therefore, we believe that in gastrodermal epithelial cells that have the power to develop organizing activity, as in apical-regenerating tips, Wnt3/β-catenin signaling is protected from Sp5 activity, either by the active inhibition of Sp5 repressive transcriptional activity on Wnt3 or β-catenin expression or by the enhanced degradation of the Sp5 protein. In conclusion, it remains to be understood how interactions between Wnt3/β-catenin signaling and the Sp5 transcription factor remain distinct in the homeostatic and developmental head organizers, possibly via transcriptional or post-transcriptional mechanisms in the former context and via translational or post-translational mechanisms in the latter one.

Perturbations of the dynamic interactions within the Wnt3/β-catenin/TCF/Sp5/Zic4 GRN induce several phenotypes along the body axis, such as (i) the loss of tentacle identity after Zic4(RNAi) [34], (ii) the formation of ectopic tentacles after ALP treatment [23], associated with an overall increase in gastrodermal Sp5 expression and a punctuated up-regulation of Wnt3 in both layers, (iii) the formation of multiple ectopic heads induced by Sp5(RNAi), linked to the time- and space-restricted decrease in gastrodermal Sp5 associated with the localized activation of Wnt3/β-catenin/TCF signaling, as observed in Hv_Basel [33], and (iv) the formation of bud-like structures upon β-catenin(RNAi), which rarely differentiate a head. This latter phenotype is highly penetrant in Hv_Basel, where it was present in 100% of animals one day after EP3, and was identical but less penetrant in Hv_AEP2 transgenic animals.
The regulations of Sp5 in HySp5-3169:GFP transgenic animals knocked-down for β-catenin are layer-specific, consistent with the observed phenotype. In the epidermis, β-catenin(RNAi) leads to a drastic reduction in GFP expression and GFP fluorescence, consistent with the fact that Sp5 is directly positively regulated by Wnt/β-catenin signaling [33]. In contrast, in the gastrodermis, the localized up-regulation of GFP expression and GFP fluorescence in the bud-like structures was unexpected. Preliminary results indicate that the RNAi-induced decrease in the β-catenin transcript levels leads to the rapid nuclear translocation of the β-catenin protein available in gastrodermal cells and the subsequent transactivation of β-catenin/TCF target genes such as Sp5.
This paradoxical response to β-catenin(RNAi) explains (a) the rapid growth of bud-like structures in starved animals that normally do not bud, (b) the high level of gastrodermal HySp5-3169:GFP and Sp5 expression in these structures and (c) the lack of head structure differentiation due to the high level of Sp5 and the inhibition of the head organizer. This two-step response to β-catenin knock-down provides an experimental paradigm for inducing the proliferative phase of the budding process in the absence of apical differentiation and for characterizing the molecular players required in parental tissues.
Variability of Head Organizer Inhibitor Strength across Hydra Strains
In this study, we observed great phenotypic variability between Hv_Basel and Hv_AEP2 after exposure to GRN modulators, whether ALP treatment or Sp5 knock-down. After ALP, Hv_Basel developed numerous ectopic tentacles along the body column within a few days, whereas Hv_AEP2 formed very few, even after seven days of ALP exposure. In Hv_Basel, a two-day exposure to ALP led to a first wave of Sp5 up-regulation, followed two days later by a second wave of Wnt3 up-regulation, inducing the formation of small Wnt3 spots along the body column and the subsequent emergence of multiple ectopic tentacles, as observed in Hv_ZüL2.
In contrast, in Hv_AEP2 animals, either non-transgenic or transgenic (Sp5-3169:GFP and Wnt3-2149:GFP animals), exposure to ALP led to two waves of gene regulation that were slightly different. In the first phase, large circles of cells transiently expressing Sp5 formed in the epidermis, and a more diffuse and intense expression of Sp5 was observed in the gastrodermis. This phase was followed by the appearance of Wnt3 spots along the body column, with just a few in the epidermis but multiple ones in the gastrodermis, together with diffuse Wnt3 expression (Figure 8D,E). Nevertheless, only rare ectopic tentacles formed in Hv_AEP2 animals. We infer that despite multiple spots of high Wnt3, Sp5 activity is constitutively higher along the gastrodermis when compared to Hv_Basel, repressing β-catenin, which is kept minimal in SC2 epithelial stem cells.
After Sp5(RNAi), all Hv_Basel animals become multiheaded, whereas Hv_AEP2 animals treated in the same way do not. This highly penetrant multiheaded phenotype in Hv_Basel occurs without affecting the original head region, likely because Sp5 expression remains high there, less subject to modulation and sufficient to have a phenotypic impact (Figure 8B,C). By contrast, the body column of Hv_Basel acquires the properties of a head organizer after Sp5(RNAi). This does not happen in Hv_AEP2 animals, where the absence of ectopic axis formation upon Sp5(RNAi) is explained by the fact that gastrodermal cells express Sp5 at higher constitutive levels than in Hv_Basel, maintaining Sp5 repression of Wnt3 and β-catenin expression and preventing ectopic axis formation.
In conclusion, the rarity of the ectopic tentacle phenotype after ALP and the absence of the multiple-head phenotype after exposure to Sp5(RNAi) in Hv_AEP2 animals result from the stronger activity of the head organizer inhibitor along the body column in these animals compared to that present along the body column of Hv_Basel or Hv_ZüL2 animals. These results are consistent with previous studies that revealed, through systematic transplantation experiments performed on a variety of Hydra strains, significant variations in the respective strengths of the head activation and head inhibition components along the apical-to-basal axis between Hydra strains [7,63,64]. These results also highlight the predominant role of the gastrodermis in the negative regulation of the head organizer, as previously demonstrated by producing chimeric animals with gastrodermal epithelial cells isolated from strains with low or high levels of head inhibition [65].
Conclusions
This study, based on the analysis of Sp5 and Wnt3 regulation in each epithelial layer of Hydra, reveals distinct architectures of the Wnt3/β-catenin/TCF/Sp5/Zic4 GRN in different anatomical regions of the animal. Each architecture is characterized by a specific relative weight for each component, providing a unique combination that controls or prevents a specific patterning process. In the context of tentacle formation, the β-catenin-dependent activation of a subset of the GRN, namely Sp5 and Zic4, plays the leading role in the epidermis, and the head activator component is kept inactive. In the context of head maintenance or head formation, the head activator component, i.e., Wnt3/β-catenin/TCF signaling, plays the leading role in the gastrodermis. In the context of the body column, the head inhibitor component plays the leading role in the gastrodermis to keep the head organizer locked, even though the time- and space-restricted down-regulation of Sp5 can occur, supporting the localized activation of Wnt3/β-catenin/TCF signaling and further ectopic head formation.
The next step will be to compare the epidermal and gastrodermal chromatin signatures in the hypostome, the apical-regenerating tips and the tentacle ring along the body column to map the regulatory sites linked to the context-specific GRN architectures. The complete set of actors involved in each context, as well as the roles they play in relation to the Wnt3/β-catenin/TCF/Sp5/Zic4 GRN, remains to be identified, e.g., Brachyury [66,67] and MAPK/CREB [24,41,[68][69][70] for head activation; notum [66,67], Dkk1/2/4, Thrombospondin and HAS7 [71][72][73][74] for head inhibition; and Alx and Notch signaling for tentacle formation [75][76][77]. A further question to investigate will be how this GRN, which is conserved across evolution [32,33,62], moves from one architecture to another, typically when Sp5 or β-catenin levels reach threshold values or when additional players modify the GRN patterning function. More generally, an in-depth understanding of the molecular mechanisms behind the formation of the organizing center should enable us to transform a somatic tissue into one endowed with developmental organizing properties.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/biomedicines12061274/s1. Table S1: Accession numbers of the Hydra genes used in this study; Table S2: List of primers, siRNAs and ds-oligonucleotides; Table S3: List of reporter or expression constructs used in this work.
Figure 1. Differential regulation of HySp5-3169:GFP expression in the epidermal and gastrodermal layers. (A) Schematized view of Hydra anatomy, which includes the apical region or head, formed from a dome shape named the hypostome, centered around the oral opening and surrounded by a ring of tentacles at its basis; the elongated or contracted body column; and the basal disc or foot that can attach to substrates. The respective expression levels of Wnt3, Sp5 and Zic4 define three distinct domains in the apical region (see ref. [34]). (B) Structure of the HyAct-1388:mCherry_HySp5-3169:GFP reporter construct used to generate the epidermal and gastrodermal HySp5-3169:GFP transgenic lines, where epithelial cells from the epidermis and gastrodermis, respectively, express GFP and mCherry (sequence in Figure S2). TCF-BS: TCF-binding sites (orange); Sp5-BS: Sp5-binding sites (grey). (C,D) Optical sections of live transgenic animals expressing HySp5-3169:GFP in the epidermis.
Figure 2. GFP regulation in regenerating and budding HySp5-3169:GFP transgenic animals. (A,B) GFP expression in regenerating halves from epidermal (A) and gastrodermal (B) HySp5-3169:GFP transgenic animals bisected at t0 and fixed at the indicated times. Regen.: regeneration; hpa: hours post-amputation; red arrows point to apical-regenerating (AR) regions, red triangles to basal-regenerating (BR) regions, vertical bars indicate gastrodermal GFP expression along the body column, asterisks the original basal discs, white arrows outlined red the regenerated heads, white triangles outlined red the regenerated basal discs. See Figure S5. (C,D) GFP (green) and mCherry (red) fluorescence in AR and BR halves of HySp5-3169:GFP transgenic animals pictured live at the indicated time points. White arrows point to apical regions of original polyps, red arrows to AR regions, white arrows outlined red to regenerated heads; white arrowheads to original mature basal discs, red arrowheads to the BR regions. See Figure S6. (E-H) Live imaging of budding HySp5-3169:GFP transgenic animals, either epidermal (E) or gastrodermal (F), pictured at the indicated stages with the Olympus SZX10 microscope ((E,G), GFP fluorescence only) or the Zeiss LSM780 microscope ((F,H), GFP and mCherry fluorescence). On the parental polyp, yellow arrowheads point to the "budding belt" that forms in the budding zone; on the developing buds, red arrows point to the developing apical region, red arrowheads to the differentiating basal region and white arrowheads outlined red to fully differentiated basal discs. Scale bar: 250 µm.
Figure 3. Alsterpaullone-induced modulations of GFP, Wnt3 and Sp5 expression along the epidermis and gastrodermis of HySp5-3169:GFP and HyWnt3-2149:GFP transgenic animals. (A) Schematic representation of the activating effect of ALP on Wnt/β-catenin signaling. (B) Co-detection of GFP (purple) and Wnt3 (red) (left half) or Sp5 (purple) and Wnt3 (red) (right half) in wild-type Hv_AEP animals or in transgenic animals that constitutively express the HySp5-3169:GFP or HyWnt3-2149:GFP constructs, after 2- or 4-day ALP exposure. White arrows: Wnt3 expression at the tip of the hypostome; black arrows: expression immediately below the apical region; black arrowheads: ALP-induced GFP expression in the peduncle zone; blue and grey arrows: ALP-induced circular zones of GFP and Sp5 expression, respectively, along the body column; yellow arrows: ALP-induced ectopic Wnt3 expression in the apical or basal regions; orange arrows: Wnt3-expressing spots along the body column; vertical black bars: areas of ALP-induced GFP expression along the body column; s.t.: short tentacles; Te: testis. (C) Schematic representation of the ALP-induced modulations of Wnt3 and Sp5 in the epidermal and gastrodermal HySp5-3169:GFP and HyWnt3-2149:GFP transgenic lines. See Figures S7-S9. (D) Live imaging of mCherry and GFP fluorescence in epidermal and gastrodermal HySp5-3169:GFP animals treated for 2, 4 and 7 days with ALP or DMSO. For each condition, GFP fluorescence is shown on the left and the merged GFP (green) and mCherry (red) fluorescence on the right. Vertical white bars indicate areas of ectopic GFP fluorescence. Scale bar: 250 µm.
Figure 4. Impact of β-catenin(RNAi) on Sp5 expression in Hv_Basel and HySp5-3169:GFP transgenic animals. (A) Schematic view of the procedure: after one, two or three electroporations (EP1, EP2 and EP3) with scramble or β-catenin siRNAs, animals were either dissected into apical and body column regions for RNA extraction (grey triangles), fixed for whole-mount in situ hybridization (ISH, blue triangles) or imaged live (red triangles) at the indicated time points (d pEP: day(s) post-EP). (B) Q-PCR analysis of Sp5, β-catenin and GFP transcript levels in apical (100-80%, left) and body column regions.
Figure 5. Ectopic GFP/GFP expression in HySp5-3169:GFP animals knocked-down for Sp5. (A) RNAi procedure applied in the experiments depicted in panels A-C. At 8 h, 16 h and 24 h post-EP1 (pEP1) and 8 h, 16 h and 24 h post-EP2 (pEP2, red triangles), animals were either fixed for RNA extraction, or imaged live and fixed for whole-mount ISH. Q-PCR analysis of Sp5, β-catenin and GFP transcript levels in apical (100-80%, left) and body column (80-0%, right) regions of epidermal (left) and gastrodermal (right) HySp5-3169:GFP transgenic animals exposed to scramble siRNAs or to Sp5 siRNAs. In each panel, the colored line corresponds to the Fold Change (FC) values between Sp5(RNAi) animals (continuous grey lines) and control animals exposed to scramble siRNAs (dotted grey lines), which are each expressed as FC relative to non-electroporated animals at time 0, just before EP1. See Figure S14. (B) GFP expression detected by WM-ISH at the indicated time points after Sp5(RNAi), as depicted in (A). Vertical black bars along the body column and white arrows in the lower body column indicate regions where GFP is up-regulated. See Figure S15. (C) GFP (green) and mCherry (red) fluorescence in Sp5(RNAi) epidermal (left) or gastrodermal (right) HySp5-3169:GFP animals as depicted in (A). See Figure S16. (D) RNAi procedure applied in the experiments depicted in panels D and E. Q-PCR quantification of Sp5, GFP, Wnt3 and β-catenin transcripts in the apical (100-80%), central body column (BC, 80-30%) and basal (30-0%) regions of epidermal (left) and gastrodermal (right) HySp5-3169:GFP animals dissected two days post-EP2 (2d pEP2). ns: non-significant value; other statistical values as indicated in Materials & Methods. (E) GFP (green) and mCherry (red) fluorescence in epidermal (left) or gastrodermal (right) HySp5-3169:GFP animals 2d pEP2. White bars indicate areas of ectopic GFP fluorescence along the body column; white arrows point to spots of ectopic gastrodermal GFP fluorescence in tentacles. See Figure S17. Scale bars correspond to 250 µm, except in (B), where it is 200 µm.
Figure 7. Functional analysis of the Hydra Sp5, Zic4 and Wnt3 promoters. (A-G) Luciferase reporter assays performed in HEK293T cells to measure the Relative Luciferase Activity (RLA) driven by various promoters when Hydra proteins are co-expressed. Each data point represents one biologically independent experiment. (A) RLA levels driven by the HySp5 promoter either when 2992 bp long (HySp5-2992), when deleted from its proximal region (HySp5-2828) that contains five Sp5-binding sites (Sp5BS), or when one of these 5 Sp5BSs is mutated (HySp5-2992-mBS1, HySp5-2992-mBS2, ...). Each construct was tested either in the absence of any co-expressed protein (CMV-empty) or in the presence of co-expressed proteins, full-length Sp5 (HySp5-420) or Sp5 lacking its DNA-Binding Domain (HySp5-∆DBD). (B) RLA levels driven by the HySp5-2992 promoter when the full-length HyZic4 (HyZic4-431) or the truncated HyZic4 lacking its DNA-Binding Domain (HyZic4-∆DBD) are co-expressed. (C,D) RLA levels driven by the HyZic4-3505 promoter when full-length or truncated HySp5 (HySp5-420, HySp5-∆DBD in (C)) or full-length or truncated HyZic4 (HyZic4-431, HyZic4-∆DBD in (D)) are co-expressed. (E,F) RLA levels driven by the TOPFlash or FOPFlash reporter constructs that contain 6× TCF-binding sites, either consensus or mutated, when HyZic4-431 (E,F) or HyZic4-∆DBD (E) are co-expressed. (G) RLA levels driven by the Wnt3-2142 promoter when HyZic4-431 or HyZic4-∆DBD are co-expressed. (H) Diagram showing the regulations detected in HEK293T cells on the Hydra Wnt3, Sp5 and Zic4 upstream sequences when the human β-catenin and/or HySp5 and HyZic4 proteins are co-expressed (this work, [33,34]). (I) Immunoprecipitation (IP) of HA-tagged HyZic4-431 expressed in HEK293T cells together or not with huTCF1. IP was performed with an anti-HA antibody and co-IP products were detected with the anti-TCF1 antibody. The same results were obtained in two independent experiments. (J,K) Zic4 (J) and Sp5 (K) transcript levels measured by qPCR in Hv_Basel animals exposed twice to scrambled (scr) RNAs or to Zic4, Sp5, β-catenin (b-cat), Wnt5 or Wnt8 siRNAs. Levels are normalized to those measured in control animals exposed to scr RNAs. In all panels, error bars indicate Standard Deviations and statistical p values are as indicated in Materials & Methods (unpaired t-test).
Figure 8. Schematic summary of the layer-specific Sp5 regulation along the Hydra body axis. (A) Dot plot view of Zic4, Wnt3, Sp5, TCF, Wnt5A, Wnt8 and β-catenin expression in cells from the epithelial lineages, either epidermal or gastrodermal, and the interstitial lineage, as deduced from Hydra single-cell transcriptome analysis [35]. Along the central body column, single-cell sequencing has identified two populations of epithelial stem cells in the epidermis (SC1 and SC2) and three in the gastrodermis (SC1, SC2 and SC3). See other abbreviations in Figure S1. (B,C) Schematic representation of the predicted gene regulatory network (GRN) at work in the epidermal and gastrodermal layers of the hypostome (B) and tentacle (C) regions in Hydra. (D,E) Schematic view of GFP fluorescence (green), Sp5 expression and predicted GRNs at work in the epidermal (D) and gastrodermal (E) layers of the body column (BC) of HySp5-3169:GFP transgenic animals, either maintained in homeostatic conditions (left) or ALP-treated, or knocked-down for Sp5 or β-catenin (right). Bold letters, black arrows and thick red bars indicate a stronger activity. | 2024-05-03T13:11:22.247Z | 2024-04-29T00:00:00.000 | {
"year": 2024,
"sha1": "e5ab0b14e8dadbfe72617e50ee36e2f330605962",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/12/6/1274/pdf?version=1717813121",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "570f09c9fa66a0e362171f8d9243513a45d3d000",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
251554677 | pes2o/s2orc | v3-fos-license | Topology of critical points and Hawking-Page transition
Using the Bragg-Williams construction of an off-shell free energy we compute the topological charge of the Hawking-Page transition point for black holes in AdS. A computation following from a related off-shell effective potential in the boundary gauge dual matches the value of topological charge obtained in the bulk. We also compute the topological charges of the equilibrium phases of these systems, which follow from the saddle points of the appropriate free energy. The locally stable and unstable phases turn out to have topological charges opposite to each other, with the total being zero, in agreement with the result obtained from a related construction [arXiv:2208.01932].
Phase transitions and critical phenomena are interesting topics in thermodynamics, particularly in the context of black holes. Black hole thermodynamics [2][3][4] allows a thorough investigation of the microscopic degrees of freedom of quantum gravity [5], apart from exploring the physics of strong gravitational phenomena [6,7]. Considering, for the moment, general thermodynamic systems, a convenient approach to study phase transitions is via a mean field approximation. Assuming the order parameter to be small and uniform at the transition point, Landau's theory can be applied by making a series expansion of the free energy in terms of the small order parameter [8]. Though this procedure gives useful information for second order phase transitions, the applicability of this method to first order transitions is ambiguous, as the order parameter may be large and jump discontinuously. In this situation, a more reliable and unified approach to phase transitions is the one due to Bragg and Williams [9,10], with applications ranging from order-disorder transitions in alloys to black holes [8,11,12].
The key feature of this approach is to construct a putative off-shell free energy in terms of an appropriate order parameter, whose equilibrium value minimizes the free energy, giving the required phase structure. While the nature of critical points of general thermodynamic and statistical systems, including black holes, has been studied extensively, it is important to explore neoteric methods for distinctive perspectives.
Recently, a certain topological approach to classify second order critical points has emerged [18] in the context of (extended) thermodynamics of black holes [19][20][21][22][23][24][25][26] (see [13][14][15][16][17] for other motivations following from the study of light-rings). The key idea is to start from the temperature of the black hole written as a function of entropy S, pressure P and other variables, and then find a potential by eliminating the pressure, such that the zeroes of a vector field constructed from this potential correspond to the critical points. Existence of a conserved current ensures that a topological charge can be assigned to each critical point, which can then be used to classify their nature [18,27,28]. In the case of systems with multiple critical points, or points where new phases appear/disappear, a more careful analysis may be required to decipher the true nature of the critical points [29]. All the recent works have explored the case of second order critical points in black hole systems [1,18,[29][30][31], with the phase transition happening in space-time. Also, the topological classification of critical points in black holes [1,18,[29][30][31] relied heavily on the extended thermodynamic phase space set up, where the cosmological constant is taken to be dynamical, giving rise to pressure [20][21][22][23][24][25][26]. Associating a topological nature to critical points should be valid more generally, irrespective of whether the transition happens in real space or in some parameter space, and independent of the formalism used to study it. For instance, in the Ising model the order parameter for the paramagnetic to ferromagnetic second order phase transition is the magnetic moment. Also, in phase transitions in general field theories, such as gauge theories dual to gravity in the bulk via the AdS/CFT correspondence, the order parameter is typically some parameter such as charge or angular momentum. Further, it was known long back that charged black holes in AdS undergo various phase transitions in the non-extended set up [19], though the panoply of such transitions is much richer in extended thermodynamics [26]. It is thus imperative to check whether the topological classification of critical points studied in [1,18,[29][30][31] holds in general thermodynamic situations, such as for first order phase transitions.

The aim of this note is to report some progress in addressing the issues posed above. First, we extend the ideas in [18,[29][30][31] to show that it is possible to assign a topological charge to first order phase transitions as well. We specifically compute the topological charge of the Hawking-Page transition point of black holes in AdS spacetimes. As we elaborate in the next section, an off-shell formalism such as the Bragg-Williams approach is quite useful in setting up the discussion around the HP transition point. The Hawking-Page transition [32] of course continues to evoke remarkable interest, partly due to a dual interpretation in terms of gauge/gravity duality [33][34][35], where it corresponds to the confinement-deconfinement transition. To check the validity of the result found in the bulk, we also compute the topological charge of the confinement-deconfinement transition from an effective potential approach in the dual gauge theory.
The two topological charges agree and turn out to be +1, even though the order parameters are quite different, i.e., the horizon radius $r_+$ for the HP transition and a charge parameter $Q$ for the confinement-deconfinement transition in the boundary gauge theory. Since the free energy constructed in the Bragg-Williams approach is off-shell by nature, its saddle points typically give all the equilibrium phases of the system [12,[36][37][38][39][40]. Some of the phases are stable and others unstable. By a slight modification of the vector field motivated from [18], it is possible to assign topological charges to these phases, which correspond to different black hole solutions. The topological charges of stable and unstable black holes turn out to be opposite in character, in agreement with a recent construction explored in a slightly different set up [1].
The rest of the note is organized as follows. In section-(1), we explain the basic construction required to understand the Bragg-Williams approach to phase transitions of black holes, though the discussion is valid for any general thermodynamic system. Section-(2) is devoted to the calculation of the topological charge of the Hawking-Page transition in Schwarzschild and charged black holes in AdS, giving the value +1. This charge is then also computed in the gauge theory via an effective potential constructed using AdS/CFT relations in section-(3). In section-(4), we compute the topological charges for the equilibrium phases following from the saddle points of the free energy. We end with a summary of our results and some remarks in the concluding section-(5).
Hawking-Page transitions using Bragg-Williams approach
We first present how the Bragg-Williams (BW) method clearly captures the Hawking-Page (HP) transitions for black holes in AdS. The idea is to construct an off-shell free energy function directly from the action, in terms of a suitably chosen order parameter, and to study the behavior around its saddle points [12,[36][37][38][39][40]].
Considering the horizon radius $r_+$ as an order parameter, and using the thermodynamic quantities of the black hole, the BW free energy $\bar f(\bar r, \bar T)$ in $(n+2)$-dimensional spacetime can be written as [12,38]

$$\bar f(\bar r, \bar T) = \bar E(\bar r) - \bar T\,\bar S(\bar r)\,. \tag{1.1}$$

Here, the temperature $\bar T = lT$ is a scaled free parameter. Other quantities, such as the horizon radius $\bar r\,l = r_+$, the energy $\bar E = lE$ and the entropy $\bar S\,l^n = S$, are all scaled to absorb the dependence on the AdS length $l$. The function $\bar f(\bar r, \bar T)$, along with $\bar T$ and $\bar r$, takes in general non-equilibrium values. It is only at the minima of the function that all these quantities acquire their equilibrium forms. The behavior of the free energy $\bar f(\bar r, \bar T)$ as a function of the order parameter $\bar r$, for various temperatures, is as shown in Fig. 1a. The AdS phase ($\bar r = 0$) is identified with the zero point of free energy. There exists a temperature $\bar T_{HP}$ below which the black hole free energy is higher than the AdS free energy, so that the black hole phase is globally unstable. For temperatures higher than $\bar T_{HP}$, however, the black hole phase is globally stable, as it has lower free energy than the AdS phase. Thus, at the temperature $\bar T_{HP}$ there is a phase transition in which the preferred phase switches from AdS to black holes. This is a first order phase transition, as the order parameter $\bar r$ changes discontinuously; it is called the Hawking-Page (HP) transition, and the transition point follows from simultaneously imposing

$$\bar f(\bar r, \bar T) = 0\,, \qquad \frac{\partial \bar f}{\partial \bar r} = 0\,. \tag{1.2}$$

Alternatively (and this will be useful later), one can also find the Hawking-Page transition point using the first condition of eqn. (1.2), which gives the coexistence curve

$$\bar T_0(\bar r) = \frac{\bar E(\bar r)}{\bar S(\bar r)}\,. \tag{1.4}$$

The Hawking-Page transition point can then be located at the minimum of the curve $\bar T_0$, as shown in Fig. 1b.

Next, we consider the case of the Hawking-Page transition exhibited by charged-AdS black holes in the grand canonical ensemble (i.e., at fixed potential $\bar\mu$). The corresponding Bragg-Williams free energy function $\bar f(\bar r, \bar T, \bar\mu)$ turns out to be [12,37]

$$\bar f(\bar r, \bar T, \bar\mu) = \bar E - \bar T\,\bar S - \bar\mu\,\bar Q\,. \tag{1.5}$$

Here, $\bar T$ and $\bar\mu$ are treated as external parameters, $\bar Q$ is the charge of the black hole, and $c = 2(n-1)/n$ is a numerical constant entering the explicit expressions. In this case, the HP transition temperature again follows from the conditions in eqn. (1.2) and is given by eqn. (1.6). Further, the coexistence curve $\bar T_0(\bar r, \bar\mu)$ of the black hole phase and the AdS phase, obtained from the condition $\bar f = 0$, becomes

$$\bar T_0(\bar r, \bar\mu) = \frac{\bar E - \bar\mu\,\bar Q}{\bar S}\,. \tag{1.7}$$

One can see that, for a fixed potential $\bar\mu$, the behaviors of the free energy $\bar f$ and the coexistence curve $\bar T_0$ are similar to those of the previous case. In the following section, we set up the computation of the topological charge associated with the Hawking-Page transition point for both the Schwarzschild-AdS and the charged-AdS black hole.
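For concreteness, the following is a minimal numerical sketch of the two BW conditions. It assumes the familiar four-dimensional ($n = 2$) Schwarzschild-AdS quantities in units $G = l = 1$, namely $\bar E = \bar r(1+\bar r^2)/2$ and $\bar S = \pi\bar r^2$; these standard expressions, the function names, and the resulting values $\bar r_{HP} = 1$, $\bar T_{HP} = 1/\pi$ are assumptions of the sketch, not quotations from this paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Bragg-Williams free energy for 4D Schwarzschild-AdS in units G = l = 1
# (an assumed special case, n = 2 in the notation of the text):
#   E(r) = r (1 + r^2) / 2,   S(r) = pi r^2,   f(r, T) = E - T S
E = lambda r: 0.5 * r * (1.0 + r**2)
S = lambda r: np.pi * r**2
f = lambda r, T: E(r) - T * S(r)

# Coexistence curve T0(r): the temperature at which f(r, T0) = 0, eqn. (1.4).
T0 = lambda r: E(r) / S(r)          # = (1 + r^2) / (2 pi r)

# The HP point sits at the minimum of T0 (equivalently, f = df/dr = 0).
res = minimize_scalar(T0, bounds=(1e-3, 10.0), method="bounded")
r_HP, T_HP = res.x, res.fun
print(f"r_HP = {r_HP:.4f}   (analytic: 1)")
print(f"T_HP = {T_HP:.4f}   (analytic: 1/pi = {1/np.pi:.4f})")

# Sanity check: both conditions of eqn. (1.2) hold at (r_HP, T_HP).
h = 1e-6
print("f     =", f(r_HP, T_HP))                                      # ~ 0
print("df/dr =", (f(r_HP + h, T_HP) - f(r_HP - h, T_HP)) / (2 * h))  # ~ 0
```

Locating the HP point as the minimum of $\bar T_0$, or as the simultaneous zero of $\bar f$ and $\partial\bar f/\partial\bar r$, gives the same answer, which is the content of eqns. (1.2) and (1.4).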
Assigning Topological charge to Hawking-Page transition
We first consider the Schwarzschild-AdS black hole case. In order to assign a topological charge to the HP transition point, we employ the temperature $\bar T_0$ (eqn. (1.4)) obtained from the Bragg-Williams free energy landscape, which serves to define the vector field $\phi = (\phi^{\bar r}, \phi^{\theta})$ [18] in the following way:

$$\phi^{\bar r} = \frac{\partial \Phi}{\partial \bar r}\,, \qquad \phi^{\theta} = \frac{\partial \Phi}{\partial \theta}\,, \qquad \text{where } \Phi = \frac{\bar T_0(\bar r)}{\sin\theta}\,.$$

A key outcome of this procedure is the existence of a topological current $j^{\mu}$ satisfying the condition $\partial_{\mu} j^{\mu} = 0$, which is non-zero only at the points where the vector field $\phi^a$ is identically zero, i.e., $\phi^a(x_i) = 0$. In the present case, this vector field $\phi$ vanishes exactly at the Hawking-Page transition point, as can be seen clearly from the plot of the normalized vector field $n = (\phi^{\bar r}/\lVert\phi\rVert,\ \phi^{\theta}/\lVert\phi\rVert)$ in Fig. 2a. The definition of the topological charge ensues from the above construction as [18,27,28]

$$Q_t = \int_{\Sigma} j^{0}\, d^2x = \sum_i w_i\,,$$

for the zeroes of $\phi$ contained in a region $\Sigma$. Here, $w_i$ denotes the winding number of the $i$-th zero of $\phi$. If $\Sigma$ now encompasses the available parameter space of the thermodynamic system, the phase transition points of thermodynamic systems can fall into different topological classes. This can be seen from the fact that $Q_t$ can be positive or negative, with the possibility of the total topological charge being zero as well. To be precise, we consider a piece-wise smooth (positively oriented) contour $C$ in the $\theta$-$\bar r$ plane, which we choose to parameterize by the angle $\vartheta \in (0, 2\pi)$ as [16,18]

$$\bar r = a\cos\vartheta + \bar r_0\,, \qquad \theta = b\sin\vartheta + \frac{\pi}{2}\,.$$

Along the contour $C$, the deflection angle $\Omega(\vartheta)$ of $\phi$ is

$$\Omega(\vartheta) = \int_0^{\vartheta} \epsilon_{ab}\, n^a\, \partial_{\vartheta'} n^b\, d\vartheta'\,,$$
whose integration along the full contour reveals the topological charge, $Q_t = \frac{1}{2\pi}\Omega(2\pi)$.
The behavior of the deflection angle $\Omega(\vartheta)$ for the contours shown in Fig. 2a is displayed in Fig. 2b, from which we find that the topological charge associated with the HP transition point is $Q_t^{HP} = \frac{1}{2\pi}\Omega(2\pi) = +1$ (given by the contour $C_1$).
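As a concrete illustration, here is a short numerical sketch of the deflection-angle computation. It again assumes the four-dimensional coexistence curve $\bar T_0(\bar r) = (1+\bar r^2)/(2\pi\bar r)$ in units $G = l = 1$, together with the elliptic contour parameterization written above; the function name and contour parameters are illustrative only.

```python
import numpy as np

# Deflection-angle computation of the topological charge. Assumes the 4D
# coexistence curve T0(r) = (1 + r^2)/(2 pi r) (units G = l = 1) and an
# elliptic contour in the theta-r plane, as parameterized above.
def winding(r_c, a=0.3, b=0.4, N=2000):
    v = np.linspace(0.0, 2.0 * np.pi, N)
    r = r_c + a * np.cos(v)            # contour C, centered at (r_c, pi/2)
    th = np.pi / 2 + b * np.sin(v)

    T0 = (1.0 + r**2) / (2.0 * np.pi * r)
    dT0 = (1.0 - 1.0 / r**2) / (2.0 * np.pi)

    # phi = (dPhi/dr, dPhi/dtheta) with Phi = T0(r) / sin(theta)
    phi_r = dT0 / np.sin(th)
    phi_th = -T0 * np.cos(th) / np.sin(th) ** 2

    # Omega(v): unwrapped direction of phi; Q_t = Omega(2 pi) / (2 pi)
    Omega = np.unwrap(np.arctan2(phi_th, phi_r))
    return (Omega[-1] - Omega[0]) / (2.0 * np.pi)

print(winding(r_c=1.0))   # contour enclosing the HP point (1, pi/2): ~ +1
print(winding(r_c=2.5))   # contour not enclosing it:                 ~  0
```

A contour enclosing the HP point $(\bar r, \theta) = (1, \pi/2)$ returns a winding of +1, while a contour that does not enclose it returns 0, mirroring the behavior of the two contours in Fig. 2.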
We note here that if one uses the black hole temperature $\bar T_{BH}$ (obtained from the condition $\partial\bar f/\partial\bar r = 0$ in eqn. (1.2)) instead of $\bar T_0$ to define the vector field $\phi$, then one would assign the topological charge to the point where the black hole possesses the lowest temperature ($\bar T_{min}$). In fact, the topological charge for this point is also +1. However, we do not consider this situation further, as the black hole at this point is globally unstable.
Next, we consider the Hawking-Page transition in the charged-AdS black hole case. The vector field $\phi$ in this case turns out to be (employing eqn. (1.7))

$$\phi^{\bar r} = \frac{\partial \Phi}{\partial \bar r}\,, \qquad \phi^{\theta} = \frac{\partial \Phi}{\partial \theta}\,, \qquad \text{where } \Phi = \frac{\bar T_0(\bar r, \bar\mu)}{\sin\theta}\,.$$

The vanishing of the vector field $\phi$ at the HP transition point can be seen from Fig. 3a. The behavior of the deflection angle $\Omega(\vartheta)$ along the contours $C_1$ and $C_2$ again yields the topological charge $Q_t^{HP} = +1$ for the transition point.
Topological charge of confinement-deconfinement transition
In section-(1), an off-shell free energy was employed which captured the stable, unstable and metastable phases of black holes in AdS. A topological charge was assigned specifically to the Hawking-Page transition point in section-(2). It is also possible to assign a topological charge to each of the phases of the system, resulting in a topological classification of black hole solutions in different regions of the thermodynamic phase space. Before doing this, we first indicate how the results of section-(2) can be extended to a boundary field theory set up, where the phase transition may be studied from an appropriate potential, in terms of an order parameter different from the bulk one. Some time back, it was shown how, given a free energy in the bulk, an off-shell phenomenological effective potential can be constructed in the gauge dual, whose equilibrium points exactly correspond to the various phases of the theory. We should note that a general construction of an effective potential directly in the gauge theory is a non-trivial task, but the AdS/CFT conjecture allows a slightly different route. Motivated from the free energy in eqn. (1.5), an effective potential $W$ (which may not be unique) in the gauge theory dual to charged-AdS black holes (in the grand canonical ensemble) can be constructed [37], with $N_c$ standing for the number of colors in its explicit form. The critical temperature for the confinement-deconfinement transition is obtained on satisfying the two conditions

$$W = 0\,, \qquad \frac{\partial W}{\partial Q} = 0 \tag{3.2}$$

simultaneously, where $Q$ is treated as the order parameter for the transition. This yields a critical temperature $T_c$, which of course matches the Hawking-Page transition temperature written earlier in eqn. (1.6). Alternatively, one can also find $T_c$ using the first condition of eqn. (3.2), i.e., $W = 0$, which gives the coexistence curve $T_0(Q)$ (eqn. (3.4)). The confinement-deconfinement point can then be located at the minimum of the curve $T_0$, as shown in Fig. 4. Following the methods discussed earlier, we define the vector field $\phi = (\phi^{Q}, \phi^{\theta})$ as

$$\phi^{Q} = \frac{\partial \Phi}{\partial Q}\,, \qquad \phi^{\theta} = \frac{\partial \Phi}{\partial \theta}\,, \qquad \text{where } \Phi = \frac{T_0(Q)}{\sin\theta}\,.$$

This vector field $\phi$ vanishes exactly at the confinement-deconfinement transition point, as can be seen from Fig. 5a. The topological charge corresponding to this transition point turns out to be $Q_t^{c} = \frac{1}{2\pi}\Omega(2\pi) = +1$ (given by the contour $C_1$ in Fig. 5b). This charge exactly matches the one obtained from the bulk calculation at the Hawking-Page transition point.
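Since the explicit forms of $W$ and of $T_0(Q)$ (eqn. (3.4)) are given in [37] and not reproduced above, the sketch below only illustrates why the boundary computation must return the same charge: any coexistence curve with a single quadratic minimum at $Q_c$ yields winding +1 around the transition point. The toy curve $T_0(Q) = T_c(1 + (Q - Q_c)^2)$ is purely illustrative, not the expression used in the paper.

```python
import numpy as np

# Boundary-side check: the winding routine is unchanged, with the charge Q
# as order parameter. T0 below is a TOY coexistence curve with one quadratic
# minimum at (Qc, Tc); it stands in for eqn. (3.4), which is not reproduced.
Qc, Tc = 1.0, 0.5
T0 = lambda Q: Tc * (1.0 + (Q - Qc) ** 2)
dT0 = lambda Q: 2.0 * Tc * (Q - Qc)

def winding_Q(center, a=0.3, b=0.4, N=2000):
    v = np.linspace(0.0, 2.0 * np.pi, N)
    Q, th = center + a * np.cos(v), np.pi / 2 + b * np.sin(v)
    phi_Q = dT0(Q) / np.sin(th)                     # Phi = T0(Q)/sin(theta)
    phi_th = -T0(Q) * np.cos(th) / np.sin(th) ** 2
    Omega = np.unwrap(np.arctan2(phi_th, phi_Q))
    return (Omega[-1] - Omega[0]) / (2.0 * np.pi)

print(winding_Q(Qc))        # around the deconfinement point: ~ +1
print(winding_Q(Qc + 1.5))  # away from it:                   ~  0
```

The routine is the same as in the bulk sketch, with $\bar r$ replaced by the order parameter $Q$, which is why the two computations agree.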
Topological charge of equilibrium phases
Since the free energy and the effective potential discussed in sections-(1) and (3), respectively, capture all the phases of the system, it should be possible to assign a topological charge to the phases themselves. A related idea has recently been advanced in [1], where black hole solutions have been identified as the topological defects (where the vector field $\phi$ vanishes) in the thermodynamic parameter space. In this scenario, different black hole solutions have been classified into different topological classes based on the topological charge (winding number) they carry. Here we assign the topological charge (winding number) to the various phases of Schwarzschild-AdS black holes and of charged-AdS black holes in the grand canonical ensemble in a general situation. The extended thermodynamic set up used in [1] is, though, not required for the following construction.
Following [1], the vector field is now built from the off-shell free energy itself, $\phi = (\partial\bar f/\partial\bar r,\ -\cot\theta\,\csc\theta)$. This vector field $\phi$ vanishes exactly at the local extremal points of $\bar f(\bar r, \bar T)$, as shown in Fig. 6. Further, as we know, for $\bar T > \bar T_{min}$ the local maxima of $\bar f$ represent the small black holes (SBH), while its local minima represent the large black holes (LBH). For $\bar T = \bar T_{min}$, only one extremal point of $\bar f$ exists, which is neither a local maximum nor a minimum, and which represents a black hole possessing the lowest temperature $\bar T_{min}$.
The computation of the topological charge (winding number) for these extremal points of $\bar f$ reveals that all the small black holes possess the topological charge −1, while for large black holes it is +1, and it vanishes for the black hole with the lowest temperature (see Fig. 6). It is not difficult to check that the topological charge (winding number) associated with the charged-AdS black holes in the grand canonical ensemble would likewise be −1/+1/0 for the SBH, the LBH and the black hole with the lowest temperature, respectively. An analogous calculation can be set up on the boundary using the effective potential construction and is expected to give identical results.
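A sketch of this branch classification, using the defect field of [1], $\phi = (\partial\bar f/\partial\bar r,\ -\cot\theta\,\csc\theta)$, together with the same four-dimensional free energy $\bar f(\bar r, \bar T) = \bar r(1+\bar r^2)/2 - \pi\bar T\bar r^2$ assumed in the earlier sketches (again an assumption, in units $G = l = 1$):

```python
import numpy as np

# Winding numbers of the equilibrium branches, using the defect field of
# ref. [1], phi = (df/dr, -cot(theta)/sin(theta)), with the same assumed 4D
# free energy f(r, T) = r(1 + r^2)/2 - pi T r^2 as in the earlier sketches.
def branch_winding(r_c, T, a=0.1, b=0.3, N=4000):
    v = np.linspace(0.0, 2.0 * np.pi, N)
    r = r_c + a * np.cos(v)
    th = np.pi / 2 + b * np.sin(v)
    phi_r = 0.5 * (1.0 + 3.0 * r**2) - 2.0 * np.pi * T * r   # df/dr
    phi_th = -np.cos(th) / np.sin(th) ** 2                   # -cot/sin
    Omega = np.unwrap(np.arctan2(phi_th, phi_r))
    return (Omega[-1] - Omega[0]) / (2.0 * np.pi)

T = 0.30                                   # any T > T_min = sqrt(3)/(2 pi)
disc = np.sqrt(4.0 * np.pi**2 * T**2 - 3.0)
r_SBH = (2.0 * np.pi * T - disc) / 3.0     # local maximum of f (unstable)
r_LBH = (2.0 * np.pi * T + disc) / 3.0     # local minimum of f (stable)
print("SBH winding:", round(branch_winding(r_SBH, T)))   # -1
print("LBH winding:", round(branch_winding(r_LBH, T)))   # +1
```

The local maximum of $\bar f$ (SBH) returns −1 and the local minimum (LBH) returns +1, so the two windings sum to zero, in line with the total charge quoted in the conclusions.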
Conclusions
In this paper, we proposed a set up to find the topological charge associated with the Hawking-Page transition point, employing the off-shell Bragg-Williams free energy landscape used to analyze first order phase transitions. We considered the HP transitions exhibited by the Schwarzschild-AdS black hole system and by the charged-AdS black hole system (in the grand canonical ensemble). In both systems, we found that the HP transition point carries the topological charge +1. This is a novel topological charge, according to the classification of topological charges in [18].
We also showed that the same value of the topological charge emerges from considerations of an effective potential in the dual gauge theory, computed at the confinement-deconfinement transition point of the boundary gauge theory. Further, it shows that the first-order HP transition point and the second-order critical point in black holes exhibiting the standard van der Waals fluid behavior belong to different topological classes [18]. This study opens up an important question: whether the Hawking-Page transition simultaneously triggers a topological transition between the black hole and its background space. This requires further study for various black hole systems, and one also needs to identify the topological charge corresponding to the background space-time. Further, it would be interesting to find the topological charges associated with the reverse HP transitions [41] and reentrant HP transitions [42]. We then continued to study the different equilibrium phases of the system by computing the topological charge (winding number) of the Schwarzschild-AdS black hole solutions and charged-AdS black hole solutions in the grand canonical ensemble. The results are summarized in Table 1. Our results are in accord with the conjecture of [1], that a +1/-1 topological charge (winding number) indicates a locally stable/unstable black hole solution. The total topological charge (winding number) is found to be zero for both Schwarzschild-AdS black holes and charged-AdS black holes in the grand canonical ensemble, which shows that these two black hole systems belong to the same topological class. Further, charged-AdS black holes in the canonical ensemble [1] and in the grand canonical ensemble studied here belong to different topological classes. It would be nice to extend this topological classification to more general black holes too. We note, though, that one does not require the extended thermodynamic setup to study the topological charges of critical points.
Encouraged by the fact that one is able to assign a topological charge to the deconfinement transition, we end this note with the following question: can we associate a topological charge with the phase transition points exhibited by well-known models often used for condensed matter systems? To that effect, let us consider, for example, the Ising model in n dimensions. It is expected to have a critical point at a finite temperature, beyond which the magnetization vanishes. The BW free energy density of the model is given in [8] and has the form | 2022-08-15T01:16:19.692Z | 2022-08-12T00:00:00.000 | {
"year": 2022,
"sha1": "9a4b5afb5c1b56936cae6503e62727f82124fc98",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9a4b5afb5c1b56936cae6503e62727f82124fc98",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257643633 | pes2o/s2orc | v3-fos-license | Pretreatment with P2Y12 Receptor Inhibitors in Acute Coronary Syndromes—Is the Current Standpoint of ESC Experts Sufficiently Supported?
Excessive platelet reactivity plays a pivotal role in the pathogenesis of acute myocardial infarction. Today, the vast majority of patients presenting with acute coronary syndromes qualify for an invasive treatment strategy and thus require fast and efficient platelet inhibition. Since 2008, in cases of ST-elevation myocardial infarction, the European Society of Cardiology guidelines have recommended pretreatment with a P2Y12 inhibitor. This approach has become the standard of care in the majority of centers worldwide. Nevertheless, the latest guidelines for the management of patients presenting with acute coronary syndrome without persistent ST-elevation preclude routine pretreatment with a P2Y12 receptor inhibitor. Those who oppose pretreatment support their stance with trials that failed to prove the benefits of this strategy and showed an increased risk of major bleeding, especially in individuals inappropriately diagnosed with an acute coronary syndrome and thus having no indication for platelet inhibition. However, adequate platelet inhibition may require up to several hours after the application of a loading dose of an oral P2Y12 receptor inhibitor. Disregarding data from pharmacokinetic and pharmacodynamic studies in the absence of conclusive data from clinical studies makes the generalization of the pretreatment recommendations difficult to accept. We aimed to review the scientific evidence supporting the current recommendations regarding pretreatment with P2Y12 inhibitors.
Introduction
Excessive platelet activation plays a major role in the pathogenesis of acute myocardial infarction (AMI), either with ST elevation (STEMI) or with non-ST elevation (NSTEMI) [1]. STEMI is typically associated with a sudden, total occlusion of a coronary artery by a thrombus that forms on a ruptured atherosclerotic plaque [2]. Destruction of the natural integrity of the coronary endothelium exposes the lipids building the plaque, as well as collagen fibers, which are responsible for the initiation of platelet activation leading to the formation of a clot. Contrary to STEMI patients, those presenting with NSTEMI are generally expected to have a critical stenosis of a particular coronary artery rather than a total occlusion [3].
The latest European Society of Cardiology (ESC) guidelines on the management of patients presenting with both STEMI and NSTEMI recommend a 12-month course of dual antiplatelet therapy (DAPT) comprising aspirin and a P2Y12 receptor inhibitor [4,5]. Among the currently available P2Y12 receptor inhibitors, the two thienopyridines (clopidogrel and prasugrel) are inactive pro-drugs that require hepatic activation through a more (clopidogrel) or less (prasugrel) complex metabolic pathway, whereas ticagrelor and cangrelor are directly active, reversible agents. Cangrelor is the only intravenous P2Y12 inhibitor and is characterized by a rapid onset and offset of action [6].
The term "pretreatment" with a P2Y12 receptor inhibitor describes the strategy of administering the loading dose at first medical contact, once the diagnosis of acute coronary syndrome (ACS) is made but before any data on the coronary anatomy are available [7]. Such an approach was recommended in the previous edition of the non-ST elevation (NSTE) ACS 2015 ESC guidelines irrespective of the initial therapeutic strategy, with the exception of prasugrel, which was limited to individuals who qualified for PCI after coronary angiography [8]. The potential benefits of pretreatment with a P2Y12 inhibitor include a reduction in the rate of ischemic events while waiting for invasive treatment, the prevention of early stent thrombosis, and a reduction in glycoprotein IIb/IIIa bail-out use [9]. On the other hand, patients preloaded with P2Y12 inhibitors are considered to be burdened with an increased risk of bleeding, especially if femoral access is used for coronary angiography or if further cardiac surgery is required. Therefore, the optimal timing of P2Y12 receptor inhibitor administration has become a subject of scientific debate. Early methods of platelet inhibition included the administration of glycoprotein IIb/IIIa inhibitors at different timepoints after the diagnosis of ACS. This standpoint was based on findings from several trials in which the use of particular agents was associated with better clinical outcomes; there was, however, an increased risk of bleeding events when GP IIb/IIIa inhibitors were administered [9][10][11]. Based on the latest issues of the ESC guidelines for both STEMI and NSTE-ACS, prehospital administration of these agents is not recommended due to the lack of benefit and increased bleeding rates [4,5]. The first trial to demonstrate beneficial effects of pretreatment with clopidogrel in NSTE-ACS patients was the PCI-CURE study. Pretreatment resulted in lower rates of death, myocardial infarction, refractory ischemia, and urgent revascularization, with no significant increase in bleeding episodes in the clopidogrel arm in comparison with the placebo arm. In this study, clopidogrel was administered very early, well before PCI, which contributed to the beneficial effects of pretreatment, taking into account that this agent requires up to 24 h to inhibit platelet function if a 300 mg loading dose is administered, or 5-7 days in the case of a 75 mg daily dose [12]. The latest ESC guidelines, moreover, no longer recommend upstream therapy with GP IIb/IIIa inhibitors in NSTE-ACS patients [4], considering that the novel P2Y12 receptor inhibitors require only 1-2 h to sufficiently inhibit platelet activity in the majority of patients. However, it has been shown that in STEMI, during concomitant therapy with opioids, in critically ill patients, or in those presenting with cardiogenic shock, effective platelet inhibition with prasugrel or ticagrelor may require more than 4 h, or may not even be achieved within the first 24 h in the most severe cases [13,14]. A schematic presentation of the arguments for and against pretreatment is shown in Figure 1.
Pretreatment in STEMI
According to the ESC Guidelines, patients diagnosed with STEMI should qualify for primary PCI. This strategy assumes immediate access to 24/7 hemodynamic facilities, with the availability of trained and adequately equipped ambulance teams to diagnose STEMI, administer initial pharmacotherapy, and stabilize the patient if necessary [15][16][17]. A strong recommendation to perform primary PCI is valid for patients with a recent onset of symptoms, i.e., less than 12 h if there is persistent ST-segment elevation, or even over 12 h in cases of ongoing symptoms of ischemia, life-threatening arrhythmias, or hemodynamic instability (class of recommendation I, level of evidence A) [18,19]. Moreover, primary PCI should be considered in STEMI patients even if they present with typical symptoms up to 48 h (class of recommendation IIa, level of evidence B) [20,21], while ultimately, only asymptomatic cases with a late diagnosis, i.e., over 48 h after AMI, should be disqualified from primary PCI (class of recommendation III, level of evidence A) [22,23]. As stated in the ESC guidelines, prasugrel or ticagrelor should be administered before (or at least at the time of) PCI in STEMI patients. If these two agents are unavailable, clopidogrel should be used [5]. Nevertheless, the data supporting such a standpoint are limited. The recommendation is based on the fact that pretreatment with either prasugrel or ticagrelor was allowed in the studies that led to the approval of those agents, the TRITON-TIMI 38 trial and the PLATO trial, respectively [24,25]. The trials referred to in the ESC guidelines are marked with (*).
The CIPAMI trial (*), a small clinical study conducted by Zeymer et al. and published in 2012, aimed to evaluate the clinical effects of pretreatment with clopidogrel in STEMI patients. Overall, 337 subjects were enrolled and randomized to receive a loading dose of clopidogrel in the prehospital phase (n = 166) or after coronary angiography, directly prior to PCI (n = 171). The study revealed no significant differences in terms of the primary endpoint, which was defined as TIMI 2/3 patency in the culprit vessel before PCI (49.3% in the pretreatment arm vs. 45.1% in the no-pretreatment arm, p = 0.5). Moreover, the rates of TIMI 3 flow in the culprit vessel before PCI did not differ significantly between the study arms (32.6% vs. 27.4%, respectively, p = 0.3). Additionally, the difference in the rate of a composite of death, re-infarction, and urgent target vessel revascularization was not statistically significant (3.0% vs. 7.0%, p = 0.09); however, a trend favoring pretreatment was clearly visible. It must be underlined that no increase in major bleeding complications was found in the pretreatment arm (9.1% vs. 8.2%, p = 0.8) [26].
A multicenter Austrian registry (*) of patients undergoing primary PCI due to STEMI [27] evaluated the clinical outcomes of pretreatment with clopidogrel. A total of 5955 patients were included in the analysis based on the clopidogrel administration strategy: pretreatment (n = 1635) or periprocedural use (n = 4320). Pretreated individuals had a lower rate of in-hospital mortality (p < 0.01) when compared to the no-pretreatment arm. Moreover, the risk of bleeding was not significantly increased (p = 0.90).
A sub-analysis of STEMI patients undergoing primary PCI who were identified in the Swedish Coronary Angiography and Angioplasty Registry (SCAAR) (*) was performed to evaluate the effects of pretreatment with clopidogrel on the reduction in the 1-year death/MI rate. Overall, 13,847 patients were included in the analysis. The rates of 1-year death/MI as well as 1-year death alone were significantly reduced (HR 0.82, 95% CI 0.73-0.93 and HR 0.76, 95% CI: 0.64-0.90, respectively); however, no reduction was observed in 1-year MI (HR 0.90, 95% CI 0.77-1.06). Data regarding bleeding were available in 12,548 patients. The risk of bleeding was similar in the analyzed arms [28].
Another study, the Load&Go randomized trial conducted by Ducci et al., tested the clinical efficacy of the prehospital administration of two doses of clopidogrel, 600 mg or 900 mg, vs. the periprocedural use of 300 mg of clopidogrel in STEMI patients undergoing primary PCI. The study population included 168 participants randomized in a 1:1:1 ratio to receive (1) no pretreatment, (2) 600 mg of clopidogrel in the prehospital phase, or (3) 900 mg of clopidogrel in the prehospital phase. The study failed to prove the benefits of pretreatment in STEMI patients. The rate of the primary endpoint, thrombolysis in myocardial infarction perfusion grade 3 (TMPG 3), did not differ significantly (64.9% for pretreatment with either 600 mg or 900 mg of clopidogrel vs. 66.1% in the no-pretreatment arm; p = 0.88), and there were also no significant differences between 600 mg vs. 900 mg of clopidogrel in terms of the TMPG 3 rate (57.1% vs. 72.7%, respectively; p = 0.12). The results of the study also did not reveal any significant differences between rates of bleeding episodes. Platelet reactivity, expressed in platelet reactivity units (PRUs) and assessed with the VerifyNow device, was comparable between the pretreatment and no-pretreatment arms (342 ± 59 in pretreated individuals vs. 333 ± 72 in the no-pretreatment arm; p = 0.20). A direct comparison between the 900 mg, 600 mg, and no-pretreatment groups also revealed no differences (337 ± 48 vs. 356 ± 52 vs. 333 ± 72, respectively; p = 0.080) [29].
The only randomized trial aiming to evaluate the outcomes of ticagrelor administration at different timepoints in STEMI patients was "The Administration of Ticagrelor in the Cath Lab or in the Ambulance for New ST Elevation Myocardial Infarction to Open the Coronary Artery" (ATLANTIC) trial (*) [30]. Overall, 1862 patients with a recent diagnosis of STEMI (<6 h) were randomized to receive a loading dose of ticagrelor either during transport to the cath lab (prehospital) or directly prior to coronary angiography in the cath lab (in-hospital). The study showed no significant differences in either of the co-primary endpoints: an absence of at least 70% ST-segment resolution before PCI was observed in 86.8% and 87.6% of patients in the prehospital and in-hospital arms, respectively (p = 0.63), and an absence of thrombolysis in myocardial infarction (TIMI) 3 flow at initial angiography was found in 82.6% and 83.1% of patients, respectively (p = 0.82). Among the secondary endpoints, a very pronounced trend favoring the prehospital administration of ticagrelor was observed in the proportion of patients who did not achieve at least 70% ST-segment resolution after PCI (42.5% vs. 47.5% in the prehospital and in-hospital arms, respectively, p = 0.05). The bleeding rates were nearly identical between the study arms, while definite stent thrombosis occurred significantly more often in the in-hospital arm (0 vs. 8 patients, p = 0.008, within 24 h post PCI, and 2 vs. 11 patients, p = 0.02, within 30 days post PCI in the prehospital and in-hospital arms, respectively) [30].
The results of the PCI-CLARITY randomized trial conducted by Sabatine et al. showed the benefits of early administration of clopidogrel in STEMI patients receiving fibrinolytic therapy. Patients underwent randomization in a 1:1 ratio into two study arms: (1) administration of a loading dose of 300 mg of clopidogrel followed by 75 mg daily, or (2) administration of a placebo. Treatment was continued until coronary angiography, i.e., 2-8 days after the index event. Pretreatment with clopidogrel was associated with a lower rate of MI or stroke before PCI than in the placebo arm (4.0% vs. 6.2%, respectively, p = 0.03). The difference between the study arms was also significant with regard to a composite of cardiovascular death, MI, or stroke after PCI (3.6% vs. 6.2%, respectively, p = 0.008). Overall, the rate of a composite of cardiovascular death, MI, or stroke before and after PCI was significantly lower in the pretreatment arm than in the placebo group (7.5% vs. 12%, respectively, p = 0.001). Throughout the study, rates of both major and minor TIMI bleeding episodes did not differ significantly between the pretreatment and placebo arms (0.5% vs. 1.1%, p = 0.21, and 1.4% vs. 0.8%, p = 0.26, for major and minor TIMI bleeding, respectively) [31].
In the recently conducted multicenter randomized ISAR-REACT 5 trial, the first head-to-head comparison of ticagrelor and prasugrel, a total of 4018 patients with a diagnosis of ACS were randomized in a 1:1 ratio to receive a predefined P2Y12 receptor inhibitor [32]. The study did not directly compare pretreatment and delayed loading with antiplatelet agents; the difference in timing arose from the protocol, under which patients treated with prasugrel were loaded with the drug only after diagnostic coronary angiography, while those in the ticagrelor arm generally received pretreatment. Overall, the study population included 41.1% STEMI patients. In this subgroup, the primary endpoint of the trial, the 1-year incidence of a composite of death, MI, or stroke, although numerically higher for ticagrelor, did not differ significantly (n = 83 (10.1%) vs. n = 64 (7.9%); odds ratio (OR) 1.31 [0.94-1.81], p = ns). It must be highlighted, however, that the ISAR-REACT 5 study caused multiple controversies and became the subject of vivid scientific discussions, mainly due to serious limitations, including improbably high adherence to treatment, a controversial follow-up of the patients (only 10% underwent in-center visits), and an unacceptably high proportion of participants excluded from certain steps of the analysis [33,34].
Pretreatment in NSTE-ACS
The latest issue of the ESC Guidelines for the management of NSTE-ACS patients brought about a major change in terms of pretreatment with P2Y12 receptor inhibitors. With the publication of the document, routine pretreatment is no longer recommended (class of recommendation III, level of evidence A) [4]. Its authors note that, despite the unquestionable necessity of achieving early and efficient platelet inhibition in NSTE-ACS patients undergoing PCI, it is mainly the lack of large clinical trials supporting pretreatment that obliges practitioners to change their habits from now on. Data on pretreatment can be obtained from five randomized controlled trials, one registry, and three meta-analyses, only three of which are referred to in the latest issue of the ESC Guidelines; these studies are marked with (*).
The study most often referred to by opponents of pretreatment is the abovementioned ISAR-REACT 5 trial (*). Among all the participants, 42.6% presented with NSTEMI, while 12.7% presented with unstable angina (UA). Taking into account the entire population, the primary endpoint of the study, a composite of death from any cause, MI, or stroke at 1 year after randomization, occurred significantly more often in the ticagrelor arm than in the prasugrel arm (9.3% vs. 6.9%, respectively, p = 0.006). As far as treatment safety is concerned, rates of Bleeding Academic Research Consortium (BARC) type 3, 4, or 5 bleeding episodes did not differ significantly (5.4% vs. 4.8% for ticagrelor and prasugrel, respectively, p = 0.46). According to the authors, the factor that mainly contributed to the final results was the difference in the number of MI events, which was noticeably lower in the prasugrel arm than in the ticagrelor arm (n = 60 (3.0%) vs. n = 96 (4.8%), respectively, hazard ratio (HR) 1.63 [1.18-2.25]). To summarize, the presented results of the ISAR-REACT 5 trial do not promote pretreatment. However, beyond the previously mentioned limitations that bias the construction of the study and the data analysis, no dedicated comparison of pretreatment vs. in-hospital administration of a P2Y12 inhibitor was performed; the difference in timing was rather a consequence of the fact that patients treated with prasugrel were loaded with the drug only after diagnostic angiography and qualification for PCI, while patients on ticagrelor were allowed to receive it earlier [32].
The Comparison of Prasugrel at the Time of Percutaneous Coronary Intervention or as Pretreatment at the Time of Diagnosis in Patients with Non-ST Elevation Myocardial Infarction (ACCOAST) trial (*) was a large clinical study conducted by Montalescot et al. aiming to evaluate the effects of administering prasugrel immediately after the diagnosis of NSTE-ACS or only after diagnostic coronary angiography in patients who qualified for PCI. A total of 4033 patients were enrolled in the study. They had to be diagnosed with NSTEMI and qualify for invasive angiography 2-48 h after randomization, which was performed in a 1:1 ratio into the following two groups: (1) the pretreatment group, in which patients received 30 mg of prasugrel before angiography and another 30 mg in case of an indication for PCI, and (2) the control group, in which patients were given a placebo before coronary angiography and 60 mg of prasugrel if PCI was indicated. If coronary artery bypass graft (CABG) surgery was indicated, individuals in the pretreatment group were not given the additional 30 mg of prasugrel, and those in the control group did not receive prasugrel at all. Safety outcomes included TIMI bleeding episodes, which were analyzed according to whether or not they were related to CABG. There were no differences between the study arms in terms of a composite of cardiovascular death, MI, stroke, urgent revascularization, or rescue use of glycoprotein IIb/IIIa inhibitors within 7 days following randomization (10.0% vs. 9.8%, p = 0.81, for the prasugrel and control groups, respectively). Moreover, rates of the particular components of the primary endpoint did not differ significantly either at 7 days or at 30 days following randomization. Ischemic complications while waiting for coronary angiography occurred in 0.8% of patients in the pretreatment arm and in 0.9% of those in the control arm (p = 0.93). In patients who underwent PCI, rates of the primary endpoint also did not differ at 7 or at 30 days post randomization (13.1% vs. 13.1%, p = 0.93, at day 7; 14.1% vs. 13.8% at day 30 for the pretreatment and control arms, respectively, p = 0.77). Evaluation of the rates of bleeding episodes revealed a significant increase in the pretreatment group both at 7 and 30 days after randomization when compared to the control group. All CABG-related and non-CABG-related major TIMI bleeding events occurred in 2.6% vs. 1.4% of patients, respectively, p = 0.006, at day 7 and in 2.8% vs. 1.5% of patients, respectively, p = 0.002, at day 30. Significant differences were also observed in non-CABG-related major TIMI bleeding: 1.3% vs. 0.5%, respectively, p = 0.003, at day 7 and 1.6% vs. 0.6%, respectively, p = 0.002, at day 30. Nevertheless, there were subgroups associated with a lower risk of bleeding throughout the study, especially younger patients (<75 years of age), patients with a body weight over 60 kg, and those who underwent PCI through radial access. The subgroup analysis revealed that in patients who received the loading dose of prasugrel earlier than the median delay of 15 h post symptom onset, the incidence of the primary endpoint was reduced by 24% (0.76, 0.57-1.01, p = 0.004) without any significant increase in bleeding episodes (p = 0.23). In summary, it must be pointed out that the results of the ACCOAST trial do not support routine pretreatment with prasugrel in NSTE-ACS patients, but they do support the idea of pretreatment in individuals early after symptom onset [35].
The aforementioned SCAAR registry (*) [36] is another dataset used to determine the outcomes of pretreatment in NSTE-ACS patients with all available oral P2Y12 receptor inhibitors. A total of 64,857 NSTE-ACS patients who underwent PCI procedures were included in the analysis. Of these, 59,894 patients (92.4%) were pretreated with a P2Y12 receptor inhibitor: 43.7% with clopidogrel, 54.5% with ticagrelor, and 1.8% with prasugrel. The primary endpoint of the study was the 30-day mortality rate. Data were obtained from the Swedish National Population Registry, which supports the completeness and reliability of the death numbers. There is, however, no detailed information regarding causes of death; thus, only all-cause mortality could be evaluated. The baseline characteristics of the study population revealed a noticeable imbalance between the pretreatment and control arms regarding age, diabetes, arterial hypertension, history of smoking, prior CABG, and history of MI. The analysis of the procedural aspects of PCI also revealed non-negligible differences. The percentage of patients undergoing PCI through radial access was significantly lower in the pretreatment arm (78.6% vs. 81.8%, p < 0.001), and pretreated patients more frequently received GP IIb/IIIa inhibitors (2.6% vs. 1.9%, p = 0.002) and bivalirudin (15.9% vs. 8.8%, p < 0.001); the distribution of pretreatment agents also differed (45.3% vs. 18.9% for clopidogrel, 52.9% vs. 78.8% for ticagrelor, and 1.8% vs. 2.3% for prasugrel, p < 0.001). The primary endpoint of the study, when adjusted only for age and sex, was significantly lower in the pretreatment arm than in the control group (1.4% vs. 2.5%, respectively, p < 0.001). After the inclusion of the remaining variables into the instrumental variable analysis, i.e., diabetes, prior MI, prior PCI or CABG, smoking status, severity of coronary artery disease, hypertension, hyperlipidemia, indication for PCI, type of P2Y12 receptor antagonist, and completeness of revascularization, the mortality rates did not differ significantly (adjusted OR 1.44; 95% confidence interval (CI), 0.78-2.62; p = 0.36). Similarly, there were no significant differences between the study groups in terms of 1-year mortality (4.3% vs. 7.1% for the pretreatment vs. control group, adjusted OR 1.34 (0.77-2.34), p = 0.3) or definite stent thrombosis at day 30 (0.2% vs. 0.2%, adj. OR 1.17 (0.64-2.16), p = 0.6). Bleeding episodes occurred less frequently in the pretreatment arm than in controls (6.0% vs. 7.5%, respectively), but after the adjustment, the risk was higher in pretreated individuals (adj. OR 1.49 (1.06-2.12), p = 0.02). This result remained valid even after the exclusion of minor bleeding events (adj. OR 2.31 (1.34-3.98), p = 0.002). Moreover, in-hospital bleeding was associated with an increase in the 30-day and 1-year mortality rates (adj. OR 8.68 (7.54-9.98), p < 0.001 and adj. OR 3.05 (2.73-3.42), p < 0.001, respectively).
Despite presenting real-life data on NSTE-ACS patients in Sweden, the SCAAR registry is subject to several sources of bias, including the lack of information regarding patients mistakenly diagnosed with NSTE-ACS, patients who died before admission to the hospital, or patients who had been treated with a P2Y12 receptor antagonist beforehand. Moreover, the registry lacks subgroup analyses in terms of time since symptom onset or the patients' clinical condition, which determines the urgency of intervention. It must also be pointed out that propensity score matching noticeably changed the results relative to the raw data analysis for both efficacy and safety outcomes.
In the randomized Early and Sustained Dual Oral Antiplatelet Therapy Following Percutaneous Coronary Intervention (CREDO) trial, the authors evaluated the outcomes of 12-month clopidogrel use in patients presenting with NSTE-ACS who underwent PCI, as well as the potential benefits of pretreatment with this agent. Patients were randomized to receive either 300 mg of clopidogrel (n = 1053) or placebo (n = 1063) between 3 and 24 h before coronary angiography. After PCI, all patients received clopidogrel until day 28. Patients enrolled in the clopidogrel arm continued therapy with clopidogrel for up to 1 year, while those in the placebo arm were switched to placebo on day 29. The primary endpoint of the CREDO trial was a composite of 1-year mortality, MI, or stroke. A 12-month treatment with clopidogrel reduced the rate of the primary endpoint when compared to the placebo group (8.5% vs. 11.5%, respectively, relative risk reduction (RRR) 26.9% (3.9-44.4), p = 0.02). Pretreatment with clopidogrel reduced the risk of the combined endpoint (death, MI, and urgent revascularization of the target vessel) at day 28 by 18.5%, but the difference was not significant (p = 0.23). If clopidogrel was administered over 6 h before PCI, the reduction in the combined endpoint was far more pronounced (38.6%, 95% CI: −1.6% to 62.9%, p = 0.051), as should be expected given the pharmacokinetics of clopidogrel [37].
Another study, Downstream Versus Upstream Strategy for the Administration of P2Y12 Receptor Blockers In Non-ST Elevated Acute Coronary Syndromes With Initial Invasive Indication (DUBIUS), was designed to evaluate the differences between upstream (pretreatment) and downstream (no-pretreatment) administration of the potent agents ticagrelor and prasugrel. A total of 1449 patients were randomized in a 1:1 ratio to upstream (with ticagrelor) or downstream therapy. Those in the downstream group who qualified for PCI underwent a second randomization to receive prasugrel or ticagrelor. The primary endpoint of the study was defined as a composite of death from vascular causes, nonfatal MI, nonfatal stroke, and major bleeding (BARC type 3, 4, and 5) at day 30 following randomization. There was no significant difference in the primary outcome between the downstream and upstream groups at 30 days (2.9% vs. 3.3%, respectively, absolute risk reduction (ARR): -0.46; 95% CI: -2.87 to 1.89, p = 0.5). BARC 3, 4, and 5 episodes occurred with a similar frequency in both groups. The study was prematurely terminated due to the low incidence of events, both ischemic and bleeding. Therefore, as the authors stated in the manuscript, there is a very low likelihood that either of the tested strategies would surpass the other [38].
The authors of the ESC Guidelines stated that patients diagnosed with NSTE-ACS who are planned for an early invasive strategy (coronary angiography in less than 24 h) should not be routinely pretreated with a P2Y12 receptor inhibitor. This standpoint is supported in the paragraph regarding the differential diagnosis of NSTE-ACS, where several serious medical conditions mimicking ACS are listed (Table 1). An increased risk of bleeding in cases of aortic dissection, tension pneumothorax, chest/cardiac trauma, cholecystitis, etc., is undoubtedly an undesired phenomenon. On the other hand, the authors conclude that pretreatment may be considered for patients without a high bleeding risk who are not planned for an early invasive strategy (class of recommendation IIb, level of evidence C). The authors of the ESC Guidelines do not mention the three available meta-analyses that investigate the issue of pretreatment in ACS patients. One study by Bellemain-Appaix et al., published in 2014, included seven clinical studies: four randomized controlled trials and three observational analyses, one of which was based on data from a randomized controlled trial. A total of 32,383 patients were included. The obligatory criterion for a study to be included in the analysis was that it reported all-cause mortality and major bleeding episodes as outcomes. Of all the included patients, 55% were treated with PCI. Pretreatment with thienopyridines was associated with a non-significant reduction in all-cause mortality (OR 0.90, 95% CI: 0.75-1.07, p = 0.24). The difference was more pronounced, but still not significant, in randomized controlled trials (OR 0.78, 95% CI: 0.71-1.14, p = 0.39). Patients pretreated with a thienopyridine had a significantly increased risk of major bleeding (OR 1.32, 95% CI: 1.16-1.49, p < 0.0001). This meta-analysis does not support routine pretreatment in NSTE-ACS due to its negative influence on the risk of bleeding without definite benefits in the rate of cardiovascular events [39].
A more recent meta-analysis by Nairooz et al. published in 2017 included 16 trials and 61,517 patients diagnosed with ACS (both STEMI and NSTE-ACS). The aim of the analysis was to compare effects of pretreatment with clopidogrel in individuals treated invasively. At 30 days, the rate of major adverse cardiovascular events was significantly lower in pretreated patients than in those who did not receive pretreatment (7.67% vs. 9.46%, respectively, p < 0.0001). Similarly, all-cause mortality was significantly reduced (2.8% vs. 4.1%, p = 0.0003). There was no difference in the rate of major bleeding events between study arms (1% vs. 2.78%, p = 0.89) [41].
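For orientation, the odds ratios and confidence intervals quoted in these meta-analyses are conventionally derived from 2×2 event tables via the Woolf (log) method; the sketch below uses purely hypothetical counts, not data from any cited study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI (Woolf log method) for a 2x2 table:
    a/b = events/non-events with pretreatment, c/d = without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
print(odds_ratio_ci(120, 880, 150, 850))   # OR ~0.77 with its 95% CI
```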
It should be mentioned that, in terms of pretreatment with oral P2Y12 receptor inhibitors, the ESC Guidelines do not refer to any pharmacodynamic studies documenting the delay in achieving adequate platelet inhibition after the administration of a loading dose of a particular agent. Early inhibition of platelet function may be expected in stable patients who receive prasugrel or ticagrelor [42,43]. The IMPRESSION trial revealed that even 4 h after the administration of a loading dose of ticagrelor followed by morphine, the percentage of high-platelet-reactivity patients can reach unpredictably high levels (20%, 37%, and 23% for multiple electrode aggregometry, VASP, and VerifyNow, respectively). Even if only patients who did not receive any morphine were taken into account, those numbers reached 17%, 17%, and 8%, respectively [13]. Similarly worrisome results were found by Schoergenhofer et al. in a trial that tested the pharmacodynamics of prasugrel in critically ill patients admitted to the intensive care unit. Among them, a poor response to prasugrel, resulting in high on-treatment platelet reactivity, was very common (65%, 95% CI: 43-84%). Moreover, low plasma concentrations of both prasugrel and its active metabolite were found among the study participants. As the study showed, high plasma concentrations of C-reactive protein were associated with a lower peak plasma concentration of prasugrel (r = −0.51, p = 0.02) [14].
A brief summary of the studies described in the text is presented in Table 2. Abbreviations: ACS, acute coronary syndrome; CV, cardiovascular; MACE, major adverse cardiovascular events; MI, myocardial infarction; ns, non-significant; NSTE-ACS, non-ST-elevation acute coronary syndrome; STEMI, ST-segment elevation myocardial infarction; TVR, target vessel revascularization.
Discussion
Undoubtedly, contemporary scientific data regarding pretreatment in ACS are scarce. There are multiple aspects to consider, and making the right decision resembles walking a thin line between an increased risk of bleeding and greater ischemic complications. Pretreatment with a P2Y12 receptor inhibitor may intuitively seem to be an obvious approach due to its reasonable rationale. Patients with a new diagnosis of STEMI will most commonly require percutaneous treatment, as the majority of cases are caused by a total occlusion of a coronary artery. Early inhibition of platelet function plays a pivotal role in this setting.
The main factor that negatively influenced the results of the ATLANTIC trial was the short time difference (31 min) between the tested therapeutic strategies [30]. As mentioned above, the registration clinical trials for both prasugrel and ticagrelor allowed pretreatment, which, consistently with pharmacokinetic/pharmacodynamic studies, supports the early administration of P2Y12 receptor inhibitors in STEMI.
Contrary to STEMI, the case of NSTE-ACS patients receiving pretreatment with a P2Y12 receptor inhibitor has recently become the subject of numerous debates. Unfortunately, a superficial analysis of the available data from various clinical studies may lead to misleading assumptions. The authors of the latest issue of the ESC Guidelines for the management of patients presenting with NSTE-ACS no longer recommend routine pretreatment with a P2Y12 receptor inhibitor based on the ISAR-REACT 5 study results. However, the authors did not take into account several critical limitations of this study. Despite being an international, multicenter study, ISAR-REACT 5 was conducted in only two countries, with an unacceptable disproportion in the distribution of study sites (21 sites in Germany and only 2 in Italy). Moreover, adherence to treatment exceeded 99%, which is hardly believable (in the PLATO trial, the registration trial for ticagrelor, adherence was 82.8%). Controversies regarding the design of the study are also associated with the schedule of follow-up visits: only 10% of participants attended an on-site visit, while a further 83% were contacted by telephone and the remaining 7% by mail. Moreover, because the analysis was based on the intention-to-treat principle, the results were undoubtedly affected by the fact that over 20% of participants were discharged from the hospital on a different treatment agent than the one assigned at randomization. As it turned out later, the intention-to-treat method led to the inclusion in the analysis of 1299 patients who were not treated with the medication they were initially assigned to.
Taking into account the above, as well as the exclusion of an unacceptably high number of participants from the final analysis, it is difficult to call the ISAR-REACT 5 trial results ground-breaking [34].
There is broad approval of the results of the ACCOAST trial. The strategy of limiting the administration of prasugrel only to patients who are candidates for PCI after diagnostic coronary angiography was the standard in both the 2015 and 2020 ESC Guidelines for the management of patients presenting with NSTE-ACS. Pretreatment with prasugrel was not associated with a reduction in the primary efficacy endpoint of the study but was associated with an increased risk of bleeding. Nevertheless, these conclusions remain valid only if the subgroup analyses are not taken into account. The ACCOAST trial clearly does not support routine pretreatment with prasugrel in NSTE-ACS, but a noticeable improvement in clinical outcomes is seen in patients who received the loading dose of this agent early after symptom onset [4,8].
With regard to the SCAAR registry, which is a valuable source of data regarding pretreatment and its potential benefits, the bleeding episodes included all events such as cardiac tamponade, prolonged compression treatment, surgical intervention, a decrease in hemoglobin of at least 2 g/dL, pseudoaneurysms, puncture site hematomas, or transfusions, all classified as BARC type 2 or 3. Despite being consistent with other Swedish registries, such a classification affects the statistics, mainly by increasing the rate of minor episodes. Moreover, there was a significantly higher percentage of patients in the pretreatment arm who underwent procedures through access sites other than radial and who were thus more predisposed to bleeding complications. It is also worth noting that, apart from the increased risk of bleeding, pretreatment was not inferior in terms of the efficacy endpoints.
Given such a clear standpoint precluding the routine upstream administration of oral P2Y12 receptor inhibitors, one might expect the authors of the latest issue of the ESC Guidelines for the management of patients presenting with NSTE-ACS to strongly support cangrelor as a solution to the numerous aspects of potentially inefficient antiplatelet therapy. Administration of this intravenous inhibitor was associated with a reduction in ischemic events, including stent thrombosis [44,45]. Due to its rapid onset and offset of action, cangrelor has the potential to solve all the aforementioned issues regarding pretreatment. Nevertheless, as stated in the document, administration of cangrelor may be considered in P2Y12-naïve patients undergoing PCI (class of recommendation IIb, level of evidence A). Another promising approach to achieving quick and reversible platelet inhibition may be the subcutaneous administration of a novel P2Y12 receptor inhibitor, selatogrel. In phase 1 and phase 2 studies, this agent successfully inhibited platelet activity in approximately 90% of patients as early as 30 min after self-administration. Subcutaneous administration of the drug could potentially overcome all the previously described limitations of oral agents. Nevertheless, to date, the drug has not been approved by either the Food and Drug Administration (FDA) or the European Medicines Agency (EMA) [46][47][48][49]. Most probably, however, the wide availability of such agents for in-hospital use would put an end to the dispute regarding pretreatment in NSTE-ACS patients [4].
Summary
As stated in the ESC Guidelines, "although a rationale for pretreatment in NSTE-ACS may seem obvious, for achieving sufficient platelet inhibition at the time of PCI, large-scale randomized trials supporting a routine pretreatment strategy with either clopidogrel or the potent P2Y 12 receptor inhibitors-prasugrel and ticagrelor-are lacking". Nevertheless, the authors conclude that "Based upon the available evidence, it is not recommended to administer routine pretreatment with a P2Y 12 receptor inhibitor in NSTE-ACS patients in whom coronary anatomy is not known and an early invasive management is planned". Successful treatment of ACS patients is definitely a complex issue comprising multiple multi-directional aspects. The issue of pretreatment with a P2Y12 receptor inhibitor has been the subject of numerous randomized controlled or observational clinical trials, as well as observational registries. The aim of this review was to discuss the latest recommendations included in the ESC guidelines based on the available data obtained from particular clinical studies. It is hard to agree with the standpoint presented in the guidelines as a generalization for all NSTE-ACS patients, as the arguments supporting it seem far too weak. Several questions remain unanswered after analysis of the available data on pretreatment:
-Who may benefit from pretreatment and for whom would this strategy be harmful?
Based on the results of the ACCOAST trial, patients receiving pretreatment with a P2Y12 receptor inhibitor early after symptom onset are expected to benefit most from such a strategy, regardless of the type of ACS. On the other hand, pretreatment administered late after the onset of symptoms may be harmful and is therefore not recommended.
-Which approach is the most appropriate in the highest-risk patients?
The highest-risk patients are often characterized by impaired absorption from the gastrointestinal tract due to multiple causes, including the centralization of circulation or concomitant therapy with opioids, which makes parenteral administration of P2Y12 receptor inhibitors the best approach.
As a simple and generalized point of view may be misleading due to the diversity of the NSTE-ACS population, an up-to-date, large-scale randomized controlled clinical study with stratification of patients depending on risk and time from symptom onset would be required to evaluate the clinical outcomes of pretreatment in this clinical setting.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-03-22T15:24:47.623Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "aeac36b6a79338426e4604c29b7b9cff5f4a05f6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/6/2374/pdf?version=1679213185",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e07f7d3c4ac798d3d005a27ba0bb3f442ba0ce22",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228080860 | pes2o/s2orc | v3-fos-license | The Effects of Pre-Storage Leukoreduction on the Conservation of Bovine Whole Blood in Plastic Bags
Simple Summary: Blood transfusion is a life-saving veterinary therapeutic procedure. While fractionated blood components are used in humans, whole blood is most commonly used in animals, especially farm animals. Whole blood contains white blood cells that can cause a transfusion reaction in animals. Here, we proposed that using a blood bag with leukocyte filtration is sufficient for blood conservation under field conditions and thus can be an option for transfusion medicine in farm animals. The filtered bag was efficient in removing white cells from cattle whole blood and could be used under field conditions. Blood stored after the white blood cells were removed showed a lower acidic load. Further experimental studies are required to prove that blood without white cells results in a decrease in transfusion reactions in cattle.
Abstract: Leukoreduction (LR) is a technique that consists of reducing the number of leukocytes in whole blood or blood components, which can contribute to decreasing storage lesions and the occurrence of post-transfusion complications. We propose that using a blood bag with pre-storage leukocyte filtration is sufficient for blood conservation under field conditions. Ten healthy Nelore cows were used. Whole blood was sampled from each animal and stored at 2 to 6 °C in CPD/SAG-M (citrate phosphate dextrose bag with a saline, adenine, glucose, mannitol satellite bag) triple bags (Control) and in CPD/SAG-M quadruple bags with a leukocyte filter (Filter). At baseline and after 7, 14, 21, 28, 35, and 42 days (D0, D7, D14, D21, D28, D35, and D42, respectively), complete hematological, blood gas, and biochemical evaluations were determined. The filtered bag removed 99.3% of white blood cells from cattle blood, and the entire filtration process was performed in the field. There was a reduction in the number of red blood cells (RBCs) in both groups from D14 onward, with a decrease of 19.7% and 17.1% at D42 for the Control and Filter bags, respectively. The hemoglobin (Hb) concentration varied in both groups. Potassium, pO2, pCO2, and sO2 increased, and sodium, bicarbonate, and pH decreased during storage. The filtered bag was efficient in removing white cells from cattle whole blood and could be used under field conditions. Blood stored after LR showed differences (p < 0.05) in blood gas analysis towards a better quality of stored blood (e.g., higher pH, lower pCO2, higher sO2). Further experimental studies are required to prove that blood without white cells results in a decrease in transfusion reactions in cattle.
Introduction
The development of different preservative solutions is essential for the long-term storage of blood, especially to facilitate the use of blood transfusion in human and animal medicine. However, stored whole blood and its components undergo changes known as storage lesions [1,2]. Leukocyte degradation in stored blood results in the release of cytokines, histamine, serotonin, elastase, and acid phosphatase, which contribute to hemolysis and post-transfusion complications [3][4][5].
Leukoreduction (LR) consists of reducing the number of leukocytes in whole blood or blood components such as red blood cell (RBC) concentrate, platelet concentrate, or plasma. LR has been useful in decreasing febrile non-hemolytic transfusion reactions (FNHTR) and in decreasing the mortality of patients undergoing cardiac surgery [6][7][8][9][10]. In veterinary medicine, the majority of studies have been performed in dogs, in which LR resulted in a 98% leukocyte and 95.1% platelet count reduction [11]. LR of canine blood was effective in preventing the release of vascular endothelial growth factor [12], preventing the increase in interleukin-8 concentration [13], and attenuating the generation of phosphatidylserine-expressing microparticles [14] during storage. LR eliminated the post-transfusion inflammatory response in dogs receiving stored packed red blood cells [15].
Although blood fractionation techniques are used to obtain individual blood components, the conservation and use of whole blood is still the most widely used approach [16][17][18] in large animal clinics, where it is more difficult to use fractionated blood. At the field level, performing fractionation requires investment in additional infrastructure and equipment. Given this difficulty, the use of a leukocyte-filtering bag would advance blood conservation and transfusion therapy for large animals, as it would allow the removal of leukocytes immediately after blood collection, without the need for a field-level centrifuge. We propose that using a human blood bag with pre-storage leukocyte filtration will improve blood quality for conservation purposes under field conditions, offering a novel option for veterinary transfusion medicine.
Materials and Methods
This study was approved by the Ethics Commission of Animal Use of the Federal University of Western Pará, Santarém, PA, Brazil, protocol number 01002-2016. Ten adult female Nellore cattle were used, weighing an average of 406.5 ± 42.69 kg. The animals were healthy on physical examination, had normal hematological parameters, and had blood smears negative for hemoparasites (Babesia sp., Anaplasma sp., and Trypanosoma sp.). For blood sampling, the cattle were restrained in a cattle crush. Complete fur removal and antisepsis were performed in the neck region over the jugular vein using povidone iodine followed by 70% alcohol. A sample size of ten animals was chosen because each animal could provide blood for both bags, collected in a single session, thus reducing the variability between groups.
From each animal, 900 g of blood was sampled: 450 g was stored in a CPD/SAG-M triple bag (CompoSampler; Fresenius Kabi, São Paulo, Brazil), and 450 g was stored in a CPD/SAG-M quadruple bag with an in-line leukocyte filter (CompoFlow® Select; Fresenius Kabi, São Paulo, Brazil), comprising the Control and Filter groups, respectively. Both bags contained a preservative solution composed of citrate, phosphate, and dextrose in the primary bag, and a solution of mannitol and sodium chloride in the satellite bag. Only one venipuncture was performed for blood collection; the blood was stored in one type of bag until it was full, which was then exchanged for the other type using the same venous access. The order of the two bag types in the sampling sequence (first or second) was alternated between animals.
For the Filter group, following the manufacturer's instructions, pre-storage filtration for leukocyte removal was performed one hour after sampling and prior to mixing in the additive from the satellite bag. For the filtration, we used the provided blood bags, moving the whole blood by gravity from one bag to another through an in-line filter. The filtration time varied from 15 to 30 min, after which the additive solution (from the satellite bag) was mixed into the filtrate, homogenized, and stored in the refrigerator. The Control group followed previously described procedures [19]. The bags were stored in a refrigerator with a controlled temperature of 2 to 6 °C. The refrigerator was placed in the lab and was restricted to this experiment. The blood bags were homogenized every two days. Around 45 mL of blood was lost in the white blood cell filter, with a red blood cell recovery of 90% using this in-line filtration system.
Laboratory evaluations of the filtered and whole blood stored in the bags were performed at seven different times as follows: immediately after collection (D0), 7 days after collection (D7), 14 days after collection (D14), 21 days after collection (D21), 28 days after collection (D28), 35 days after collection (D35), and 42 days after collection (D42).
Prior to the evaluation of the blood stored in the bags, the blood was homogenized for 10 to 20 min, followed by the withdrawal of 15 mL of blood using sterile syringes to measure the hematological, blood gas, and biochemical variables. In addition, a microbiological examination was performed at only D0 and D42.
The red and white blood cell counts were determined manually in a Neubauer chamber with a macrodilution technique [20]. The platelet count was not performed. The packed cell volume (PCV) was obtained using a microhematocrit centrifuge, and total hemoglobin (Hb) through the cyanmethemoglobin method [21]. The mean corpuscular volume (MCV) was calculated using the equation described by Jain [22]. Measurements of pH, partial O2 pressure (pO2), partial CO2 pressure (pCO2), O2 saturation (sO2), and bicarbonate (HCO3) values were performed using a portable blood gas analyzer (i-STAT® System, Abbott Laboratories, USA) with commercial cartridges (CG8+, Abbott Laboratories, Chicago, IL, USA).
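For the reader's convenience, the relation commonly attributed to Jain, and presumably the one used here, expresses MCV in terms of PCV and the erythrocyte count:

```latex
\mathrm{MCV}\ (\mathrm{fL}) \;=\; \frac{\mathrm{PCV}\,(\%) \times 10}{\mathrm{RBC}\,(10^{6}/\mu\mathrm{L})},
\qquad \text{e.g.}\quad \frac{30 \times 10}{6.0} \;=\; 50\ \mathrm{fL}.
```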
Whole blood samples were centrifuged under refrigeration (4 • C) for 10 min at 1000× g to obtain plasma for the assessment of lactate, glucose, cholesterol, sodium, and potassium concentration using an automatic biochemical analyzer (Rx Daytona, Randox, Antrim, UK) with commercial kits (Randox), with the exception of sodium and potassium which were determined using a flame photometer (CELM, São Paulo, Brazil).
The Hemobac Triphasic System (ProBac do Brasil, São Paulo, Brazil) was used for microbiological analysis, with the culture media (chocolate agar, Sabouraud's agar, and MacConkey's agar) being kept in an incubator for seven days at a temperature of 35 • C.
Data were assessed for normality (Gaussian distribution) using the Kolmogorov-Smirnov test. Normally distributed data were subjected to analysis of variance (ANOVA) using the PROC MIXED procedure of SAS (Statistical Analysis System, SAS Institute Inc., Cary, NC, USA) for repeated measures over time, considering the effects of treatment (Control and Filter bags), time (different time points), and the treatment-by-time interaction. The Bonferroni test was used to determine the differences between D0 and each of the other time points. Differences were considered statistically significant at p < 0.05.
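For readers who wish to reproduce this kind of repeated-measures analysis outside SAS, a minimal Python sketch is given below. The column names (animal, group, day, value) are our own assumptions rather than the study's, and statsmodels' mixedlm is used only as a rough analogue of PROC MIXED, not an exact replication.

```python
import statsmodels.formula.api as smf
from scipy import stats

def analyze(df):
    """df: one row per bag per time point, with columns
    animal (ID), group ('Control'/'Filter'), day (0..42), value (measured variable)."""
    # Kolmogorov-Smirnov check of normality on standardized values
    z = (df["value"] - df["value"].mean()) / df["value"].std(ddof=1)
    print(stats.kstest(z, "norm"))

    # Mixed model: fixed effects for treatment, time, and their interaction,
    # with animal as the grouping (random-intercept) factor
    model = smf.mixedlm("value ~ C(group) * C(day)", data=df, groups=df["animal"])
    print(model.fit().summary())
```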
Results
There was no microbiological contamination at D0 or at the end of the study according to the results of the Hemobac culture test. The blood bag with the leukocyte filter proved to be efficient, removing 99.3% of the white cells from cattle blood, with the entire filtration process being performed in the field. The mean number of leukocytes before filtration was 12.1 ± 1.0 × 10³, which was reduced to 0.068 ± 0.01 × 10³ after filtration using the in-line system in the storage bag. Although we did not count platelets, the stored blood after LR probably had a marked decrease in platelet count as well.
There was no difference between the Control and Filter groups for the variables of RBC, Hb, and PCV (Table 1). There was a decrease in the total number of RBCs in both groups from D14 to D42. There was no change in PCV over time. Hb increased in both bags from D14 onward. There was no difference between the Control and Filter groups for the MCV; however, there was an increase in MCV for the Control group after D35.
Higher pH and higher pCO2, pO2, and sO2 values were observed in the blood stored in the filtered bags compared to the control bags (Table 2). When comparing D0 with the other time points, there was a reduction in blood pH in the Control group after D14, whereas in the Filter group there was a reduction only at D42. Only the Control group showed variations in pCO2 over time, with an increase starting at D21.
In relation to pO 2 , there was an increase after D28 and D14, in the Control and Filter groups, respectively. The values of sO 2 increased for both groups from D14 onward. For bicarbonate, the Control group decreased after D7, and the Filter group after D14.
There was no difference between the groups for the concentrations of glucose, cholesterol, lactate, potassium, or sodium (Table 3). However, when comparing D0 with the other time points, glucose decreased in both groups after D28, and potassium increased and sodium decreased in concentration after D7. Lactate increased in concentration after D14 in both groups, whereas no difference was observed over time for cholesterol.
Discussion
The filtered bag proved usable under farm conditions, as this experiment was performed on a commercial farm and none of the samples had microbiological contamination. Greenwalt et al. [23] and Heaton et al. [24] state that the presence of leukocytes in bags of RBCs contributes significantly to an increase in hemolysis during storage, mainly due to the release of various chemicals and enzymes, especially leukocyte proteases. However, the absence of a difference in the number of RBCs and in the concentration of Hb between the bags suggests that the filtered bag did not confer additional benefits for the preservation of erythrocytes in the bovine species when compared to the triple bag without leukoreduction.
Nunes Neto et al. [19], working with buffalo whole blood, described that PCV showed no differences during storage but differed between types of bags due to differences in preservative solutions. In the present study, as the bags used had the same preservative solution and volume, the absence of differences between the bags was expected. Despite the reduction in the number of RBCs in the stored whole blood, the PCV remained stable due to the increase in the MCV, as observed in other studies [25,26]. Regarding total Hb, Tavares et al. [18] and Barros [27] found similar results, with significant differences between volumes and an increase between time points.
In the filtered bags, the number of leukocytes before and after filtration was compared at D0; the difference was statistically significant, showing that the filter removed 99.3% of the white cells. Moreover, we did not perform a platelet count; for human blood, the same blood bag with the in-line filter removes ~97% of platelets, so we can assume that the filtered blood had a substantial reduction in platelet count [28]. Further studies with cattle blood should confirm this assumption. Studies show that the leukoreduction of blood components has been a viable and effective resource for reducing the occurrence of transfusion incidents. However, the filters used in this procedure must be able to remove at least 90% of the leukocytes present in the RBC concentrate so that the risks associated with transfusions are reduced [29].
In this study, there was a reduction in the number of leukocytes in the control bags during storage, which is compatible with previous studies in different species [17][18][19]. However, even after 42 days of storage, the blood still contained a significant number of leukocytes, which can contribute to a higher occurrence of post-transfusion febrile reactions [6,30]. Other studies indicate that the presence of leukocytes can have indirect deleterious effects on stored erythrocytes, as these cells contribute to the consumption of glucose from the preservative solution and also release bioreactive substances. During storage, these cells rupture, promoting the release of immunomodulators. Therefore, their presence in stored blood products can accelerate hemolysis and increase extracellular potassium [15,31]. Thus, the blood from the filtered bags avoids these drawbacks when used for transfusion in cattle.
The pH reduction was more accentuated in the control bags, occurring due to the production of acid metabolites, such as lactate, by the stored red cells [17,32,33]. Although the pH reduction is related to the degradation of 2,3-diphosphoglycerate, in ruminants this metabolite has no effect on the affinity of hemoglobin for O2, since cattle Hb preferentially binds chloride ions [34]. This difference in pH between the bag types can be attributed to the absence of leukocytes, as the Control group showed a greater pH reduction and the Filter group showed less variation. Additionally, the presence of leukocytes can increase the consumption of glucose and consequently the production of lactate, influencing the pH drop in the control bags. Although there was no difference between the bags for glucose and lactate, the control bags clearly presented greater variation in these variables throughout the study. Another factor that may have contributed to blood acidification is pCO2, which was higher in the control bags.
The increase in pCO2 in stored blood is mainly due to the neutralization of the lactic acid produced by cellular metabolism, resulting in the production of CO2. The increase in pCO2 is another factor that decreases Hb's affinity for O2 through two mechanisms: first, by decreasing blood pH and, second, by promoting a direct combination of CO2 with Hb, forming carbamino compounds [35]. pO2 was higher in the blood stored in the filtered bags but increased in both bags over time. An increase in pO2 during storage has been observed in the stored blood of canines, bovines, sheep, and donkeys [17,27,33,36]. Like pO2, sO2 was higher in the filtered bags. The affinity of O2 for Hb can be represented by an Hb dissociation curve in which the higher the partial O2 pressure, the higher the O2 saturation [25].
Moroz [37] showed that the leukoreduced RBC concentrate in dogs had higher saturation and O 2 pressure both in the blood stored in CPDA-1 and CPD/SAG-M bags when compared with non-leukoreduced concentrate in the same types of bags. This is similar to the results found in our study with cattle whole blood.
The reduction in blood HCO3 values occurs because it is consumed buffering lactate to control acidity [38]. Ribeiro Filho et al. [32] associate the gradual reduction of HCO3 levels with the increase in lactate, which must be neutralized. The limited variation in cholesterol over the storage period can be explained by Roback et al. [39], who suggest that, over the conservation period, aging erythrocytes repair their membranes preferentially using phospholipids such as phosphoglycerol, leaving cholesterol concentrations unchanged. Cholesterol contributes to the stabilization and fluidity of the erythrocyte plasma membrane. The data found corroborate the findings of Nunes Neto et al. [19], who also found no variation in the cholesterol of buffalo blood during 42 days of storage.
The increase in lactate levels during blood storage is consistent with blood conservation assessments in humans [39], donkeys [27], sheep [17], and buffalo [19]. During conservation, the glucose consumed to produce ATP generates a series of metabolites, including lactate. This process, called anaerobic glycolysis, is an alternative mechanism for energy production in tissues with insufficient O2 or in cells without mitochondria, such as RBCs [39]. Kaneko et al. [40] describe that, during anaerobic glycolysis, two lactate molecules are produced for each glucose molecule metabolized. The consequences of the increase in lactate are related to the decrease in pH during the conservation period, and rapid infusion of stored blood into seriously injured patients should be avoided in order to prevent acidemia [41].
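For reference (the source does not write it out), the net reaction of anaerobic glycolysis behind this 2:1 stoichiometry is commonly given as

$$\mathrm{Glucose} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i} \;\longrightarrow\; 2\,\mathrm{Lactate} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}.$$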
High potassium values in stored blood depend on hemolysis and on intra-erythrocyte potassium concentrations, which vary between species: high concentrations are found in the red cells of humans, horses, goats, and cattle, and low values in dogs and cats [25]. As with Hb, the increase in extracellular potassium concentration is one of the first events related to the reduction in the quality of red cells [42].
The bags with the leukocyte filter contributed to a better quality of the stored blood, since the blood presented less acidity, which in turn favors more efficient release of O2 by Hb. However, further studies are needed to assess the accumulation of bioreactive substances such as cytokines, free radicals, and pro-inflammatory products, as well as their post-transfusion effects.
Another potential benefit of LR is the reduced risk of transmitting infectious diseases through blood transfusion. Experimental studies in animal models have confirmed that LR provides a high, but not absolute, protection against prion disease transmission by blood transfusion [43]. Leukodepletion was beneficial in the removal of human pathogens such as cytomegalovirus, Orientia tsutsugamushi, and Trypanosoma cruzi, and may reduce the hazard of Leishmania transmission [44]. A recent study in dogs showed that LR reduced but did not eliminate Rickettsia conorii in stored whole blood [45]. Further studies are required to evaluate the effectiveness of LR against the transfusion-transmitted infectious diseases important to the cattle industry.
Conclusions
The whole blood of cattle stored in CPD/SAG-M triple bags and CPD/SAG-M quadruple bags with in-line leukocyte filtration changed during the storage period of 42 days under refrigeration; however, the blood remained viable for transfusion. The two types of bags evaluated can be indicated for the conservation and transfusion of whole blood in the bovine species. In assessing the efficiency of the leukocyte filter, the Filter blood bag proved to be efficient, removing 99.3% of the white cells from bovine whole blood, and can be used under field conditions.
"year": 2020,
"sha1": "919fbd3e6e179d3d482515d773cbd56136e57953",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/biology9120444",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93ab1eeef90ab43f2e97a69dee1ff4980121ba6f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Bernstein-Type Operators on the Unit Disk
We construct and study sequences of linear operators of Bernstein type acting on bivariate functions defined on the unit disk. To this end, we study Bernstein-type operators under a domain transformation, we analyze the bivariate Bernstein–Stancu operators, and we introduce Bernstein-type operators on disk quadrants by means of continuously differentiable transformations of the function. We state convergence results for continuous functions and we estimate the rate of convergence. Finally, some interesting numerical examples are given, comparing approximations using the shifted Bernstein–Stancu and the Bernstein-type operators on disk quadrants.
Preliminaries
In 1912, S. Bernstein ([2]) published a constructive proof of the Weierstrass approximation theorem, which affirms that every continuous function f(x) defined on a closed interval can be uniformly approximated by polynomials. For a given function f ∈ C[0, 1], Bernstein constructed a sequence of polynomials (later called Bernstein polynomials) of the form
$$B_n f(x) = \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right) \binom{n}{k}\, x^k (1-x)^{n-k}, \qquad (1.1)$$
for 0 ≤ x ≤ 1 and n ≥ 0.
Clearly, $B_n f$ is a polynomial in the variable x of degree less than or equal to n, and (1.1) can be seen as a linear operator that transforms functions defined on [0, 1] into polynomials of degree at most n. Hence, in the sequel, we will refer to $B_n$ as the nth classical univariate Bernstein operator.
If we define
$$p_{n,k}(x) = \binom{n}{k}\, x^k (1-x)^{n-k}, \qquad 0 \le k \le n, \qquad (1.2)$$
then, among others, classical Bernstein operators satisfy the following properties ([13]):
• They are linear and positive operators acting on the function f, and they preserve constant functions as well as polynomials of degree 1; that is, $B_n[1] = 1$ and $B_n[x] = x$.
The Bernstein operators admit a complete system of polynomial eigenfunctions. However, each eigenfunction depends on n and, thus, is associated with the nth Bernstein operator $B_n$. Another inconvenience of the Bernstein operator associated with a given function f is its slow rate of convergence toward f. Over the years, several modifications and extensions of Bernstein operators have been studied. The modifications have been introduced in several directions, and we only recall a few interesting cases and cite some papers. For instance, it is possible to substitute the values of the function at equally spaced points by other mean values, such as integrals, as was stated in the pioneering papers of Durrmeyer ([9]) and Derriennic ([6,7]). In [4], the operator is modified in order to preserve some properties of the original function. Another group of modifications, given by the transformation of the function by means of convenient continuous and differentiable functions, is analyzed in [5]; and, of course, there is the extension of the Bernstein operators to the multivariate case. The most common extension of the Bernstein operator is defined on the unit simplex in higher dimensions ([1,13,16,17], among others), since the basic polynomials (1.2) can be easily extended to the simplex.
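As a quick numerical illustration of (1.1) (ours, not part of the original paper), the following Python sketch evaluates the nth Bernstein polynomial of a function on [0, 1]:

```python
import numpy as np
from scipy.special import comb

def bernstein(f, n, x):
    """Evaluate B_n f at the points x in [0, 1] via (1.1)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(n + 1)
    # Bernstein basis p_{n,k}(x) = C(n,k) x^k (1-x)^(n-k), one row per k
    basis = comb(n, k)[:, None] * x[None, :] ** k[:, None] * (1.0 - x[None, :]) ** (n - k)[:, None]
    return f(k / n) @ basis

# Example: B_50 applied to sin approximates sin on [0, 1]
print(bernstein(np.sin, 50, np.linspace(0.0, 1.0, 5)))
```

The slow convergence mentioned above is easy to observe with this sketch: even n = 50 gives only a few correct digits.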
In this paper, we are interested in finding an extension of the Bernstein operator to approximate functions defined on the unit disk. In this way, we will consider two kinds of modifications: by transformation of the argument of the function to be approximated, and by definition of an adequate basis of functions as (1.2). We present and study two Bernstein-type approximants, and we compare them by means of several examples.
The structure of the paper is as follows. Section 2 is devoted to collecting the properties of univariate Bernstein-type operators that we will need along the paper. In Sect. 3, we recall the method introduced by Stancu ( [17]) for obtaining Bernstein-type operators in two variables by the successive application of Bernstein operators in one variable. In Sect. 5 and Sect. 6, we define the shifted nth Bernstein-Stancu operator and the shifted nth Bernstein-type operator and study their respective approximation properties. Section 7 is devoted to describing an extension of certain linear combinations of univariate Bernstein operators that give a better order of approximation. The last section is devoted to analyzing several examples, comparing the approximation results for both Bernstein-type operators on the disk, and the linear combinations introduced in Sect. 7.
Univariate Bernstein-Type Operators
In this section, we recall the modified univariate Bernstein-type operators that we will need later. We start by shifting the univariate Bernstein operator.
Using the change of variable $t = (x - \alpha)/(\beta - \alpha)$, which maps $[\alpha, \beta]$ onto $[0, 1]$, we obtain the shifted Bernstein basis
$$p_{n,k}(x; [\alpha, \beta]) = \binom{n}{k} \left(\frac{x-\alpha}{\beta-\alpha}\right)^{k} \left(\frac{\beta-x}{\beta-\alpha}\right)^{n-k},$$
and the Bernstein basis on $[\alpha, \beta]$ (see Fig. 1) satisfies the following properties:
• $p_{n,k}(\alpha) = \delta_{0,k}$ and $p_{n,k}(\beta) = \delta_{k,n}$, where, as usual, $\delta_{\nu,\eta}$ denotes the Kronecker delta;
• If $n \ne 0$, then $p_{n,k}(x; [\alpha, \beta])$ has a unique local maximum on $[\alpha, \beta]$ at $x = (\beta - \alpha)\frac{k}{n} + \alpha$. This maximum takes the value $\binom{n}{k} k^k (n-k)^{n-k} n^{-n}$.
For every function f defined on $I = [\alpha, \beta]$, we can define the shifted univariate nth Bernstein operator as
$$B_n[f(x), I] = \sum_{k=0}^{n} f\!\left(\alpha + k\,\frac{\beta-\alpha}{n}\right) p_{n,k}(x; [\alpha, \beta]).$$
Note that $B_n[f(x), I]$ is a polynomial of degree at most n. In this way, $B_n[f(x), [0, 1]]$ reduces to the classical operator (1.1).
In the sequel, we will use the following Bernstein-type operator studied in [5] and [10]:
$$C_n^{\tau}[f(x)] = \sum_{k=0}^{n} f\!\left(\tau^{-1}\!\left(\frac{k}{n}\right)\right) p_{n,k}(\tau(x)),$$
where τ is any function continuously differentiable as many times as necessary, such that τ(0) = 0, τ(1) = 1, and τ'(x) > 0 for x ∈ [0, 1]. Throughout this work, it will be sufficient for τ to be continuously differentiable.
In [5], the following identities were given:
$$C_n^{\tau}[1] = 1, \qquad C_n^{\tau}[\tau(x)] = \tau(x), \qquad C_n^{\tau}[\tau^2(x)] = \tau^2(x) + \frac{\tau(x)\,(1-\tau(x))}{n}.$$
We have the following result: if f is continuous on [0, 1], then $C_n^{\tau} f$ converges uniformly to f. Since $C_n^{\tau}[f(x)] = B_n[(f \circ \tau^{-1})(t)]\big|_{t = \tau(x)}$, and $B_n[f \circ \tau^{-1}]$ converges uniformly as n → +∞, the result follows from taking the limit on both sides of $C_n^{\tau}$.
We also introduce the following shifted Bernstein-type operator:
$$C_n^{\tau}[f(x), I] = \sum_{k=0}^{n} f\!\left(\tau^{-1}\!\left(\alpha + k\,\frac{\beta-\alpha}{n}\right)\right) p_{n,k}(\tau(x); [\alpha, \beta]),$$
where τ(x) is any continuously differentiable function such that τ(α) = α, τ(β) = β, and τ'(x) > 0 for x ∈ [α, β]. As before, since $C_n^{\tau}[f(x), I] = B_n[(f \circ \tau^{-1})(t), I]\big|_{t = \tau(x)}$ as n → +∞, the result follows from taking the limit on both sides of $C_n^{\tau}$.
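Under the reconstruction above (in which $C_n^{\tau} f = B_n[f \circ \tau^{-1}] \circ \tau$, an assumption based on the operator from [5]), the τ-modified operator is straightforward to prototype. A hedged Python sketch, reusing the same basis construction and with τ(x) = x² purely as an illustration, is:

```python
import numpy as np
from scipy.special import comb

def bernstein(f, n, x):
    """Classical B_n f on [0, 1] (same as the earlier sketch)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(n + 1)
    basis = comb(n, k)[:, None] * x[None, :] ** k[:, None] * (1.0 - x[None, :]) ** (n - k)[:, None]
    return f(k / n) @ basis

def bernstein_tau(f, n, x, tau, tau_inv):
    """C_n^tau f: apply B_n to f o tau^{-1}, then evaluate at tau(x)."""
    return bernstein(lambda t: f(tau_inv(t)), n, tau(np.asarray(x, dtype=float)))

# Example with tau(x) = x^2 (tau(0) = 0, tau(1) = 1, tau' > 0 on (0, 1])
print(bernstein_tau(np.sin, 50, np.linspace(0.0, 1.0, 5),
                    tau=lambda x: x ** 2, tau_inv=np.sqrt))
```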
Bivariate Bernstein–Stancu Operators
In 1963, Stancu [17] studied a method for deducing polynomials of Bernstein type in two variables. This method is based on obtaining an operator in two variables from the successive application of Bernstein operators of one variable. Let φ1 ≡ φ1(x) and φ2 ≡ φ2(x) be two continuous functions such that φ1 < φ2 on [0, 1]. Let Ω ⊆ R² be the domain bounded by the curves y = φ1(x), y = φ2(x), and the straight lines x = 0, x = 1. For every function f(x, y) defined on Ω, setting
$$t = \frac{y - \varphi_1(x)}{\varphi_2(x) - \varphi_1(x)}, \qquad (3.1)$$
let us define the function
$$F(x, t) = f\big(x,\; \varphi_1(x) + t\,(\varphi_2(x) - \varphi_1(x))\big),$$
where 0 ≤ t ≤ 1.
The nth Bernstein–Stancu operator is defined as
$$B_n[f(x, y), \Omega] = \sum_{k=0}^{n} \sum_{j=0}^{n_k} F\!\left(\frac{k}{n}, \frac{j}{n_k}\right) p_{n,k}(x)\, p_{n_k,j}(t), \qquad (3.3)$$
where each $n_k$ is a nonnegative integer associated with the kth node $x_k = k/n$, and t is given by (3.1). Writing (3.3) explicitly in terms of y amounts to substituting (3.1) for t. If we denote by $B_n^{(t)}$ the univariate Bernstein operator acting on the variable t, then the Bernstein–Stancu operator can be written as
$$B_n[f(x, y), \Omega] = \sum_{k=0}^{n} p_{n,k}(x)\, B_{n_k}^{(t)}\big[F(x_k, t)\big].$$
We have the following representation of $B_n$ in terms of a matrix determinant.
Remark 3.2
Observe that the step size of the partition of the x-axis is 1/n and, for a fixed node $x_k = k/n$, the step size of the partition of the t-axis is $1/n_k$. Therefore, the step size of the partition of the y-axis is $1/m_k$, where
$$m_k = \frac{n_k}{\varphi_2(x_k) - \varphi_1(x_k)},$$
and, thus, the step in y varies with the node $x_k$. We point out that, in general, $B_n[f(x, y), \Omega]$ is not a polynomial. However, it is possible to obtain polynomials by an appropriate choice of φ1, φ2, and $n_k$. For instance:
(1) The Bernstein–Stancu operator on the unit square Q = [0, 1] × [0, 1] (see, for instance, [13], [17]) is obtained by letting φ1(x) = 0 and φ2(x) = 1. Hence, for a function f defined on Q, we get
$$B_n[f(x, y), Q] = \sum_{k=0}^{n} \sum_{j=0}^{n_k} f\!\left(\frac{k}{n}, \frac{j}{n_k}\right) p_{n,k}(x)\, p_{n_k,j}(y).$$
Note that when $n_k$ is independent of k (e.g., $n_k = m$ for some positive integer m), $B_n$ is the tensor product of univariate Bernstein operators on Q.
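A small Python sketch (ours) of this tensor-product case on Q, with the uniform choice $n_k = m$:

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(n, x):
    """Rows k = 0..n of the Bernstein basis p_{n,k}(x) at the points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(n + 1)
    return comb(n, k)[:, None] * x[None, :] ** k[:, None] * (1.0 - x[None, :]) ** (n - k)[:, None]

def stancu_square(f, n, m, x, y):
    """Tensor-product Bernstein-Stancu operator on Q = [0,1]^2 with n_k = m."""
    K, J = np.meshgrid(np.arange(n + 1) / n, np.arange(m + 1) / m, indexing="ij")
    F = f(K, J)                       # samples f(k/n, j/m), shape (n+1, m+1)
    return bernstein_basis(n, x).T @ F @ bernstein_basis(m, y)

# Example: approximate f(x, y) = sin(pi x) cos(pi y) at a few points of Q
f = lambda x, y: np.sin(np.pi * x) * np.cos(np.pi * y)
pts = np.linspace(0.0, 1.0, 3)
print(stancu_square(f, 40, 40, pts, pts))
```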
Theorem 3.3 ([17]). Let f be a continuous function on Ω. Then, $B_n[f(x, y), \Omega]$ converges uniformly to f(x, y) as n → +∞.
Stancu only gave a detailed proof of the approximation properties of $B_n$ on triangles. In Sect. 5, we consider a slightly more general operator and prove the uniform convergence on any bounded domain Ω, and we recover Stancu's result when Ω = T₂.
The operator obtained in this way is a Bernstein operator on the square Q̃ = [−1, 1] × [−1, 1]. Indeed, for every function f defined on Q̃, we define the function F : Q → R as F(u, v) = f(2u − 1, 2v − 1); then, using the transformation x = 2u − 1 and y = 2v − 1, which maps Q onto Q̃, we get the corresponding operator on Q̃.
(2) An alternative way to obtain the Bernstein–Stancu operator on the simplex T₂ is by considering the Duffy transformation, which maps Q onto T₂. Let f be a function defined on T₂; we can define the corresponding function on Q and, using the Duffy transformation, the resulting operator is a Bernstein-type operator on the simplex. It is not a polynomial unless n − k − n_k ≥ 0. We recover the usual Bernstein–Stancu operator on the simplex by setting n_k = n − k.
(3) Consider the unit ball in R²,
$$B^2 = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 \le 1\}.$$
For every function f defined on B², we can define a function F : Q → R, and the resulting operator is a Bernstein operator on the unit ball. Observe that, in this case, in contrast with the previous two cases, there is no obvious choice of $n_k$ such that the operator is a polynomial. Nevertheless, notice that for y = 0 and for x = 0 the operator reduces to a polynomial on the x- and y-axes for any choice of $n_k$. In Fig. 2, the representation of the mesh in this case for n = n_k = 20 is given.
(4) Let $T_i$, i = 1, 2, 3, 4, be the transformations which map each quadrant of B² onto Q. The corresponding Bernstein operators on the quadrants, denoted $B_1$, $B_2$, $B_3$, $B_4$, are obtained by defining, for every function f on B², the corresponding functions on Q. In this case, observe that for k = 0 the mesh corresponding to $B_1$ and $B_2$, and similarly to $B_3$ and $B_4$, coincides on the y-axis (see Fig. 3). Moreover, for j = 0, the mesh corresponding to adjacent quadrants coincides on the x-axis. Therefore, we can define a piecewise Bernstein operator on B² as in (4.1).
Proposition 4.1. For any function f on B², the piecewise operator (4.1) is continuous on the x- and y-axes; indeed, for x = 0 the values from adjacent quadrants coincide and, similarly, for y = 0.
Shifted Bernstein-Stancu Operators
Motivated by the examples of Bernstein operators on different domains introduced in the previous section, we now define the shifted nth Bernstein–Stancu operator and study its approximation properties. Let φ1 and φ2 be two continuous functions, and let I = [a, b] be an interval such that φ1 < φ2 on I. Let Ω ⊂ R² be the domain bounded by the curves y = φ1(x), y = φ2(x), and the straight lines x = a, x = b. Observe that, for a fixed x ∈ I, the polynomials $p_{n,k}(y; [\varphi_1(x), \varphi_2(x)])$ form a Bernstein basis in the variable y. For every function f(x, y) defined on Ω, define the function
$$F(x, t) = f\big(x,\; \varphi_1(x) + t\,(\varphi_2(x) - \varphi_1(x))\big), \qquad 0 \le t \le 1. \qquad (5.1)$$
The shifted nth Bernstein–Stancu operator is defined as
$$B_n[f(x, y), \Omega] = \sum_{k=0}^{n} \sum_{j=0}^{n_k} F\!\left(a + k\,\frac{b-a}{n},\; \frac{j}{n_k}\right) p_{n,k}(x; I)\, p_{n_k,j}\big(y; [\varphi_1(x), \varphi_2(x)]\big),$$
where $n_k = n − k$ or $n_k = k$ for all 0 ≤ k ≤ n. Written in terms of the univariate Bernstein basis, this is an explicit double sum. The following result plays an important role when studying the convergence of the shifted Bernstein–Stancu operator.
where $B_n$ denotes the univariate shifted Bernstein operator acting on the variable x. Since $B_n$ converges uniformly for a continuous function, the stated limit follows as n → +∞.
(v) Finally, if f(x, y) = y² in (5.1), then, together with (5.2) and the choice $n_k = n − k$, the corresponding identity for $\sum_{k=0}^{n} p_{n,k}(x; I)$ follows.
The convergence of the operator is clear from Lemma 5.1 and Volkov's theorem ( [18]). Now, we study the approximation properties of the shifted Bernstein-Stancu operators.
Theorem 5.3. Let f be a continuous function on Ω. Then,
Proof. Let $\delta_1, \delta_2 > 0$ be real numbers.
Note that on Ω we have $B_n[1, \Omega] = 1$. Taking into account the inequality (see, for instance, [16,17]), the stated estimate follows. Recall that the univariate shifted Bernstein operator satisfies the following Voronowskaya-type asymptotic formula: let f(x) be bounded on the interval I = [a, b], and let $x_0 \in I$ be a point at which $f''(x_0)$ exists. Then,
$$\lim_{n \to +\infty} n\,\big(B_n[f(x), I](x_0) - f(x_0)\big) = \frac{(x_0 - a)(b - x_0)}{2}\, f''(x_0).$$
Now, we give an analogous result for the Bernstein–Stancu operator.
Theorem 5.4. Let f(x, y) be a bounded function on Ω = {(x, y) : a ≤ x ≤ b, φ1(x) ≤ y ≤ φ2(x)}, and let $(x_0, y_0) \in \Omega$ be a point at which f(x, y) admits second-order partial derivatives and at which $\varphi_i''(x_0)$, i = 1, 2, exist. Then, a Voronowskaya-type asymptotic formula holds.
Proof. Let us write the Taylor expansion of f(u, v) at the point $(x_0, y_0)$ and apply $B_n$ to both sides, where we have omitted Ω for brevity. We deal with each term separately. From Lemma 5.1 (ii), we get $B_n[u - x_0]\big|_{u = x_0} = 0$. Next, from the proof of Lemma 5.1 (iii) and using (5.3), we obtain the corresponding second-order terms, and similarly for the remaining quadratic terms.
Now we deal with the last term
Fix a real number ε > 0. Then, there is a real number δ > 0 such that if $\|(u, v) - (x_0, y_0)\| < \delta$, then |h(u, v)| < ε. Let $S_\delta$ be the set of indices k and j for which the corresponding normalized distance exceeds 1. Then, putting all the above estimates together, the result follows.
Shifted Bernstein-Type Operators
We define the shifted bivariate Bernstein-type operator. Let φ1 and φ2 be two continuous functions, and let I = [a, b] be an interval such that φ1 < φ2 on I. Let Ω ⊂ R² be the domain bounded by the curves y = φ1(x), y = φ2(x), and the straight lines x = a and x = b. Here τ is any continuously differentiable function on I such that τ(a) = a, τ(b) = b, and τ'(x) > 0 for x ∈ I, and, for each fixed x ∈ I, $\sigma_x$ is any continuously differentiable function satisfying the analogous conditions in the variable y. For every function f(x, y) defined on Ω, define the function F(u, v) for 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1, where φ_i, i = 1, 2, are defined in (5.1).
The shifted bivariate Bernstein-type operator is defined as
for (x, y) ∈ Ω, where $n_k = n − k$ or $n_k = k$ for 0 ≤ k ≤ n. Written in terms of the univariate classical Bernstein basis, we get an explicit double sum approximating f(x, y). Now, we study shifted Bernstein-type operators defined on each quadrant of B², denoted by $B_i$ for i = 1, 2, 3, 4. We will choose T and $n_k$ such that, for any function, the approximation given by the Bernstein-type operator on each quadrant is a polynomial. (i) For x ∈ [0, 1], let τ(x) = x² and, for each fixed value of x, let $\sigma_x(y)$ be defined analogously.
The polynomials $L^{[2k]}_n f(x)$ satisfy a recurrence relation and, if $f^{(2k)}$ exists at a point x ∈ [0, 1], then a corresponding asymptotic estimate holds. Using (7.1), we can obtain explicit expressions for the constants $\alpha_j$. In [14], May considers a slightly more general operator, which is a polynomial of degree $2^k n$. May proved that corresponding asymptotic estimates hold if $f^{(2k+1)}$ exists. Although we do not study the approximation behavior of these operators here, the numerical experiments in the following section suggest a better rate of convergence than $B_n$ and $C_n$.
Numerical Experiments
In this section, we present numerical experiments comparing the shifted Bernstein–Stancu operator $B_n$ on B² and the shifted Bernstein-type operator $C_n$ in (6.1). To do this, we consider different functions defined on B². For each function f(x, y), we compute $B_n[f(x, y), B^2]$ and $C_n[f(x, y), B^2]$. We use a set of points randomly distributed on the unit disk (generated by the mesh function in Mathematica) to compare the function with its approximations. For $B_n[f(x, y), B^2]$, we use 630 points $(x_i, y_i)$. We set $z_i = f(x_i, y_i)$, $1 \le i \le 630$, and $\hat{z}_i$ equal to the value of $B_n[f(x, y), B^2]$ at the respective point $(x_i, y_i)$, and compute the root-mean-square error (RMSE) as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{630} \sum_{i=1}^{630} \big(z_i - \hat{z}_i\big)^2}.$$
Similarly, for $C_n[f(x, y), B^2]$, we use 1082 randomly distributed points $(\bar{x}_j, \bar{y}_j)$. We set $w_j = f(\bar{x}_j, \bar{y}_j)$, $1 \le j \le 1082$, and $\bar{w}_j$ equal to the value of $C_n[f(x, y), B^2]$ at the respective point $(\bar{x}_j, \bar{y}_j)$, and compute the RMSE as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{1082} \sum_{j=1}^{1082} \big(w_j - \bar{w}_j\big)^2}.$$
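A one-function Python sketch (ours) of this error measure:

```python
import numpy as np

def rmse(values, approximations):
    """Root-mean-square error between sampled function values and an approximation."""
    z = np.asarray(values, dtype=float)
    z_hat = np.asarray(approximations, dtype=float)
    return float(np.sqrt(np.mean((z - z_hat) ** 2)))
```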
In each case, we plot the RMSE for increasing values of n using Mathematica. For each operator, the set of points used to compute the RMSE consists of a fixed number of points. On the other hand, the number of mesh points used to represent each operator depends on n.
We represent $C_n[f(x, y), B^2]$ on each quadrant using different colors, as shown in Fig. 4. We take n = 100; then, the mesh required to obtain the operator for each quadrant consists of 20200 points.
For $B_n[f(x, y), B^2]$, we take n = 200; then, the mesh required to obtain the operator on the whole unit disk consists of 40401 points.
We note that the operator $C_n$ requires two evaluations at the mesh points on the common boundaries of two adjacent quadrants. Therefore, the operator $B_n$ needs a smaller number of evaluations than the operator $C_n$ since, for a fixed n, $B_n$ and $C_n$ are composed of $(n + 1)^2$ and $2(n + 1)(n + 2)$ evaluations, respectively.
Additionally, we compute the RMSE for $S^{[1]}_n[f(x, y), B^2_j]$ and $R^{[1]}_n[f(x, y), B^2_j]$ using the same sets of randomly distributed points as before.
Example 1
First, we consider a continuous function f(x, y). Its graph is shown in Fig. 5, and the approximations $C_n[f(x, y), B^2]$ and $B_n[f(x, y), B^2]$ are shown in Fig. 6. We list the RMSE of both approximations for different values of n in Table 1 and plot them together in Fig. 7, where the characteristically slow convergence inherited from the univariate Bernstein operators is observed. Moreover, the corresponding RMSEs for $S^{[1]}_n[f(x, y), B^2_j]$ and $R^{[1]}_n[f(x, y), B^2_j]$ are shown in Table 2 and Fig. 8, where a seemingly better approximation behavior can be observed.
Example 2
Now, we consider a continuous periodic function g(x, y). Its graph is shown in Fig. 9. It can be observed in Fig. 10 that the approximation error for both operators is larger at the maximum and minimum values of the function. Table 3 and Fig. 11 contain further evidence of this larger error. Moreover, in comparison with the previous example, it seems that the rate of convergence of $C_n[g(x, y), B^2]$ is significantly faster than that of $B_n[g(x, y), B^2]$. Table 4 and Fig. 12 show the errors corresponding to $R^{[1]}_n[g(x, y), B^2_j]$ and $S^{[1]}_n[g(x, y), B^2_j]$. In comparison with $B_n$ and $C_n$, $R^{[1]}_n$ and $S^{[1]}_n$ appear to have better approximation behavior.
Example 3
Here, we consider the continuous function $h(x, y) = e^{x^2 - y^2} - xy$, $(x, y) \in B^2$ (see Fig. 13). Both approximations are shown in Fig. 14, and their respective RMSEs are listed in Table 5 and plotted in Fig. 15. Observe that, in this case, the RMSEs for both approximations are significantly smaller than in the previous examples. Moreover, based on Fig. 15, it seems that, for sufficiently large values of n, the rates of convergence of both approximations are considerably similar. Table 6 and Fig. 16 also show similar approximation behavior between $S^{[1]}_n[h(x, y), B^2_j]$ and $R^{[1]}_n[h(x, y), B^2_j]$.
Example 4
In this numerical example, we are interested in observing the behavior of the shifted Bernstein-type and shifted Bernstein–Stancu operators at jump discontinuities. Let us consider the following discontinuous function:
$$\eta(x, y) = \begin{cases} \;\cdots, & \text{if } 0.5 \le x^2 + y^2 \le 0.8,\\ \;0.5, & \text{if } 0.8 < x^2 + y^2 \le 1. \end{cases}$$
The graph of η(x, y) is shown in Fig. 17, and the approximations are shown in Fig. 18. It is interesting to observe the behavior of the approximations at the points of jump discontinuity; thus, we have included Fig. 19, where we show a cross-sectional view of the approximations for increasing values of n. As in the univariate case, it seems that the Gibbs phenomenon does not occur. Finally, Table 7 and Fig. 20 expose a significantly slower convergence rate for this discontinuous function in comparison with the previous continuous examples. As can be seen in Table 8 and Fig. 21, it seems that a better approximation can be obtained with the operators $S^{[1]}_n[\eta(x, y), B^2_j]$ and $R^{[1]}_n[\eta(x, y), B^2_j]$.
"year": 2023,
"sha1": "6a2ef73cd3fb8482e09d3f16ed2c795c9cd8eda4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40840-023-01520-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "3b2ee04781a4b402cf18ca02e6f90b992395b52c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
KIF1A-Associated Neurological Disorder: An Overview of a Rare Mutational Disease
KIF1A-associated neurological disorders (KANDs) are a group of inherited conditions caused by changes in the microtubule (MT) motor protein KIF1A as a result of KIF1A gene mutations. Anterograde transport of membrane organelles is facilitated by the kinesin family protein encoded by the MT-based motor gene KIF1A. Variations in the KIF1A gene, which primarily affect the motor domain, disrupt its ability to transport synaptic vesicles containing synaptophysin and synaptotagmin, leading to various neurological pathologies such as hereditary sensory neuropathy, autosomal dominant and recessive forms of spastic paraplegia, and other neurological conditions. These mutations are frequently misdiagnosed because they result from spontaneous, non-inherited genomic alterations. Whole-exome sequencing (WES), a cutting-edge method, assists neurologists in diagnosing the illness and in planning and choosing the best course of action. These conditions are most easily identified in pediatric patients and are associated with a life expectancy of 5–7 years. There is presently no permanent treatment for these illnesses, and researchers have not yet discovered a medicine to treat them. Scientists place more hope in gene therapy, since it can be used to cure diseases brought on by mutations. In this review article, we discuss some of the experimental gene therapy methods, including gene replacement, gene knockdown, symptomatic gene therapy, and cell suicide gene therapy. The review also covers clinical symptoms, pathogenesis, current diagnostics, therapy, and ongoing research in the field of KAND-related disorders, and explains how gene therapy can be designed in this direction to afford remarkable benefits to patients and society.
Introduction
KIF1A-associated neurological disorders (KANDs) are a group of neurological illnesses caused by changes in the microtubule (MT) motor protein KIF1A as a consequence of a KIF1A gene mutation. These genetic changes might produce pathogenic mutations and lead to neurological disorders in patients [1]. Due to the limited availability of full gene sequencing and exome sequencing at the time, these illnesses were originally discovered in 2011, and many patients received the wrong diagnosis [2]. There are various KAND variations that can be passed down both dominantly and recessively. Researchers have noted various types of mutations in the same proteins in patients and, intriguingly, the majority of them exhibit private variants, opening up the possibility for further investigation and the discovery of further variants in these KIF1A-related disorders (KRDs) [3]. This uncommon illness, which can also damage vision, muscles, and nerves, primarily targets the neurons in the brain [4]. Studies on metabolic diseases, such as diabetes mellitus, have found that the KIF1A gene is more highly expressed and more immunoreactive [5]. Furthermore, this might eventually cause other neurological issues, including encephalopathy and brain shrinkage [6].
The molecular motor KIF1A affects the survival and development of sensory neurons in our body as well as the movement of membrane-bound cargo [7]. Neuronal trafficking pathways, which are strictly controlled due to the functional compartmentalization of neurons and connect the neuron's body, dendrites, and axons, lead to neurodegenerative diseases when disturbed [8]. One mutation or several mutations in the gene coding for the KIF1A protein can alter its normal function, resulting in this disorder [3]. Due to the serious, life-threatening problems brought on by these genetic abnormalities, patients' quality of life and life expectancy might be greatly affected [2]. The detailed workflow of the current work is outlined in the PRISMA diagram (Figure 1).
KIF1A Gene
Kinesin-like proteins, better known as axonal transporters of synaptic vesicles, are MT-based motors belonging to the kinesin family of proteins and are involved in the anterograde transport of major membrane organelles, vesicles, macroproteins, and mRNA along microtubular structures. This in itself shows that the KIF1A gene plays an important role in axonal transport as well as in meiosis and mitosis [8]. KIF1A is also required for the transport of neuronal dense core vesicles (DCVs) to dendritic spines and axons [9].
The KIF1A protein comprises a neck region, a tail, and an N-terminal motor domain, as illustrated in Figure 2 [10]. The motor domain has MT-dependent ATPase activity and MT-binding actions, whereas the tail consists of a stalk domain for protein binding and a pleckstrin-homology (PH) domain used for lipid binding. A small strand, the "neck linker", and the neck coil regions play an important role in the dimerization of the kinesin-3 motor and in its processive motility [11][12][13]. The KIF1A protein initially exists as an inactive dimer, a state maintained by autoinhibitory mechanisms. It becomes activated when bound to cargo, forming a homodimer that enables the transport of synaptic vesicle precursors along the MT [10,11,14].
In patients suffering from severe neurodegenerative disorders, homozygous recessive mutations in the KIF1A gene were first described as hereditary sensory and autonomic neuropathy type 2 and as three consanguineous families with an autosomal recessive form of hereditary spastic paraplegia (HSP) with an autosomal dominant form of SPG30 [16]. A particular mutation p.T99M was reported in patients having an intellectual disability (ID), spasticity and axial hypotonia [17]. A partially overlapping phenotype-brain atrophy with progressive encephalopathy was recently found to be associated with de-novo KIF1A mutations [6]. Several de-novo variations and mutations in patients were classified as either pure or complicated. The complicated type is accompanied by axonal neuropathy and brain cerebellar atrophy [18,19]. [20][21][22][23]. (Created with Biorender.com).
KIF1A as a Super Engaging Motor
The KIF1A protein has the unique ability to be kinetically tuned to become a super-engaging motor that ensures its proper functioning and integrity under hindering conditions when under load [24]. The behavior of kinesin motor proteins under mechanical load is very important for the proper intracellular transport of cargo: a high load on a kinesin motor affects both the motor speed and the MT attachment lifetime. As KIF1A belongs to the kinesin-3 family, the superprocessive behavior of these proteins under zero load depends on the loops and core of the protein [10,25,26]. Comparing the two motors KIF1A and KIF5B, Serapion et al. and Allison et al. found that processive runs by KIF1A terminate under load, so that lower average termination forces are required than for KIF5B. Hence, KIF1A uses a different mechanism than other kinesin motors to work under load and increase its efficiency.
A distinct feature of KIF1A is its ability to form reengagement structures with the aid of different loops. Loop 12 plays a significant role in the motility of these proteins, and especially the positively charged K-loop insert present in loop 12 has a crucial role [27][28][29][30]. In some studies, scientists replaced the lysine (K) residues to remove the charge from loop 12, and this resulted in the protein losing its ability to work under load [24]. The MT nucleotide state and loop 12 influence the motor's ability to reengage under mechanical load: the degree of expansion of the MT lattice and the polymerization of the MT with different nucleotides affect the rate at which reengagement takes place [31][32][33]. Thus, all these studies reveal that the proper functioning of loop 12 and the arrangement of MT nucleotides underlie the adaptive nature of KIF1A and its novel mechanism of cargo transport by superengagement and reengagement.
Symptoms
KAND has a broad phenotypic spectrum of signs and symptoms. Intellectual disability, spasticity, inherited progressive spastic paraplegia, cerebral atrophy, optic nerve atrophy, and microcephaly are a few of these (Figure 4); most frequently, individuals show seizures [3]. Cerebellar function impairment has also been reported in some clinical investigations, and some patients have also shown dysautonomic symptoms such as temperature instability and urinary retention. Due to gastrointestinal dysfunction, people with severe disease may need parenteral nutrition [2]. Several studies state that mutations in the KIF1A gene directly affect the motility of the heterodimeric motor [3,34,35].
Figure 4. KAND symptoms on a clinical basis [36]. IIC is a type of autosomal recessive disorder and can also be represented as 2C. (Created with BioRender.com.)
The following are typical services that a KAND patient may require [36]:
• Neurologist: neurological abnormalities, seizures, and spasticity
• Specialized therapist: issues with speech and coordination
• A team of specialists: intellectual disability
Autosomal Dominant Variety of KRD [KIF1A-Related Disorders]
Developmental delays, cerebellar atrophy, peripheral neuropathy, ptosis, facial diplegia, intention tremors, strabismus, nystagmus, clumsiness, and ataxia are some of the typical symptoms. Other symptoms include hypertonia (increased muscle tone), hyperreflexia (exaggerated reflexes), and spasticity (muscle tightness) [36,37].
Hereditary Sensory Neuropathy IIC (Also Represented as 2C)
The deterioration of the neurons that results in the loss of feeling is the cause of this neuropathy. Other signs, such as numbness and tingling, are also detected and eventually lead to the loss of sensation. Because sensory neurons are affected, automatic or involuntary body movements are also directly impacted [36,38,39].
HSP
There are more than 80 genetically distinct types of HSP [40]. The spastic paraplegia caused by variations in the KIF1A gene is referred to as an autosomal dominant type of spastic paraplegia (SPG30). The major symptoms include neurological difficulties, severe leg weakness, and spasticity [38,41,42]. The major sites of mutations in the human KAND protein are depicted in Figure 3.
Diagnosis
Traditional diagnostic methods such as multiplex probe amplification, karyotyping, genetic testing, and chromosomal microarray analysis were employed to screen for all forms of neurological disorders, including KAND [43][44][45][46][47]. The main drawback of these procedures is that different mutation types can exhibit comparable clinical traits. A cutting-edge method called whole-exome sequencing (WES) is now becoming more widely used [48]. WES aids the neurologist, particularly the pediatric neurologist, in diagnosing the condition and selecting the most effective treatment plan [49]. Although WES is frequently used to diagnose KAND, it has several technical limitations that make it difficult to detect trinucleotide repeats, large indels, and epigenetic modifications, which can impede the diagnosis of the illness [50,51]. Newer fields such as cytogenetics, chromosomal aberration analysis, molecular diagnostic techniques, and carrier detection techniques are being explored to develop novel methods for the diagnosis of KAND.
Treatment
There is currently no effective therapy or cure for KAND. However, because gene therapies have the potential to treat many neurological disorders, researchers are working on them. Some of the fundamental experimental approaches for gene therapy are listed in Table 1. Even though there is no concrete evidence that gene therapy can entirely cure KAND, preliminary findings from research trials enable researchers to focus on creating a treatment plan. These treatments will target the genes that cause the condition as well as the neurotrophic factors that support the healthy function and survival of neuronal cells [52]. The use of nanoparticles, engineered microRNAs, plasmid transfection, viral vector design, polymer-mediated gene delivery, clustered regularly interspaced short palindromic repeats (CRISPR)-based therapeutics, and other technologies has advanced this field [52]. Because surgical treatment cannot cure KAND and there is currently no approved standard pharmacological treatment, gene-based therapeutics are crucial [53]. These technologies also have the advantage of repairing genes, through a full understanding of the pathophysiology of the disease followed by treatment at the molecular level, and of addressing disorders that are not treatable with conventional medical procedures [54,55]. Table 1. The fundamental experimental approaches for gene therapy use four theoretical modes of action [56].
1. Gene replacement [57]: used when the disease is caused by loss of function of the gene.
2. Gene knockdown [58]: employed when a function has been toxically increased or when gene metabolites or gene products have accumulated.
3. Pro-survival or symptomatic gene therapy [56]: the pathological condition is reversed using a pro-survival gene that is non-specific in nature.
4. Cell suicide gene therapy [59]: typically considered the last option; primarily used in cancer treatment, where malignant cells must be eradicated. In the case of KAND, this method's application is constrained.
The choice of a gene transfer vector is crucial because it directly affects how effective the treatment will be. The selection process takes into account a number of variables, including affinity and the ability to cross the blood-brain barrier (BBB). Adenovirus (Ad), herpes simplex virus (HSV), lentivirus (LV), and recombinant adeno-associated virus (rAAV) are some of the most often employed viral vectors [56]. Even though all these methods exist in principle, there is no conclusive clinical evidence that they can be used to treat KRDs; the effectiveness percentage is still unknown.
De Novo Variations in the KIF1A Gene
De novo variants are primarily seen in patients who also have comorbid conditions like cognitive impairment, muscle stiffness, or optic nerve atrophy. These conditions co-occur with the clinical symptoms caused by recessive mutations in the KIF1A gene, making them more harmful. The majority of mutations are found in the motor domain, and their effects are easily anticipated, since the change in the original structure impairs the protein's ability to operate normally as a motor [60]. Only a limited amount of information is available regarding de novo mutations, because few reports on these mutations have been published (Table 2). Missense mutations are observed at the c.296C>T/p.Thr99Met location of the KIF1A gene, which can affect the amino acid produced and thereby directly impact protein function [17]. The prediction algorithms used are Sorting Intolerant From Tolerant (SIFT) [61], Polymorphism Phenotyping v2 (PolyPhen-2) [62], and PANTHER [63]. The following clinical manifestations are observed in people with de novo mutations in the motor domain:
• Intellectual disability: delay in cognitive development occurs in all cases
• Cerebellar atrophy: diagnosed using magnetic resonance imaging
• Optic nerve atrophy
• Spastic paraplegia: mainly affecting the lower limbs
• Peripheral neuropathy [14]

Relation between KIF1A Variants and HSP

HSP

An uncommon neurological condition called HSP causes stiffness or wasting of the bladder or lower limbs [37]. It is caused by X chromosome-linked inheritance patterns and autosomal dominant and recessive mutations, as well as other factors categorized in OMIM [64]. The autosomal dominant HSP subtype SPG30 results from missense variants in the KIF1A gene (SPG7 and SPG11 are related recessive subtypes caused by other genes), whereas a common dominant form of HSP is due to variation in the SPAST gene [38,65,66]. HSP manifests basically in the form of spasticity and weakness. Although these patients may also experience hyperuricemia, their life expectancy is unaffected. In more severe cases of HSP, peripheral and optic neuropathy as well as mental impairment may also be present. Currently, 79 specific chromosomal loci, with 61 corresponding genes, have been linked to the HSP condition [37].
KIF1A Variants and Spastic Paraplegia
KIF1A mutations are seen in three regions: the motor domain, the regulatory region, and the cargo-binding region, and these are mainly responsible for the development of SPG30. Specific mutations affect gene function in specific ways. These changes ultimately result in the mislocalization of cellular cargoes; that is, the KIF1A protein becomes unable to regulate its motility and subsequently fails to bind cargo. The severity of SPG30 depends on the extent of the KIF1A mutation [15]. Loss of function in the motor domain of KIF1A can affect structural elements essential for various functions, such as hydrolyzing ATP, providing mechanical force, and MT binding (loop L8). Examples are mutations residing in Switch I (R216C) and Switch II (E253K), and mutations that destabilize loop L8 [66,67]. The E253K Switch II mutant, an ATP-binding-site mutant, drastically slows the motility of this motor protein, resulting in an inability to move to the distal portion of the neuronal axon.
Variants that result in loss of gene function located outside the kinesin motor domain lead to defects in normal functioning, a problem known as functional intolerance [68,69]. Interestingly, a gain of function has also been observed in SPG30. Using a single-molecule assay, Chiba et al. reported that V8M, R350G, and A255V, three KIF1A mutants causal in SPG30, had higher landing rates on MTs and higher velocities compared to wild-type (WT) KIF1A, indicating that excessive cargo accumulation can be harmful. In cohort studies by Maartje Pennings et al., 20 heterozygous KIF1A variants were reported by clinical exome sequencing, and the resulting SPG due to KIF1A was pure. It was observed that phenotypic differences in KIF1A-related diseases may be due to different levels of impairment in transport. Parental testing by the team revealed the deletion of chr2q37 in a few families. The KIF1A gene is localized in the cytogenetic 2q37.3 band, and chromosome 2q37 is deleted in patients suffering from 2q37 microdeletion syndrome, which is characterized by intellectual disability, brachydactyly, weight gain, hypotonia, characteristic facial features, autism, and epilepsy [37,67]. Eleven of the 20 variants reported in the studies by Maartje Pennings et al. were missense variants located in the motor domain that cause dominant SPG. The remaining nine variants, detected outside the motor domain, included variants showing loss of gene function (some were de novo occurrences) and the chr2q37 deletion, which indicates that loss-of-function variants are able to cause autosomal dominant SPG [37].
In another study, by Stephan Klebe et al. using targeted next-generation sequencing (NGS), the p.R350G variant was identified, which directly affects an amino acid in the motor domain of KIF1A, and this variant was found to be compatible with the phenotype expressed by HSP patients. In the same study, whole-genome genotyping in a Palestinian family revealed a unique homozygous c.756>T (p.Ala255Val) mutation that caused the phenotypic symptoms. Studies have shown that the nature of a mutation can help scientists foresee the phenotype expressed. Nonsense mutations, which can lead to complete loss of protein function, can cause significant clinical manifestations in the peripheral nervous system (PNS); peripheral neuropathy is common, occurring in more than 60% of SPG30 patients [22,70,71].
KIF1A and Brain Atrophy
Homozygous mutations in KIF1A are among the main causes of the rare hereditary sensory and autonomic neuropathy (HSAN) and of HSP, and in vitro experiments suggest that homozygous mutations impair transport by synaptic vesicles and can lead to axonal degeneration [22,66]. For example, a pathogenic de novo variant, p.T99M, causing cerebellar atrophy was reported in a child, indicating that these mutations may alter neuronal function by disabling kinesin-mediated cargo transport. It has been observed that homozygous inactivation of the KIF1A gene in mice can cause severe motor as well as sensory disturbances [72]. In a study by Sahar Esmaeeli Nieh et al. [6] on six patients, five de novo mutations were identified; two of the patients carried the de novo c.296C>T change, a substitution of threonine to methionine also represented as T99M [17]. Mutations such as p.E253K (c.757G>A) and p.R316W were reported in two other patients. The remaining two patients had changes at amino acid residues that were mutated to yet other amino acid variants [19]. All these mutations were identified within a conserved region of the motor domain and were predicted to be damaging using PolyPhen-2 [73,74].
In another study, by Chihiro Ohba et al., five missense mutations were found in five patients and were confirmed by Sanger sequencing to be de novo events. The same team noted that the patients had gait difficulties along with exaggerated deep tendon reflexes, and magnetic resonance imaging showed cerebellar atrophy in a few patients. All de novo mutations observed during this study are located in the motor domain and mostly affect motor function. The mutation p.Arg316Trp had been previously reported [72], while Arg254 and Arg307 are located on loop L11. The α5 helix, which helps to induce phosphate release during hydrolysis of the adenosine triphosphate molecule and facilitates binding of the KIF1A protein onto MTs, was also found to be mutated in some individuals [19,69,75]. Mutations can exhibit unique actions on the structures present near them: the p.Glu253Lys mutation, adjacent to Arg254, can suppress γ-phosphate release [19], while the p.Arg316Trp mutation disrupts the stability of loop L8, which forms a bond with the MT [69].
NESCAV Syndrome
NESCAV syndrome (NESCAVS), also referred to as intellectual disability, autosomal dominant 9, is a neurodegenerative disorder characterized by global developmental delay with delayed walking or difficulty in walking due to spasticity in the lower limbs, leading to loss of independent ambulation [2]. Some of the clinical features include optic nerve atrophy and varying degrees of brain atrophy, microcephaly, joint contractures, kyphosis, clubfoot, spasticity, and cerebellar atrophy [76]. NESCAVS has been observed to be caused by a de novo heterozygous T99M mutation in the KIF1A gene; one such report concerned an 8-year-old Japanese boy with axial hypotonia, peripheral spasticity, and global developmental delay, together with additional clinical manifestations such as growth hormone deficiency, neurogenic bladder, and constipation [77]. In a few other studies involving unrelated patients, other manifestations were reported, such as cortical visual impairment, optic neuropathy, movement disorders [6], hyperreflexia, hypermetropic astigmatism, oculomotor apraxia, and distal muscle weakness [60]. Hamdan et al. (2011) identified a de novo missense mutation in the KIF1A gene in a patient with NESCAVS. In their study, they expressed a KIF1A MD-EGFP fusion construct in rat hippocampal neurons and showed that distal localization is greatly reduced in neurites for the construct carrying the T99M mutation, which leads to increased accumulation [17]. Such mutations are found by whole-genome sequencing and can later be confirmed by Sanger sequencing [60].
PEHO Syndrome [OMIM No. 260565]
PEHO syndrome, characterized by progressive encephalopathy with edema, hypsarrhythmia, and optic atrophy, is a rare neurodegenerative disease that leads to loss of granule neurons, resulting in extreme cerebellar atrophy [78]. This condition was first reported in 14 Finnish families in 1991. The basis for the diagnosis of PEHO syndrome was set out by Somer et al. [79], who recognized the necessary features of this condition, i.e., jerking along with spasms, and brain atrophy on neuroimaging studies, especially in the cerebellum and a few regions of the brainstem, with mild supratentorial atrophy [80]. In a genomic study of a patient with PEHO syndrome by Sylvie Langlois et al., nine candidate variants were identified using trio WES, of which eight were inherited heterozygous variants and one was a de novo variant. The missense variant p.(T99M) in KIF1A, residing on chromosome 2, is considered pathogenic [81]. Sanger sequencing was also carried out on the female patient and the unaffected parents, confirming that the patient was heterozygous for the variant. Before this study, 24 patients had been reported with de novo heterozygous variants affecting the motor domain of the KIF1A protein, and the functional impact of these variants was demonstrated by Lee et al. The main clinical features reported were moderate to severe developmental delay, cerebellar atrophy, optic nerve atrophy, progressive spasticity affecting the lower limbs, and peripheral neuropathy [19].
KIF1A and Autism Spectrum Disorder [ASD]
ASD, also known as autism, is a neurodevelopmental disorder in which patients show deficits in communication as well as in language processing and the expression of thoughts. The disorder can directly or indirectly influence the patient's social life to a great extent. Several genes are implicated in ASD, but approximately 10-15% of cases are due to mutations in a single gene [82]. Reports suggest that patients exhibiting complex phenotypes are characterized by axonal neuropathy, spasticity, and, most prominently, ASD. The genetic examination of these patients revealed about 21,683 variants in the coding regions [83]. Another study reports a link between KIF1A mutations and autism, normally accompanied by other neurological conditions such as sensory disturbance, hyperactivity, spastic paraplegia, and epilepsy. Typically, the c.38G>A (p.R13H) mutation manifests as autism and hyperactivity, but in some special cases all of these neurological symptoms are exhibited by c.37C>T (p.R13C), which is a de novo mutation [84].
In the majority of research studies on KAND, peripheral blood is used as the sample, DNA is extracted, and gene sequencing is mainly performed by WES technology. Modern tools are also available for structure and variant-effect prediction, such as SWISS-MODEL and MutationTaster, which utilize different reporting strategies [85]. If a patient is found to carry a harmful variant such as the de novo c.664A>C (p.Asn222His), it is suggested that the patient is at higher risk of ASD [85]. Notably, alongside KIF1A mutations, mutations in the HUWE1 gene have also led to the expression of ASD and other conditions such as epilepsy, in this case mainly due to the 22q11.2 duplication (a penetrant copy number variant) [86]. These studies suggest that ASD has a close association with KAND.
Recent Studies in the Field of KAND
Transport of cargo is very important as far as a cell is concerned. If proper translocation does not take place, the cargo can accumulate and lead to cell necrosis. The transport of cargo is regulated by three major mechanisms, including:
• Regulation of motor ATPase activity by the process of autoinhibition.
There have been tremendous efforts by many scientists to discover the in vivo functioning of each part of the KIF1A gene and its protein. Recent advancements in technology have led to the discovery of newer models such as the DNA origami scaffold model, which provides a much more precise picture of what is happening in vivo. We have included some of the latest discoveries in KAND in this paper. In a few recent studies, it was shown that a more tightly bound linker allows cargo motors to attach to the MT track. It was found that in KIF1A, KIF13B, and KIF16, regulation involves parts of the protein, especially the coiled-coil domain [30]. Another study shows that kinesin-3 monomers can be multimerized, which results in the transport of cargo [87]. Most studies on the regulation of motor domains are carried out using purified protein components, particularly the motor domain, but there are a few exceptions to be noted, such as the functioning of two opposing domains in the protein [88-92].
When not transporting cargo in a neuronal cell, kinesin motors, especially the kinesin-3 subfamily, adopt an autoinhibited conformation. In such cases, UNC-104/KIF1A can become activated automatically. To date, the mechanism of activation is not fully understood. This activation leads to enhanced cargo transport in the form of vesicles in the cell, and it also explains the motor hyperactivation associated with this disease [93]. In some cases, it has been reported that autoinhibition mediated by the motor domain/CC1 domain can be disrupted by the actions of dominant suppressors. More specifically, mutations at C184 can disrupt the inter-domain packing, while a mutation at G421, which lies at the sharp turn between CC1a and CC1b, can indirectly interfere with packing [12,94-97].
Gene Therapy
In gene transfer therapy, a transgene is introduced into target cells so that it can integrate with the host DNA (as with retroviruses) and make up for the defective gene's lack of expression. Scientists have developed a wide range of carriers for the successful delivery of genes to their targets, the majority of which comprise plasmid DNA and oligonucleotides [98]. Although gene therapy overcomes the difficulties associated with conventional treatment, it is not without drawbacks, including the high cost of gene therapy, which restricts its use; ethical concerns about changes made to the germline; immune rejection of the transferred gene; and the route of administration [96]. Due to their low pathogenicity, cellular tropism, replication incompetence, and simplicity of manipulation, lentiviruses (LVs) and adeno-associated viruses (AAVs) are being investigated as delivery modalities for the introduction of transgenes in clinical trials. A naturally occurring serotype of AAV called AAV9 has the capacity to penetrate the blood-brain barrier (BBB) and target neurons, astrocytes, and microglia in the brain. The capsid proteins of these serotypes distinguish them and help determine the corresponding cellular tropism [99].
Due to its cell-specific transduction abilities, rAAV9 is the most recommended CNS delivery technique for neurological diseases. Both dividing and non-dividing cells can be transduced by rAAVs [98]. The three most effective gene editing techniques used for modifying cellular DNA at the native locus are CRISPR and CRISPR-associated (Cas) proteins, transcription activator-like effector nucleases (TALENs), and zinc finger nucleases (ZFNs). ZFNs are the first programmable nucleases that can cleave particular regions of DNA using an engineered FokI endonuclease to change the way double-stranded breaks are repaired in DNA [100]. Target gene alterations can be carried out using TALENs, which can recognize arbitrary target sequences. TALENs merge the FokI endonuclease with the modular DNA-binding domain of transcription activator-like effectors (TALEs) [101].
The foundation of gene editing techniques is the introduction of genomic breaks and the precise placement of these breaks by nuclease enzymes. Genome editing depends on two biological pathways: non-homologous end joining (NHEJ) and homology-directed repair (HDR) [102]. While HDR is largely restricted to dividing cells, NHEJ operates across all cell cycle phases, including in non-dividing cells such as neurons. The NHEJ repair process has therefore been used by researchers to support gene editing techniques. Several unique gene editing techniques have been created thus far for non-dividing cells, including HITI (homology-independent targeted integration) [103], the HITI-based SATI approach (single homology arm donor-mediated intron-targeting integration) [104], CRISPR prime editing [105], HMEJ (homology-mediated end joining) [106], vSLENDR (virus-mediated single-cell labelling of endogenous proteins via HDR) [107], and PITCh (precise integration into target chromosome) [108].
A genomic technique known as whole exome sequencing (WES) is used to sequence the protein-coding sections of genomic DNA (exons) and find the causative mutations that lead to specific genetic disorders [109,110]. WES has been extensively utilized to diagnose KRDs [48,70]. The use of gene therapy to treat neurogenetic disorders has risen dramatically over the past several years, and for KAND it is increasingly displacing more traditional approaches. As previously stated, there is no known treatment for KAND, and the development of gene therapy offers hope beyond the existing therapeutic approaches. Gene editing tools now make it possible to change genes, and there is a chance that these conditions could be reversed. However, there are several drawbacks to gene therapy, such as over- or under-expression of the inserted gene, limits on the vector's capacity to carry the gene, and the mutant gene product acting against the wild-type allele. These constraints of gene therapy have begun to be overcome by the emergence of gene editing tools like CRISPR-Cas9.
Conclusions
In this paper, we addressed the available information, recent studies, and newer advancements concerning KAND. The main cause of this disease is the mutations that take place in the KIF1A gene, which can result in loss of function or gain of abnormal function. These mutations directly result in the mis-delivery of essential cargoes transported inside neurons; these cargoes play a very important role in neuronal growth, differentiation, and survival. Another problem is that there can be different phenotypic expressions even for the same mutation in the gene, making diagnosis difficult even with the help of expensive testing. KIF1A-related disorders can take two different forms: autosomal dominant and autosomal recessive. One such condition is spastic paraplegia, which has been discussed earlier in this paper. Spastic paraplegia can be pure, when symptoms are confined to stiffness in the legs and bladder disturbance, or complicated, when these symptoms are accompanied by other neurological disturbances. KIF1A mutations are also linked to brain atrophy, encephalopathy, PEHO syndrome, and autism spectrum disorder, and all of these disorders were observed to occur due to de novo variations in the KIF1A gene. Very few reports are available regarding such de novo variation, as mentioned in the article.
There are various organizations and foundations that provide information and spread awareness about KAND. Some are governmental while others are privately funded. These organizations bring together patients, family members of patients, physicians, clinicians, research scholars, and other paramedical staff. Although this disorder is rare and there are still many research gaps in the field of KAND in neuroscience, the development of a new drug or an active chemical moiety cannot be predicted as of now. To date, there is no existing cure that guarantees the recovery of the patient, but we expect that newer advancements in neuroscience can enhance the treatment and management of KAND. A better understanding of the in vivo functioning of the motor domain, the part where most of the mutations take place, can give us a lead to improve the existing difficulties faced by patients. Newer models like the DNA origami scaffold model are used by scientists to provide more information on what is happening in vivo. Newer technologies such as gene therapy have the potential to pave the way for advanced therapies, thereby increasing the quality of life of patients.
Data Availability Statement: The data sets used, analyzed, and reviewed were collected from the corresponding authors and online research databases.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "ef3ae311b07994c5d66b610850f423716a24f47a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8247/16/2/147/pdf?version=1674121972",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "667eab3c38b8c1865db9bda4d9318bfe1def7582",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Transmission-based association mapping of triglyceride levels in a longitudinal framework using quasi-likelihood
Complex genetic traits are often characterized by multiple quantitative phenotypes. Because values of such phenotypes vary over time, analyses of longitudinal data on the phenotypes are thought to increase power in detecting genetic association. In this paper, we extend a transmission-based association test applying quasi-likelihood, developed by us previously, to the longitudinal framework, and carry out a genome-wide association analysis of triglyceride levels based on the data provided in GAW20. We consider different phenotype definitions based on the administration of fenofibrate and obtain significant association findings within genes involved in heart diseases.
Background
Most clinical end-point traits are governed by quantitative precursors, and it may be a prudent strategy to analyze these precursor phenotypes for association mapping of a clinical end-point trait. The family-based design for detecting association, as implemented in the classical transmission disequilibrium test [1], is a popular alternative to population-based case-control studies as it circumvents the problem of population stratification. We have developed a modification of a transmission-based test for quantitative traits proposed by us [2], incorporating transmission information from both parents [3] instead of only the heterozygous parent in a family, based on the paradigm that the phenotype of an offspring is a function of the alleles transmitted by both parents. We adopt a quasi-likelihood approach [4] to develop a novel test statistic for association in the presence of linkage between a single-nucleotide polymorphism (SNP) and a quantitative trait. Although most association analyses are based on phenotypes measured at single time points, longitudinal data on phenotypes carry more information on trait variation compared to cross-sectional data. However, the major statistical challenge in the association analysis of longitudinal phenotypes lies in the modeling of phenotype values over different time points. We extend our proposed method [3] in a longitudinal framework and apply it to analyze triglyceride levels in families using data over the 4 time points provided in GAW20. We compare the association results for triglyceride levels based on transmission information from both parents with those based on transmission information only from heterozygous parents. We also explore common association findings for triglyceride levels with and without considering the effect of the drug fenofibrate, as well as adjusting the triglyceride values for high-density lipoprotein (HDL) levels.
Data description
For our analyses, we use pedigree information on triglyceride levels at 4 different time points for 200 nuclear families, along with their genotypes at all the available 597,145 variant sites distributed over 22 autosomal chromosomes, as provided in the Genetics of Lipid Lowering Drugs and Diet Network (GOLDN) data set as part of GAW20. We exclude loci that are monomorphic or have minor allele frequency < 0.05. Because HDL is a potential confounder in the genetic association with triglyceride levels, we use HDL levels at the 4 time points as covariates. To adjust for the effect of fenofibrate, which was administered after the second time point, we perform our transmission disequilibrium analyses based on summarized values of triglyceride and HDL levels before and after the administration of the drug.
Statistical methodology
Imputation of missing values
Data on triglyceride levels and HDL levels are not available for all individuals at every time point. The assumption of multivariate normality provides a computationally elegant framework for the expectation-maximization (EM) algorithm [5] to estimate parameters when data are missing. Because the Kolmogorov-Smirnov test shows significant departure from normality (at level 0.05) for both triglyceride and HDL levels at some of the time points, we perform logarithmic transformations on both phenotypes to induce normality. We use an unrelated set of 117 founders from the pedigrees to estimate the missing phenotype values, based on the available phenotype values, using an EM algorithm as described in Haldar et al. [6]. For the remaining individuals in the pedigrees, we use the plug-in parameter estimates of the mean vector and variance-covariance matrix of the phenotypes obtained via the EM algorithm to estimate the missing phenotype values. We then use a generalized linear regression of the triglyceride levels on the HDL levels at the 4 time points, based on the set of founders, to obtain residuals for all individuals in the pedigrees. We use these residuals as phenotype values in our association analyses.
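To make the imputation step concrete, the following is a minimal sketch of an EM algorithm for a multivariate normal model with missing entries, in the spirit of the procedure described above; it is not the authors' code, and the function name, convergence rule, and data layout are illustrative assumptions.

```python
import numpy as np

def em_mvn_impute(X, n_iter=100, tol=1e-6):
    """EM imputation for an (n, p) array X whose missing entries are np.nan,
    assuming rows are i.i.d. multivariate normal (hypothetical helper)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)              # crude start: column means
    Xf = np.where(miss, mu, X)
    Sigma = np.cov(Xf, rowvar=False)
    for _ in range(n_iter):
        mu_old = mu.copy()
        C_sum = np.zeros((p, p))            # accumulated conditional covariances
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if not m.any():
                continue
            if not o.any():                 # row entirely missing: use the mean
                Xf[i] = mu
                C_sum += Sigma
                continue
            B = Sigma[np.ix_(m, o)] @ np.linalg.inv(Sigma[np.ix_(o, o)])
            Xf[i, m] = mu[m] + B @ (Xf[i, o] - mu[o])            # E[x_m | x_o]
            C_sum[np.ix_(m, m)] += Sigma[np.ix_(m, m)] - B @ Sigma[np.ix_(o, m)]
        mu = Xf.mean(axis=0)                # M-step: update mean and covariance
        D = Xf - mu
        Sigma = (D.T @ D + C_sum) / n
        if np.max(np.abs(mu - mu_old)) < tol:
            break
    return Xf, mu, Sigma
```

Run on the founder set, the fitted mean vector and covariance matrix would then serve as the plug-in estimates used to complete the remaining individuals, after which residuals of log-triglyceride on log-HDL give the adjusted phenotypes.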
Test for transmission disequilibrium using quasi-likelihood
The phenotypes for our association analyses are unadjusted triglyceride levels and triglyceride levels adjusted for HDL levels using the algorithm described in the preceding section. We use a novel quasi-likelihood regression framework based on a resistance generalized estimating equation approach [7] to test for association of a SNP with a multivariate phenotype. For each SNP, we consider all nuclear families in the pedigree with at least 1 heterozygous parent at that SNP. Suppose data are available on $N$ nuclear families with $n_i$ offspring in the $i$-th family. Let $Y_j = (Y_{j1}, Y_{j2}, \ldots, Y_{jk})$ denote the vector of $k$ phenotypes for the $j$-th offspring, and let $Z_j$ and $W_j$ be indicator random variables (1 or 0) denoting, respectively, whether the heterozygous parent and the other parent (heterozygous or homozygous) at a SNP transmit the minor allele to the $j$-th offspring. For the $i$-th family, we model the conditional distribution of $(Z_j, W_j)$ given $Y_j$ through the quasi-score (estimating) equations
$$U_i(\alpha, \gamma) = \sum_{j=1}^{n_i} D_j^{\top} V_j^{-1} \left\{ \begin{pmatrix} Z_j \\ W_j \end{pmatrix} - \lambda_j(\alpha, \gamma) \right\} = 0,$$
where $\lambda_j(\alpha, \gamma)$ is the vector of the conditional expectations of $Z_j$ and $W_j$ given $Y_j$, both of which are modeled as logistic link functions involving $\alpha$ and $\gamma$; $D_j = \partial \lambda_j(\alpha, \gamma) / \partial(\alpha, \gamma)$ is the corresponding matrix of derivatives; and $V_j$ is the conditional variance-covariance matrix of $(Z_j, W_j)$ given $Y_j$. The test for association is equivalent to testing $H_0\colon \gamma = 0$ versus $H_1\colon \gamma \neq 0$, and the usual Wald test statistic is distributed as chi-square with $k$ degrees of freedom in the absence of association. Our association analyses comprise 3 different choices of phenotypes. As fenofibrate was administered after the second time point, we consider the first principal component of the log-transformed triglyceride levels of the first and second time points, along with the first principal component of the log-transformed triglyceride levels of the third and fourth time points, as a bivariate phenotype. We compare the association findings based on this phenotype with (a) the first principal component of the log-transformed triglyceride levels of the first and second time points (ie, before the administration of fenofibrate) and (b) the first principal component of the log-transformed triglyceride levels of the third and fourth time points (ie, after the administration of fenofibrate). To evaluate the effect of HDL on triglyceride levels, we perform each of the above analyses for unadjusted log-transformed triglyceride levels and for log-transformed triglyceride levels adjusted for HDL levels. We denote the unadjusted bivariate phenotype analysis as MTBAT and the adjusted analysis as MTBATAdj. Similarly, the corresponding univariate analyses based on Kulkarni and Ghosh [3] prior to the administration of fenofibrate are denoted as TBATPre and TBATPreAdj, while those following the administration of the drug are denoted as TBATPost and TBATPostAdj. Additionally, we performed all the test procedures using transmission information only from heterozygous parents (as in the classical transmission disequilibrium test).
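As a hedged illustration of the final testing step, the sketch below computes the Wald chi-square statistic from an estimate of γ and its covariance matrix, both of which are assumed to be supplied by a solver for the estimating equations above (the function name and example numbers are illustrative):

```python
import numpy as np
from scipy import stats

def wald_chi2(gamma_hat, cov_gamma):
    """Wald test of H0: gamma = 0; under H0 the statistic is chi-square
    with k = len(gamma_hat) degrees of freedom."""
    g = np.atleast_1d(np.asarray(gamma_hat, dtype=float))
    W = float(g @ np.linalg.solve(np.asarray(cov_gamma, dtype=float), g))
    return W, stats.chi2.sf(W, df=g.size)

# For the bivariate phenotype (k = 2), with made-up estimates:
# W, p = wald_chi2([0.12, -0.05], [[0.004, 0.001], [0.001, 0.006]])
```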
Results
The tests for association are based on 200 nuclear families comprising 990 offspring. As our proposed test procedure requires that at least 1 parent in the family is heterozygous at the marker locus, we selected only those SNPs that have more than 20 informative families. Hence, we performed our analyses on 552,556 SNPs. Of the 990 offspring, data on triglyceride levels were available for 719 offspring at the first time point, 988 offspring at the second time point, 554 offspring at the third time point, and 731 offspring at the fourth time point, and data on HDL levels were available for 719 offspring at the first time point, 989 offspring at the second time point, 709 offspring at the third time point, and 772 offspring at the fourth time point. To correct for multiple testing, we used the Benjamini-Hochberg procedure [8] with an overall false discovery rate (FDR) of 0.05.
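For reference, the Benjamini-Hochberg step-up procedure applied to the vector of per-SNP p values can be sketched as follows (a standard algorithm rather than the authors' code):

```python
import numpy as np

def benjamini_hochberg(pvals, fdr=0.05):
    """Boolean mask of hypotheses rejected at the given FDR level."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Step-up: largest k with p_(k) <= fdr * k / m; reject the k smallest.
    passed = p[order] <= fdr * (np.arange(1, m + 1) / m)
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

In practice, a library routine such as statsmodels' multipletests(pvals, alpha=0.05, method='fdr_bh') would typically be used instead.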
The number of SNPs found to be significantly associated with the different phenotype definitions was as follows: 718 based on MTBAT, 685 based on MTBATAdj, 147 based on TBATPre, 657 based on TBATPreAdj, 121 based on TBATPost, and 622 based on TBATPostAdj. Among these SNPs, 28 were common to all 6 phenotype definitions, 448 were common between MTBAT and MTBATAdj, 80 were common between TBATPre and TBATPreAdj, and 42 were common between TBATPost and TBATPostAdj. With respect to unadjusted triglyceride levels, 51 SNPs were common between MTBAT, TBATPre, and TBATPost, whereas for adjusted triglyceride levels, 390 SNPs were common between MTBATAdj, TBATPreAdj, and TBATPostAdj. The SNPs rs6601447 and rs1986677, located on 8p23.1 within the gene MSRA; the SNP rs2466051, located on 8p12 within the gene NRG1; the SNP rs1281132, located on 4p16.1 within the gene SH3TC1; and multiple SNPs in the region 12q13.12 within the gene FMNL3 exhibited significant evidence of association with all phenotype definitions. Although the SNP rs1712316 within the gene SH3TC1 was found to be significantly associated with all phenotype definitions except TBATPreAdj, multiple SNPs in the region 10q24.1 within the gene ENTPD1 were significantly associated with all phenotype definitions except TBATPost. Among the SNPs mentioned above, rs6601447 and rs1986677 ranked among the top 10 significant findings in all our analyses. We note that although multiple SNPs within the gene FMNL3 showed significant evidence of association, the significances for the multivariate phenotype were more pronounced (lower p values) for the unadjusted triglyceride phenotypes compared to those adjusted for HDL, whereas the opposite was observed for the univariate phenotypes, although the phenomenon seems intuitively difficult to explain. We observed that a higher number of SNPs exhibited significant evidence of association for the bivariate phenotype and for the phenotype defined by the postdrug measurements compared to the phenotype defined by the predrug measurements. Interestingly, we observed that the analyses based on transmission only from heterozygous parents did not yield a single significant finding with any of the phenotype definitions after FDR correction. This is consistent with the results of the simulations corresponding to the quasi-likelihood approach for univariate phenotypes [3] and suggests that transmission information from both parents increases the power of the association tests.
Discussion and conclusions
In this paper, we modified a transmission-based test for association that includes transmission information from noninformative parents using a quasi-likelihood approach [7] in the multivariate framework. A major advantage of the method is that the retrospective likelihood used to model allelic transmission conditioned on phenotypes does not require any assumptions on the marginal or the joint distributions of phenotype values across different time points.
Many of our association findings are mappable to genes related to heart diseases. The SNP rs2510873, which exhibited significant evidence of association with the bivariate phenotype and with the phenotype defined by the postdrug measurements, is located in the same genomic region (11q23.3) as the SNP rs964184, which was previously reported to be significantly associated with both triglyceride and HDL levels based on the same GOLDN data [9]. The enzyme MSRA (methionine sulfoxide reductase A) protects cardiac myocytes from oxidative stress and is an important therapeutic target for ischemic heart diseases [10]. Neuregulin-1 (NRG1) activation improves cardiac function and survival in different forms of cardiomyopathy [11]. The gene FMNL3 (formin-like 3) is involved in cardiac myofibril development and repair [12]. Because we found that the SNPs located within FMNL3 exhibited much higher significance with unadjusted triglyceride levels compared to those adjusted for HDL levels, it is likely that the gene modulates the effect of HDL levels rather than the effect of triglyceride levels. We also found that the number of SNPs associated with the bivariate phenotype or the phenotype defined by postdrug measurements was much higher than for the phenotype defined by predrug measurements. One possible explanation for this phenomenon is that some of the SNPs modulate the interaction effect of fenofibrate and triglyceride levels. However, separate interaction analyses are necessary to validate this hypothesis.
Linear mixed models are a popular method of choice for genetic association analyses in a family-based framework primarily because such models have the flexibility of accounting for relatedness within families and to correct for population stratification between families [13]. Our proposed quasi-likelihood approach [7] has the implicit assumption that allelic transmissions to the different offspring within a family are uncorrelated, implying that the likelihood is equivalent to that based on independent trios and, hence, the test of association is valid only in the presence of linkage. Moreover, given that the likelihood used in a linear mixed model is prospective in nature, families with both parents homozygous at a SNP can be included in the model and the effect of fenofibrate can be modeled both as a main effect and an interaction effect with SNPs. Such inclusions in the model are likely to yield higher powers of detecting association. On the other hand, the 2 major disadvantages of linear mixed models compared to transmission-based tests are the inherent computational burden involved in analyzing large pedigrees and the susceptibility to violations in distributional assumptions (such as normality). Such violations are particularly common for high-dimensional phenotypes as encountered in longitudinal data, and result in inflated rates of false positives, although simulation studies show that they may be robust to certain model misspecifications [14].
It has been argued that association tests based on imputed phenotypes may lead to biased inferences and it may be more prudent to perform an EM procedure based on the joint likelihood of genotype and phenotype data. However, such a strategy would substantially increase the computational load. Moreover, given that the quasi-likelihood approach is retrospective in nature, the test is less likely to be adversely affected by imputed phenotypes compared to a test based on prospective likelihood. We finally wish to highlight that while the inclusion of transmission information from noninformative parents in the proposed test procedure [7] results in increased power in detecting association, the test is susceptible to inflated false-positive rates in the presence of population stratification.
Availability of data and materials
The data that support the findings of this study are available from the Genetic Analysis Workshop (GAW), but restrictions apply to the availability of these data, which were used under license for the current study. Qualified researchers may request these data directly from GAW.
About this supplement
This article has been published as part of BMC Proceedings Volume 12 Supplement 9, 2018: Genetic Analysis Workshop 20: envisioning the future of statistical genetics by exploring methods for epigenetic and pharmacogenomic data. The full contents of the supplement are available online at https://bmcproc.biomedcentral.com/articles/supplements/volume-12-supplement-9.
Authors' contributions
SG and HK developed the proposed method. HK wrote the computer codes and performed the data analyses. HK and IM participated in the compilation and interpretation of the results. SG drafted the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
"year": 2018,
"sha1": "dd0bdf6a14a588564ae03b2fb2cefcbe4ccdc0a0",
"oa_license": "CCBY",
"oa_url": "https://bmcproc.biomedcentral.com/track/pdf/10.1186/s12919-018-0147-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd0bdf6a14a588564ae03b2fb2cefcbe4ccdc0a0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Chronicling participants' understanding and experiences of integrating ICT into the teaching of geography in South African schools
This article examines geography teachers', parents' and learners' understanding and experiences of the integration of Information and Communication Technologies (ICTs) in the teaching of that subject. The study was guided by the TPACK-SAMR model, which proved to be a reliable tool for measuring the extent of ICT integration. The purposive sampling technique employed enabled the researchers to identify participants with knowledge relevant to the integration of ICTs into the teaching and learning of geography. The article draws its purpose from the integration of technologies into the teaching of geography as a means of preparing and equipping learners who take this subject with the type of skills required in the 21st-century job market. Surprisingly, the research findings revealed that some teachers still do not feel comfortable integrating diverse technologies into their teaching of geography, perceiving it as time consuming. Their unwillingness to become digital citizens and conform to the demands of the Fourth Industrial Revolution (4IR) is a drawback, as is learners' inappropriate use of ICTs (visiting irrelevant, unwanted sites instead of downloading subject-related content). To empower learners to adopt and use ICTs as valuable tools and solutions on their learning journey, drastic changes are required, particularly on the part of curriculum planners in geography.
Introduction
In the voices of Fleischmann and Van der Westhuizen (2019), integrating Information and Communication Technologies (ICTs) helps to improve learners' academic performance. In the same breath, Seedat (2019) concedes that ICTs have been welcomed by geography teachers. He further argues that ICTs serve as invaluable tools for enhancing learners' understanding of geospatial concepts, resolving issues that affect citizens both locally and on a global scale (Seedat, 2019). Worryingly, Tarisayi (2022) reports a varied uptake of ICT integration by geography teachers across global and African countries. For instance, reported uptake is 10% in Singapore, 2% in India, and 33% among German educators, while 82% of teachers in Turkey are not using ICT. In the same vein, Fleischmann and Van der Westhuizen (2018) assert that senior geography teachers seem reluctant to integrate ICTs into their teaching. Within the South African context, Mzuza and Van der Westhuizen (2019) assert that the introduction of the Interactive GIS Tutor (I-GIS-T) as a programme and tool to facilitate geography mapwork teaching, for instance, yielded positive results in terms of improving learners' subject-related knowledge, especially when it came to applying theory in practice. However, in a qualitative study conducted by Cele (2022), the findings reveal a myriad of challenges faced by South African teachers in integrating ICTs into the teaching of geography; he further indicates challenges such as the lack of pedagogical knowledge and support that hinder their ICT integration uptake. Similarly, Clark et al. (2020) concede that geography teachers may assist learners to upload applications (Apps) such as scanners and Google Earth (GE) onto their digital devices. They further argue that this facilitates learner engagement during geography excursions (Clark et al., 2020). However, researchers have observed that teaching strategies that fail to accommodate the context in which learning takes place do more harm than good and fail to prepare learners adequately for the future. Furthermore, they are of the opinion that, in a rapidly changing world characterised by the Fourth Industrial Revolution (4IR), the use of ICTs is imperative in teaching and learning, yet ICTs alone cannot help learners to master the content deemed suitable for their grade; such tools must be both relevant and appropriately used. Similarly, Stojsic, Ivkov-Dzigurski and Maricic (2019) affirm that integrating ICTs into geography lessons promotes learner participation, facilitates the understanding of challenging concepts, and motivates learners to learn. Thus, learners require guidance from their teachers on how to take advantage of the technology available in their learning space.
The article draws on the imperative of integrating ICT into the teaching of geography, as proposed by Seedat (2019), Clark et al. (2020) and France et al. (2021), who advocate that the teachers of this subject accept and adopt such technologies as tools to facilitate teaching. To empower learners to adopt diverse technologies as viable solutions to facilitating their learning in their academic journey, drastic changes are required, and curriculum planners in geography should focus on the integration thereof. Thus, this article seeks to highlight several of the digital devices that qualify to be classified as ICT tools that are relevant for geography classrooms. The intention is also to use the study participants' experiences as a springboard for guiding researchers, geography teachers and curriculum designers alike in preparing and assessing content that is befitting of 21st-century requirements and expectations.
ICT Integration in the teaching of FET-phase geography
Recent studies indicate that integrating ICTs into 21st-century classrooms can no longer be delayed (Constance & Musarurwa, 2018; Stojsic et al., 2019). For instance, Hogan (2020) asserts that geographers study geospatial relations regarding a range of human and physical phenomena occurring on the Earth's surface by means of tools and platforms such as Q-GIS, ARCGIS, YouTube, Facebook, Twitter, WhatsApp, Google Earth and the internet. Similarly, studies by Stojsic et al. (2019), Clark (2020) and Hogan (2020) found that adopting and implementing ICTs in the teaching of geography boosted learners' confidence and their academic performance scores (APSs).
Worryingly, a study by Fleischmann and Van der Westhuizen (2019) found that although many African countries have started using related platforms (and South Africa is no exception), the process is not without its challenges. South Africa's White Paper 7 (RSA, 2004) seeks to connect teachers and learners digitally (i.e., through e-education) to facilitate the processes of teaching and learning, but as Hogan (2020) and Hlatywayo (2021) point out, only a handful of schools have integrated ICTs into the curriculum, with some strongly opposed to the implementation of the policy, as it could aggravate the digital divide between the haves and the have-nots.
As such, the unwillingness of some teachers to adopt ICTs qualifies them as barriers to ICT integration in geography. Adarkwah (2020) concurs that lifelong learning will only be realised via the integration of ICTs, thereby ensuring that Sustainable Development Goal 4 (allowing equitable access to learning) is realised. Chawanji (2018) and Mkhongi and Musakwa (2020) affirm that geography learners and teachers will only manage to integrate ICTs if such tools and platforms are accessible to them, but if schools do not budget for the procurement of the appropriate technologies, any policy which advocates ICT integration will look good on paper without being applied and implemented in the classroom.
Like many countries, South Africa crafted ICT policies such as White Paper 7 on e-education (Tshimanika, 2023), which emphasises that the integration of ICTs in teaching and learning should be inclusive of all learners, irrespective of their capabilities, to grant them an opportunity to fully participate in, and benefit from, the learning process. Constance and Musarurwa (2018) and Hogan (2020) confirm that digital resources improve learner understanding of reality as well as boost their APSs. Furthermore, it is envisaged that, having acquired 21st-century skills, learners will become the generators of solutions (as opposed to merely receiving information from their teachers), will be exposed to independent learning and will be capacitated to learn at their own pace (Hogan, 2020; Clark et al., 2020). Using a variety of ICT tools, geography teachers can move their learners from rote to inquiry-based learning (IBL). Chiyokura, Nakamura and Matshuhashi (2017) and Lembani et al. (2023) concede that this goal can only be realised if self-directed learning (SDL) is promoted and implemented. Digital tools, according to Chiyokura et al. (2017) and France et al. (2021), will allow geography teachers to expose their learners to projects which require of them to apply collaborative and interpersonal skills, enquiry and teamwork to resolve geospatial issues. In addition, Chiyokura et al. (2017) posit that teachers not only have to expose learners to interactive ICT tools such as Google Earth and Google Maps, but also to guide them on how to conduct research on spatial issues to capacitate them for problem-based learning (PBL). However, Chawanji (2018) and Hogan (2020) caution against schools that craft policies that outlaw the use of digital devices to facilitate the teaching and learning of geography, thus depriving learners of the opportunity to realise their potential in terms of nurturing their skills in self-directed learning.
2.2 Teachers' experiences on ICT integration in the teaching of geography
Globally, scholars highlight the rewards of integrating ICTs into the teaching of geography. For instance, Stojsic et al. (2019) concede that geography learners in Serbia get excited whenever their teachers integrate ICTs into their teaching. In the African context, this view is supported by Constance and Musarurwa (2018), who argue that tech-savvy learners have little difficulty learning those ICT-related skills that will help them to participate in ICT-mediated lessons.
They further confirm that Seychelles geography learners end up benefiting from the buddy system, in which highflyers team up with struggling learners in a collaborative endeavour that sees both parties benefiting from the process (Constance & Musarurwa, 2018). Similarly, within the South African context, Mzuza and Van der Westhuizen (2023) assert that it is vital for geography teachers to integrate ICTs into their teaching.
Hogan (2020) highlights that integrating various ICTs into the teaching of geography improves the working relationship between learners and teachers, such that there is mutual learning in using certain tools. Another advantage is that learners who struggle to master concepts in class can turn to their fellow classmates (as more knowledgeable others) for assistance (Vygotsky, 1978). Not only does the educational context stand to benefit: as Clark et al. (2020) argue, through ICT implementation, geography teachers can create maps using Q-GIS tools, and such information might add value to the work of researchers and municipalities, for instance by allowing environmental risks to be minimised, thereby making an invaluable contribution towards improving citizens' lives. In this regard, Chiyokura et al. (2017) propose that learners be tasked with the responsibility of designing a map displaying areas that are prone to flooding, for instance where habitations are located below the flood-line, thereby assisting local municipalities in minimising risks associated with natural disasters such as floods. Such a platform will help to motivate geography teachers and learners to share their ideas, successes and failures, and to come up with innovative ways of contributing towards hazard mitigation and adaptation as well as societal challenges. Guo et al. (2020) and Chiyokura et al. (2017) concur that integrating ICTs and other multimedia technologies (MTs) into the teaching of geography (and other subjects) during the Covid-19 pandemic stimulated innovation amongst participants, serving as an enabler in motivating learners to work in teams to complete their research projects. This could be achieved by having learners collect, interpret, manipulate and analyse data. The activities should mostly be learner-centred by nature, with the teacher serving as a facilitator. Ultimately, it is envisaged that learners will become proficient in their use of a range of ICTs and, without coercion, learn to forge relationships with their fellow geography learners, locally and globally, thereby increasing their social and academic networks. In so doing they will gain vicarious experience, verbally (e.g. via voice notes) or via images (e.g. screen grabs, videos), which they obtain from their friends without needing to physically displace themselves. However, Hlatywayo (2021) concedes that seasoned geography teachers seem to resist adopting new technologies in their pedagogies. In the same breath, Fleischmann and Van der Westhuizen (2020) caution about the inequalities in the provision of ICT infrastructure between rich and poor countries. This leads to a digital divide and creates digital exclusion. South Africa is a unique case, as the digital exclusion occurs within the same Department of Education, where, on the one hand, some schools are well resourced while, at the other end of the continuum, stand schools with inadequate infrastructure. The latter struggle to implement ICT integration in geography, thus depriving the millennials of chances to enhance their ICT skills, which have become life skills in the 21st century.
Theoretical framework
The study was lensed by the TPACK-SAMR (Technological, Pedagogical and Content Knowledge; Substitution, Augmentation, Modification and Redefinition) framework, as amended by Puentedura (see Drugova et al., 2021). See Figure 1.1 below.
Figure 1.1 TPACK-SAMR model
Originally described by Mishra and Koehler (2006), the TPACK model advocates ICT integration in educational contexts. In addition, Fleischmann and Van der Westhuizen (2018) came up with the TPACK-GIS model for under-resourced schools that teach geography. Drugova et al. (2021) posit that digital content must be made up of cloud-based platforms characterised by digital exercises that are interactive by nature, such as videos, audio, pictures, gifs, tests and animations. One such success story is Skyeng (skyengschool.com), an online school in Europe that offers more than 3 000 lessons and tasks that can be checked automatically, enabling teachers to monitor learner progress and task completion. In their virtual classrooms, teachers assign different tasks/activities to learners, thereby individualising learning. Such platforms can be accessed both in class and from home. Study material takes the form of interactive videos, which allow learners to receive real-time feedback from the platform without having to wait for the teacher to give feedback in class. Learning thus happens at any time and anywhere, if a learner has access to a computer, tablet or mobile phone. Drugova et al. (2021) explain that combining the TPACK with SAMR allows numerous permutations: pedagogical knowledge (PK), when linked with substitution, relates to teaching methods being uploaded onto a platform, which allows for listening or reading, for instance, enabling the teacher to assess the learners' strengths and weaknesses; PK, when linked with augmentation, involves homework activities in the classroom being discontinued, and student-centred methods being given priority. PK, when linked with redefinition, allows teachers to play a mentorship role while learners choose their own material, plan for such and determine the frequency of their activities, guided by continuous feedback from the online platform.
Drugova et al. (2021) also propose combining content knowledge (CK) with substitution, where analogous content is replaced by digital material (either partially or completely). CK is paired with augmentation so that existing content is complemented by various contents, and homework is given and controlled by the teacher, with assignments on the online platforms where teachers monitor learner progress. If CK is combined with modification, interactive ICTs are used to provide learners with digital content which improves the learning experience, and where CK is paired with redefinition, both the teacher and the learner have a role to play in generating digital content.
Statement of the problem and research question
Worryingly, Constance and Musarurwa (2018) and Hlatywayo (2021) acknowledge the low and varied uptake of ICT integration on the part of seasoned teachers who teach geography, a challenge which has been identified in various teaching and learning-related discourses (Seedat, 2019; Bengel & Peter, 2021). This reluctance deprives geography learners of the opportunity to be exposed to a wealth of domain-based resources and the views of experts who are active or available online, irrespective of either party's geographic location. The question addressed here is, "What are geography teachers' understanding and experiences with ICT integration, and what makes it challenging for them to assimilate such technologies into their teaching, in order to effectively equip their learners with valuable problem-solving skills?"
Research objective
The objectives of the study were to:
• identify teachers', parents' and learners' understanding and experiences of ICT integration in the teaching of Grade 12 geography;
• investigate the role of ICTs in assisting struggling learners to master challenging geography concepts in Grade 12;
• investigate the role of ICT integration in the teaching of map skills, GIS and integrating mapwork with theory in Grade 12 geography; and
• evaluate the impact of integrating ICTs into the teaching of Grade 12 geography.
Research methodology
The researchers employed a constructivist/interpretivist research paradigm. According to Lotz-Sisitka, Fine and Ketlhoilwe (2013), this paradigm relates to the researcher's beliefs about the world around him/her, as those relate to the construction of knowledge. According to this paradigm, reality exists in the human mind and is conditional upon human experiences and interpretation. In other words, it is not independent but subjective and socially constructed, and can have varied meanings (Lotz-Sisitka et al., 2013). In this regard, the researchers sought to allow participants to make a meaningful contribution by making their voices heard. The constructivist approach is grounded in the fundamentals of qualitative research. McMillan and Schumacher (2010) assert that it focuses on the voices and perceptions of the study participants, on how they view and interpret reality. This enabled the researchers to put themselves in the shoes of the participants to gain their lived experiences through critical discourse analysis.
As McMillan and Schumacher (2010) explain, the qualitative approach allows the researcher to arrive at an in-depth understanding of the phenomenon under study: in this case, how teachers perceive technology and integrate ICTs into the geography lessons they present at South African schools, how parents perceive such integration, and to probe learners' views on this matter, since research that is qualitative in nature accommodates and reflects the voices of the participants in respect of how they perceive reality. This study employed the qualitative approach to garner the participants' views on the strategies they used, or were cognisant of, in their personal capacity and context in respect of the adoption and use of ICTs in geography lessons. This was achieved by allowing participants to use voice notes (VNs) to respond to questions posed to them regarding ICT integration into the geography classroom, and to reflect on their personal experiences. Purposive sampling enabled the teacher participants to be custodians of the research study. It also permitted participating parents to take ownership of their children's scholastic progress. To this end, semi-structured interviews, observations and document reviews were used as data-collection tools.
Research design
A case-study design was employed. As confirmed by Yin (2018), to a significant extent, the findings reported are the product of case studies. Case studies allow researchers to attach meaning to concepts used in their study participants' context. This occurs where the how and the why questions need to be answered, as participants' understanding of such concepts cannot be measured (Yin, 2018). Thus, the researchers used case studies to access the participants' views, experiences and understanding of ICT integration in the teaching of geography in their unique contexts.
Data-collection methods
The researchers used observation to verify whether Grade 12 teachers and learners adhered to ICT policies banning smartphones during teaching and learning in the deep rural setting of the uMzinyathi District in KwaZulu-Natal. Furthermore, Grade 11 teachers and learners participated in semi-structured interviews to solicit information on how they understand and experience the role of ICT in geography classrooms (Coombs, 2021). Interviews were used, as they allow for interaction between the interviewer and the interviewee. Observation was also used, as it highlights content gaps teachers may not be aware of when they present their lessons practically. The other data-collection method used in this instance was document analysis, which revealed whether the contents of lesson plans matched the participants' practice.
In keeping with the requirements for an ethical study, the issues of privacy, consent and approval were addressed. Privacy entailed that details about the participants and the participating institutions of learning would remain confidential. To address consent, prior to the interviews the researchers obtained the participants' consent, as well as permission (clearance number MNC071SGUB01) to conduct the study from the relevant education authorities. Lastly, for approval to conduct research in schools, permission was sought from the provincial Department of Education in KwaZulu-Natal. Furthermore, school principals granted the researchers access to schools as research sites.
Research findings and discussion
Yin (2018) contends that the use of participants' voices in research is a very powerful tool. For this reason, the transcripts of the interviews are reflected verbatim to ensure that those voices are heard, regardless of whether they advocate ICT integration in the teaching of geography or believe such tools have limited value in the teaching of the subject.
The interview outcomes and discussions are presented under the following themes:
• ICT integration: A complementary tool for learning geography
• ICT integration in the teaching of geography as a time saver
• ICT integration in the teaching of geography as an exclusionary measure
ICT integration: A complementary tool for learning Grade 12 geography
In the view of Fleischman and Van der Westhuizen (2019), the uptake of I-GIS-T in geography teaching offers a means of reshaping education from a teacher-centred to a learner-centred approach, allowing learners to communicate better and cooperate with one another as well as with their teachers. This statement was confirmed by one learner participant:

During the hard lockdown, I spent three months away from school, but with the introduction of WhatsApp learning in geography teaching, I managed to cover all the work that our teachers wanted us to do for the year. [Without] ICTs … our 2020 academic year was going to be a wasted year. Our teachers managed to send us activities to our smartphones, which we managed to submit within a specified time. They even ensured that we access[ed] the remedial work for the activities we submitted. Our teachers used WhatsApp and emails to assist us with geography lessons.

The hard lockdown in 2020, a consequence of the worldwide impact of the Coronavirus, changed opinions about the educational value of hand-held devices in online classes, when social distancing left parents with no other option but to embrace technology if they wished their children to continue their education despite the lockdown restrictions. Another geography learner had this to say on the appeal and usefulness of subject-specific online platforms:
Online geography learning turned out to be the missing link as a supporting tool for my studies. I happened to use the digital gadgets to access visual images for what has been taught in class by my geography teacher. ICT integration … provides me with an opportunity to be alone, and do things individually before I can ask for help from my teacher. There are times where I manage[d] to [gain a] better understanding of concepts from my digital gadgets, since such concepts [show] colour and dimensions that would have given my geography
teacher … difficult times when asked to present the concept in question. Before the hard lockdown, I used to fail to submit tasks to my teachers and they [were] not … in a position to reprimand me [for] doing that. Now that there are smartphones that can even send e-mails, my parents are … in a position to check the progress of my studies, by communicating directly with my subject teachers, instead of asking me.

A participating geography teacher asserted,

ICTs such as Google Earth can deliver and display content in a multitude of ways, within a short space of time and in real time. I manage to display content using diagrams to visual learners. ICT allows me to introduce topics using videos. I use ICT to assist learners who find it difficult to understand lessons, by sending them voice notes on their gadgets for them to play repeatedly, to clarify concepts instantly. I am able to present content both in a digital manner and by using hard copies. There are concepts that can be easily explained by … word of mouth. There are those concepts that [require] real pictures, to be better understood by learners … I have also realised that ICT integration … provides an alternative in terms of the environment and space in which learners find themselves learning. ICT has proven to be valuable, as it provides an alternative in terms of a learning space that is vibrant and … transcends the four walls of the classroom. I have discovered that learners learn easily when they work with digital gadgets, as they are in control of the learning tools. ICT allows for differentiation to take place, as my learners can ask questions [from] the comfort of their homes, as they learn at their own pace. The WhatsApp mobile learning boosts … social interaction and social presence amongst my learners. It is through ICT integration that my learners develop the skills of sharing knowledge.
As these research participants revealed, ICT integration enabled teachers and learners alike to view different devices as useful tools for making the process of learning geography more meaningful and enjoyable. The findings further highlighted the fact that integration of technology is the missing link in the realisation of a paperless society. Drugova et al. (2021) support this notion, indicating that, for effective technology integration to occur, there must be an interwoven relationship between all the prongs of the TPACK framework, as described by Mishra and Koehler (2006), requiring of teachers to know the latest technologies (technological knowledge, TK) and master them if they are to use them effectively. They must also be familiar with, and adept at effecting, pedagogical knowledge (PK), i.e., knowledge of assessment techniques that truly assess learner capabilities. For Chiyokura et al. (2017) and Guo et al. (2020), multi-level integration has established a more flexible learning environment, such that both struggling learners and high achievers are able to participate in the learning process.
In essence, technology implementation serves as an enabler, ensuring that learners with different capabilities share the same space and can benefit from lessons. To that end, various technological devices can help geography teachers to clarify the learning content for their learners, by means of colourful images accessed using Google Earth and Google Maps. As confirmed by the participants, had it not been for ICTs, geography teachers would not have been able to present lessons during the hard lockdown, or to help learners complete the 2020/21 academic year.
ICT integration in the teaching of Grade 12 geography as a time saver
Integrating a range of technologies when teaching geography was found to enable the exchange of information in the teaching and learning process, using, amongst others, teleconferencing and videoconferencing as well as PowerPoint presentations (Lembani et al., 2020; Hogan, 2020). Participants in the present study identified a variety of time-saving tools, including digital notes and videos that serve to clarify difficult concepts which would otherwise require significant simplification for the learners. Seedat (2019) posits that geography teachers can easily retrieve data from their devices if these are arranged in files; they can also easily log onto the internet and access search engines such as Google Scholar to expose learners to current issues in their discipline. In respect of the time-saving benefits of ICT integration, one participant had this to say,

Using different resources such as videos, VNs, images and digital notes, allows me to access … several learners. This, in turn, limits the challenges I face in class, as most of my learners strongly believe in the Internet of Things. This happens whenever you give them challenges; they quickly go on Google and search for answers. This enables my learners to be centre stage [in] their own learning. In the presence of ICT integration, my learners [are] exposed to the Internet of Things.

As the research participants revealed, technology uptake allowed teachers and learners to perceive different devices as useful tools for making the geography learning process smoother and more meaningful. Chiyokura et al. (2017) point out that using a computer-supported collaborative learning space (CSCL) enables learners to use Google Earth to access distant places via computer and to rely on their collaborative skills to collect data virtually. They might design projects as a team, using a problem-solving learning approach. In respect of teachers honing their skills, Drugova et al. (2021) advise that combining TPACK with SAMR is critical in achieving sustained capacity development. By implementing CSCL, teachers can work to ensure that struggling learners do not give up if the work given to them requires digital skills that are too demanding. Further, Drugova et al. (2021) argue that, in the TPACK-SAMR model, knowledge of content (CK) can be modified, such that learners are exposed to interactive ICTs to give them digital content that will help improve their APSs.
ICT integration in the teaching of Grade 12 geography as an exclusionary measure
Piper et al. (2020) state that, in the absence of monitoring, the implementation of ICT-related integration policies will always have the difficult task of addressing the inequalities between learners in rural, semi-urban and urban areas. This finding is echoed by the statements below.
As the parent of a geography learner commented on the digital divide,

This will lighten … the load on our shoulders, as that will mean that government will automatically finance its implementation. This is because not all learners are privileged enough to own digital gadgets, let alone to maintain them. The hard lockdown has put a lot of pressure on us, as parents, because we were forced to try to purchase such gadgets, even if it meant we go to bed on [an] empty stomach. Other geography teachers prefer to get their submissions done via email, and that makes my life difficult, as there is no laptop or … smart phone at home, let alone the money for buying … data bundles. This makes me feel that I am not part of the 'new normal' [post-Covid].

A participating geography teacher asserted,

ICT integration is not incentivised. Once a teacher gets the qualification at the tertiary institution, there are no incentives in place to motivate them, other than the CPTD (Continuous Professional Teacher Development) programme, which has no monetary value and does not lead to promotions. ICT integration is also not monitored, as some teachers submit mark-lists in digital format while others are not reprimanded [for] submitting handwritten ones.
From these statements, it is evident that this parent participant appreciated the value technology brings in advancing the education of his/her child. Many, however, felt that as parents and teachers they had failed their children by being unable to help them access online learning during the times when those children needed it most. This happened during the restrictions imposed in 2020, when learners were prohibited from attending school on a full-time basis. Some schools used a rotational model of attendance during the lockdown, with learners attending for a few days and then staying home to allow other groups of learners to attend class, in keeping with the Covid-19 protocols on social distancing. Even pre-Covid, Johnson et al. (2016) found that schools offering geography faced numerous challenges, such as a failure to access different devices, and that ICT-related education was largely absent from school timetables. Most of these issues remain problematic. Attempts at integration admittedly exclude some learners, especially where the policy of 'one laptop, one teacher' or 'one tablet, one learner' is not yet implemented. If put into effect, these policies will ensure that both teachers and learners have technology at their disposal, which will enrich the teaching of geography for all parties concerned.
As Bengel and Peter (2021) posit, ICTs allow users to put a global positioning system (GPS) navigator to use when conducting spatial analyses, while remote sensing can be used to capture data by means of satellites from distant places without any physical contact. The use of geographical information systems (GISs) in geography map skills is expected to take over from the use of paper maps, as is the case in the applied sciences (Bengel & Peter, 2021), a development which emphasises the need for this discipline to move with the times.
Conclusion and recommendations
The study focused on participants' understandings and experiences of ICT integration in the teaching and learning of geography. The results indicated that such integration serves as a tool to close the gap between the school and the parents by promoting interaction between geography learners, teachers and the parents whose children take this subject. The results further reveal that most participants perceived ICT integration in the teaching of geography to be a vital tool for ensuring that digital learning takes place, with learners and teachers becoming co-learners in the process. Some participants viewed the uptake of technology in geography classrooms as an exclusionary measure, widening the urban-rural/rich-poor divide in terms of access to and the use of digital devices. Based on the participants' understandings, the researchers defined ICT integration as imperative for the teaching and learning of geography as a subject, requiring buy-in from learners, teachers and parents.
As noted, for ICT integration in the teaching of geography to be successfully implemented, more education on the importance of technologies in the curriculum is required, not only for technophiles but also for those who are averse or reluctant to implement modern technology. At present, the researchers observe a myriad of emerging challenges confronting the education system: ICT policy implementation is not monitored, and relevant geography-related ICT infrastructure is lacking, which serves to widen the digital divide. It is recommended that, as a start, geography teachers who are poorly capacitated in terms of the required technology-integration skills transform their smartphones into handy educational tools.
| 2024-07-18T15:22:03.324Z | 2024-07-12T00:00:00.000 | {
"year": 2024,
"sha1": "77aee2817b8c4af5201860588c11ccc4aaee26b9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.38140/pie.v42i2.7083",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "72ade23eed45ecb65e7de6f18f044b568b03507a",
"s2fieldsofstudy": [
"Geography",
"Computer Science",
"Education"
],
"extfieldsofstudy": []
} |
235373262 | pes2o/s2orc | v3-fos-license | Comparative Genomics of Clostridium perfringens Reveals Patterns of Host-Associated Phylogenetic Clades and Virulence Factors
Clostridium perfringens is an opportunistic pathogenic bacterium that infects both animals and humans. Clostridium perfringens genomes encode a diverse array of toxins and virulence proteins, which continues to expand as more genomes are sequenced. In this study, the genomes of 44 C. perfringens strains isolated from intestinal sections of diseased cattle and from broiler chickens from diseased and healthy flocks were sequenced. These newly assembled genomes were compared to 141 publicly available C. perfringens genome assemblies by aligning known toxin and virulence protein sequences in the assemblies using BLASTp. The genes for alpha toxin, collagenase, a sialidase (nanH), and alpha-clostripain were present in at least 99% of assemblies analyzed. In contrast, beta toxin, epsilon toxin, iota toxin, and binary enterotoxin of toxinotypes B, C, D, and E were present in less than 5% of assemblies analyzed. Additional sequence variants of beta2 toxin were detected, some of which were missing the leader or signal peptide sequences and are therefore likely not secreted. Some pore-forming toxins involved in intestinal diseases were host-associated: the netB gene was only found in avian isolates, while netE, netF, and netG were only present in canine and equine isolates. Alveolysin was positively associated with canine and equine strains and only present in a single monophyletic clade. Strains from ruminants were not associated with known virulence factors and, except for the food-poisoning-associated clade, were present across the phylogenetic diversity identified to date for C. perfringens. Many C. perfringens strains associated with food poisoning lacked the genes for hyaluronidases and sialidases, which are important for attaching to and digesting complex carbohydrates found in animal tissues. Overall, the diversity of virulence factors in C. perfringens makes this species capable of causing disease in a wide variety of hosts and niches.
INTRODUCTION
Clostridium perfringens is a Gram-positive, facultatively anaerobic bacterium that is a normal inhabitant of the soil as well as the gastrointestinal tracts of healthy animals. However, C. perfringens is also an opportunistic pathogen known for its ability to cause gas gangrene/clostridial myonecrosis of the skin (Buboltz and Murphy-Lavoie, 2020) as well as food poisoning in humans that costs the United States approximately $343 million annually (ERS-USDA, 2014). In cattle, it can cause hemorrhagic bowel syndrome (HBS), enterotoxaemia, and abomasitis (Nowell et al., 2012; USDA, 2018; Diancourt et al., 2019). In poultry, it causes necrotic enteritis (NE), a disease that has increased with decreased antibiotic use in the poultry industry and results in approximately 2 billion US dollars in losses globally each year (Van der Sluis, 2000). Other enteric diseases in which C. perfringens is implicated are canine acute hemorrhagic diarrhea syndrome (AHDS) and foal necrotizing enteritis (FNE; Gohari et al., 2015).
The first C. perfringens genome sequence, published in 2002, greatly expanded our understanding of the vast array of virulence genes and toxins (Shimizu et al., 2002). In 2018, the toxin typing scheme for C. perfringens was expanded to include the pore-forming toxin NetB, shown to be relevant to NE (Rood et al., 2018). The toxinotyping scheme is based on the presence of alpha toxin, beta toxin, epsilon toxin, iota toxin, enterotoxin, and NetB toxin. These toxins are used for typing but are not the only factors important to disease, as C. perfringens is known to produce multiple additional toxins and virulence factors (Revitt-Mills et al., 2015; Kiu and Hall, 2018).
There are other enzymes that may not be essential for disease but contribute to virulence. Some of these are proteases that degrade proteins into available forms of amino acids. Clostridium perfringens is unable to synthesize many amino acids de novo and thus must obtain them from the environment (Sebald and Costilow, 1975; Shimizu et al., 2002). These proteases are likely important for degradation of host tissue, which enables C. perfringens both to obtain nutrients and to facilitate toxin diffusion (Matsushita et al., 1994; Awad et al., 2000). Carbohydrate-active enzymes (CAZymes) are also important for the virulence of C. perfringens. For instance, the release of sialic acid by C. perfringens sialidases has been shown to increase the activity of toxins (alpha and epsilon), increase adhesion to host cells by altering the charge of the cell surface, and serve as a carbon source (Severi et al., 2007; Almagro-Moreno and Boyd, 2009; Chiarezza et al., 2009; Li et al., 2011, 2016; Therit et al., 2015; Juge et al., 2016; McClane and Shrestha, 2020; Wang, 2020).
Previous genome sequencing studies of clinical C. perfringens strains from equine, canine, and poultry sources have revealed specific host-associated virulence factors (Gohari et al., 2017; Lacey et al., 2018). Strains from human, food, environmental, and ruminant sources were also included in comparative genome studies (Kiu et al., 2017; Lacey et al., 2018); however, only four ruminant isolates were available in public databases. To increase the diversity of sequenced C. perfringens genomes and improve our understanding of potential host-related virulence factors, 22 C. perfringens strains isolated from healthy and diseased poultry flocks and 22 C. perfringens strains isolated from dairy cow intestinal tracts with HBS were sequenced. These genomes were compared to 141 publicly available genomes and analyzed for the presence of the major known virulence factors to ascertain associations with hosts and diseases and to determine evolutionary relationships.
Strain Isolation
Intestinal tracts or fecal samples were obtained in the United States from commercial broiler operations and dairy farms. All animal facilities were operated under the standards for humane care and treatment of commercial animals set in the Animal Welfare Act (AWA; USDA, 2020) and the National Dairy Farmers Assuring Responsible Management animal care program (National Milk Producers Federation Board of Directors, 2019). Live broilers were obtained from flocks during NE outbreaks and from healthy flocks. The broilers were sacrificed on farm by cervical dislocation in accordance with the integrator's animal welfare practices. The gastrointestinal tracts from the duodenal loop to the cloaca were removed, placed into sterile Whirl-pak® bags (B01297, Nasco, Fort Atkinson, WI, United States), and sent to the laboratory in Waukesha, WI overnight, on ice. For each broiler, 6 cm sections of the duodenum, jejunum, and ileum were dissected, and luminal contents were removed by rinsing with sterile 0.1% peptone (Bacto™ Peptone, Becton, Dickinson and Company, Sparks, MD, United States). The three sections from each bird were combined in a filtered Whirl-pak® bag (B01348, Nasco, Fort Atkinson, WI, United States). Fecal grabs or the infected portion of the gastrointestinal tract (showing discoloration or blood clotting within the jejunum) from cows that had suffered a digestive death were collected within 6 h of death. The samples were placed in zip-top freezer bags and sent to the laboratory in Waukesha, WI overnight, on ice.
Intestinal tracts or fecal samples were diluted 1:9 with sterile 0.1% peptone and masticated at 300 rpm for 1 min in a Stomacher (Model 400 circulator, Seward, England). Serial dilutions prepared from the filtered side of the Whirl-pak® bags were pour-plated in duplicate with tryptose sulfite cycloserine (TSC) agar (Thermo Fisher Scientific, Waltham, MA, United States) and incubated at 37°C with anaerobic gas packs (R681001, Remel, Lenexa, KS) overnight. Up to 20 representative isolates per sample were grown in Reinforced Clostridial Medium (Thermo Fisher Scientific, Waltham, MA, United States) before storage at −80°C.
Strain Selection
In general, one isolate per animal was included, and isolates from the same animal were included only if they produced differential randomly amplified polymorphic DNA (RAPD) typing banding patterns (Power, 1996). Primers and PCR conditions were as described previously (Baker et al., 2010), with the only difference being that amplicon fragments were separated on a 5300 Fragment Analyzer System (Agilent, Santa Clara, CA, United States). Nineteen C. perfringens isolates were from broiler chicken intestinal samples collected during NE outbreaks; however, the presence of NE lesions was not recorded for these intestinal tracts. Three isolates were from healthy broiler chicken intestinal samples. Twenty-one isolates were from dairy cow intestinal samples with HBS, and one isolate was from a fecal sample of a dairy cow with HBS.
Genome Sequencing and Assembly
RNA-free DNA was isolated using a phenol-chloroform method with RNase treatment and precipitated with ethanol. Genomic DNA integrity was evaluated on a 0.75% agarose gel and quantified using Qubit (Thermo Fisher Scientific, Waltham, MA, United States). The 16S rRNA gene was PCR amplified and Sanger sequenced to confirm identity. Shotgun libraries were prepared with Nextera Flex kits (Illumina, San Diego, CA, United States) and sequenced for 251 cycles from each end on a MiSeq using a MiSeq 500-cycle sequencing kit v3 (Illumina, San Diego, CA, United States), or sequenced for 151 cycles from both ends on an iSeq 100 using iSeq 100 i1 Reagent (Illumina, San Diego, CA, United States). For some genomes, shotgun libraries were prepared with the Hyper Library Construction Kit (Kapa Biosystems, Wilmington, MA, United States) and sequenced for 300 cycles from each end on a MiSeq using a MiSeq 600-cycle sequencing kit v3 (Illumina, San Diego, CA, United States). All reads were demultiplexed using bcl2fastq Conversion Software (Illumina, San Diego, CA, United States). Draft genome assemblies were generated using SPAdes 3.13.1 with default parameters (Bankevich et al., 2012). Reads were aligned to genome assemblies with bwa mem v0.7.17 (Li and Durbin, 2009). SAM files were converted to BAM files with samtools, and coverage was calculated using bedtools (Quinlan and Hall, 2010).
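The assembly and coverage steps above are all command-line tools, so they are typically chained in a small driver script. A minimal orchestration sketch is shown below; the file names are placeholders and the exact flags beyond those named in the text are assumptions, not the authors' actual pipeline.

```python
import subprocess

def run(cmd, **kw):
    """Run a shell command and fail loudly if it exits non-zero."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True, **kw)

r1, r2, outdir = "strain1_R1.fastq.gz", "strain1_R2.fastq.gz", "strain1_spades"

# 1. Draft assembly with SPAdes, default parameters (as in the text).
run(["spades.py", "-1", r1, "-2", r2, "-o", outdir])
asm = f"{outdir}/contigs.fasta"

# 2. Align the reads back to the draft assembly with bwa mem.
run(["bwa", "index", asm])
with open("strain1.sam", "w") as sam:
    run(["bwa", "mem", asm, r1, r2], stdout=sam)

# 3. Sort/convert SAM to BAM with samtools.
run(["samtools", "sort", "-o", "strain1.bam", "strain1.sam"])
run(["samtools", "index", "strain1.bam"])

# 4. Per-base depth with bedtools; the mean depth approximates fold coverage.
with open("strain1.cov.bedgraph", "w") as bg:
    run(["bedtools", "genomecov", "-ibam", "strain1.bam", "-bga"], stdout=bg)
```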
Bioinformatic Analysis
Draft genome assemblies were compared with all 141 publicly available C. perfringens genome assemblies from NCBI RefSeq as of February 25, 2020. All available metadata for genomes were collected, and host information was categorized into relevant groups to improve statistical power (e.g., chicken and turkey were classified as avian; Supplementary Table 1). Assembly quality was evaluated with QUAST (Gurevich et al., 2013). A maximum likelihood tree was generated by performing SNP calling on genome assemblies with CSI Phylogeny using the reference strain C. perfringens ATCC 13124 (Kaas et al., 2014). The phylogenetic tree was visualized and annotated using iTol v5.6.2 (Letunic and Bork, 2016). Genomes were annotated using Prokka v1.14.6 (Seemann, 2014). A BLAST protein database was made from virulence factor protein sequences (Supplementary Table 2) using makeblastdb (BLAST+ v2.9.0). Prokka protein annotations were aligned to the protein database using BLASTp (BLAST+ v2.9.0, -evalue 1 -max_target_seqs 1 -qcov_hsp_perc 50; Camacho et al., 2009). Both consensus and atypical variants of beta2 were used. These parameters set a threshold of 50% alignment length, which is appropriate for draft genome assemblies to reduce false negatives. We chose a threshold of 80% identity to allow for the detection of variants. For known variants (PfoA-Alv and NetB-NetF), we increased the percent identity threshold to 90% to distinguish between these closely related proteins. A binary matrix of virulence gene presence or absence was created from the BLASTp results. Beta2 protein sequences were analyzed for signal peptide content using SignalP v5 (Armenteros et al., 2019) and aligned with Clustal Omega v1.2.4 (Sievers et al., 2011). In silico PCRs with previously published beta2 primers were performed using the -search_pcr function of USEARCH v10.0.240 with the following settings: -strand both -maxdiffs 2 -minamp 30 -maxamp 2000 (Edgar, 2010).
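A sketch of how BLASTp hits could be reduced to the binary presence/absence matrix under the thresholds described (50% query coverage enforced at search time, 80% identity generally, 90% for the PfoA/Alv and NetB-family pairs) is given below. The column layout assumes BLAST tabular output (-outfmt 6), and the assumption that subject IDs equal gene names is illustrative, not taken from the study's scripts.

```python
import csv
from collections import defaultdict

# Genes where 90% identity is required to separate closely related proteins
STRICT = {"pfoA", "alv", "netB", "netE", "netF", "netG"}

def presence_matrix(blast_files):
    """blast_files: {strain_name: path to BLASTp -outfmt 6 output}."""
    matrix = defaultdict(set)
    for strain, path in blast_files.items():
        with open(path) as fh:
            for row in csv.reader(fh, delimiter="\t"):
                # -outfmt 6 default columns: qseqid sseqid pident length ...
                gene, pident = row[1], float(row[2])  # assumes sseqid is the gene name
                threshold = 90.0 if gene in STRICT else 80.0
                if pident >= threshold:
                    matrix[strain].add(gene)
    return matrix

def to_binary(matrix, genes):
    """Flatten hit sets into 0/1 rows, one per strain, for downstream statistics."""
    return {s: [int(g in hits) for g in genes] for s, hits in matrix.items()}
```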
The virulence gene presence within a category and the associated lift (Tufféry, 2011) were computed for each category. Lift is a common measure in data mining algorithms to quantify the strength of pairwise association between outcomes, or possibly between sets of outcomes, where outcomes are defined in terms of presence or absence. The lift is defined as the rate of joint occurrence of the pair of outcomes in the dataset relative to the product of the rate of each outcome, i.e., for outcomes X and Y, lift = P(X and Y) / (P(X) × P(Y)). The lift provides an indication of the relative magnitude of presence or absence of the gene within a category as compared to the presence across all isolates. Lift values greater than 1 indicate a higher presence in the category compared to the presence in all strains and, conversely, lift values less than 1 indicate lower prevalence in the category. A 2 × 2 contingency table was created for each virulence gene (present/absent) and category (yes/no by strain) and tested for significant association using Fisher's Exact test for independence (Agresti, 2002). A Bonferroni adjustment was implemented to provide an overall 0.05 error rate across all comparisons. All computations were performed using R version 3.5.0.
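The paper performed these computations in R; purely for illustration, the same lift and Fisher/Bonferroni calculation can be written in a few lines of Python, as in the sketch below. The counts in the toy example are invented.

```python
from scipy.stats import fisher_exact

def lift(n_both, n_gene, n_cat, n_total):
    """lift = P(gene and category) / (P(gene) * P(category)); counts must be > 0."""
    return (n_both / n_total) / ((n_gene / n_total) * (n_cat / n_total))

def test_association(n_both, n_gene, n_cat, n_total, n_tests):
    # 2x2 table: rows = gene present/absent, columns = in category yes/no
    a = n_both                      # gene present, in category
    b = n_gene - n_both             # gene present, not in category
    c = n_cat - n_both              # gene absent, in category
    d = n_total - n_gene - c        # gene absent, not in category
    _, p = fisher_exact([[a, b], [c, d]])
    alpha = 0.05 / n_tests          # Bonferroni-adjusted significance threshold
    return p, p < alpha

# Toy example: a gene seen in 20 of 61 avian strains, and in 25 of 185 strains overall
print(lift(20, 25, 61, 185))
print(test_association(20, 25, 61, 185, n_tests=24 * 7))
```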
Overview of C. perfringens Genome Assemblies
Between 199,762 and 3,020,471 paired reads were generated for each of the 44 strains sequenced, resulting in 23- to 433-fold coverage for each strain (Supplementary Table 3). Assembly statistics were generated using QUAST, and the number of coding sequences was counted from Prokka annotations (Supplementary Table 1). The minimum and maximum number of contigs, total length, percent GC, N50, and L50 all fell within the range for RefSeq strains, except the length of strain CHD30685R, which was 33 kb shorter than the shortest RefSeq assembly.
The isolate metadata are shown in Supplementary Table 1. The largest group was of avian strains (n = 61) which were all chicken associated, except for one strain isolated from a turkey, with 49 of these strains from flocks experiencing NE. A total of 34 isolates were isolated from ruminants: 25 from cattle, four from lamb and sheep, four from llamas, and one strain was isolated from a bison. The NCBI database contained 29 human-associated strains, most (n = 12) of which had no known disease associations, while the rest were from healthy humans (n = 5), necrotizing enterocolitis (n = 3), food poisoning (n = 5), gas gangrene (n = 2), diarrhea (n = 1), necrotizing enteritis (NCTC8081; Deguchi et al., 2009), and an ICU patient (n = 1). There were 17 canine isolates, 16 of which were isolated from canine AHDS. The 16 equine isolates were all isolates from FNE. The 15 food-associated isolates have very little disease information deposited with them but are likely food poisoning strains. Lastly, five environmental isolates from river water, soil, or sludge, three porcine intestinal disease-associated isolates, and one mouse isolate were downloaded from the NCBI database. Four of the strains had no host or disease metadata.
Toxins and Virulence Factors
The toxin and virulence factor profiles were determined using BLASTp for all 185 C. perfringens strains used in the analysis (Supplementary Table 1). The prevalence of each gene varied from less than 1% to 100% (Table 1). Alpha toxin (plc), collagenase (colA), the small intracellular sialidase (nanH), and alpha-clostripain (ccp) were highly conserved and present in at least 99% of assemblies analyzed. All 185 alpha toxin protein sequences were at least 96% identical to that of the type strain ATCC 13124, although it should be noted that the JFP992 sequence was split over two contigs and the predicted alpha toxin protein sequence for UDE_95-1372 was truncated at the N-terminus. Very few strains encoded beta toxin (3%), epsilon toxin (3%), or iota toxin (2%).
Toxinotypes
We classified the strains into toxinotypes using the BLASTp toxin profiles. Approximately 94% of the strains analyzed were type A, F, or G (Table 2). Toxinotype A encodes alpha toxin, while the other typing toxins, other than cpe in some strains, are all plasmid encoded. Toxinotype A strains comprised 43.8% of the strains and were present in all host categories. Toxinotype F strains encode enterotoxin (cpe) either on the chromosome or on plasmids and were the predominant toxinotype in isolates from canine, equine, and food sources. One avian strain and seven of the 29 human isolates were also Toxinotype F. Toxinotype G strains encode netB, which is plasmid-borne (Lepp et al., 2010, 2013) and was only present in avian isolates and in 76% of the NE-associated isolates. The NetB pore forms in chicken hepatocytes and red blood cells of duck, chicken, and goose, and is important for the development of NE (Keyburn et al., 2008, 2010; Yan et al., 2013; Lacey et al., 2018; Yang et al., 2019b), consistent with epidemiological studies (Martin and Smyth, 2009; Tolooe et al., 2011; Yang et al., 2018). In a challenge model, two of three netB-positive strains produced disease at a high rate (79-89%), but a netB-negative strain still affected 44% of challenged birds (Cooper and Songer, 2010). A necrotic enteritis induction model would be necessary to determine if the 12 NE-associated strains that did not encode netB are commensals or can cause disease. Strains of Toxinotypes B, C, D, and E each made up 3% or fewer of the total strains analyzed. These toxinotypes are acknowledged to be associated with many livestock diseases (Songer, 1996; Billington et al., 1998; Filho et al., 2009; Munday et al., 2019) and are incorporated into veterinary vaccines (Ferreira et al., 2016, 2018), and yet very few have been sequenced.
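Assigning a toxinotype from a gene-presence profile amounts to a lookup over the six typing toxins. The minimal classifier below is a sketch of our reading of the 2018 scheme, not code from the study; in particular, the precedence rules (e.g., a strain carrying both iota toxin and cpe is still type E) are our own simplification.

```python
def toxinotype(genes):
    """genes: set of typing-toxin genes detected in one strain.
    Expected members: plc (alpha), cpb (beta), etx (epsilon),
    iap/ibp (iota), cpe (enterotoxin), netB."""
    has = genes.__contains__
    iota = has("iap") or has("ibp")
    if has("cpb") and has("etx"):
        return "B"
    if has("cpb"):
        return "C"
    if has("etx"):
        return "D"
    if iota:
        return "E"  # toxinotype E strains may also carry a cpe variant
    if has("cpe"):
        return "F"
    if has("netB"):
        return "G"
    # plc is nearly universal; a strain with only plc is type A
    return "A" if has("plc") else "untypeable"

print(toxinotype({"plc", "netB"}))        # -> G
print(toxinotype({"plc", "iap", "cpe"}))  # -> E
```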
Beta2 Toxin Variants
Beta2 toxin is cytotoxic for intestinal cells, and there is a strong association between C. perfringens strains that encode cpb2 and gastrointestinal diseases in pigs, although there are at least two variants of the beta2 toxin and this diversity is not always acknowledged (Gibert et al., 1997; Garmory et al., 2000; Waters et al., 2003; Fisher et al., 2005; Jost et al., 2005). To investigate the sequence variation between the consensus and atypical genes, as well as signal peptide variation, we classified the beta2 toxin sequences by amino acid identity and signal peptide content. After combining the results of the consensus and atypical beta2 toxin BLASTp searches, cpb2 was identified in 109/185 (59%) of strains analyzed, and one strain, JGS 1495, had both the consensus and atypical variants located on different contigs. Six types of beta2 were identified: five that have been described [three consensus (C) types and two atypical (A) types] and one novel type that we designated N1. Only one strain encoded the N1 type, 1001175st1_F9, a strain isolated from healthy human stool (Yang et al., 2019a). The consensus variant was divided into two types, C1 and C2, which are ~92% identical at the protein level. We further classified the beta2 sequences by signal peptide content and added a -tr designation in Figure 1 for those strains lacking a signal peptide. Of the consensus cpb2 sequences, two were the original consensus variant, C1, two were the C2 variant described in a 2005 publication (Fisher et al., 2005), and one was a C2-tr variant. Of the atypical beta2 toxin sequences, which are approximately 63% identical to the consensus variant, 64 (62%) were A1 and 39 (38%) were A1-tr. A representative from each of these six variants was selected for protein sequence alignment (Figure 2).
Beta2 toxin disease associations are often of a specific type, and the presence of a signal peptide may play an important functional role; it is therefore important to acknowledge this in disease association studies. We performed in silico PCR to determine which types would have been detected in various publications (Supplementary Table 4). The PCR protocol in the original Cpb2 paper, used to associate cpb2 with intestinal disease in horses and piglets, would have detected only the C1 type (Gibert et al., 1997). Similarly, studies associating cpb2 and diarrhea in piglets would also have detected only the C1 type (Waters et al., 2003). In addition, there is an association between Cpb2 and autism spectrum disorder, and these studies used primers that would have detected the C1, C2, and C2-tr types, but not the atypical variants or the novel variant N1 (Garmory et al., 2000; Alshammari et al., 2020). In a study by Kircanski et al. (2012), in which Cpb2 protein levels were quantified in culture supernatants by Western blot, 95% of consensus isolates and 75% of atypical isolates were shown to express the protein. That study would have successfully identified C1 and A1 (and A1-tr), and possibly identified C2 (and C2-tr). The authors would have been able to distinguish between consensus and atypical variants, but not the presence of the signal peptide, potentially explaining why 25% of atypical and 5% of consensus C. perfringens isolates did not express beta2 toxin.
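The in silico PCR described in the methods (USEARCH -search_pcr, up to two mismatches per primer, amplicon length 30-2000 bp) can be approximated with a short scan like the one below. The primer and template sequences shown are placeholders, not the published cpb2 primers, and the sketch searches only the forward strand for brevity (the study used -strand both).

```python
def revcomp(s):
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def sites(template, primer, maxdiffs=2):
    """Start positions where primer matches template with <= maxdiffs mismatches."""
    n, m = len(template), len(primer)
    return [i for i in range(n - m + 1)
            if sum(a != b for a, b in zip(template[i:i + m], primer)) <= maxdiffs]

def in_silico_pcr(template, fwd, rev, minamp=30, maxamp=2000):
    """Return amplicon (start, end) pairs, mimicking the USEARCH settings used."""
    hits = []
    rc = revcomp(rev)  # reverse primer binds the bottom strand
    for i in sites(template, fwd):
        for j in sites(template, rc):
            amp = j + len(rc) - i
            if minamp <= amp <= maxamp:
                hits.append((i, i + amp))
    return hits

# Placeholder primers and template, for illustration only
template = "ATGAAAAAGTTAGTT" * 40
print(in_silico_pcr(template, "ATGAAAAAG", "AACTAACTT"))
```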
Of the three strains in the present study that were porcine associated, one encoded a C1 type (NCTC 10719), one encoded both a C1 and an A1 type (JGS 1495), and one lacked beta2 toxin (JXJA17). Ruminant, canine, and equine assemblies encoded only atypical (primarily A1-tr) beta2 toxin. Chicken isolates primarily encoded the A1 type. In the first paper describing the atypical variant, Jost et al. (2005) noticed a similar pattern: atypical variants were more often identified in C. perfringens strains isolated from livestock other than pigs and were often not expressed.
These findings add a new dimension to this previous research, revealing that the associations are often with a specific Cpb2 type. Future studies that take this variation in sequence and signal peptide content into account are likely to see stronger associations between Cpb2 and various diseases.
Clostridium perfringens Phylogeny
Phylogenetic relationships between the strains were determined by CSI Phylogeny (Kaas et al., 2014), which analyzes the SNPs across reads using a reference genome. Clostridium perfringens ATCC 13124, with 3,256,683 nucleotides, was used as the reference strain. The percentage of the reference genome that was covered by all isolates was 48.06%, with 1,565,015 positions found in all 185 genomes. The maximum likelihood tree generated is shown in Figure 1, with five clades labeled I through V. The reference strain ATCC 13124 was present in Clade I with 60 strains, which contained 43 of the 61 avian isolates, five equine, five ruminant, three human, one canine, and two environmental isolates.

FIGURE 1 | Maximum likelihood phylogenetic tree of C. perfringens genome assemblies determined by single nucleotide polymorphisms compared to the reference strain ATCC 13124. The reference strain and Clade I are shown in red. Clade II is the food poisoning associated clade and Clade V is the alveolysin clade. Host associations are shown on the inner ring, followed by specific virulence factors; the outer three rings indicate the beta2 variant, toxinotype, and health or disease association (if known) of each strain. The tree is rooted at the midpoint.
Alveolysin Clade
Alveolysin (alv) was the only toxin limited to a single clade, as all 35 strains that encoded alv were present in Clade V, confirming a previous study showing it was clade-specific (Kiu et al., 2019). Alveolysin is an understudied toxin of C. perfringens that is similar to perfringolysin (Kiu et al., 2019), both being cholesterol-dependent cytolysins, previously known as thiol-activated cytolysins (Billington et al., 2000). Gene duplications are frequent mutations in microbes (Reams and Roth, 2015), and we therefore hypothesize that alveolysin may have arisen from a gene duplication of perfringolysin followed by divergence during evolution, as the two toxins are similar (~79% similarity) and generally encoded as little as 5 kb apart, although lateral gene transfer cannot be ruled out.
Within Clade V is a sub-clade of 26 strains that contains clinical isolates associated with canine AHDS and FNE, all of which are type F. These strains appear almost clonal, yet they were isolated not only from different host species but also across multiple outbreaks between 1999 and 2014 in three different countries (Gohari et al., 2017). Of the 26 strains in the sub-clade, 23 (88%) encoded netE and netF. NetF toxin is very similar in structure to NetB, but it has only been identified in isolates from canine AHDS and FNE (Gohari et al., 2015, 2016, 2017). Also of interest within Clade V is that three of the four toxinotype E strains (Q061.2, a515.17, and a508.17) are present. These strains contain a variant iota toxin which is 84-87% similar to the typical iota toxin sequence. The other toxinotype E strain (JGS 1987) is outside this clade and has the typical iota toxin sequence. This iota toxin variant has been identified in other C. perfringens strains that lack public genome assemblies (PB-1, 3441, TGII002, and TGII003; Miyamoto et al., 2011). The strains in that study and each of the three variant strains in the present study also have a variant enterotoxin protein sequence (~96% similar to the other 53 sequences) located on the same plasmid as the iota toxin genes, indicating evolutionary divergence of the plasmid within this clade. Further studies to obtain complete plasmid sequences need to be done to validate this supposition.
The only strain to encode binary enterotoxin (becA and becB), Q135.2 (IQ3), is also in Clade V and was isolated from a fecal sample obtained from a healthy child (Kiu et al., 2019). The becA and becB genes are plasmid-encoded and seem to be rare (Kiu et al., 2019; Matsuda et al., 2019).
Further research is needed of the virulence potential of the strains in Clade V due to the presence of alveolysin, an understudied chromosomal toxin as well as several variant and rare toxins carried on plasmids.
Food Poisoning Associated Clade
Fourteen of the 15 strains isolated from foods were present in Clade II. Seventeen of the human isolates, including three from food poisoning cases and three from necrotizing enterocolitis, were also present in Clade II, as were five environmental and five avian isolates. The 20 strains in which chromosomal cpe genes were detected were present in a sub-clade of 27 strains. Ten of these isolates, from both food and humans, appear clonal and were submitted in the same bioproject (PRJNA436899), and are therefore most likely from the same clinical outbreak. Experimental evidence suggests that strains carrying chromosomal cpe are more heat-tolerant, allowing them to survive better if food is undercooked (Sarker et al., 2000). Our results confirm a previous study showing that strains that carry cpe chromosomally are related and that they lack the pfoA gene (Deguchi et al., 2009). The majority (23) of these 27 strains also lacked the hyaluronidase and sialidases that enhance a strain's ability to colonize the intestinal tract.
The alpha toxin protein sequences in the sub-clade of 27 were divergent, with less than 97% similarity to the sequence from the type strain, ATCC 13124. The alpha toxin gene is located near the origin of replication, which is evidence of its importance, as it is the first area to be replicated during cell division and is generally highly conserved; thus, genetic changes in it are likely to reflect evolution (Rood and Cole, 1991; Uzal et al., 2010). This chromosomal variation indicates that these strains form a distinct evolutionary lineage which may be less adapted to the host environment and more opportunistic than other strains. Although necrotizing enterocolitis is not associated with food poisoning, the disease most often occurs in premature infants with immature gastrointestinal tract microbiota. These strains appear more likely to be transiently present in the gastrointestinal tract, whereas the host-adapted strains cause more lethal diseases in adult animals.
Host and Environmental Associations
We determined significant associations of virulence genes with categorical host metadata using Fisher's Exact test for independence, and these data are shown in Figure 3 together with the lift, which provides an indication of the relative magnitude of presence or absence of the gene within a category as compared to the presence across all isolates. In comparing avian strains (n = 61) to the other categories of isolates, there was a significantly higher proportion of isolates with netB, cpb2-A1, tpeL, nanJ, nagJ, and nagH. Avian strains showed lower frequencies of cpb2-A1-tr, cpe, netE, netF, and alv. Ruminant strains (n = 34) showed lower prevalence for netB and cpe. Canine strains (n = 17) and equine strains (n = 16) showed higher prevalence for alv, cpb2-A1-tr, cpe, netE, netF, and netG. Food (possibly food poisoning) strains were positively associated with cpe and showed lower prevalence for pfoA, two sialidases (nanI, nanJ), and four hyaluronidases (nagH, nagI, nagK, and nagL). There were no significant associations for genes in human strains (n = 29) or environmental strains (n = 5). Unknown (n = 4), porcine (n = 3), and mouse (n = 1) strains were not evaluated for associations.
Our results confirmed previous data that netB is associated with avian strains (Keyburn et al., 2008, 2010; Lepp et al., 2010, 2013; Lacey et al., 2018). We only found one report of netB being detected in species other than poultry, and that was in a ruminant isolate (Martin and Smyth, 2009). The other toxin gene associated with poultry is tpeL, which was also detected in ruminant and porcine strains. TpeL glycosylates cell signaling proteins, resulting in apoptosis (Guttenberg et al., 2012; Schorch et al., 2014; Nagahama et al., 2015), and has been shown to be responsible for increased NE pathogenicity (Coursodon et al., 2012; Shojadoost et al., 2012; Gu et al., 2019). Our data indicate that the A1 beta2 toxin variant with the signal peptide is associated with avian strains, although this variant is detected in strains from other hosts too.
As in previous reports, the pore-forming toxins netE, netF, and netG are associated with canine and equine strains (Gohari et al., 2016, 2017, 2020; Sindern et al., 2019), and these toxins were not detected in any other strains. These canine and equine strains are unique among the diversity of strains from other hosts and environments. They are present in Clades I and V, and related strains appear almost clonal even though they are from distinct hosts and from epidemiologically unrelated clinical isolates collected in the United States, Canada, and Switzerland between 1999 and 2014 (Gohari et al., 2017). Plasmid-borne enterotoxin was present across both clades, and the predominant beta2 variant in these strains was the A1 variant without the signal peptide. Alveolysin, associated with equine and canine strains, was also present in strains from other hosts in Clade V. Challenge assays, either in vitro or in vivo, may reveal what it is about these strains or the two hosts that causes an almost clonal population to be present across countries and disease outbreaks. There were no positive associations of any of the investigated toxins or virulence factors with strains from ruminants. Ruminant strains were defined by the absence of enterotoxin and netB genes. Previous experimental induction of disease in a calf ileal loop model indicated that diverse C. perfringens strains from ruminant, chicken, and human origins could cause necrohaemorrhagic lesions, and alpha toxin and perfringolysin were sufficient to cause lesions in this model. Novel toxin genes were not detected in the genome of a bovine clostridial abomasitis isolate, strain F262; however, the strain did produce perfringolysin O, alpha-toxin, and beta2-toxin (Nowell et al., 2012). Clostridium perfringens Type D is associated with ruminant enterotoxaemia, mostly in lambs, but also in sheep and goats (Popoff, 2011); however, epsilon toxin was not commonly present in the sequenced genomes. To date, therefore, no specific toxins or virulence factors are associated with the 26 sequenced clinical C. perfringens strains of ruminant origin; however, this may depend on the type of disease, and the 22 strains sequenced in this study were all associated with HBS. There is genetic diversity in the strains from ruminants, as they are present in all clades except for Clade II; however, 20 of the 32 ruminant strains were present in Clade III. Therefore, further analysis of these genomes may reveal genes promoting colonization or growth in the intestine that could affect pathogenesis in ruminants.
Our results have confirmed previous data that certain toxin genes are host-associated, such as netB in avian strains (Keyburn et al., 2008, 2010; Lepp et al., 2010, 2013; Lacey et al., 2018) and netE, netF, and netG in canine and equine strains (Gohari et al., 2016, 2017, 2020; Sindern et al., 2019). In addition, our data indicate that there are differences in beta2 toxin variants between hosts, with the A1 variant with the signal peptide being associated with avian strains and the A1 variant without the signal peptide associated with canine and equine strains. However, considering the role that C. perfringens has in multiple livestock and human diseases, there is still limited data on the virulence factors and host specificity of these pathogens.
Clostridium perfringens is found in a wide variety of hosts and environments; however, most of the strains selected for study and genome sequencing are associated with a handful of diseases and may not represent the diversity present in both hosts and the environment. More specifically, few strains acknowledged to be associated with livestock diseases, such as Types B, C, D, and E, have been sequenced. Vaccination efforts for livestock have focused on these toxinotypes (Ferreira et al., 2016), which may be why they are absent from recent studies; however, strains should be present in culture collections that could be sequenced to aid in understanding this pathogen. A better understanding of this opportunistic pathogen, a member of the gut microbiota, can lead to more targeted preventative measures to reduce factors that can lead to overgrowth and clinical diseases.
CONCLUSION
This is the most comprehensive comparative genomics study of C. perfringens virulence factors to date. Only four of the 24 virulence factors were highly conserved, being present in at least 99% of assemblies analyzed. Types A, F, and G represent 93% of sequenced isolates, while Types B, C, D, and E are underrepresented in publicly available genome sequences even though they are associated with many livestock diseases. The sequence variation of beta2 toxin was expanded to include a new beta2 toxin (N1), and primers to detect beta2 sequence variants should be redesigned to detect all variants and identify the presence of the cpb2 signal peptide, although PCR results should ideally be compared with protein expression data, especially from non-porcine isolates. Although avian strains were not all associated with netB, those isolated from NE outbreaks were more likely to contain netB, confirming previous studies. The plasmid cpe, netE, and netF genes were again confirmed to be associated with equine and canine strains. We show that alveolysin, a recently described protein that we hypothesize arose through a gene duplication of perfringolysin, is also associated with these strains and is only present in a single monophyletic clade, Clade V. A distinct evolutionary lineage of C. perfringens associated with food poisoning lacks perfringolysin, hyaluronidases, and sialidases, which we hypothesize are important host-associated genes for colonization.

FIGURE 3 | The percentage of isolates within each category for each of the individual toxins and virulence factors; significant associations with a category are marked with a red circle. The number of strains per category is as follows: avian (61), ruminant (34), human (29), canine (17), equine (16), food (15), and environmental strains (5). Unknown (4), porcine (3), and mouse (1) strains are shown but were not evaluated for associations. Cells are colored by lift: values greater than 1 indicate a higher presence in the category compared to the presence in all strains and, conversely, lift values less than 1 indicate lower prevalence in the category.
In future studies, we will perform pan genome analysis to potentially identify genes other than the known toxin and virulence genes that may be host-associated. Due to the importance of plasmids in C. perfringens pathogenicity it would be beneficial to obtain complete plasmid sequences for comparative purposes and determine co-location of virulence factors. Most of the strains selected for genome sequencing are associated with disease and may not be representative of the diversity existing in both the host and the environment, therefore, further effort should be made to isolate and sequence a wider diversity of strains.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih.gov/, PRJNA686134.
ETHICS STATEMENT
Ethical review and approval was not required for the animal study because broilers were sacrificed on farm by cervical dislocation in accordance with the integrator's animal welfare practices. Dairy cows died on farm. Written informed consent was obtained from the owners for the participation of their animals in this study.
AUTHOR CONTRIBUTIONS
RG, TR, and AS contributed to conception and design of the study. RG sequenced, assembled, and analyzed the genomes, and wrote the first draft of the manuscript. All authors contributed to the article and approved the submitted version. | 2021-06-09T13:32:38.076Z | 2021-06-09T00:00:00.000 | {
"year": 2021,
"sha1": "f294e2bab3b12912f20127cdad65fe494e1666ce",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2021.649953/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f294e2bab3b12912f20127cdad65fe494e1666ce",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3965141 | pes2o/s2orc | v3-fos-license | Erinacine A-Enriched Hericium erinaceus Mycelium Produces Antidepressant-Like Effects through Modulating BDNF/PI3K/Akt/GSK-3β Signaling in Mice
Antidepressant-like effects of an ethanolic extract of Hericium erinaceus (HE) mycelium enriched in erinacine A were examined in depressive mice challenged by repeated restraint stress (RS). HE at 100, 200 or 400 mg/kg body weight/day was orally given to mice for four weeks. After two weeks of HE administration, all mice except the control group underwent a 14-day RS protocol. Stressed mice exhibited various behavioral alterations, such as increased immobility time in the tail suspension test (TST) and forced swimming test (FST), and changes in the number of entries into the open arms (POAE) and the time spent in the open arms (PTOA). Moreover, the levels of norepinephrine (NE), dopamine (DA) and serotonin (5-HT) were decreased in the stressed mice, while the levels of interleukin (IL)-6 and tumor necrosis factor (TNF)-α were increased. These changes were significantly reversed by the administration of HE, especially at the dose of 200 or 400 mg/kg body weight/day. Additionally, HE was shown to activate the BDNF/TrkB/PI3K/Akt/GSK-3β pathways and block NF-κB signaling in mice. Taken together, erinacine A-enriched HE mycelium reversed the depressive-like behavior caused by RS, accompanied by modulation of monoamine neurotransmitters as well as pro-inflammatory cytokines, and regulation of BDNF pathways. Therefore, erinacine A-enriched HE mycelium could be an attractive agent for the treatment of depressive disorders.
Introduction
Depression, a psychiatric disorder characterized by low self-esteem, altered mood, hopelessness, reduced interest/pleasure in daily activities and persistent thoughts of death or suicide, has become a significant global health issue and economic burden [1]. The lifetime prevalence of depression approaches 20% of the population, and depression was expected to be the second leading cause of incapacity worldwide by the year 2020, based on data from the World Health Organization [2]. The etiopathology as well as the precise mechanisms underlying depressive disorders are still far from understood, since depression is a highly complicated psychiatric illness. Stress, which can induce neuroinflammation, mitochondrial damage, neuroplastic deficits and changes in intracellular signaling pathways, has been implicated as a major determinant of the onset of depression, and may provide a novel target for preventing neurodegeneration [3,4]. Animal models and clinical studies on the link between stress and depressive disorders suggest that antioxidant agents can reduce oxidative stress by scavenging reactive oxygen species (ROS) and reactive nitrogen species (RNS), which further protects against neuronal damage induced by stress [5][6][7][8][9]. In addition, stress-induced depression has been shown to alter the levels of monoamine neurotransmitters such as serotonin (5-HT), along with behavioral changes in animal models [10,11]. Numerous studies have reported that normalizing disturbed monoaminergic neurotransmission is associated with treating depressive disorders [12][13][14]. Furthermore, a growing body of evidence has demonstrated that stress negatively regulates the level of brain-derived neurotrophic factor (BDNF), which may contribute to the impairment of dendritic plasticity and hippocampal neurogenesis and be responsible for neuronal damage and the onset of depression [15][16][17]. Although many pharmacotherapies are available nowadays, over 30% of depressed patients do not achieve a clinically appreciable improvement with current treatments. Significant limitations of conventional antidepressants include the slow onset of therapeutic action (weeks to months) and undesirable side effects such as nausea, diarrhea, migraine headache, sleep disturbance and sexual problems [18,19]. In view of the impact on depressed individuals, especially patients at risk of suicide, research focused on the discovery and development of agents with promising efficacy and fewer side effects is urgently needed.
Hericium erinaceus (HE), known as the Houtou mushroom in Chinese, has been used as food and folk medicine in several East Asian countries for centuries [20]. HE has been documented to display a wide range of beneficial properties, including anticancer, antimicrobial, antihyperglycemic, antioxidant and hypolipidemic activities, as well as immune modulation [21][22][23][24]. A group of diterpenoids isolated from the cultured mycelia of HE, namely the erinacines, have been demonstrated to be potential enhancers of nerve growth factor (NGF) biosynthesis in cultured astrocytes [25][26][27]. Increased production of NGF is correlated with proper neural growth and maintenance [28,29]. In particular, erinacine A has been reported to exhibit protective effects against ischemic injury, Parkinson's disease and Alzheimer's disease in vivo [30][31][32]. Therefore, erinacine A-enriched HE is attracting attention and may serve as a promising agent with neurotrophic activity and potential application in ameliorating neurodegenerative disorders.
Restraint stress (RS) has been extensively applied to induce a depression-like state for screening the effectiveness of antidepressant agents [33]. However, there are no quantitative data regarding the antidepressant-like activities of HE in a repeated RS-induced mouse model of depression. The aim of the present study was therefore to investigate the effects of erinacine A-enriched HE mycelium and reveal the possible mechanisms using an RS mouse model. To this end, behavioral alterations and the contents of monoamines, pro-inflammatory cytokines, and depression-related protein expression were assessed.
Results
The chromatograms generated by high-performance liquid chromatography (HPLC) and liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS) with positive and negative ionization modes of the ethanolic extract from mycelia of H. erinaceus are displayed in Figure 1A. Peak 2 was verified to be erinacine A (2), and the other three peaks were tentatively identified by comparison with prepared standards (kindly provided by Dr. CC Chen, HungKuang University, Taichung, Taiwan) as previously reported [32]. The chemical structures and mass spectral characteristics of the four major peaks are illustrated and described in Figure 1B and Table 1, respectively. The contents of these peaks were quantified from the established calibration curves, with erinacine A (2) present in the highest amount (5.0 mg/g dry weight) (Table 1). Table 1 notes: 1 The peak numbers were denoted as erinacine Q (1), erinacine A (2), erinacine C (3) and erinacine S (4) and are referred to in Figure 1; 2 The content of Hericium erinaceus (HE) extract was expressed in mg/g dry weight as the mean of three independent analyses.
To examine the antidepressant-like effect of HE treatment, the immobility times in the mouse tail suspension test (TST) and forced swimming test (FST) were measured and are shown in Figure 2A,C, respectively. The results indicated a significant anti-immobility effect elicited by treatment with HE at the doses of 200 and 400 mg/kg in the TST (p < 0.01) and FST (p < 0.01) as compared to the vehicle-treated stressed mice (RS group). In addition, HE at the doses of 100, 200 and 400 mg/kg increased the swimming time in the FST (p < 0.01, p < 0.001 and p < 0.001, respectively) as compared to the RS group (Figure 2B).
The ability of HE to modulate emotional reactivity in stressed mice was examined, and Table 2 shows the results of HE on the assayed parameters in the elevated plus maze over the 5-min test. The data showed a significant increase in the number of entries into the open arm (POAE) in stressed mice treated with medium and high doses (200 and 400 mg/kg) of HE (p < 0.01) as compared to the RS group. The percentage increases in the POAE were 21.1% and 24.1%, respectively. Furthermore, stressed mice treated with HE at 200 and 400 mg/kg showed a significantly increased time spent in the open arm (PTOA), by 22.4% and 22.1%, respectively, as compared to the RS group. No significant difference in the number of closed-arm entries (CAE) was observed among the groups. Values are mean ± SEM (n = 10 per group). Con: normal control mice, RS: mice received vehicle treatment followed by repeated restraint stress, RS + HEL: mice received low dose of HE (100 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEM: mice received middle dose of HE (200 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEH: mice received high dose of HE (400 mg/kg body weight) treatment followed by repeated restraint stress. *** p < 0.001 vs. the Con group; ## p < 0.01 and ### p < 0.001 vs. RS group. Significant differences between groups were determined using one-way ANOVA and Tukey's post-hoc test.
To exclude the possibility that the changes in behavior observed in the TST and FST were attributable to a false-positive effect, the responses of HE treatment on locomotor activities in mice were tested. Table 3 depicts the mean locomotor responses of the tested mice. The administration of vehicle or various doses of HE to repeated restraint-stressed animals did not give rise to any obvious changes in the numbers of crossings and rearings. On the other hand, the RS-alone group showed higher numbers of defecation, by ~82% (p < 0.01), and middle and high doses of HE reduced defecation significantly, by ~27%, as compared to the RS group (p < 0.05). The concentrations of norepinephrine (NE), dopamine (DA) and 5-HT were drastically reduced after repeated restraint stress in the vehicle-treated group (RS group) compared with the control group (p < 0.001) (Figure 3). Although a significant elevation of the NE level was found only with the high dose of HE treatment (p < 0.05), HE (100, 200 and 400 mg/kg) produced profound increases in DA levels in the hippocampal region (p < 0.001) as compared to the RS group (Figure 3A,B). Supplementation with medium and high doses of HE helped to revert the stress-induced 5-HT depletion (by about 81.6% and 92.5%, respectively) (Figure 3C). Values are mean ± SEM (n = 10 per group). Con: normal control mice, RS: mice received vehicle treatment followed by repeated restraint stress, RS + HEL: mice received low dose of HE (100 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEM: mice received middle dose of HE (200 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEH: mice received high dose of HE (400 mg/kg body weight) treatment followed by repeated restraint stress. *** p < 0.001 vs. the Con group; # p < 0.05 and ### p < 0.001 vs. RS group. Significant differences between groups were determined using one-way ANOVA and Tukey's post-hoc test.
The effect of HE on the concentrations of plasma cytokines is illustrated in Figure 4. The levels of interleukin (IL)-6 and tumor necrosis factor (TNF)-α were markedly elevated in repeated restraint stress-treated mice compared with the control group (p < 0.001). Supplementation with HE at 200 and 400 mg/kg significantly inhibited the stress-induced increases in IL-6 levels (p < 0.05 and p < 0.01, respectively), and treatment with HE at all doses drastically suppressed plasma TNF-α contents (p < 0.05, p < 0.01 and p < 0.01, respectively) as compared to the RS group. Values are mean ± SEM (n = 10 per group). Con: normal control mice, RS: mice received vehicle treatment followed by repeated restraint stress, RS + HEL: mice received low dose of HE (100 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEM: mice received middle dose of HE (200 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEH: mice received high dose of HE (400 mg/kg body weight) treatment followed by repeated restraint stress. *** p < 0.001 vs. the Con group; # p < 0.05 and ## p < 0.01 vs. RS group. Significant differences between groups were determined using one-way ANOVA and Tukey's post-hoc test.
To understand the molecular mechanism underlying the antidepressant-like effect of HE, the expression of BDNF, TrkB and PI3K signaling pathway proteins, with β-actin as a loading control, was examined in the hippocampus of mice (Figure 5). Repeated restraint stress decreased the expression levels of BDNF, TrkB and PI3K in mouse brain tissue compared to the control group. HE at the tested concentrations was effective in reversing this stress-induced downregulation. Western blotting data revealed that total Akt and GSK-3β expression did not change in any group. However, repeated restraint stress significantly downregulated phosphorylated Akt (Akt-p) and phosphorylated GSK-3β (GSK-3β-p) expression, and the stress-induced decreases in both proteins were prevented by treatment with HE in the hippocampus of mice. Values are means ± SEM of three independent experiments. Con: normal control mice, RS: mice received vehicle treatment followed by repeated restraint stress, RS + HEL: mice received low dose of HE (100 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEM: mice received middle dose of HE (200 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEH: mice received high dose of HE (400 mg/kg body weight) treatment followed by repeated restraint stress. * p < 0.05; ** p < 0.01 and *** p < 0.001 vs. the Con group; ## p < 0.01 and ### p < 0.001 vs. RS group. Significant differences between groups were determined using one-way ANOVA and Tukey's post-hoc test.
As a typical signal transduction pathway of pro-inflammatory cytokines, NF-κB and IκB expression was examined in stressed mice with HE treatment. As shown in Figure 6, significantly lower expression of NF-κB and IκB in the cytosol fraction of the hippocampus was observed in stressed mice, indicating that the nuclear factor had translocated into the nucleus and enhanced the production of inflammatory mediators. An increasing tendency of both protein expressions was detected with the treatment of HE, demonstrating that HE could block NF-κB-induced inflammation, in line with the plasma cytokine findings. Values are means ± SEM of three independent experiments. Con: normal control mice, RS: mice received vehicle treatment followed by repeated restraint stress, RS + HEL: mice received low dose of HE (100 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEM: mice received middle dose of HE (200 mg/kg body weight) treatment followed by repeated restraint stress, RS + HEH: mice received high dose of HE (400 mg/kg body weight) treatment followed by repeated restraint stress. ** p < 0.01 and *** p < 0.001 vs. the Con group; ### p < 0.001 vs. RS group. Significant differences between groups were determined using one-way ANOVA and Tukey's post-hoc test.
Discussion
Accumulating data suggest that stress plays an important role in the development and manifestation of depression [3,4], and restraint stress (RS) has been applied as a major inducer of a depression-like condition to verify the effectiveness of antidepressant agents [33]. The present study investigated the antidepressant-like effects of erinacine A-enriched HE mycelium in the RS mouse model. Based on the evidence that supplementation with HE decreased immobility times in the mouse TST and FST without affecting locomotor activity in the mouse open field test (OFT), we have demonstrated for the first time that HE exerts remarkable antidepressant-like effects in RS-induced depressive mice.
The TST and FST are among the most common tools used for evaluating antidepressant potential. In line with other reports, mice subjected to RS displayed significantly increased immobility in the TST and FST [36,37]. These behaviors were reversed by treatment with HE (at 200 or 400 mg/kg), indicating an antidepressant-like effect. The antidepressant activity of oral HE treatment was further confirmed by an increase in swimming time in the FST. Meanwhile, in the OFT, the numbers of crossings and rearings were not altered among groups, indicating that the anti-immobility effects of HE observed in the TST and FST were not attributable to changes in locomotor activity.
Modulating monoamine neurotransmitters, including NE, DA and 5-HT, has been recognized as a major target for elucidating the mechanisms underlying antidepressant-like effects. The present study demonstrated a significant decrease in hippocampal neurotransmitter levels following 14 days of restraint stress. Our results are in keeping with other studies showing that RS induces significantly decreased levels of biogenic amines [38]. Interestingly, HE treatment was effective in restoring these RS-induced changes in the hippocampus. In the current study, we found that the antidepressant-like effects of HE might stem from increasing the levels of hippocampal NE, DA and 5-HT, which is consistent with previous studies showing that some botanical extracts mediate antidepressant-like effects by increasing brain monoamines [39,40]. Thus, this result supports the conclusion that HE administration may lead to an antidepressant-like effect, reducing TST and FST immobility times through noradrenergic, dopaminergic and serotonergic modulation in RS mice. Therefore, we speculated that a possible mechanism underlying this activity is that erinacine A (the erinacine enriched in the HE extract) might act as a monoamine neurotransmitter receptor agonist or a monoamine reuptake inhibitor. This possibility needs to be verified in future investigations.
There is evidence to suggest that pro-inflammatory cytokines, including IL-1β, IL-6 and TNF-α, contribute to the onset and progression of depressive disorders [41]. Studies have pointed out that inflammation can activate signals that trigger the transition from inflammation to depression [42]. In fact, increased circulating levels of pro-inflammatory cytokines have been reported in stressed and depressed patients [43,44]. The present study substantiated the enhancement of pro-inflammatory cytokines in the RS-induced depressive mouse model: the levels of IL-6 and TNF-α were markedly elevated. Our data are consistent with other reports, which revealed that stressful life events and depressive symptoms are associated with increased circulating cytokines in clinical settings and in stress-treated animals [45,46]. Recent studies showed that erinacine A protected against 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced neurotoxicity via oxidative stress signaling and the JNK/p38/NF-κB pathways in mice [30], and had a protective effect on ischemic myocardial injury via the inhibition of iNOS/p38 mitogen-activated protein kinase (MAPK) and nitrotyrosine in rats [31]. Given the close link between inflammation and depression, it is reasonable to expect a favorable anti-inflammatory effect of erinacine A-enriched HE on depression-like behavior. In fact, our results showed that supplementation with HE drastically inhibited the stress-induced rise of plasma IL-6 and TNF-α contents. Furthermore, HE exhibited antidepressant effects on RS-induced depressive behaviors. Thus, these findings support the possibility that HE exerts an antidepressant effect via regulation of the inflammatory response. Although HE shows antidepressant effects via suppressing inflammation, the precise underlying molecular mechanisms remain to be determined. A recent study showed that benzyl alcohol derivatives from H. erinaceum attenuate the lipopolysaccharide (LPS)-stimulated inflammatory response through the regulation of NF-κB and AP-1 activity in macrophage cells [47]. The present findings also demonstrated that repeated restraint-stressed animals with depression-like behavior showed reduced expression levels of NF-κB and IκB in the cytosol fraction of hippocampal tissue, and that HE treatment normalized these levels. NF-κB is believed to be a pivotal transcription factor; following activation, it translocates into the nucleus and initiates the transcription of an array of relevant genes (such as pro-inflammatory cytokines) and inducible enzymes (such as inducible nitric oxide synthase (iNOS) and cyclooxygenase (COX)-2). Accordingly, targeting the NF-κB pathway is an interesting tactic in the treatment of depression, because inflammation plays a critical role in the progression of the disorder [39,40]. In this way, the normalization of NF-κB levels could be, at least in part, responsible for the pharmacological effects of HE after repeated restraint stimulation.
It is generally accepted that synaptic plasticity can be influenced by stress conditions, and the weakening of neuroplasticity might be a key factor in the process of depression [48]. Accordingly, neuroplasticity has become a therapeutic target of antidepressant agents. In this study, the expression of BDNF, a pivotal marker of synaptic plasticity, was examined to reveal the molecular mechanism by which HE normalizes depression-like behavior. BDNF is a member of the neurotrophin family known to participate in the life of neurons during development and to modulate hippocampal-dependent learning and memory [49]. Accumulating evidence supports that BDNF is indispensable for exerting antidepressant effects because it can modulate synaptic efficacy by changing transmitter release and sensitivity [50]. There is also evidence that the lack of BDNF is linked to the pathophysiology of mood disorders [51]. Recently, Wittstein et al. suggested that corallocin C isolated from Hericium coralloides is able to induce the mRNA levels of NGF and BDNF for neurite outgrowth of PC12 cells, and that the mechanism is connected, at least in part, to action on an upstream target [52]. This is in line with our present study, which found reduced expression of BDNF and TrkB in the hippocampal region of mice after RS, and showed that treatment with HE was effective in restoring BDNF levels in this brain region. Since the BDNF content is greatly influenced by monoamine transmission [53], the restoration of BDNF content may be an effect of the normalized monoamine content (NE, DA and 5-HT).
Glycogen synthase kinase-3β (GSK-3β) is an enzyme that phosphorylates glycogen synthase, which in turn inhibits glycogen biosynthesis. Moreover, GSK-3β is now believed to play an important role in the pathophysiology of depression and has been implicated as a drug target for the treatment of depression. Furthermore, GSK-3β inhibitors such as the thiadiazolidinone NP031115 and AR-A014418 have been reported to be associated with antidepressant effects, as shown by reduced immobility in the forced swimming test [54]. A large body of evidence indicates that the pathology of depression might be associated with neuronal inflammation [42]. Literature data indicate that phosphatidylinositol 3-kinase (PI3K) and the serine/threonine protein kinase Akt appear to activate immune cells by modulating key inflammatory cytokines [55]. In addition, the PI3K/Akt pathway has been reported to act as an upstream regulator of GSK-3β activity, in which Akt can directly phosphorylate GSK-3β, resulting in GSK-3β inactivation [56]. Irregularities in the PI3K/Akt/GSK-3β pathway have been linked to psychiatric illnesses. Therefore, regulation of Akt and GSK-3β may form an important signaling center for depressive therapy. In the present study, we demonstrated that HE was able to increase the phosphorylation of Akt and GSK-3β. Altogether, the results presented herein reveal for the first time that the antidepressant-like effect of HE involves activation of the PI3K/Akt pathway and inhibition of GSK-3β, which converge to increase BDNF.
Materials and Methods
Cultivation of H. erinaceus
H. erinaceus, coded as BCRC 35669, was purchased from the Bioresources Collection and Research Center (BCRC) of the Food Industry Research and Development Institute (Hsinchu, Taiwan). The H. erinaceus agar slant was transferred to and maintained on a potato dextrose agar plate at 26 °C for 15 days as reported by Li et al. [57]. A piece of mycelium block (20 × 20 mm) was inoculated into a 2-L Erlenmeyer flask containing 1.3 L of modified broth (0.25% yeast extract, 4.5% glucose, 0.5% soybean powder, 0.25% peptone, and 0.05% MgSO4; pH adjusted to 4.5), and the whole broth was incubated at 26 °C on a 120 rpm shaker for 5 days. The fermentation process was then scaled up from the 2-L shake flask to 500-L and 20-ton bioreactors for 5 and 12 days, respectively. Following the fermentation process, the mycelia were harvested by filtration, lyophilized, ground into powder, weighed and stored in a desiccator at room temperature.
HPLC/ESI-MS Analysis of Hericium erinaceus Mycelial Ethanolic Extract
The HPLC/ESI mass spectrometric analysis of the ethanolic extract of H. erinaceus mycelium was accomplished according to a previous report (erinacines Q (1), A (2), C (3), and S (4)) [58] with minor modifications. In brief, the extracts were analyzed on a Waters Symmetry analytical column (2.1 × 150 mm, 3.5 µm, Waters Corp., Milford, MA, USA) fitted with a Security-Guard Ultra C18 guard column (2.1 × 2.0 mm, sub-2 µm, Phenomenex, Inc., Torrance, CA, USA) using an HPLC system equipped with a photodiode-array (PDA) detector. The column temperature was held at 35 °C. Gradient elution was performed with two solvents: solvent A (water containing 0.1% formic acid) and solvent B (acetonitrile containing 0.1% formic acid). The flow rate during elution was set at 0.2 mL/min. The gradient was as follows: 30% B for the first 3 min, then 30-95% B over 17 min, isocratic elution at 95% B for 15 min, and finally 95-30% B over 5 min. The absorption spectra of the eluted compounds were recorded in the range of 210 to 600 nm using the in-line PDA detector, monitored at 240, 280, 325, and 340 nm. The separated compounds were further identified with a triple-quadrupole mass spectrometer. The system was operated in electrospray ionization (ESI) mode with both positive and negative ionization, with potentials of +3700 and −3700 V, respectively, applied to the tip of the capillary. Ten µL of sample solution was injected directly onto the column using an autosampler. Nitrogen was used as the drying gas at a flow rate of 9 L/min, and the nebulizing gas was set at a pressure of 35 psi. The drying gas temperature was maintained at 350 °C. The fragmentor voltage was set at 115 V. Ionized mass fragments were separated in the range of 100-800 amu at a scan time of 200 ms/cycle using quadrupole mass spectrometry. Mass Hunter software (version B.01.04; Agilent Technologies, Santa Clara, CA, USA) was used for all data acquisition and manipulation.
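For illustration only, the gradient program described above can be expressed as a piecewise-linear function of time. The short Python sketch below (our own aid; the function and variable names are not part of the original method) reproduces the programmed percentage of solvent B at any time point:

```python
import numpy as np

# Gradient program from the text: hold 30% B for 3 min, ramp 30->95% B
# over 17 min, hold 95% B for 15 min, then return 95->30% B over 5 min.
TIME_POINTS = [0.0, 3.0, 20.0, 35.0, 40.0]    # minutes
PERCENT_B   = [30.0, 30.0, 95.0, 95.0, 30.0]  # % solvent B (acetonitrile + 0.1% formic acid)

def percent_b(t_min: float) -> float:
    """Return the programmed %B at time t_min by linear interpolation."""
    return float(np.interp(t_min, TIME_POINTS, PERCENT_B))

if __name__ == "__main__":
    for t in (0, 3, 10, 20, 30, 38, 40):
        print(f"t = {t:>2} min -> {percent_b(t):5.1f}% B")
```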
Animals and Treatments
Male Institute of Cancer Research (ICR) mice weighing 20-25 g were obtained from BioLASCO (A Charles River Licensee Corp., Yi-Lan, Taiwan). The animals were housed in regular cages at a constant temperature of 23 ± 2 °C and relative humidity of 55 ± 5% with a 12-h light/dark cycle. They were fed commercial feed and water ad libitum. The mice were allowed to acclimatize to the laboratory environment for one week before the study. All aspects of this experimental protocol involving animals were evaluated and approved by the Institutional Animal Care and Use Committee of HungKuang University (10312; 30 September 2014) and were carried out in accordance with the Guidelines for the Care and Use of Laboratory Animals.
The mice were randomly divided into five groups of ten individuals each as follows: unstressed group (Con group), restraint-stressed group (RS group), restraint-stressed plus 100 mg/kg HE-treated group (RS + HEL group), restraint-stressed plus 200 mg/kg HE-treated group (RS + HEM group), and restraint-stressed plus 400 mg/kg HE-treated group (RS + HEH group). HE at three different doses (100, 200, and 400 mg/kg body weight) was given to mice daily by oral administration for 4 weeks, while the control and RS-alone groups received the same volume of 0.9% saline. These doses were chosen based on previous reports that demonstrated safety and effectiveness in corresponding disorders [31,32,57]. Starting on the 15th day of the experiment, all mice except the Con group were subjected to 14 days of restraint stress. The immobilization procedure was delivered once daily for 2 h by placing animals in well-ventilated transparent restrainers (100 × 40 mm) [59]. After 4 weeks of the experiment, multiple behavioral parameters were evaluated, followed by biochemical assessments.
Tail Suspension Test (TST)
The TST was performed as previously described, with modifications [60]. Each mouse was suspended by adhesive tape from a metal rod fixed 50 cm above the floor. The trials were videotaped for 5 min, and the total duration of immobility was analyzed by a blinded observer. The mice were considered immobile only when they hung passively and completely motionless.
Forced Swimming Test (FST)
The FST was a modification of a previously described protocol [61]. Animals were individually placed in a glass cylinder (20 cm height × 14 cm diameter) containing water 10 cm deep at 25 ± 2 °C. Each mouse was forced to swim for 5 min, and the immobility and swimming times were observed and scored during the trial. The immobility period was defined as the time spent floating motionless, keeping the head above the water, with only the movements necessary to stay afloat. Following the test, animals were dried and returned to their cages.
Elevated Plus Maze Test (EPM)
The EPM test was conducted according to the method previously described, with modifications [62]. The apparatus comprised two opposite open arms (50 × 9 cm) and two enclosed arms (50 × 9 × 5 cm), which were connected by a common central platform (9 × 9 cm) and elevated to a height of 50 cm above the floor.
Open Field Test (OFT)
The open field test was used to examine the effect of HE on spontaneous locomotor activity as described previously, with modifications [63]. Testing was carried out in an open rectangular acrylic box (60 × 40 × 20 cm) with the floor divided into 96 equal squares (5 × 5 cm). Animals were individually placed in the box, and their motor activities were videotaped for a 5-min session. The number of squares crossed with all paws (crossings), the number of times the animal stood upright on its hind legs (rearings), and the number of fecal pellets deposited in the box were analyzed by an observer who was unaware of the treatments. The apparatus was thoroughly cleaned with 70% ethanol between tests to remove any residue or odor of the animals.
Determination of Cytokine and Monoamine Neurotransmitter Levels
The mice were sacrificed under CO2 anesthesia after completion of the behavioral tests. Blood was immediately collected into EDTA tubes, separated in a refrigerated centrifuge at 4 °C and stored at −80 °C until use. The whole hippocampus was quickly removed, homogenized in ice-cold physiological saline solution and centrifuged at 13,000× g for 10 min at 4 °C. The supernatant was harvested and stored at −80 °C. The levels of plasma TNF-α and IL-6 and the contents of NE, DA and 5-HT in the brain were determined using ELISA kits (R&D Systems, Minneapolis, MN, USA and Novus Biologicals, Littleton, CO, USA, respectively) as described previously [64][65][66] with modifications. Briefly, protein standards and samples were dispensed into 96-well ELISA plates pre-coated with the capture antibody, followed by the addition of a biotinylated detection antibody and streptavidin conjugated to horseradish peroxidase. The chromogenic reaction was developed with the addition of tetramethylbenzidine and terminated with 2 M H2SO4 after incubation. The absorbance of each well was measured at 450 nm with a VersaMax microplate reader (Molecular Devices, Sunnyvale, CA, USA). In all cases, a standard curve was constructed, and sample values were quantified by interpolation within the curve.
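As an illustration of the final quantification step, the sketch below fits a simple linear standard curve and interpolates sample concentrations from their absorbances. All numbers are hypothetical placeholders, and actual kit protocols may prescribe a four-parameter logistic fit rather than a straight line:

```python
import numpy as np

# Hypothetical standard concentrations (pg/mL) and blank-corrected
# absorbances at 450 nm for one ELISA plate.
std_conc = np.array([0.0, 15.6, 31.3, 62.5, 125.0, 250.0, 500.0])
std_abs  = np.array([0.05, 0.11, 0.18, 0.33, 0.62, 1.15, 2.10])

# Fit a first-order (linear) standard curve: absorbance = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def conc_from_abs(a450: float) -> float:
    """Invert the standard curve to estimate a sample concentration."""
    return (a450 - intercept) / slope

sample_abs = [0.27, 0.81]  # hypothetical sample wells
print([round(conc_from_abs(a), 1) for a in sample_abs])
```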
Western Blot Analysis in Hippocampal Tissue
The whole hippocampus was homogenized in RIPA buffer containing protease and phosphatase inhibitors, and the protein content was determined with the Bio-Rad DC Protein Assay Kit (Bio-Rad, Hercules, CA, USA). Protein lysates were separated by electrophoresis on 12% SDS-PAGE gels and transferred to polyvinylidene difluoride (PVDF) membranes (Bio-Rad) using a semi-dry electroblotting system. After blocking with 5% non-fat milk powder in Tris-buffered saline with Tween 20 (TBST), the membranes were incubated with diluted primary antibodies against BDNF, TrkB, PI3K, Akt-p, Akt, GSK-3β-p, GSK-3β, IκB, NF-κB, and β-actin at 4 °C overnight. After reaction with horseradish peroxidase-conjugated anti-rabbit or anti-mouse immunoglobulin G antibody, the bound protein bands were visualized with an enhanced chemiluminescence detection system. The relative intensity of proteins of interest was normalized against β-actin.
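A minimal sketch of the normalization step, with hypothetical densitometry readouts: each band of the protein of interest is divided by its β-actin loading control and then expressed relative to the first (control) lane:

```python
import numpy as np

# Hypothetical densitometry readouts (arbitrary units) for three lanes.
band_intensity  = np.array([1250.0, 980.0, 1100.0])   # protein of interest
actin_intensity = np.array([2000.0, 1900.0, 2100.0])  # beta-actin loading control

normalized = band_intensity / actin_intensity  # correct for loading differences
relative = normalized / normalized[0]          # express relative to control lane
print(np.round(relative, 2))
```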
Statistical Analysis
Statistical analysis was carried out using SPSS version 15.0 for Windows (SPSS, Chicago, IL, USA). Data were expressed as means ± SEM. Multiple comparisons were analyzed by one-way ANOVA with Tukey's post-hoc test. The level of statistical significance was set at p < 0.05.
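For readers reproducing this analysis outside SPSS, the sketch below illustrates a one-way ANOVA followed by Tukey's post-hoc test in Python using SciPy and statsmodels; the immobility times are simulated placeholders, not study data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical immobility times (s), n = 10 per group, mimicking the design.
groups = {
    "Con":    rng.normal(90, 15, 10),
    "RS":     rng.normal(160, 15, 10),
    "RS+HEL": rng.normal(150, 15, 10),
    "RS+HEM": rng.normal(115, 15, 10),
    "RS+HEH": rng.normal(110, 15, 10),
}

# One-way ANOVA across all five groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's post-hoc test for all pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```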
Conclusions
In conclusion, HE supplementation normalized the behavioral alterations triggered by restraint stress. The antidepressant effect may be attributed to the restoration of hippocampal monoamine neurotransmitters, inhibition of plasma pro-inflammatory cytokines, and modulation of the PI3K/Akt/GSK-3β pathway with a consequent increase in BDNF expression. All of these pathways are key mechanisms in depression treatment, indicating that HE may represent a potential alternative therapy for depression. The current data confirm that HE may ameliorate altered behavior and neurochemical parameters through several signal transduction pathways, and that these signals may be synchronized with each other. | 2018-04-03T02:06:45.267Z | 2018-01-24T00:00:00.000 | {
"year": 2018,
"sha1": "183532511c05cbe46ad5c2a28196cefffb1e74f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/19/2/341/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c34a092aa23e2cffa4593e4919ce4c6389f216aa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
267079919 | pes2o/s2orc | v3-fos-license | Prevalence of Genetic Variants and Deep Phenotyping in Patients with Thoracic Aortic Aneurysm and Dissection: A Cross-Sectional Single-Centre Cohort Study
Background: There is a paucity of evidence on people with thoracic aortic aneurysm and dissection. We aimed to determine the prevalence of genetic variants and their associations with phenotypes. Methods: In this cross-sectional single-centre cohort study of consecutive patients who underwent endovascular or open-surgical repair of thoracic aortic aneurysm and dissection, genetic analysis was performed using four-stage Next Generation Sequencing, and findings were confirmed with Sanger sequencing. We collected personal and family histories and recorded comorbidities, clinical examination findings, anthropometrics, skeletal deformities, joint function, and ophthalmological measures. Cardiovascular risk and phenotype scores were calculated. Results: Ninety-five patients were eligible (mean age 54 ± 9 years, 70% males, 56% aortic dissection). One-fifth had a family history of aortic disease. Furthermore, 95% and 54% had a phenotype score of ≤5 and ≤2, respectively. There were no significant differences in the distribution of phenotype characteristics according to age, sex, aortic pathology, or performed invasive procedures. Genetic variants were detected in 40% of patients, with classic mutations comprising 18% of all variants. We observed no significant association of genetic variants with cardiovascular or phenotype scores, but they were associated with higher joint function scores (p = 0.015). Conclusion: Genetic variants are highly prevalent in clinically relevant aortic pathologies. Variants appear to play a larger role than previously described. The different variants do not correlate with specific phenotypes, age, pathology, sex, or family history.
Introduction
Thoracic aortic aneurysm and dissection (TAAD) is a relatively low-frequency disease affecting 5-10 per 100,000 of the global population [1]. Aneurysms stay asymptomatic in up to 95% of cases until a life-threatening dissection or rupture of the aorta occurs [1,2].
Early detection of TAAD and precise classification of the underlying aetiology may help to optimise lifelong follow-up and patient-tailored therapy plans [3]. Whether the underlying cause is cardiovascular atherosclerosis, connective tissue disease, trauma, or inflammatory vascular wall changes has an impact on many decision pathways [4,5].
Since the establishment of high-throughput sequencing, also known as Next Generation Sequencing (NGS), knowledge of pathogenic variants has accumulated [6]. Yet, the prevalence of genetic variants in patients with clinically relevant aortic pathology warranting aortic therapy has not been fully studied. Furthermore, in the era of modern NGS, the correlation between genetic variants and the phenotypical features typical of syndromic TAAD needs to be re-evaluated.
The aim of the present study was to determine the prevalence of genetic variants in patients with TAAD who underwent invasive aortic repair. Furthermore, the phenotype-genotype correlation was studied to reveal the prevalence of phenotypical characteristics in patients with confirmed genetic variants. This may help to clarify the role of phenotype in justifying indications for human genetic testing using NGS.
Ethical Statement
The study was reviewed and approved by the ethical committee board of the Technische Universität Dresden (decision number EK317082014). The indication for human genetic analysis was made by the Department of Angiology at the University Centre for Vascular Medicine and was confirmed by the cooperating practice for human genetics. The STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement for the reporting of observational studies was also complied with [7].
Study Design and Cohort
In this cross-sectional single-centre cohort study, patients who were admitted to a tertiary reference centre in a university setting from 1 January 2008 to 30 June 2019 were screened for inclusion. The inclusion criteria were age 18 years or older, endovascular or open-surgical treatment for an index aortic pathology, and informed patient consent. The exclusion criteria were aortic pathology of inflammatory or traumatic origin and age over 65 years with no offspring.
Recruitment
A database screening for patients with aortic pathology who underwent open-surgical or endovascular treatment from January 2008 through June 2019, according to the documented diagnosis code (World Health Organisation International Classification of Diseases, WHO-ICD), was carried out. The list of eligible ICD and procedural codes is provided in the Supplementary Files. Eligible patients were subsequently contacted and asked to consent to inclusion in this study. As part of the comprehensive sensitivity analyses, and to determine the risk of selection bias, we used the anonymised data of excluded patients to compare demographic data, risk factors, aortic pathology, performed procedures, and medications with the study cohort.
The included patients underwent personal and family history taking, clinical examination, including thorough phenotype analysis, and subsequent human genetic analysis as follows.
Collection of History Data, Clinical Parameters, and Risk Score Calculation
Demographic data were collected from patients' files and directly from patients. For age-group analysis, age was divided into three groups: patients under 45 years (group 1), between 45 and 60 years (group 2), and over 60 years (group 3).
Further history and clinical data with possible associations with connective tissue diseases were collected during a comprehensive clinical examination and medical history taking.
Family History
A positive history was defined as connective tissue disease that was confirmed, or clinically suspected on the basis of confirmed aortic disease, in relatives of the 1st, 2nd, or 3rd degree. The following phenotypical features were also recorded: tall stature (defined as >97th percentile of the normal range based on normal population data, where the normal range is defined according to age and population group as between the 3rd and 97th percentiles); arm span/body length ratio >1.05; arachnodactyly; striae; and increased skin elasticity.
Cardiovascular Pathology and Cumulative Risk Score: bicuspid aortic valve and atrial septal defect (ASD). These seven defined phenotype categories with representative clinical features/risk characteristics were used for scoring. In each category, one point was awarded for each risk characteristic, and the points were summed to give a total for that category. In the category of joint function, we also used established systemic scoring scales such as the Beighton score (maximum of nine points) and the Murdoch and Steinberg signs (one point each). If more than two criteria of the Beighton score were fulfilled, only this scale was regarded as the total point score in the category of joint function. The higher the cumulative score, the higher the clinical probability of the presence of a connective tissue disease.
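As an illustration of this scoring rule, the following Python sketch (our own, with hypothetical argument names) computes the joint function category score exactly as described above:

```python
def joint_function_score(beighton: int, murdoch: bool, steinberg: bool) -> int:
    """Score the joint function category.

    beighton: Beighton score, 0-9 points.
    murdoch, steinberg: presence of the wrist and thumb signs (1 point each).
    If more than two Beighton criteria are fulfilled, the Beighton score
    alone is used as the category total.
    """
    if beighton > 2:
        return beighton
    return int(murdoch) + int(steinberg)

# Example: Beighton 4 dominates (returns 4); Beighton 1 with both signs returns 2.
print(joint_function_score(4, murdoch=False, steinberg=True))
print(joint_function_score(1, murdoch=True, steinberg=True))
```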
Further Comorbidities
The following comorbidities were recorded: arterial hypertension, defined as systolic blood pressure of 130 mmHg or more and/or diastolic blood pressure of 90 mmHg or more and/or requiring medical treatment; clinically relevant hyperlipoproteinemia, defined as hyperlipoproteinemia requiring oral medical therapy; diabetes mellitus requiring medical treatment (either oral antidiabetic or insulin therapy); renal impairment, defined as a glomerular filtration rate (GFR) <60 mL/min; active or past history of smoking; coronary heart disease (CHD); lower extremity peripheral arterial disease (PAD); documented carotid stenosis; history of stroke; and chronic obstructive pulmonary disease (COPD) requiring medical treatment.
Molecular Genetic Analysis
The molecular genetic analysis was performed on ethylenediaminetetraacetic acid (EDTA) blood using the Next Generation Sequencing (NGS) panel analysis method. After capture-based enrichment of the genetic material, the NGS four-stage sequencing method (MiSeq Desktop Sequencer, Illumina Inc., San Diego, CA, USA) was applied. If a gene change was detected, it was confirmed using conventional Sanger sequencing. In addition to Multiplex Ligation-dependent Probe Amplification, the Sanger method was also used if a follow-up analysis became necessary due to ambiguous NGS results.
Due to the genetic heterogeneity of the connective tissue disease spectrum, nine genes were initially panel-examined and evaluated, according to the specification of the German health authorities for examining genes associated with connective tissue diseases with possible involvement of the thoracic aorta. The examined gene loci were: ACTA2, COL3A1, FBN1, MYH11, MYLK, SMAD3, TGFB2, TGFBR1, and TGFBR2. In line with the design of clinical routine diagnostics and the criteria defined by the health insurance system for the examination of further gene loci, such as the presence of phenotypical features or a positive family history, the genetic diagnostics were expanded, after patient consent was provided, to include the analysis of the following genes: AEBP1, BGN, COL1A1, COL4A5, COL5A1, COL5A2, EFEMP2, ELN, FBLN5, FBN2, FLNA, FOXE3, GATA5, LOX, MAT2A, MFAP5, NOTCH1, NOTCH3, PLOD1, PRKG1, RPL26, SKI, SLC2A10, SMAD4, SMAD6, TAB2, and TGFB3. The analysis focused on the defined genes of interest listed; thus, whole genome sequencing was not performed.
Data Acquisition and Cooperation Partners
General patient data, as well as diagnoses and findings, were recorded using the electronic hospital information system of the Dresden University Hospital in Dresden, Germany. These included performed interventions, documentation of the clinical and image-morphological data, structured follow-up, and the multidisciplinary vascular conferences of the disciplines of angiology, vascular surgery, interventional radiology, and cardiac surgery at the Dresden Heart Centre, in accordance with established standards at the University Vascular Centre of the Medical Clinic and Policlinic III of the Carl Gustav Carus University Hospital.
Cardiac surgical intervention data and the corresponding documentation were acquired from the Department of Cardiac Surgery at the Dresden Heart Centre. The human genetic clinical examination, as well as routine blood tests and the subsequent human genetic analysis, were performed by outpatient specialists in human genetics from the group practice for human genetics (Gutenbergstr. 5, 01307 Dresden, Germany; existing cooperation agreement with the University Hospital Carl Gustav Carus).
Statistical Analysis
Descriptive analyses were provided using absolute or relative frequencies. Means and standard deviations were used to present continuous variables. Frequency differences were analysed using the chi-square test for nominal-scale variables and the Kruskal-Wallis or Mann-Whitney U test for ordinal-scale variables. To assess the correlation between age and cardiovascular risk factors, Kendall's Tau correlation coefficient was calculated. Continuous variables were compared using the t-test. Results were considered statistically significant when p < 0.05. If the probability of error, p, yielded values between 0.05 and 0.1 (0.05 ≤ p < 0.1), the results were considered to show a trend. IBM SPSS software (IBM SPSS Statistics Version 28, IBM, Armonk, NY, USA) was used for the statistical analysis.
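As a small illustration of the correlation step, the sketch below computes Kendall's Tau between age and the number of cardiovascular risk factors using SciPy instead of SPSS; the paired values are invented placeholders:

```python
from scipy import stats

# Hypothetical paired observations: patient age (years) and the number of
# cardiovascular risk factors recorded for that patient.
age  = [38, 44, 47, 52, 55, 58, 61, 63, 66, 70]
n_rf = [ 0,  1,  1,  2,  1,  3,  2,  4,  3,  4]

tau, p = stats.kendalltau(age, n_rf)
print(f"Kendall's tau = {tau:.2f}, p = {p:.4f}")
```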
Patient Cohort, Demographic Data, and Representability of the Study Cohort
A total of 1334 patients with aortic disease of any aetiology were initially screened for eligibility. Of these, 716 patients were eligible according to the inclusion and exclusion criteria. Ultimately, 116 patients gave their consent, of whom 95 provided complete consent and were included in the current study.
The comparative analysis between the study cohort and the patients who met the inclusion criteria but were excluded due to lack of patient consent or an incomplete data set showed no significant differences between the cohorts regarding age, sex, performed procedures, arterial hypertension, history of smoking, hyperlipoproteinemia, diabetes mellitus, coronary heart disease (CHD), carotid stenosis, and peripheral artery disease (PAD) (Supplementary Table S1). The analysis showed that the study cohort had more aortic dissections than the fully screened cohort fulfilling the inclusion/exclusion criteria (57.9% vs. 44.6%, p = 0.009). Both cohorts showed no significant differences in medical therapy with angiotensin-converting enzyme inhibitors, calcium antagonists, beta blockers, antiplatelet drugs, and other antihypertensives. More study cohort patients were on oral anticoagulants than in the excluded cohort (36.8% vs. 29.7%, p = 0.021).
Data analysis of the study cohort showed a mean age of 54 ± 9 years, with 70.5% of patients being male. In the age group of 45 to 65 years, aortic dissection was represented more frequently than aortic aneurysm (Figure S1, Supplementary Materials). Arterial hypertension was documented in 83.2% of patients, a history of smoking in 33.7%, and hypercholesterolemia in 32.6%. A total of 57.9% of patients suffered aortic dissections, and 42.1% had aortic aneurysms. A further distribution analysis of pathology according to age group is depicted in Supplementary Figure S1. A total of 68% of patients underwent open aortic surgery, and 31.6% were treated endovascularly.
Family History Analysis and Distribution of Phenotypical Characteristics
A positive family history was documented in 20% of patients (more common in women than in men; p = 0.056). A positive history in first-degree relatives was confirmed in 4.2% of aneurysm patients and 2.1% of dissection patients. There was a higher prevalence of genetic variants in patients with a positive family history (p = 0.028).
A total of 54% of patients showed a total phenotypic score of two or less, and 92.6% showed a score of five or less. The detailed score distribution across the study population is depicted in Supplementary Figure S2. The phenotypical category analysis (Figure 1, Supplementary Table S2) showed that tall stature was present in 6.3% of patients.
The arm-span-to-body-length ratio was above normal in 14.7% of patients. Among skeletal deformities, pes planus was the most prevalent feature (20.0%), followed by scoliosis (13.7%). Thin lips were the most prevalent craniofacial anomaly, present in 25.3% of patients, followed by a high palate (14.7%). All joint function signs were present in less than 10% of patients, with the highest prevalence of 9.5% documented for the Murdoch sign. Myopia was present in 14.7% of patients, and enophthalmos in 11.6%. A total of 23% of patients had a positive history of hernia, and 17.9% had a bicuspid aortic valve. Further subgroup analysis showed no significant difference in the distribution of phenotypical characteristics according to age group, sex, aortic pathology, or performed procedure (Table 1).
Figure 1. Distribution of patients' phenotype scores: For each category, a total value from all risk points was created for every patient. One point was awarded for each risk characteristic, and the total risk score was calculated. In the joint function category, the higher point value from either the Murdoch or Steinberg signs with one point each (maximum two) or the total value of the Beighton score (maximum nine points) was used.
Analysis of Genetic Variants and Correlation with Phenotype
The genetic analysis confirmed gene variants in 40% of all patients. Classic mutations comprised 18.4% of all variants, while the other variants of uncertain significance (VUS) constituted 81.6% of detected variants. The distribution of genetic variants did not differ significantly across age or sex. The analysis showed a trend towards a higher prevalence of genetic variants in aortic dissections and in patients undergoing open surgery (p = 0.054 and p = 0.072, respectively, Figure 2 and Table 2). Genetic variants were found in thirteen genes. FBN1 gene variants comprised 32.5% of total variants, of which 7.5% were classic variants and 25% were other variants. MYH11 and TGFB2 followed, comprising 20% and 10% of total variants, respectively. Each of the other gene variants accounted for under 10% (Table 2). Simultaneous variants in two genes were detected in three patients (TGFB2 and MYH11 genes; MYH11 and NOTCH1 genes; and FBN1 and SMAD3 genes).
The distribution of different risk scores by genetic variant is depicted in Figure 3.
Figure 3. For each risk score, a total value from all risk points was created for each patient. One point was awarded for each risk characteristic, and the total risk score was calculated. The percentage of each patient group with the same score is depicted with a different colour in each column.
Cardiovascular scores and phenotype scores did not show a significant difference in distribution between patients with or without genetic variants (p = 0.140 and p = 0.110, respectively). A significant difference was found in the joint function score analysis, with higher joint function scores in patients with genetic variants (p = 0.015). The detailed results of the analysis are listed in Table 3.
Genetic Variants and Their Association with Genetic Disease
Two patients with FBN1 gene variants showed the characteristics of Marfan syndrome. Both patients carried heterozygous variants. Two patients showed a high probability of Loeys-Dietz syndrome class 4; one of them carried a heterozygous TGFBR1 gene variant, and the other carried a heterozygous deletion of the TGFB2 gene. Ehlers-Danlos syndrome of the vascular type was documented in one patient with a missense variant in the COL3A1 gene.
Furthermore, one patient with a heterozygous duplication of the MYH11 gene and a heterozygous variant in the NOTCH1 gene showed the microduplication syndrome (16p13.11); a patient with the heterozygous variant of the NOTCH3 gene showed CADASIL syndrome; a non-syndromic craniosynostosis was documented in a patient with the heterozygous variant of the NOTCH1 gene; and a patient with karyotype 45,X0 suffered from Turner syndrome.
All other detected gene variants were unclassified variants, variants with low clinical relevance, or variants without known clinical relevance (classes 1 to 3 according to Plon et al. [8]).
Discussion
In this study, human genetic analysis was abnormal in 40% of all cases, with 87.5% of all identified variants assigned to category A genes, which represent a relevant risk for thoracic aortic aneurysm and dissection (TAAD) according to the classification published by Renard et al. [9]. Multiple syndromes were detected, including Marfan syndrome, Loeys-Dietz syndrome, and Ehlers-Danlos syndrome of the vascular type. Notably, most of the variants were of unclear significance. This is, however, comparable to studies with designs similar to the present one, which also included non-selected TAAD patients [10,11]. Studies that showed a higher prevalence of pathologic variants were those that only considered TAAD patients meeting certain inclusion criteria [12][13][14][15][16][17][18][19]. The high prevalence of variants of unclear significance in non-selected TAAD patients suggests that such variants might play a larger role than is currently known. This should be thoroughly examined in future studies.
An important finding of this study was that there was no significant correlation between phenotypic features and genetic variants. Only the joint movement score was significantly correlated with genetic variants. Further analysis confirmed that the genetic variant distribution was independent of age, sex, pathology, or cardiovascular risk. This confirms the importance of genetic testing, irrespective of phenotype, demographic data, or cardiovascular risk. To the best of our knowledge, this is the first report systematically investigating the phenotypical features of all-comer TAAD patients and their correlation to the genotype. In the study by Pope et al., only people with suspected hereditary TAAD, not those with sporadic TAAD, were included, and clinical data were collected, but not systematically. A clinical study of their participants was carried out by Duan et al. To rule out hereditary connective tissue disease, this study specifically included people with marfanoid characteristics or lens ectopy, not just those with TAAD, of whom only 68.2% were affected. This could explain the higher proportions of striking clinical features [19]. A correlation to the genotype was not recorded. However, the authors found a significant association between striae distensae and TAADs; the presence of striae distensae could therefore be a clinical clue to look for TAAD in individuals at risk [19]. In the study cohort of Wooderchak-Donahue et al., the characteristics of lens ectopia and some musculoskeletal findings (dural ectasia, reduced elbow extension, and marfanoid habitus) were more conspicuous in people with a genetic variant. In contrast, skin changes and musculoskeletal features such as hypermobile joints, enlarged limbs, pes planus, and hindfoot deformities were more common in the variant-negative cohort [14]. In the study by Campens et al., the presence of syndromic features significantly increased (up to threefold) the likelihood of genetic variant detection [15]. In conclusion, we believe that although the presence of typical phenotypical features increases the probability of genetic variants, their absence should not be an exclusion criterion in the decision-making algorithm for indicating genetic testing.
Although familial studies have shown a tenfold increased incidence rate in first-degree relatives of patients with a family history of thoracic aortic aneurysm [20], other studies have reported that sporadic TAADs without any evidence of a hereditary association could be based on genetic mechanisms; therefore, gene analysis could be indicated in these patients [2]. In a study by Guo et al., 28% of subjects with sporadic thoracic aortic dissection aged ≤56 years presented at least one variant of unclear significance, and 9.3% carried a genetic variant in any of the eleven syndromic or familial TAAD genes, significantly more than the controls [21]. In the study published by Renner et al., the diagnostic yield was not significantly higher in people with a positive family history than in those without [18]. This concurs with the results of our study, which showed a comparable genetic variant prevalence with no correlation with family history. An explanation for this might be that most of the detected genetic variants were of unclear significance. It is important to note that although over 37 genes are known to be associated with hereditary TAADs [22], only around 30% of familial non-syndromic TAAD cases have a genetic variant in these genes. This suggests that most of the genetic basis of these thoracic aortic aneurysms and dissections remains undiscovered [22,23]. Further genetic testing of family members to examine the prevalence of detected genetic variants and their clinical relevance is crucially needed.
Limitations
In addition to many strengths, there were also limitations to this study. First, the study was limited to a single centre and was cross-sectional in design. The completeness of the data in such epidemiological studies on genetic variants depends on voluntary participation. Given the high value placed on biogenetic data and informational self-determination in the global discussion about individual privacy, consent cannot be taken for granted; this challenge has appeared in numerous cohort studies in the past. Although we conducted comprehensive sensitivity analyses and are confident that excluded patients were not systematically different from the study cohort, selection bias and residual confounding cannot be ruled out. Regarding the differing timespans between TAAD onset and the genetic examination of the recruited patients, the spontaneous development of new mutations due to surgical interventions, environmental changes, and ageing has to be acknowledged. Mutations detected long after the onset of TAAD might show a different distribution compared to those in patients genetically examined close to the time point of primary TAAD onset. In addition, further testing for supplemental genes was only performed on a subset of patients with phenotypical features or a positive family history. This may introduce a bias in patient selection and distort the results, as the other patients may also harbour mutations in these genes. Family history was solicited from the participants themselves and did not include any additional information from their relatives. Self-reported medical information remains a challenge in epidemiological studies, but we designed the variables in a robust way to avoid another bias. Finally, the gene palette examined was limited. Future research should consider expanding the participant base to include more than one centre, expanding the gene palette, and involving participants' relatives in the family history solicitation process.
Conclusions
Genetic variants are highly prevalent in clinically relevant aortic pathology. Variants of unknown relevance seem to play a larger role than previously known. These genetic variants do not correlate with a specific phenotype, age group, pathology, sex, or family history. Therefore, extending genetic testing to all patients with clinically relevant aortic pathology should be considered, regardless of these factors.
Figure 1. Distribution of patients' phenotype scores: For each category, a total value from all risk points was created for every patient. One point was awarded for each risk characteristic, and the total risk score was calculated. In the joint function category, the higher point value from either the Murdoch or Steinberg signs with one point each (maximum two) or the total value of the Beighton score (maximum nine points) was used.
Figure 2. Distribution of gene mutations across demographic and clinical patient subgroups.
Figure 3. Distribution of risk scores by genetic mutation.
Table 1. Subgroup analysis and its correlation with phenotype.
Table 3. Correlation of demographic data and risk factors with gene mutation.
"year": 2024,
"sha1": "f5b9db3809d4e48df304b244779d7c8e7958e92e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/2/461/pdf?version=1705219499",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4e0cd60d1b4381449bbafaa31c12727097e5eea",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The History of Electrospinning: Past, Present, and Future Developments
Electrospinning has rapidly progressed over the past few decades as an easy and versatile way to fabricate fibers with diameters ranging from micrometers to tens of nanometers that present unique and intricate morphologies. This has led to the conception of new technologies and diverse methods that exploit the basic electrohydrodynamic phenomena of the electrospinning process, which has in turn led to the invention of novel apparatuses that have reshaped the field. Research on revamping conventional electrospinning has principally focused on achieving three key objectives: upscaling the process while retaining consistent morphological traits, developing 3D nanofibrous macrostructures, and formulating novel fiber configurations. This review introduces an extensive group of diverse electrospinning techniques and presents a comparative study based on the apparatus type and output. Then, each process's advantages and limitations are critically assessed to identify the bona fide practicability and relevance of each technological breakthrough. Finally, the outlook on future developments of advanced electrospinning technologies is outlined, with an emphasis on upscaling, translational research, sustainable manufacturing and prospective solutions to current shortcomings.
community that exceeding the associated fiber output limitations is challenging. [19] This is because conventional electrospinning produces a single jet stream deriving from a single Taylor cone during fabrication, thus exhibiting a low yield of ≈0.01-0.3 g h −1 . [20] Researchers have devoted considerable effort to resolving this challenge. Notably, between 2004 and 2005, Ding et al. [21] and Theron et al. [22] found a simple way to significantly increase the fiber output by using multiple jet streams instead of a single nozzle. However, this approach proved to be time-consuming and involved a tedious spinneret cleaning cycle to avoid interactions between jets. [3] Later, Dosunmu et al. designed a new type of electrospinning setup using a multi-hole porous cylindrical tube to further increase the output of the process, reaching up to 1 g h −1 . [23] However, the porous cylinder spinneret, capable of vertically orienting its axis, could only deposit fibers onto a 360° cylindrical collector surrounding the spinneret, resulting in inconsistent fibrous membranes due to secondary electrical effects and the ease of spinneret clogging.
These issues were resolved by a method developed by Yarin and Zussman in 2004 [19] that paved the way for all the high-throughput electrospinning technologies that followed. This method involved the production of free-surface NFs through a two-layer system. By placing a layer of polymer solution underneath a magnetic liquid that overlapped a permanent magnet, positioned against a vertically placed, oppositely charged magnet, and applying a high DC voltage, jets formed for the first time without a needle-based spinneret. A year later, Jirsak and co-workers patented a process in which a rotating charged electrode, immersed within a polymer solution and placed at a close distance to a counter electrode in a bottom-up position, could be used to fabricate NFs at an increased production rate, with an airstream assisting the auxiliary drying efficiency of the system. [24] Gradually, more needleless electrospinning methods were reported, further improving the productivity and quality of the fibers with the primary purpose of achieving industrial-scale NF production.
This comprehensive review critically discusses the history and evolution of electrospinning technology. A detailed introduction to the most used electrospinning techniques is succeeded by a comparative study focusing on the advantages and disadvantages of each method. Thereafter, the limitations of the different techniques are discussed, followed by an outlook on the future of advanced electrospinning technologies.
Brief History
A lengthy chronology of inventions and innovations accompanies the history and progress of electrospinning. Electrospinning is the descendant of electrospraying, a conceptually similar technology that employs electric forces to disperse a liquid or fine aerosols out of a polymer solution, first carried out in 1747 by Abbé Nollet [25] and first patented by John Cooley [26] and William Morton [27] in the early 1900s. When electrospinning, a fluid withdrawn through the spinneret is electrically charged, acquiring a nearly conical shape from the apex of which a jet arises. [28] In 1914, John Zeleny first demonstrated that the jet ejected at the tip of a metal capillary presents a liquid drop whose surface tension, held at the edge, disintegrates into a spray as the voltage increases. [29] Later, the earliest method of producing nanofibrous materials from polymer solutions was patented by Anton Formhals in 1934. [30] Between 1964 and 1969, Geoffrey Taylor, building on Zeleny's work, mathematically demonstrated that the critical half-angle of the meniscus nears 49.3° at the furthest point before the disintegration event, illustrating why a polymer solution or melt extruded through a capillary will reshape from a spherical to a conical configuration in a strong electric field. [31,32] This gave rise to the concept of the "Taylor cone" formation. However, no significant electrospinning developments were reported in the literature in the following two decades. This loss of attention from academia and industry coincides with the lack of accurate methods to observe the morphology and measure the diameter of fibers down to the sub-micrometer scale.
It was not until the early 1990s, owing to the increasing interest in nanotechnology, that the study of electrospinning technology started to gain popularity. [33] The modern era of electrospinning began with work conducted by Jayesh Doshi and Darrell Reneker, [34] who reported that the diameter of the fibers is inversely proportional to the distance from the needle tip to the collector. From 1999 to 2001, Reneker and Gregory Rutledge worked to better understand the parameters influencing the electrospinning process, [35,36] owing to the advancement of scanning electron microscopy (SEM), which enabled fibers within the nanometer scale to be observed in detail. These advancements commended the capabilities of electrospinning to the scientific community for the first time.
As depicted in Figure 1, electrospinning has gained significant attention since the turn of the century, with a consistent exponential increase in the number of published works in the field. Studies surrounding the working electrospinning parameters and understanding how different polymers can be processed into fibers flourished. This was followed by research groups developing novel electrospinning apparatuses, including co-axial, tri-axial, centrifugal, corona, bubble, rotary metal (cylinder, disk, ball), high-speed, and 3D electrospinning (Table 1), which are expounded in this review. These advanced techniques have expanded the range of materials that can be used to fabricate fibers and the gamut of obtainable structures. The main principle behind developing these apparatuses has been to improve the fiber output and the fibers' macro- and microarchitecture, further widening the reach of electrospun materials.
Principles and Process Parameters
Among the processing techniques, including thermal-induced phase separation, drawing, template synthesis, and self-assembly, electrospinning is of considerable significance as a rapidly evolving fiber preparation method. [52] This highly versatile method is used to process solutions, suspensions, or melts into continuous fibers of nano/microscale diameters [53] and is the only method capable of mass-producing continuous fibers in this range. [54] Electrospinning is one of the most conventional methods used for continuous fiber preparation today and is based on the principle that electrostatic forces can be used to form and expand fibers out of a polymer solution. [55] As expounded in the previous section, the principle of this process was first described in the 1930s in a patent entitled "Process and Apparatus for Preparing Artificial Threads" by Anton Formhals, considered the father of electrospinning. [56] However, considerable emphasis was not given to the process until the 1990s, in works led by Reneker and Rutledge, who described the process. [34]

Table 1. Electrospinning techniques by year of first report.
Needle (mono-axial): 1902, Cooley and Morton [26,27]
Co-axial: 2003, Sun et al. [37]
Multijet: 2004, Ding et al. [21]
Magnetic fluid: 2004, Yarin et al. [19]
Roller: 2005, Jirsak et al. [24]
Centrifugal: 2006, Andrady et al. [38]
Porous tube: 2006, Dosunmu et al. [23]
Bubble: 2008, Liu et al. [39]
Tri-axial: 2009, Kalra et al. [40]
Conical wire coil: 2009, Wang et al. [41]
Ball: 2009, Miloh et al. [42]
Disk: 2009, Niu et al. [43]
Wet (3D): 2009, Yokoyama et al. [44]
Cone: 2010, Lu et al. [3]
Spiral coil: 2012, Wang et al. [45]
Corona: 2012, Molnár et al. [46]
Stepped pyramid: 2013, Jiang et al. [47]
Beaded chain: 2014, Liu et al. [48]
High-speed: 2015, Nagy et al. [49]
Cold-plate (3D): 2015, Sheikh et al. [50]
Three-dimensional (3D): 2018, Vong et al. [51]

The electrospinning process is related to an electrohydrodynamic problem. It is a simple and cost-effective method that uses electrostatic forces to produce and expand fibers from polymer solutions or melts with diameters ranging from a few tens of nanometers to micrometers. [55] During electrospinning, a high voltage is applied to charge a liquid solution or melt by placing it between two conductors that endure electromagnetic charges of opposite polarities, stretching the polymer to form fibers. [57] A standard laboratory-scale setup consists of four main components: a high-voltage DC (or AC) power supply, a syringe pump, a nozzle (usually a metallic capillary), and a collector (which can be a metallic foil, plate, or disc). The electrostatic force produced by the high-voltage supply is applied to the polymer solution or melt, which is dispensed through the fine needle orifice at a controlled rate. When electrospinning, the precursor solution extruded from the spinneret orifice forms a small droplet that is subject to an accumulated charge in the presence of an electric field. [58] The electric charging of the polymer droplet induces a conically shaped geometry referred to as the Taylor cone. [59,60] Increasing the strength of the electric field causes an increased accumulation of charges at the surface of the polymer bud. After this, the repulsive electric forces overcome the surface tension of the polymer solution or melt, leading to vigorous whipping and splitting motions due to the bending instabilities generated, causing the fiber to elongate through the application of mechanical force.
[61] At this point, the geometry of the formed asymmetrically electrospun (as-spun) fibers is directed by the electrostatic repulsion, colloid stability, the incoming surface ratio, and gravity. [55,57] The solidification of the liquid solution occurs by establishing a zone that thrusts the charged molecules, allowing for continuous solvent evaporation and stretching the drawn polymer threads as they advance toward the grounded or oppositely charged collector. [55] This transition between the liquid and solid phase is due to the Ohmic current primarily transitioning to convective flow, thus increasing its acceleration. [62] Figure 2 illustrates the basic concept behind the electrospinning process.

Figure 2. The basic concept behind the electrospinning process: b) high-speed photograph outlining the Taylor cone formation, depicting the linear segment of the polymer jet followed by the whipping jet region (modified from refs. [63] and [64]); c) the prototypical instantaneous position of the jet path succeeding through the three sequential bending instabilities (modified from ref. [58]).
The formed electrospun mats exhibit a web-like fibrous structure due to the considerable extent of plastic deformation caused by the high charge density of the jet and the unstable whipping motion. [65] This phenomenon is known as bending instability, and it leads to randomly oriented, nonaligned fibrous mats. [66] NFs carry a range of novel physical and chemical properties that are not present at their corresponding macroscales, resulting in many characteristics shared among materials at the nanoscale. [67] Due to their high specific surface area, large surface-to-volume ratio, and extensive fiber length-to-diameter aspect ratio, properties such as peculiar quantum effects, electrical conductivity, redox potential, and the formation of crystal and magnetic structures increase their reaction rates per given mass. [65] Moreover, these properties allow the construction of highly porous constructs with adjustable pore size and wide surfaces that allow chemical functionalization. [65] Through the continuous research and evolution of these basic principles and the manipulation of the conventional electrospinning apparatus, unique morphologies and structures have been successfully produced over the past two decades, as indicated in the examples presented in Figure 3.
The parameters influencing the electrospinning process [77] can be classified based on solution and solvent, operating, and ambient conditions (Figure 4). Solution parameters refer to polymer concentration and polymer molecular weight, solvent volatility, solution viscosity, surface tension, and solution conductivity, among others. Concerning the electrospinning parameters, the electric field strength, electrostatic potential, flow rate, and distance between the spinneret and the collector must be appropriately adjusted in conjunction with the polymer solution properties. Finally, ambient parameters refer to the chamber and solution temperature, humidity, and type of atmosphere, among others.
In general, by prolonging the fiber elongation or flight time during the electrospinning process, finer fibers can be produced, which can be achieved by increasing the distance between the collector and the spinneret. Moreover, the evaporation rate of low-volatility solvents can be increased by raising the chamber temperature. However, it must be noted that increasing the working distance beyond the critical threshold (the point at which the stability of the Taylor cone is impaired) results in a significantly longer flight time, which can lead to inhomogeneous fiber formation. [4] Insufficient solvent volatility and low temperatures may lead to wet fiber fusion during deposition. Therefore, the parameters that affect the electrospinning process must be observed and monitored to determine the ideal operating conditions that can yield optimized fiber characteristics tailored to the specific requirements of each study.

Figure 3. Examples of electrospun fiber morphologies: b) randomly oriented; c) core/shell [68]; d) hollow [68]; e) multichannel microtube [69]; f) colloidal nanoparticle-decorated [70]; g) shish-kebab [71]; h) helical [72]; i) porous; j) necklace-like [73]; k) island-like [74]; and l) beads-in-fiber [75] electrospun fibers.
One of the most important parameters for obtaining an electrospinnable solution is determining the chain entanglement of the polymer solution from the molecular weight and its concentration. Within a solution, the root-mean-square distance of the segments of a molecular chain from the center of its mass provides the average radius of gyration (R g ). [76] If the concentration is too low (diluted solutions), the polymer chains do not overlap, with the viscoelasticity of the solution being governed by shorter polymer chains. When the concentration of the polymer is increased, entanglement occurs as the polymeric chains begin to overlap.

Figure 4. Schematic representation of the electrospinning processing parameters. Several criteria must function concurrently under optimal conditions to attain a stable electrospinning process. These can be classified based on solution, operating, and ambient parameters. These conditions are further responsible for the process's production rate, physicochemical characteristics, and morphological properties.
The critical concentration (C c *) is generally accepted to be proportional to the polymer's molecular weight (M w ) and inversely proportional to the effective solvent volume pervaded by a single chain, commonly expressed as

$$C_c^* \approx \frac{3 M_w}{4 \pi N_A R_g^3}$$

where N A is the Avogadro constant. [77] If the polymer concentration is below the critical point (C < C c *), inadequate chain entanglement can result in an unstable jet due to Rayleigh instabilities. Therefore, for stable electrospinning, the polymer concentration needs to be higher than the critical point (C > C c *). [78] In many cases, the correlation between the M w of a polymer solution and its corresponding R g value is evident. Thus, in some instances, attempting to electrospin variant molecular weights of the same polymer can ultimately contribute to the procurement of the required concentration (beyond C c *) for effective and stable electrospinning.
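To illustrate how this relationship can be used as a first estimate when planning a formulation, the short Python sketch below computes C c * from an assumed molecular weight and radius of gyration. The numerical values are hypothetical, illustrative inputs rather than measured data.

```python
import math

N_A = 6.022e23  # Avogadro constant (mol^-1)

def critical_concentration(mw_g_mol, rg_nm):
    """Estimate the overlap (critical) concentration C* = 3*Mw / (4*pi*N_A*Rg^3).

    Mw is given in g/mol and Rg in nm; the result is returned in g/mL.
    """
    rg_cm = rg_nm * 1e-7                             # nm -> cm
    coil_volume = (4.0 / 3.0) * math.pi * rg_cm**3   # pervaded volume of one coil (cm^3)
    return mw_g_mol / (N_A * coil_volume)

# Hypothetical example: a 100 kDa polymer with an Rg of about 12 nm
c_star = critical_concentration(1.0e5, 12.0)
print(f"C* is roughly {c_star:.3f} g/mL ({c_star * 100:.1f} % w/v)")
```

A solution prepared well above this estimate (C > C c *) would then be expected to supply the chain entanglement needed for a stable jet.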
Selecting the ideal operating range of each electrospinning condition can be challenging when designing an experiment, due to the vast choice of polymers and corresponding solvent systems. Moreover, most parameters are interdependent, leading to nonlinear causality, one of the significant challenges in the electrospinning field. Understanding the parameters that influence the electrospinnability of a polymer solution and the subsequent properties of the fibers formed has made it possible to advance polymer chemistry and evolve the capabilities of the produced electrospun scaffolds. Though the majority of present research surrounds the use of the conventional needle-based setup with either a drum or a flat collector, significant research has focused on further manipulating the design of electrospinning devices based on the fundamental principles described in this section, to further advance the producibility and morphological architecture of the fibers. For instance, co-axial and multi-axial electrospinning apparatuses can produce fibers from highly diverse polymer pairs, e.g., core-sheath, hollow, and nanoparticle-decorated, while each one can maintain its separate material identity. A high fiber production rate can be achieved by increasing the surface area of the spinneret via technologies such as free-surface and needleless electrospinning. 3D buildups can be conveyed by incorporating 3D printing and electrospinning, whereas ultrathin aligned NFs are obtainable via centrifugal electrospinning. Finally, portable electrospinning apparatuses have been developed for biomedical applications, where fibers can be directly deposited into an open wound. As described in the following sections, advances in such apparatuses have managed to keep electrospinning at the frontline of research.
Instrumentation
Regardless of the apparatus's design and configuration, the electrospinning process will always consist of three main components: a high-voltage power supply, a jet generator, and a collector.
When high-voltage power is applied to the spinneret, a strong electric field is created around a pendant droplet. This causes the droplet to overcome surface tension and, in the presence of sufficient molecular cohesion, commence jetting. The jet is directed by the movement of charged molecules from high to low voltage.
The jet generator, commonly referred to as the spinneret, introduces the polymer solution into the electric field and facilitates liquid distortion, Taylor cone formation, and jetting. The spinneret is the principal component differing between needle-based and needleless methods. In nozzle-based methods, the spinneret can be monopolar or consist of multineedle concentric arrangements. In the case of needleless methods, the spinneret configuration can vary significantly in form, with structures such as wire, corona, cylinder, ball, and bubble, among others.
Finally, the collector, either grounded or charged oppositely to the spinneret, directs the travel path of the jet, allowing the fibers to be deposited while discharging them at the collector's surface. Although the collector's configuration does not play a significant role in the fiber production rate, it can influence fiber morphology and is responsible for the dimensions of the electrospun membranes produced. Traditionally, a metallic plate is used to produce randomly oriented fiber mats, while a solid cylinder rotating at high revolutions per minute can introduce fiber alignment. Nonetheless, various collectors have been reported in the literature, including roller, rotating wire drum, knife-edged, honeycomb, and liquid bath collectors, among others, capable of generating complex fiber morphologies. [79,80] Industrial-size, commercial-scale electrospinning units can provide continuous production lines by depositing fibers on a supporting textile using a feed/take-up dual cylinder system. This can significantly increase the output and dimensions of the produced materials, with areal production rates of anywhere between 10 and 800 m² h −1 at basis weights of 0.01-2 g m −2 . Note, however, that a thicker deposition will coincide with a smaller membrane/fabric area; on this account, determining the solid content when measuring the output can be a better indicator.
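As a quick sanity check on how these two figures combine into a solid-content output, the minimal sketch below converts an areal production rate and a fiber basis weight into a dry-fiber mass output. The line settings are hypothetical values chosen within the ranges quoted above.

```python
def mass_output_g_per_h(area_m2_per_h: float, basis_weight_g_per_m2: float) -> float:
    """Convert an areal production rate (m^2/h) and a fiber basis weight (g/m^2)
    into a solid-fiber mass output (g/h)."""
    return area_m2_per_h * basis_weight_g_per_m2

# Hypothetical line settings: 100 m^2/h of supporting textile at 0.5 g/m^2 of fiber
print(mass_output_g_per_h(100.0, 0.5))  # -> 50.0 g/h of dry fiber
```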
Solvent Selection
Although it is feasible to procure electrospun fibers via solvent-free techniques, such as melt electrospinning and melt blowing, the higher temperatures required to melt the polymer during processing can lead to thermal degradation, which limits the materials that can be electrospun (e.g., small molecules, nanomaterials, and bioactive compounds). [14] Moreover, melt-spun fibers tend to have larger diameters, primarily due to the much higher viscosity of polymer melts and their poorer conductivity. Most of the research conducted on electrospinning relies on solvent-based methods, due to their greater flexibility, fewer limitations, and the greater number of available technologies.
As described in detail in Table 2 at the end of this section, the polymer solution parameters are interdependent. For solution electrospinning, a solvent must be able to solubilize the polymer homogeneously while evaporating sufficiently during jetting, inducing fiber solidification. The choice of solvent has a substantial impact on solution spinnability and fiber morphology. Experimentally, solvent selection is generally conducted by determining the chemical structure and properties of the polymer, establishing a list of compatible solvents based on their physical properties, and conducting short parametric studies focused on solubilization and electrospinnability. Mathematically, the Hansen solubility parameters can be used to estimate the solvent's ability to interact with the polymer chains, taking into account the energy from dispersion forces, dipolar intermolecular forces, and hydrogen bonds. [77] Based on the selected solvent system, as described in the previous section, determining the critical concentration can be advantageous, as it establishes the minimum solution concentration required for processing stability. Higher molecular weight polymers are more resistant to solvent dissolution and may require the application of heat below the polymer's melting point, in order to promote solubilization while avoiding polymer degradation.
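To make the Hansen approach concrete, the sketch below computes the solubility distance Ra and the relative energy difference RED = Ra/R0 for a few candidate solvents, where RED < 1 suggests a likely solvent. The polymer and solvent parameter values are approximate literature figures used here only as illustrative assumptions.

```python
import math

def hansen_red(polymer, solvent, r0):
    """Hansen solubility distance Ra and relative energy difference RED = Ra/R0.

    polymer and solvent are (dD, dP, dH) tuples in MPa^0.5; RED < 1 suggests
    the solvent is likely to dissolve the polymer."""
    ra = math.sqrt(4 * (polymer[0] - solvent[0]) ** 2
                   + (polymer[1] - solvent[1]) ** 2
                   + (polymer[2] - solvent[2]) ** 2)
    return ra, ra / r0

# Approximate Hansen parameters (dD, dP, dH) in MPa^0.5; treat as illustrative
pla = (18.6, 9.9, 6.0)   # assumed values for poly(lactic acid)
r0_pla = 10.7            # assumed interaction radius for PLA
solvents = {
    "chloroform": (17.8, 3.1, 5.7),
    "DMF": (17.4, 13.7, 11.3),
    "water": (15.5, 16.0, 42.3),
}

for name, hsp in solvents.items():
    ra, red = hansen_red(pla, hsp, r0_pla)
    verdict = "likely solvent" if red < 1 else "likely non-solvent"
    print(f"{name}: Ra = {ra:.1f}, RED = {red:.2f} ({verdict})")
```

In practice, such a screen only narrows the candidate list; spinnability still has to be confirmed by the short parametric studies described above.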
As a great number of parameters need to be met by a solvent to induce stable jetting, it is common practice to use solvent systems: mixtures consisting of two or more compatible solvents. In the realm of electrospinning, the most frequently utilized solvents are halogenated solvents (e.g., chloroform, trifluoroethanol), tetrahydrofuran (THF), aprotic solvents (e.g., dimethylformamide [DMF], dimethylacetamide [DMAc], dimethyl sulfoxide [DMSO], acetone), and protic solvents (e.g., ethanol, water). [14,81] Halogenated solvents remain at the forefront of lab-scale electrospinning research due to their high rates of hydrophobic polymer dissolution and low boiling points, which is of special interest for polymers resistant to many standard organic solvents, such as fluoropolymers. [82] As the electrospinning process moves toward upscaling technologies, larger amounts of solvent are required for fiber processing, which has led to the investigation of green and sustainable solvents. This coincides with regulatory agencies, such as the Chemical Control Regulation in the European Union (REACH), setting the rationale and strict limitations on the use of harmful solvents (such as DMF, toluene, chloroform, and dichloromethane) to prevent workplace exposure and environmental contamination risks. [85] Along with the environmental impact and user safety, the selection of green(er) solvents should also consider production sustainability (e.g., emissions, energy efficiency, and whether the solvent can be sourced from renewable feedstocks), solvent recyclability, and disposal. [86] An overview of several studies that have focused on substituting harmful conventional solvents with green alternatives is provided in Table 3. As a relatively new area of interest in response to society's growing environmental consciousness and focus on sustainability, green chemistry applied to electrospinning is a critical research question that has yet to be fully explored.
Materials Selection
The most prevalent research question that has propelled the advancement of the field has been answering, "what can be electrospun?". This has given rise to a wide range of common and intricate materials being successfully electrospun into fibers. Although electrospinning principally relies on polymeric materials, ceramics, metals, and inorganic chemical compounds can also be transformed into fibers in the presence of a carrier polymer, which can subsequently be kept or removed through postfabrication processing. In addition, small molecules can be electrospun by tuning their chemistry to attain sufficient polymer chain entanglement or by incorporating a readily electrospinnable high molecular weight carrier polymer. As a broad classification, these can be divided into three principal groups: organic polymers, small molecules, and composite materials.

Table 2. A detailed account of the solution, operating, and ambient parameters influencing the electrospinning process and fiber formation.
Solution parameters
Molecular weight
- The length of the polymer chains has a direct effect in facilitating or obstructing chain extensibility.
-Sufficient topological entanglements coupled with an appropriate solvent system are required.
- Generally, higher molecular mass polymers are associated with more uniform but thicker fibers, while insufficient molecular mass will either hinder electrospinning or produce non-uniform fiber mats.
Concentration
-Composite blends generally produce larger fibers due to a denser polymer entanglement.
- Increasing the polymer concentration is associated with uniform and more elongated fibers with no or fewer secondary morphologies (e.g., beads and spider webs) and a smaller fiber diameter standard deviation.
- Low concentrations inhibit fiber formation due to inadequate surface tension, causing jet fragmentation.

Viscosity
- Viscosity increases as intermolecular interactions and/or molecular weight increase.
-Viscosity is dependent on the shear rate and temperature.
- Attaining appropriate viscosity during electrospinning can prevent polymer spraying (low) or the formation of large-diameter fibers (high).

General note
- Viscosity, molecular weight (M w ), and concentration are intertwined. The average number of entanglements per chain increases with M w , whereas the entanglements per mass/volume increase with concentration. [83]

Surface tension
- Surface tension is responsible for instigating the electrohydrodynamic events of the electrospinning process.
- Surface tension is associated with a liquid surface taking up the minimum surface area required (the force required from a specific mass along a line of unit length).
- The electric field required to initiate electrospinning correlates to the surface tension, which, in turn, will depend on the spinneret's configuration.
- As the surface tension increases, a stronger electric field is needed to commence electrospinning. This can sometimes be adjusted during electrospinning, e.g., beginning with a higher voltage and lowering it after a stable jet has formed.
- Surfactants can enhance electrospinnability by improving polymer spreading and/or increasing the solution's conductivity (especially for needleless spinnerets).
- Needleless electrospinning techniques require a higher voltage because of the higher surface tension that must be overcome to instigate jet formation.

Conductivity and permittivity
- Two electrostatic forces set Taylor cone formation and jetting in motion: electrostatic repulsion between the surface charges and a Coulombic force applied by the external electric field. [84]
- An appropriate solution conductivity increases the number of charges that can be carried while reducing the minimum voltage required for jet eruption.
- Although, theoretically, the fiber diameter decreases with increased solution conductivity by promoting polymer stretching, in practice, a too-high conductivity will produce unstable jetting due to electrical air discharges.
- Permittivity refers to the proportion of electric displacement toward the intensity of the electric field. Reducing the solution's permittivity can increase the electric field intensity.
- When insufficient, introducing small amounts of salt (e.g., NaCl, LiCl, tetraethylammonium bromide [TEAB]) into the polymer solution can significantly increase the conductivity and permittivity. This approach is commonly used in needleless electrospinning to increase fiber output by increasing the number of formed Taylor cones.
Solvent parameters
Solvent volatility and vapor pressure
- During electrospinning, fiber solidification relies on the solvent system's evaporation rate, and thus the volatility of the selected solvent system can influence the morphology of the fibers.
- An adequate evaporation rate will allow the collection of dry membranes while reducing the degree of solvent entrapment.
- A too-volatile solution can induce morphological traits/defects (e.g., porous fibers in the presence of a non-water-soluble polymer) or even hinder electrospinning.
- Vapor pressure can promote further solvent evaporation, generating noncylindrical secondary morphologies, such as spider webs.
- A term commonly used in electrospinning, the evaporation rate will rely on a combination of parameters being met alongside the solvent's volatility, including relative humidity, working distance, and spinneret configuration.

Dielectric constant
- The dielectric constant refers to the solvent's capability to retain the electrostatic repulsions induced by the electric charge affecting the surface charge distribution.
- A higher dielectric constant will improve surface charge distribution and jet stability. For instance, water presents a high dielectric constant that can weaken the electrostatic repulsions and is, thus, commonly incorporated as part of solvent systems.
Operating conditions

General note
- The operating condition requirements will differ greatly between needle-based and needleless electrospinning technologies.

Applied voltage
- As an electrohydrodynamic process, electrospinning relies on applying a high voltage to a polymer solution to initiate the process.
- A minimum threshold voltage influenced by the surface tension of the polymeric solution, referred to as the critical voltage, V K , must be surpassed for jet generation.
- Increasing the voltage above the required threshold generally reduces the jet's "flight time," producing an unstable jet path with larger diameter fibers or secondary fiber morphologies.
- A voltage below the required threshold will, in most instances, spray the polymer solution onto the collector or along the jet path.
- Needleless electrospinning technologies require a substantially higher V K due to the greater surface tension.

Solution feed (flow) rate
- The flow rate, which is the amount of solution exposed to the high electric field at a given time, is the main contributor affecting surface tension and the V K .
- The effect of the solution feed rate will be directly influenced by most of the parameters discussed in this table. Increasing the flow rate will generally promote insufficient fiber stretching, which can produce wet or thicker fibers with larger pores.
- Flow rate plays a key role in multi-axial needle-based electrospinning.
- Although, as a term, "flow rate" is not typically used in needleless electrospinning, the way the solution is introduced into the needleless spinneret (e.g., via a cartridge, a solution bath, among others) can positively or negatively affect the homogeneity of the produced fibrous membranes.

Working distance
- Working distance refers to the distance between the spinneret and the collector, which defines the jet path.
- Increasing the working distance can give a less-volatile solvent more time to evaporate and the polymer more time to solidify. Expanding the jet path is also associated with thinner fibers, and vice versa.
- Exceeding the critical distance can halt electrospinning or produce defective fibers due to prolonged bending instabilities affecting fiber branching.

Collector geometry
- The collector's geometry can directly affect the micro- and macromorphological properties of the deposited fibers.
- A collector can provide alignment (e.g., a rotating mandrel), orientation (e.g., a cylindrical collector surrounding a rotating spinneret), facile patterning (e.g., a honeycomb mesh), and mass production (e.g., a supporting-textile dual cylinder system).

Spinneret design
- The spinneret type is the cardinal difference between needle-based and needleless electrospinning and the principal focus of this review.
- The spinneret configuration will affect the output of each technology, the complexity and architecture of the developed fibers (e.g., co-axial), and even the properties of the developed constructs (e.g., 3D macrostructures or alignment due to a rotating spinneret).
Ambient conditions
Temperature
- The chamber temperature during electrospinning will affect the solution viscosity and surface tension, the solvent's evaporation rate, and the jet solidification rate.
- Depending on the polymer and solvent-system properties, the working temperature can positively or negatively affect the process.

Relative humidity
- High relative humidity can induce non-uniformity and, in the case of hygroscopic polymers, unique fiber configurations (such as porous, dimpled, or pitted fibers) when other solution parameters are sufficiently met.
- High humidity can also hinder electrospinning by affecting the total charge distribution and reducing the surface charge density.
- Due to rapid solvent evaporation, very low humidity can reduce the flying jet path, producing thicker fibers.
Organic Polymers
Organic polymers in the form of solutions or melts are the most frequently employed materials in electrospinning. In recent years, over two hundred polymers have been successfully fabricated into fibers and applied in various fields. [100] Based on their occurrence, an extensive number of natural, synthetic, and semisynthetic polymers have been manipulated into electrospun fibers. Polymers of all forms (homopolymers, copolymers, and blends) can produce stable electrospinning solutions. Unlike copolymers, where covalent bonding is present, blended polymers are created by the physical mixing of two or more polymers. Copolymers and polymer blends are readily employed to attain hybrid physicochemical and mechanical properties, although consistency and reproducibility among batches to produce homogeneous fibers of the desired morphology will require optimization of the solution and processing parameters.

Table 3. Overview of recent studies utilizing green, environmentally safe, and biorenewable solvents for electrospinning.
Small Molecules
Chain entanglement can control molecular motion and disrupt the free movement of molecular segments, thus influencing a polymer's rheological, morphological, and mechanical properties. Increasing the degree of chain entanglement can reduce the effect of Rayleigh instabilities and maintain a stable jet. [105] Thus, under the appropriate conditions, small molecules that can self-assemble in the presence of the appropriate solution conditions (for instance, through anionic or non-anionic noncovalent bonding) may attain sufficient chain entanglement to be electrospun into fibers. In addition, the self-assembled structures of molecules can be stable in solutions or melts when adequate intramolecular interactions form. Among the small molecules successfully electrospun are phospholipid amphiphiles, monopeptides, dipeptides, tetraphenylporphyrin compounds, and cyclodextrins. [106][107][108][109] In instances where a small molecule cannot produce stable intramolecular interactions to obtain the required chain entanglement, carrier polymers may be incorporated into the polymer mixture and subsequently removed through postfabrication strategies (e.g., solvent or heat treatment). Another way that small molecules can be manipulated into fibers is through in situ polymerization approaches, such as photopolymerization. [110]
Composite Materials
Many state-of-the-art fibrous materials are forged from composite polymer blends based on sol-gel chemistry principles. While polymer-polymer composites bring together the distinct properties of polymers with differing physicochemical characteristics (such as adjustable biodegradability and biocompatibility alongside mechanical stability), incorporating colloids in a polymeric solution can be implemented to immobilize nanomaterials within the fiber configuration. Colloids are good examples of polymer-particle composite electrospun structures that rely on the particles' aggregation state within a solution during electrospinning. A stable jet can be maintained when sufficient polymer entanglement and particle distribution are present, allowing composite fibers to form. [58] To produce electrospinnable solutions consisting of polymer-colloid systems, in general, a less viscous solution of lower concentration and higher molecular weight, along with a compatible solvent system of lower conductivity, is required to account for the addition of the desired compound. Typically, the material is first dispersed in a polymer-compatible solvent and homogenized through ultrasonication before being introduced into the polymeric mixture and vigorously stirred. The morphology of the composite fibers will be affected by the critical value of the average particle diameter and its impact on the polymer solution's properties, such as conductivity and viscosity. Through this process, a variety of materials and substances can be successfully incorporated into a fiber configuration, including carbon, organic and inorganic 0D, 1D, 2D, and 3D nanostructured materials, as well as pharmaceutical compounds. Table 2 provides a comprehensive list of the parameters that need to be met for consistent electrospinning and homogeneous fiber formation.
Although significant efforts have been directed towards achieving intricate fibrous membrane properties by manipulating the produced membranes during electrospinning (e.g., a coagulation or oxidizing bath collector) or post-fabrication (physically or chemically), including surface modification (e.g., grafting), in situ polymerization, plasma treatment, carbonization, physical vapor deposition, sputter coating, and electrospraying, among others, the subsequent sections of this review will exclusively focus on apparatus-specific technological advancements.
Predictive Modeling
As a critical design tool, modeling and simulation (M&S) has played a tremendous role in understanding the complex interdependent events that collectively make electrospinning feasible. Prediction models, where experimental work falls short, can provide important insights into the underlying processes and reduce unnecessary trial-and-error experimentation, thus saving resources (e.g., significantly reducing the use of solvents). Today's understanding of the polymer solution properties, the external forces affecting the electrospinning process, and the formation stages (Taylor cone, jetting, elongation, jet instability, and solidification) would not have been possible without the collective efforts of several research groups providing these mathematical models. [111] In recent years, M&S has focused on improving the consistency of the process and the likelihood of attaining invariable fiber morphological traits, as well as on understanding the electric field intensity and distribution based on the spinneret configuration (e.g., needleless cylindrical systems) and collector shape. [112] A technological constraint that continues to exist today is having physical control over the fiber diameter and attaining a low standard deviation. Although empirical findings can determine, to a great extent, variations in fiber diameter based on solution composition and electrospinning properties, to date, there is no library encompassing the extensive experimental work that has been carried out in the field over the years. Taking advantage of the vast number of trial-and-error studies conducted, M&S could ultimately provide an accurate, predictive tool utilizing a verified model that considers the wide range of experimental parameters.
The response surface methodology (RSM) is a statistical polynomial method that explores the relationships between several explanatory and response variables to demonstrate and analyze existing relationships. RSM aims to optimize the response of output variables by influencing the responses of several independent input variables. [113] RSM is often used alongside machine learning regression (MLR), a method used to investigate the relationship between independent variables and a dependent variable or outcome. Both RSM and MLR can be valuable predictive tools, reducing the need for unnecessary experimentation when an optimal set of parameters can be computed with a high degree of confidence. [114] RSM has been extensively studied with regard to obtaining consistent fiber diameters, [115] tunable fiber orientation, [116] pore size and fiber quality, [117] and determining the number of beads and bead size. [114] Interpolation machine learning models, such as Kriging, [118] artificial neural networks (ANNs), [119] and grey-correlation analysis (Grey theory), [120] are powerful tools for understanding unknown nonlinear processes. These methods have provided insightful studies focused on determining and analyzing fiber diameter. [121] Theoretical modeling, along with experimentation, can be vital in better understanding the variant processes described in the following section, and by doing so, can significantly improve reproducibility and fiber output.
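As a minimal illustration of an RSM-style model, the Python sketch below fits a second-order polynomial response surface to a small set of entirely hypothetical screening runs (voltage, concentration, and flow rate versus mean fiber diameter) and uses it to predict an untested setting. Real studies would rely on designed experiments and proper validation; all numbers here are invented for demonstration.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical screening runs: [voltage (kV), concentration (wt%), flow rate (mL/h)]
X = np.array([
    [10, 8, 0.5], [15, 8, 0.5], [20, 8, 0.5],
    [10, 12, 1.0], [15, 12, 1.0], [20, 12, 1.0],
    [10, 16, 1.5], [15, 16, 1.5], [20, 16, 1.5],
    [12, 10, 0.8], [18, 14, 1.2], [14, 12, 0.6],
])
# Hypothetical measured mean fiber diameters (nm) for each run
y = np.array([310, 280, 265, 420, 390, 370, 560, 520, 495, 350, 450, 365])

# Second-order polynomial response surface fitted by ordinary least squares
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(X, y)

# Predict the expected mean diameter at an untested operating point
print(surface.predict([[18, 10, 0.8]]))  # predicted diameter in nm
```

The same pipeline structure can be swapped for an interpolation model (e.g., Kriging or an ANN) when the response is expected to be strongly nonlinear.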
Classification
The electrospinning techniques introduced in this review are categorized based on spinneret configuration into two groups: needle-based and needleless. Six needle-based electrospinning techniques (mono-axial, co-axial, tri-axial, centrifugal, 3D, and handheld electrospinning) and five distinct needleless high-throughput technologies (roller, bubble, corona, wire, and high-speed electrospinning) were identified and are summarized in Table 4 below.
Mono-Axial
Introduced by Cooley and Morton in 1902, mono-axial is the first and most commonly used electrospinning method that evolved from electrospraying. [26,27] As previously described, a conventional mono-axial setup consists of a high-voltage power supply, a syringe container with a single metallic hollow capillary (blunt metallic needle), and a counter electrode collector placed at a specific distance from the oppositely-charged needle, horizontally or vertically. [103] A schematic diagram of the mono-axial electrospinning method can be seen in Figure 2a. First, a polymer solution of predetermined composition is loaded into a syringe and withdrawn at a controlled rate using a syringe pump, producing a liquid hemisphere droplet at the tip of a blunt metallic needle (the spinneret). The electrostatic charges build up at the surface of the liquid droplet due to the high voltage applied to the metallic needle. When the electric field exceeds a specific value, the electrostatic forces overcome the surface tension of the polymer solution or melt, instigating Taylor cone formation from the apex of the liquid droplet. Jetting occurs due to the electrohydrodynamic stresses present in the travel path (referred to as working distance), the linear region between the spinneret and the collector. As the jet expands toward the collector, it becomes thinner, resulting in the rapid evaporation of the solvent, leaving behind solid polymer fibers to be deposited on the collector. [53] Variations of the mono-axial electrospinning setup include linear or circular motion multi-spinneret systems consisting of multiple mono-axial needles. [122] This approach has been reported to be ineffective for the high-throughput production of NFs due to electrostatic interactions between nearby needles and needle clogging, although it is still employed by some high-throughput electrospinning systems today. The most important advantages and disadvantages of mono-axial electrospinning can be found in Table 5.
To some extent, the limitation that mono-axial electrospinning can only produce fibers from a single solution can be circumvented via sequential electrospinning, co-electrospinning, or electrospinning-co-electrospraying. Sequential electrospinning refers to electrospinning solution A for a predetermined amount of time, then switching to solution B and continuing the electrospinning process onto the same fiber deposit, and so on. Co-electrospinning/electrospraying, or concurrent electrospinning and electrospraying, uses a rotating mandrel as the collector, with two spinnerets fed by two distinct reservoirs placed antiparallel, or vertically and horizontally, to one another to electrospin/spray simultaneously onto the same collector. This simple tweak of the mono-axial electrospinning process can enhance the complexity of the attained electrospun mats, producing layer-by-layer or mixed membranes that combine properties derived from two or more polymer solutions. [123,124]
Table 5. Advantages and disadvantages of mono-axial electrospinning.
Advantages:
1. Irrefutably the most well-studied electrospinning method; it can be used to assess the spinnability of new materials and complex composites, or to optimize the solution, process, and ambient parameters before production on a large scale.
2. Reproducible fine fibers can be obtained in the lower range of the nanoscale. In addition, distinct micromorphologies (such as randomly oriented or aligned fibers) can be obtained by adjusting the solution, process, and ambient parameters, the needle's inner diameter (e.g., gauge), and the type of collector (e.g., flat, drum, or liquid bath).
3. Multineedle electrospinning apparatuses capable of facilitating high-throughput production are currently on the market.
Disadvantages:
1. Limited production capacity. The yield of dry solid fibers via mono-axial electrospinning is 0.01-0.3 g h−1, making it suitable only for laboratory use or projects requiring small fiber outputs, e.g., sensor electronics, where a thin layer of NFs can be used as an interface material. [125]
2. The NFs present a simple structure, with a circular cross-section and a smooth surface. When applied to drug delivery platforms, the lack of a complex fiber structure encourages an initial burst release of the incorporated compound. Although solvent-drug and polymer-drug compatibilities can be employed to control the drug release rate, mono-axially produced NFs perform poorly in sustained-release applications. [4]
3. The fabricated scaffolds present a 2D network of small-diameter pores and high pore interconnectivity. The fibrous membranes become too compact under prolonged spinning periods, often performed to attain mechanical stability. Although the small-pore-size constraints associated with this method can, in some instances, be overcome by post-fabrication processing capable of widening the pores (such as cryogenic electrospinning [126] and gas-foaming [127]), in general, the pores produced via this technique are too small for the penetration of large particles and the majority of mammalian cells. [128]
4. Multineedle electrospinning apparatuses are challenging to operate and inconsistent due to electrostatic interactions between nearby needles and needle clogging.
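To put the throughput figures quoted at various points in this review into perspective, the short script below compares how long each production rate would take to yield one kilogram of dry fiber. The rates come from the text (0.01-0.3 g h−1 for mono-axial electrospinning, 7.5 g h−1 for a single bubble in bubble electrospinning, and 120 g h−1 for the centrifugal configuration of Figure 7c); the one-kilogram target is an arbitrary illustrative batch size.

```python
# Rough production-time comparison for throughput figures quoted in this review.
# The rates come from the text; the 1 kg batch size is an arbitrary illustration.

RATES_G_PER_H = {
    "mono-axial (lower bound)": 0.01,
    "mono-axial (upper bound)": 0.3,
    "bubble (single bubble)": 7.5,
    "centrifugal (Figure 7c)": 120.0,
}

TARGET_G = 1000.0  # 1 kg of dry fiber

for method, rate in RATES_G_PER_H.items():
    hours = TARGET_G / rate
    print(f"{method:26s} {rate:7.2f} g/h -> {hours:10.1f} h ({hours / 24:8.1f} days)")
```

At the lower mono-axial bound, one kilogram would take more than a decade of continuous spinning, which is the quantitative reason the method remains confined to laboratory-scale outputs.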
Co-Axial Electrospinning
Co-axial electrospinning is a variation of the conventional electrospinning method invented by Sun et al. in 2003. [37] It enables two different solutions, fed from independent reservoirs into a co-axial needle, to form single composite fibers that present a core/shell morphology. [129] The co-axial spinneret consists of a double capillary compartment arranged concentrically, with the inner needle fitted within the outer needle. The independent solutions travel to the orifice of the co-axial needle from separate pumps, where the flow rates are adjusted accordingly. The inner capillary carries the core solution, while the outer capillary carries the shell polymer solution. [130] At the orifice, a compound Taylor cone forms as the shell polymer solution entraps the core fluid and is subjected to an applied electric field, conceptually similar to conventional electrospinning. [59] After the solvents evaporate, a heterogeneous but continuous fiber composed of the core and shell constituents is collected. [131] The basic concept of co-axial electrospinning is illustrated in Figure 5, and the limitations and advantages of the technology are summarized in Table 6.
The interactions that govern the properties of the resulting core/shell fibers are determined by the degree of rheological, physical, and chemical dissimilarity between the two solutions. [132] However, a uniformly assembled core/shell fiber can only form if a stable Taylor cone is maintained. Processing parameters related to co-axial electrospinning have been reviewed in the literature, [133,134] with the studies agreeing that the complexity of co-axial electrospinning originates from the difficulty of maintaining a stable Taylor cone. To induce a stable Taylor cone, the process parameters should be such that 1) an electrospinnable shell solution is used; 2) the shell solution viscosity is higher than the core solution viscosity, so that the viscous stress between the core and shell solutions overcomes the interfacial tension between them; [135] 3) a low vapor pressure solvent is used (as fast evaporation may destabilize the Taylor cone); and 4) the conductivity of the shell solution is greater than that of the core solution, to inhibit core/shell structural discontinuities induced by the rapid elongation of the core polymer. [132] Co-axial electrospinning is an advantageous method because it can produce fibers with novel structures (core-sheath, hollow, and nanoparticle-decorated; Figure 5d) out of highly diverse polymer pairs, with each component maintaining its separate material identity. [133] By exploiting this feature, sophisticated pairs of materials can unify their properties into a single composite fiber. Highly unstable materials, such as enzymes, growth factors, and rapidly degradable compounds that would otherwise be quickly broken down within an intricate niche, can be preserved by the sheath material. [136] For this reason, the properties of polymers can be manipulated while employing the co-axial electrospinning technique, which is of interest in the biomedical sector because it can be used to develop biocompatible and mechanically stable materials. [75] Co-axially electrospun fibers are widely employed to develop drug delivery systems that attain a tailored substance release. Through this process, nanofibrous scaffolds with properties superior to those of monolithic fibers, including a hydrophilic surface around a hydrophobic core, adjustable mechanical properties, and the controlled release of defined concentrations of active pharmaceutical compounds or nanomaterials, can be fabricated. [132]
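The four Taylor-cone stability criteria listed above lend themselves to a simple pre-screening routine. The sketch below is a minimal, hypothetical checklist; the property names, units, and the vapor-pressure cut-off are illustrative assumptions of ours, not published thresholds, so it should be read as a way of flagging obviously incompatible solution pairs rather than a validated model.

```python
from dataclasses import dataclass

@dataclass
class Solution:
    """Bulk properties of a spinning solution (illustrative units)."""
    viscosity_pa_s: float        # shear viscosity
    conductivity_us_cm: float    # electrical conductivity
    vapor_pressure_kpa: float    # solvent vapor pressure at 25 degrees C
    electrospinnable: bool       # has the solution been spun mono-axially?

def coaxial_checklist(core: Solution, shell: Solution,
                      max_vp_kpa: float = 10.0) -> list:
    """Flag violations of the four Taylor-cone stability heuristics.

    The heuristics mirror the text above; the vapor-pressure cut-off
    is an arbitrary placeholder, not a validated threshold.
    """
    issues = []
    if not shell.electrospinnable:
        issues.append("shell solution should itself be electrospinnable")
    if shell.viscosity_pa_s <= core.viscosity_pa_s:
        issues.append("shell viscosity should exceed core viscosity")
    if shell.vapor_pressure_kpa > max_vp_kpa:
        issues.append("shell solvent evaporates too fast (may destabilize the cone)")
    if shell.conductivity_us_cm <= core.conductivity_us_cm:
        issues.append("shell conductivity should exceed core conductivity")
    return issues

# Example with made-up numbers for a plausible shell/core pair.
core = Solution(0.4, 50.0, 3.0, electrospinnable=False)
shell = Solution(1.2, 120.0, 5.0, electrospinnable=True)
print(coaxial_checklist(core, shell) or "no obvious red flags")
```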
Tri-Axial and Multi-Axial Electrospinning
As the name indicates, the tri-axial electrospinning process uses a tri-axial spinneret made of three concentric needles capable of simultaneously infusing up to three different materials. As with co-axial, tri-axial electrospinning belongs to the multi-axial electrospinning family, with reported variations including quadraxial and multi-nozzle setups. Figure 6a shows a typical layout of a tri-axial electrospinning setup, which consists of four modules: three individual solution pumps, a high-voltage power supply, a grounded or negatively charged collector, and a spinneret comprised of three concentric needles. Three pumps are used at adjustable feeding rates to drive up to three individual working solutions, referred to as outer, middle, and inner. As with all the multi-axial technologies, this method can also be employed to electrospin materials that are not electrospinnable on their own. The simultaneous feeding of the three solutions forms a composite Taylor cone when an appropriate voltage is applied to the system. Ultimately, trilayer-structured composite fibers are deposited on the collector. [131] Foreseeably, all the process parameters that need to work synergetically to obtain a stable Taylor cone in co-axial electrospinning apply to tri-axial electrospinning, but at an even greater complexity. For this reason, since multi-axial fibers were first reported in 2009 by Kalra and co-authors, [40] only a small fraction of electrospinning research has focused on multi-axial setups, with only 46 articles published since that date (based on a Scopus search for "triaxial electrospinning" OR "tri-axial electrospinning" OR "multi-axial electrospinning", limited to articles). Figure 6b shows a TEM cross-section of the trilayer structure. The trilayer structure adds extra complexity to the properties of the composite fibers due to the possibility of introducing different functionalities and compositions through the different layers, finding applications in a broad range of fields and, most significantly, in drug delivery. [147] The trilayer structure can function as a single carrier of multiple substances, adding an extra layer of protection against polymer degradation due to external stimuli. [121] As shown in Figure 6b,c, taking advantage of this concept, composite polymer NFs containing different drugs, or variant concentrations of the same drug, can be incorporated within the three-layer format. In such a design, the drug concentration follows a gradient distribution from the inner core, containing the highest concentration, to the outer shell, containing the lowest. Furthermore, under the premise of Fick's law of free diffusion, the release of drug loaded in the inner core is further retarded, as it must first diffuse through the intermediate layer before reaching the sheath; the result is an extended release that is programmable based on the chemistry of the sheath and the diffusivity of the intermediate layer. [148] Table 7 summarizes the advantages and disadvantages of the process.
Figure 5. Co-axial electrospinning. c) Schematic representation of the charges forming the co-axial Taylor cone: i) surface charges develop around the surface of the shell solution; ii) a viscous electrified stress deforms the droplet; iii) a stable core-sheath jet develops.
d) SEM images of i) core/shell, [138] ii) hollow, [68] and iii) nanowire-in-microtube structured fibers. [139] Copyrights: (b) Reproduced with permission. [137] Copyright 2017, Elsevier; (d) (i) Reproduced with permission. [138] Copyright 2019, American Chemical Society; (d) (ii) Reproduced with permission. [68] Copyright 2004, American Chemical Society; (d) (iii) Reproduced with permission. [139] Copyright 2010, American Chemical Society.
Table 6. Advantages and disadvantages of co-axial electrospinning.
Advantages:
1. Co-axial electrospinning can form novel core/shell fiber structures in which the activity of the core compound is protected from the external environment by the sheath material. Core/shell fibers have applications in nanocatalysis, [140] fiber-reinforced composites, [141] smart textiles, [142] energy storage, [143] and filtration, [110] but predominantly in biomedical applications such as tissue engineering, drug delivery, and antimicrobial surfaces. [4,75]
2. Hollow NFs can be formed by selectively removing the core material (e.g., chemically or thermally) from the core/shell structure post-fabrication. [140,144]
3. Core/shell drug-loaded fibers can retard the release kinetics of a substance, preventing the initial burst release commonly associated with monolithic fibers. This way, different controlled-release drug delivery systems requiring substantially smaller concentrations of a given substance can be attained. Although the core/shell chemistry and surface properties can be modified depending on the desired release mode, in general, the drug loaded into the core compartment is released by permeation through the outer shell of the polymer fiber and by degradation of the shell. [145]
4. This process makes it feasible to electrospin materials that are not electrospinnable per se due to their chemistry (such as oligomers) by accommodating them within the fiber's core, provided the core and shell solutions are sufficiently compatible.
Disadvantages:
1. The process is complex. Co-axial electrospinning requires a specialized spinneret consisting of two concentrically aligned needles that dispense two distinct solutions through two individual syringe pumps. This increases the complexity of appropriately adjusting the process parameters. Additionally, co-axial needles are expensive to purchase, while cleaning procedures for clogged needles can be time-consuming and arduous, as needles are not discarded as often, especially during the early stages of evaluating the compatibility of the core and shell solutions.
2. The process is difficult to implement. The core and shell solutions, co-electrospun through a single orifice, require good compatibility and similar physicochemical properties to prevent separation and attain a homogeneous core/shell cross-section.
3. A balance between the flow rates of the two solutions is required to obtain a homogeneous distribution of the shell component within the core/shell structure. However, achieving this balance can be difficult, as it requires tuning of the interfacial tension, viscosity, solvent volatility, and conductivity between the two independent solutions to ensure comparable flow rates. Differences in the extrusion rate will result in inhomogeneous compound fibers; for example, a low shell flow rate may disrupt fiber formation, whereas a higher flow rate may produce a fragmented sheath structure. [146]
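Disadvantage 3 above can be made more concrete with a simple mass balance. If one assumes, as a rough idealization, that core and shell travel at the same axial speed in the compound jet, the core-to-fiber radius ratio follows directly from the two volumetric flow rates. The sketch below encodes this back-of-the-envelope relation; the equal-velocity assumption and the example flow rates are ours, not taken from the cited studies.

```python
import math

def core_radius_fraction(q_core: float, q_shell: float) -> float:
    """Idealized core/fiber radius ratio from volumetric flow rates.

    Assumes core and shell move at the same axial velocity, so the
    cross-sectional areas scale with the flow rates:
        pi*r^2 / (pi*R^2) = Qc / (Qc + Qs)  ->  r/R = sqrt(Qc / (Qc + Qs)).
    A rough screening estimate only; it ignores die swell, solvent
    loss, and any velocity slip between the two fluids.
    """
    return math.sqrt(q_core / (q_core + q_shell))

# Example: 0.2 mL/h core inside a 1.0 mL/h shell (illustrative values).
print(f"r/R = {core_radius_fraction(0.2, 1.0):.2f}")  # about 0.41
```

Even this crude estimate shows why small flow-rate drifts matter: the shell thickness responds sublinearly to the flow-rate ratio, so compensating for an observed thin sheath requires a disproportionately large flow-rate correction.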
Centrifugal Electrospinning
Centrifugal, or rotary jet, electrospinning (CES) is a modified technique that combines electrospinning and centrifugal spinning principles (Figure 7 and Table 8). In traditional centrifugal spinning, spinning is initiated by the centrifugal force acting on the jet, which is influenced by the mass of the polymeric solution, the angular velocity, and the radius of the centrifugal disc (the distance between the spinneret and the collector). [153] The first CES apparatus was created by Andrady et al. [38] in 2005. A CES apparatus consists of a rotary feeding plate (spinneret), a high-voltage power supply, a feeding channel, a motor, and a collector. The key feature of this technology is the use of a high-speed motor to rotate the spinneret and, in some instances, the collector. Following the same electrospinning principles, a high voltage is applied between the rotary feeding plate and the collector. The spinneret consists of needles evenly distributed around the edges of a disk, although needleless rotating spinnerets have also been reported in the literature. [154] The collector can be a cylindrical stationary plate, where fibers are deposited horizontally in a downward or upward motion, or a ring (or multiple-pole, circularly-arranged metal strips/wires) collector surrounding the spinneret, where fibers are collected either in a static or a motion mode. As the polymer is fed into the spinneret, the spinneret's rotation speed must be appropriately adjusted to allow Taylor cones to form at the end of each needle as the solution is evenly extruded and electrified. The synergetic effect of the centrifugal and electrostatic forces governs Taylor cone formation, resulting in higher fiber production rates than the conventional electrospinning method while requiring a lower working voltage or rotating velocity than either individual technique.
When the combined forces overcome the polymer droplet's surface tension and viscous resistance, jetting initiates and ultrafine NFs form. The facile mechanical rotation and lower voltage requirements make CES the best-reported technique for achieving highly aligned NFs.
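In compact form, the balance just described reduces to a Newton's-second-law statement over the four forces that the model discussed next makes explicit. The notation below is generic shorthand of ours (m, mass of a jet element; v, its velocity), intended only as a schematic statement of the balance, not a reproduction of the published equations.

```latex
% Schematic force balance on a jet element during CES (generic notation):
m \frac{d\mathbf{v}}{dt}
  = \mathbf{F}_{\mathrm{cen}} + \mathbf{F}_{\mathrm{elec}}
  + \mathbf{F}_{\mathrm{st}} + \mathbf{F}_{\mathrm{visc}},
\qquad
\lVert \mathbf{F}_{\mathrm{cen}} + \mathbf{F}_{\mathrm{elec}} \rVert
  > \lVert \mathbf{F}_{\mathrm{st}} + \mathbf{F}_{\mathrm{visc}} \rVert
  \quad \text{(jet initiation)}
```

Here the centrifugal and electrostatic terms drive the jet, while the surface tension and viscous terms resist it, matching the initiation condition stated above.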
Recently, Norzain and Li [153] proposed a mathematical model based on Newton's second law that takes into account the several forces the polymeric jet is subjected to during CES: the centrifugal, electrostatic, surface tension, and viscous forces. As reported in two publications, Erickson et al. have developed highly uniform (based on the fiber diameter standard deviation), uniaxially aligned, chitosan (CS)-PCL and hyaluronic acid-coated NFs, illustrating the importance of fiber orientation in influencing tumor cell motility and tissue topography. [155,156] Works by Wang et al. [157,158] explored the effects of dual-rotation CES, where both the spinneret and the collector rotate in the same, counter, or multidirectional orientation, assessing a range of polymers (PVP, polystyrene, PCL, and thermoplastic polyurethane (TPU)) toward the development of complex drug release matrices based on the fibers' morphological properties. Yanilmaz and Zhang [159] used this technique to develop polyacrylonitrile/polymethylmethacrylate (PAN/PMMA) carbonized NFs as a separator material for Li-ion batteries. The authors reported that, compared to microporous polyolefin membranes, the centrifugally electrospun PMMA/PAN membranes presented better ionic conductivity, a higher electrochemical oxidation limit, and lower interfacial resistance in contact with lithium.
Figure 6. The tri-axial electrospinning process. a) Tri-axial spinneret: i) illustrative diagram of the setup; ii) SEM image of a trilayer structure. Adapted from ref. [147]. b) Transmission electron microscopy (TEM) cross-section depiction of a tri-axial fiber loaded with two different substances, consisting of a PVP core loaded with Keyacid Blue (blue particles), a PCL intermediate layer, and a PCL outer layer loaded with Keyacid Uranine (yellow particles). Adapted from ref. [149]. c) Schematic depiction of a dual drug release system of the same substance. This system consists of a burst release (42% of the loaded drug within 2 h) of ketoprofen through a water-soluble outer sheath (PVP) and the subsequent sustained release (90% of the loaded drug within 60 h) of a ketoprofen-loaded CA core, its release retarded by an intermediate layer of blank CA. Adapted from ref. [150]. Abbreviations: PVP, polyvinylpyrrolidone; PCL, polycaprolactone; CA, cellulose acetate. Copyrights: (a) Adapted with permission. [147] Copyright 2015, American Chemical Society; (b) Adapted with permission. [149] Copyright 2013, American Chemical Society; (c) Adapted with permission. [150] Copyright 2020, Elsevier.
Table 7. Advantages and disadvantages of tri-axial electrospinning.
Advantages:
1. Tri-axial electrospinning has distinct advantages over other electrospinning methods due to its ability to form complex multilayer nanostructures. By alternating the physicochemical properties of each layer, this methodology finds applications in tissue engineering, where mechanically durable synthetic materials can be integrated within the core structures, allowing naturally derived materials, which may lack mechanical stability, to be included in the outer layer, enhancing, for instance, cell adherence and proliferation. [147,151]
2. Tri-axial electrospinning can overcome problems associated with limited drug solubility. This method can be used to load sensitive substances, such as small molecules, proteins, and growth factors, that may present inadequate drug release kinetics and be sensitive to pH fluctuations and the presence of harsh media. In such instances, tri-axial fibers could allow for the release of the desired compound at the appropriate tissue site (e.g., a tumor). [148]
3. Tri-axial electrospinning can create tunable drug release kinetics and transport mechanisms, such as multistep diffusion drug delivery systems. Tri-axial fibers can incorporate multiple single-substance drug release profiles or load a different substance in each compartment. [149,150] This way, for instance, combining an initial burst release (e.g., immediate-release and first-order systems) with a controlled-release profile (e.g., zero-order release) is feasible.
Disadvantages:
1. The design of the concentric spinneret plays an essential role in the success of the process: variations in the intraneedle spacing and the inner diameter of the concentrically aligned needles can positively or negatively affect the distribution of each material within the compound fiber during Taylor cone formation and jetting. [152]
2. The quality of the spinneret. A good tri-axial spinneret must be durable enough to withstand harsh washes and erosion from solvents in order to deliver reproducible results. [53] Furthermore, the electrical distribution through the outer needle material must be sufficient and stable enough to electrify the composite fluid at the point of eruption.
3. General difficulties of implementation. It can be challenging and, in many instances, improbable to attain three compatible spinning solutions with similar physicochemical properties that prevent separation. Even when that is feasible, it is exceedingly difficult to synchronize the inner, intermediate, and outer flow rates to form a well-distributed compound Taylor cone and to keep the concentric structure continuous throughout the entire process, primarily due to gravity and surface tension effects.
Figure 7. Centrifugal electrospinning. a) Schematic of the CES process. b) Diagram depicting the electric repulsion and centrifugal forces that work synergetically to overcome the solution's surface tension at the spinneret's surface and induce fiber formation. A rotating disk attached to a motor, comprising multiple pits (spinneret exits), discharges polymeric solution at a controlled rate via a syringe pump system. An applied voltage and the rotational velocity of the disk facilitate the formation of multiple Taylor cones, which, via a high frequency of rotation, expand to form ultrathin fibers. Reproduced from ref. [164]. c) Development of aligned multicompartment composite microfibers at a 120 g h−1 production rate. On the left is a schematic of the CES setup consisting of a double solution reservoir at the spinneret and an iron wire ring collector. On the right is a fluorescence image indicating the successful production of blended aligned fiber configurations. Abbreviations: Ω, angular velocity; F_cen, centrifugal force; F_rep, electrostatic repulsion; F_att, attraction toward the collector; F_air, guiding air. Reproduced from ref. [158]. Copyrights: (b) Reproduced with permission. [164] Copyright 2020, American Chemical Society; (c) Adapted with permission. [158] Copyright 2018, Springer Nature.
Table 8. Advantages and disadvantages of centrifugal electrospinning.
Advantages:
1. Ultrafine alignment at the micro- and nanoscale can be attained much more straightforwardly than through conventional electrospinning due to the combined effect of electrostatic and centrifugal forces. Furthermore, the process generally requires a lower jet-initiation voltage and rotating speed, which can improve operational safety by reducing injuries associated with high voltage and high-speed centrifugation. [160]
2. CES primarily produces loosely packed microfibrous structures that display fiber directionality with larger mean pore sizes. [165] This can find applications in tissue engineering and scaffold development.
3. CES can be used to electrospin higher-concentration solutions and polymer melts through the additional centrifugal force applied to the system, assisting fluid transport where jet initiation may not otherwise be feasible due to increased viscosity.
Disadvantages:
1. CES is a relatively new method, with approximately a hundred articles published. Due to the integration of a centrifuge compartment, the design and development of CES equipment (especially the spinneret and collector configuration) are more complex. As such, additional process parameters must be investigated and optimized during CES for successful fiber production. [141]
2. One of this method's limitations is the difficulty of incorporating active substances due to the absence of a complex fiber hierarchy. [166] Early reports of co-axial CES fibers have recently been published, [163] but further research is required.
3. The majority of CES research has concentrated on spinneret configurations using mono-axial needle or needle-like array designs. Although near-electric-field effects are not considered an issue, owing to the centrifugal forces and the ability to distribute the individual nozzles in a 360° format, nozzle clogging and off-target fiber jetting can still occur (especially toward nearby needles), making the process laborious to set up and clean.
Among the notable attempts to improve the process parameters, Valipouri et al. [160] developed an air-sealed setup that improved the stability of the jet, a commonly reported issue of CES. Kancheva et al. [161] achieved radial deposition of highly aligned fibers (fiber diameter 550 ± 90 nm) and produced electrospun mats with a large area (2200 cm2) within 20 min. In this work, fiber alignment was achieved when using circularly-arranged metal strips as the collector, but not with a cylindrical collector (at a rotating speed of 1900 rpm). Chang and co-workers studied the effects of a viscoelastic jet during CES and mathematically described, through dimensionless number and group analysis, how the strong stretching force and fast extension speed obtainable during the process can significantly reduce the effect of the whipping instabilities and fabricate a series of uniaxially aligned polymeric NFs with improved physical properties, such as high modulus, hardness, crystallinity, and good molecular orientation. [162] For the first time, Gu et al. [163] recently addressed the development of complex NF structures via the CES technique by integrating CES with co-axial electrospinning to produce core-sheath structures out of poly(vinyl alcohol) (PVA) (core) loaded with paclitaxel and poly(l-lactic acid) (PLLA) (shell), with a drug release profile controllable by adjusting the thickness of the sheath material.
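A common thread running through the tri-axial gradient-release discussion and the sheath-thickness control reported by Gu et al. is the diffusion lag introduced by an outer barrier layer. The classical Daynes-Barrer membrane-permeation result, t_lag = L²/(6D), gives a feel for the scaling; the sketch below evaluates it for a few sheath thicknesses using a placeholder diffusivity at the slow end for a drug in a dense glassy polymer. Neither the diffusivity nor the thicknesses are values from the cited studies.

```python
# Diffusion time lag through a barrier (sheath) layer: t_lag = L^2 / (6 D),
# the classical Daynes-Barrer membrane-permeation result. The diffusivity
# and thicknesses below are illustrative placeholders, not values from the
# cited studies.

D = 1e-18  # m^2/s, placeholder for a small drug in a dense glassy polymer

for L_nm in (50, 100, 200, 500):
    L = L_nm * 1e-9                 # sheath thickness in meters
    t_lag_s = L**2 / (6 * D)
    print(f"sheath {L_nm:4d} nm -> lag of roughly {t_lag_s / 3600:6.2f} h")
```

The quadratic dependence on thickness is the useful design lever: doubling the sheath thickness quadruples the lag, which is consistent with release profiles being tuned via sheath thickness rather than via the (harder to change) diffusivity.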
3D Electrospinning
A significant constraint of conventionally produced electrospun membranes is their inherent 2D structure. This hinders the ability to develop highly porous 3D structures, which can benefit fields such as complex 3D tissue models with improved cell infiltration, and wound healing [143] (Figure 8 and Table 9). Initially, significant research focused on postprocessing, multilayering, and template-assisted electrospinning techniques to obtain 3D build-ups. [66] Postprocessing techniques involve producing 2D electrospun membranes and then folding, freeze-drying, or gas-foaming the structure to create a 3D version from the 2D electrospun mats. As the name suggests, multilayer electrospinning involves compiling multiple layers of sequentially electrospun or co-electrospun materials. Finally, the template-assisted method consists of electrospinning onto a sacrificial 3D template, such as a mechanical or matrix template, which is subsequently leached out (postprocessing), leaving behind a 3D fibrous structure. Nonetheless, although these approaches have seen significant recognition in the literature, they cannot be considered 3D electrospinning technologies, as they cannot directly produce 3D electrospun structures.
Two variations of conventional electrospinning, i.e., wet and cold-plate electrospinning, and one self-assembly-inspired electrospinning apparatus that integrates 3D printing and electrospinning principles to produce CAD-assisted 3D micro/nanofibrous configurations, are the only technologies, to date, capable of instantaneous one-step production of 3D electrospun structures (Figure 8a).
Table 9. Advantages and disadvantages of 3D electrospinning.
Advantages:
1. It is one of the most straightforward and advanced techniques to manufacture 3D structures with tunable morphology, pattern, and physical and chemical properties.
2. Due to non-contact operation and CAD-directed spinneret motion, 3D electrospinning is the only reported technology capable of directing the morphology of the 3D structures without requiring subsequent post-fabrication steps.
3. Waste production is reduced via 3D electrospinning, as it does not require post-fabrication procedures to obtain 3D scaffolds.
4. CPE can produce nonwoven, microporous structures with better mechanical stability than 3D electrospun structures.
Disadvantages:
1. Nano-microfiber blocks made by 3D electrospinning are soft and fluffy; they have cotton-like structures when dry but often break down upon contact with a liquid, posing an issue for anisotropic lamellar deposition.
2. Polymer systems with higher conductivity are necessary for 3D assembly, hence narrowing the class of materials that can be used.
3. Increasing the height of the constructs decreases the precision of the process, limiting upscaling.
4. 3D electrospinning methodologies such as CPE are limited to water-soluble polymers, require significant post-fabrication processing, and can only attain random 3D macro-architectures. Similarly, wet electrospinning is limited by the range of coagulation solvents available for a specific polymer. Furthermore, the depth of the bath (from the bottom, where the electrode is placed, to the bath's surface) limits the upscaling of the process and the ability to produce diverse 3D structured macromorphologies.
Yokoyama and co-authors first described the wet electrospinning technology in 2009 as a novel method capable of fabricating 3D spongiform NFs. [44] The process is conceptually similar to conventional electrospinning, the key difference being the use of a bath collector filled with a low surface tension solvent (e.g., tertiary-butyl alcohol) capable of solidifying and attracting the formed fibers [e.g., poly(glycolic acid)] toward a grounded metallic plate placed at the bottom of the bath (Figure 8b). This process produces nonwoven 3D structures that are relatively short, with a low bulk density and high porosity. Following the same principles, Ghorbani et al. [167] produced porous 3D PLA scaffolds in a sodium hydroxide (NaOH) bath for wound healing applications. Zhang et al. [150] employed this technology to produce Ag nanoparticle-loaded Rana chensinensis skin collagen (RCSC)/poly(ε-caprolactone) (PCL) scaffolds in an ethanol bath, creating 3D porous nanofibrous materials with ≈90% porosity.
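Porosity figures such as the ≈90% quoted above follow from a simple bulk-density argument: porosity = 1 − ρ_bulk/ρ_polymer. The snippet below evaluates this standard relation for illustrative densities; the numbers are placeholders, not measurements from the cited works.

```python
def porosity(bulk_density: float, polymer_density: float) -> float:
    """Porosity of a fibrous construct from its apparent (bulk) density.

    Standard relation: phi = 1 - rho_bulk / rho_polymer.
    """
    return 1.0 - bulk_density / polymer_density

# Illustrative values: a fluffy sponge with an apparent density of
# 0.12 g/cm^3 made of a polymer with a skeletal density of 1.2 g/cm^3.
print(f"porosity = {porosity(0.12, 1.2):.0%}")  # -> 90%
```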
Sheikh et al. [50] described the cold-plate electrospinning technique in 2015 when they produced 3D silk fibroin large-pore nanofibrous scaffolds (Figure 8c,d). During cold-plate electrospinning, as the name suggests, a collector plate is placed over a heat transfer pipe connected to an immersion chiller that can lower the plate temperature to −90 °C, at which ice crystals form, enhancing the conductivity and subsequently instigating the deposition of the fibers in a layer-by-layer format. In this work, silk fibroin was blended with PEO, and the scaffolds produced were subsequently freeze-dried, immersed in ethanol for crystallization, and finally immersed in deionized water to remove the carrier polymer (PEO). The 3D scaffolds improved cell infiltration in vitro (using human dermal fibroblasts and keratinocytes) compared to the NFs obtained using conventional electrospinning, due to the higher porosity and larger pore sizes attained via this methodology.
Although the above technologies can produce 3D structures through electrospinning and have gradually evolved since they were first introduced in 2005, [143] subsequent exploration [168] led to the development of 3D fibrous self-assembly via electrospinning, an exciting single-step fabrication method for producing 3D electrospun structures. 3D electrospinning is the first technology that combines electrospinning and extrusion-based 3D printing to develop CAD-assisted 3D fibrous patterns. [51] Vong et al. [51] first described this technology in 2018, demonstrating the controlled deposition of 3D build-ups by including a conductive additive (H3PO4) in the electrospinning solution. It is a non-contact printing technique suitable for fabricating complex and nonplanar surfaces. Complex electrospun 3D structures benefit from various biological, mechanical, and mass transport properties. A 3D electrospinning setup combines a high-voltage source and solution controller with a fused deposition modeling 3D printer, which provides the x-y-z motion control. The polymeric solution is fed into the moving nozzle, connected to a high voltage that allows the directed deposition of 3D structures. The guided NF assembly process forms these structures into shapes due to electrostatic induction, rapid evaporation, and polarization. [146] In follow-up work, Vong et al. [169] analyzed the mechanism behind the 3D build-up, demonstrated that the incorporation of electrodes can further enhance the shape of the produced structures at the collector's surface, and demonstrated the upscaling of the process, creating 3D macrostructures up to 5 cm in height out of polystyrene, polyacrylonitrile, and polyvinylpyrrolidone within 10 min.
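Because the spinneret motion is CAD-directed in the same way as an FDM print head, the deposition path can be expressed as ordinary G-code. The sketch below generates a toy square-perimeter path; the feed rate, dimensions, and the idea of looping the same x-y perimeter while fiber self-assembly supplies the height are illustrative assumptions of ours, not settings from the cited studies.

```python
def square_perimeter_gcode(side_mm: float, passes: int,
                           feed_mm_min: float = 600.0) -> str:
    """Emit a toy G-code path tracing a square perimeter repeatedly.

    In 3D electrospinning, the z-height is supplied by fiber
    self-assembly rather than explicit Z moves, so the same x-y loop
    is simply repeated. All parameters are illustrative placeholders.
    """
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm)]
    lines = ["G21 ; units: mm",
             "G90 ; absolute coordinates",
             f"G0 X0 Y0 F{feed_mm_min:.0f} ; move to start"]
    for _ in range(passes):
        for x, y in corners[1:] + corners[:1]:
            lines.append(f"G1 X{x:.1f} Y{y:.1f} F{feed_mm_min:.0f}")
    return "\n".join(lines)

# Example: a 20 mm square traced three times (illustrative values).
print(square_perimeter_gcode(side_mm=20.0, passes=3))
```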
Portable Electrospinning
Portable electrospinning refers to handheld and lightweight electrospinning devices designed to produce fibers on-site (Figure 9 and Table 10). The technology was inspired by wound care and management as an alternative approach to simultaneously achieving hemostasis, protecting the wound from infection, and promoting tissue regeneration. [172] The development of in situ fiber deposition onto a wound was driven by the ability of this approach to provide painless, personalized deposition of lightweight dressings directly on the injured site. [66]
Figure 9. Portable electrospinning apparatuses. a) A representative depiction of a portable electrospinning device: i) schematic diagram of its compartments; photographs of ii) jetting and iii) in situ fiber deposition onto a hand. Adapted from ref. [174]. b) A schematic diagram of a portable melt-extrusion electrospinning device. Adapted from ref. [183]. c) A photograph of the commercially available portable electrospinning device currently undergoing a clinical trial for its application in wound management (SpinCare, Nanomedic, Israel). d) A 3D-printed apparatus: i) a rendered image of the CAD design; ii) a schematic of the electrospinning assembly, consisting of the 3D-printed compartments, a 12 V battery, a high-voltage converter, conductive wires for HV output, a syringe, and a metallic needle; iii,iv) photographs of the assembled device. Adapted from ref. [182]. Copyrights: (a) Adapted with permission. [174] Copyright 2015, Royal Society of Chemistry; (b) Adapted with permission. [183] Copyright 2020, Springer Nature; (c) Reproduced from Nanomedic Technologies Ltd.; (d) Reproduced with permission. [182] Copyright 2020, Frontiers.
The initial drawback of the portable electrospinning device first designed by Sofokleous et al. in 2013 was the requirement of a cord to power the high-voltage supply, limiting its accessibility and the notion of on-site use. [173] Xu et al. were the first to resolve this issue by miniaturizing an electrospinning apparatus around an integrated battery power source, producing a device with dimensions of 10.5 × 5 × 3 cm3 weighing only ≈120 g, named the battery-operated e-spinning apparatus (BOEA). The compact structure produced fibers in a cordless, single-hand motion. [174] They were able to electrospin N-octyl-2-cyanoacrylate (a hemostatic glue) with a range of polymers: PCL, PS, PVP, PLA, and PVDF. Subsequent apparatuses inspired by these findings further miniaturized the electrospinning equipment and focused on evaluating in situ wound repair in animal models. [175-177] Several antibacterial polymer formulations have been successfully electrospun using handheld apparatuses to produce wound dressings, including PCL loaded with silver nanoparticle (AgNP)-mesoporous silica nanoparticles and asymmetrically spun iodine-loaded PVP NFs (HHE-1; handheld portable electrospinning apparatus, Qingdao Junada Technology Co., Ltd.). [177] Recently, the same device has been used to deliver active herb extracts (Lianhua Qingwen Keli) incorporated within PVP blends. [178] Dong and co-workers used a handheld electrospinning device to electrospin a PCL blend incorporating aggregation-induced emission luminogens, a newly emerged group of photosensitizers able to generate reactive oxygen species, for the treatment of multidrug-resistant bacterial infection. [179] Earlier this year, Xu et al.
[180] described for the first time the in situ electrospinning of PVA NFs incorporating bone marrow-derived stem cells (BMSCs) using a handheld apparatus for the treatment of non-healing wounds. Zhang et al. [181] developed a simple portable electrospinning device consisting of a syringe, a metallic needle, and an AA battery-powered high-voltage converter (where a 3 V battery can produce a 10 kV output) to in situ electrospin core/shell nanoparticles (NaYF4:Yb/Er@NaYF4:Nd@hypericin, 50 nm in diameter) blended with PVP dissolved in acetone (producing fibers ≈500 nm in diameter), to be used for photodynamic therapy, a type of treatment that can generate reactive oxygen species (ROS) to effectively eliminate bacteria under light irradiation.
Portable electrospinning has encouraged the establishment of an Israel-based company, Nanomedic, which has successfully commercialized a handheld electrospinning device, SpinCare. The equipment is currently undergoing a clinical trial for the external treatment of burns and wounds, and as of this year, 44 participants have enrolled. Five case studies have been made available, including the treatment of a graft donor site area and partial-thickness burns (clinical trial: NCT02997592).
Table 10. Advantages and disadvantages of portable electrospinning.
Advantages:
1. By formulating and depositing the fibers on-site, the technology can be considered more economical by limiting excess fiber deposition. It is also beneficial for unstable substances that may not survive prolonged storage periods, post-fabrication treatments, or sterilization protocols.
2. As a portable technology capable of rapidly producing lightweight dressings, it can be utilized by emergency medical services, fire and rescue services, and the military.
Disadvantages:
1. This is quite a new technology, with only a handful of patents filed and 25 research articles published to date. The equipment design is complex, particularly in regard to ensuring patient compliance and safety with regulations.
2. In general, the production of in situ electrospun fibers is challenged by poor stability during fabrication due to the inability to retain consistent spinning, the lack of an oppositely charged collector (the target is always grounded), and working voltages that do not exceed 10 kV. These issues often result in inconsistent fiber morphologies of a single material of low histocompatibility. Improvements concerning the reproducibility, quality, purity, potency, and solvent toxicity of the fibers produced are required.
3. Currently, only a limited number of materials, mostly water-soluble, have been electrospun through this process due to the limited selection of solvents and additives. It is necessary to process a wider range of naturally derived and synthetic polymers to gain a better understanding of the process parameters. Further improving the devices' interface will be required to eliminate issues with residual solvents.
Recently, Chen and co-authors [182] fabricated a 3D-printed handheld apparatus consisting of three compartments (a cover, a handle, and the main body) printed using an Objet350 Connex 3D printer. The authors made the standard template library (STL) files publicly available. Upon assembly, the handheld electrospinning device was powered by a 12 V rechargeable Li battery (acting as a voltage generator) capable of producing up to 10 kV DC high voltage. A high-voltage inverter was connected to metal shrapnel through a lead wire and was used to electrify the stainless-steel spinneret needle. The polymer solution was extruded through the syringe via a pistol-grip "gun motion" (finger extrusion) while the spinneret was held at the high static voltage. The authors used this equipment to successfully electrospin a PLA/gelatin blend, with which they assessed the in situ repair of skin defects in vivo.
Advanced Electrospinning Technologies: Needleless
Considering that the production output of needle-based electrospinning devices is commonly meager, ranging from 0.01 to 0.3 g h−1, [184] scaling up the process has been progressively studied as a suitable approach for industrializing this fabrication method. One strategy that has progressed to overcome the limitations of this process is the development of nozzle-less electrospinning setups. This can be achieved by scaling up the spinneret's structure while retaining an energetically stable and well-distributed configuration. [185] Unlike multineedle electrospinning, in which the electric field around a given needle is affected by the nearby jets, which can produce inhomogeneous fibers, free-surface electrospinning is an alternative method for the high-throughput production of fibers with no constraints of clogged needles, providing freedom over the spinneret's configuration.
In 2004, Yarin and Zussman initially described the production of free-surface NFs by placing a layer of polymer solution on top of a magnetic liquid that overlaid a permanent magnet; upon applying a high DC voltage against a vertically placed, oppositely charged counter electrode, multiple jets erupted from the perturbed surface. [19] A year later, Jirsak and co-workers patented a process in which a rotating charged electrode, immersed within a polymer solution and placed underneath a counter electrode, could fabricate NFs at an increased production rate in an upward, bottom-up motion, with the assistance of an airstream to increase the auxiliary drying efficiency of the system. [24] Lukas et al. [186] developed an electrohydrodynamic theory that describes the self-organization of electrified liquid jets on an open flat surface, based on the fact that fibers can arise during electrospinning from linear clefts even without the support of a magnetic fluid underneath. [187] The critical electric field intensity (E_c) required to produce fibers via free-surface electrospinning was described as

E_c = \sqrt[4]{\frac{4\gamma\rho g}{\epsilon_0^{2}}}

where \gamma is the surface tension of the solution (N cm−1), \rho is the density of the liquid (g cm−3), g is the gravitational acceleration (cm s−2), and \epsilon_0 is the absolute permittivity (F cm−1). During the onset of free-surface electrospinning, the electric force is essential for Taylor cone formation and subsequent jet initiation. Prior to jet growth and the corresponding bending instabilities, the initial straight segment of the jets is amplified as the Coulomb forces concentrate on the leading segments that are trying to reach the collector. [188] The ultra-slow-motion images presented in Figure 10d indicate the stages from Taylor cone formation to jet depletion, which occur within a tenth of a second. The section below discusses in detail the different forms of needleless electrospinning equipment that have been developed.
Figure 10. Free-surface electrospinning. a) Modified from ref. [191]. b) Rendered CAD model of the nozzle-free roller electrospinning setup and its variant components. Retrieved from ref. [200]. c) Schematic diagram of Taylor cone formation via free-surface electrospinning; h represents the thickness of the layer, D the diameter covered by the Taylor cone, and f the electrostatic force. Adapted from ref. [200]. d) High-speed camera images depicting jet formation: i) conical droplet on an open surface in the presence of an electric field (t = 0 s); ii) extended conical droplet (t = 33 ms); iii) Taylor cone and jetting of the droplet (t = 66 ms); iv) depletion of the droplet (t = 99 ms). Adapted from ref. [201]. e) Photographs depicting multijetting based on various spinneret configurations: i) roller, ii) coil, iii) disc, and iv) wire. (i-iii) Reproduced from ref. [197]; (iv) adapted from Nanospider (Elmarco, Ltd., Czech Republic). Copyrights: (a) Adapted with permission. [191] Copyright 2012, Hindawi; (b,c) Reproduced with permission. [200] Copyright 2021, Elsevier; (d) Adapted with permission. [201] Copyright 2012, American Chemical Society; (e) (i-iii) Reproduced with permission. [197] Copyright 2012, Taylor & Francis; (e) (iv) Reproduced from Elmarco, Ltd.
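Plugging representative numbers into the reconstructed expression for E_c gives a feel for the field strengths involved. The snippet below evaluates the formula in SI units; the water-like surface tension and density are illustrative choices of ours, not values from the cited studies.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def critical_field(gamma: float, rho: float, g: float = 9.81) -> float:
    """Critical field E_c = (4*gamma*rho*g / EPS0**2) ** 0.25, in V/m.

    gamma: surface tension (N/m); rho: liquid density (kg/m^3).
    SI evaluation of the expression reconstructed above.
    """
    return (4.0 * gamma * rho * g / EPS0**2) ** 0.25

# Illustrative, water-like solution: gamma = 0.07 N/m, rho = 1000 kg/m^3.
print(f"E_c is roughly {critical_field(0.07, 1000.0) / 1e6:.1f} MV/m")  # ~2.4 MV/m
```

Fields on the order of a few megavolts per meter are consistent with the tens of kilovolts applied over working distances of a few centimeters in the setups described below.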
Free-Surface Roller and Wire-Based Electrospinning
Roller electrospinning is the first described needleless method capable of continuous fiber production. This method was invented by Jirsak et al., who first filed a patent application in 2004 (granted in 2009, US7585437B2). [24] Needleless roller electrospinning setups consist of a roller-spinneret electrode, a grounded or oppositely-charged rotating collector, a solution tank, a motor, and a high-voltage power supply (Figure 10 and Table 11).
During roller electrospinning, a rotating cylinder electrode (roller spinneret) is partially submerged in a polymer solution bath against a biased rotating collector electrode under constant airflow. Two motors control the rotating speeds of the spinneret and collector cylinders. As the spinneret rotates, a fine layer of polymer forms at the upward-facing, non-submerged surface of the spinneret. A high voltage (generally greater than 50 kV) is then applied between the two rotating electrodes; when the voltage is sufficient, the liquid layer electrifies, inducing multiple Taylor cones to form along the surface of the rotating electrode immersed in the solution bath. [189,190] When the voltage reaches a critical value, multiple jets stretch from numerous locations to form fibers in an upward motion on a large scale. Under a strong electric field, the jets are directed and deposited on the collector's surface, which is placed at a fixed distance from the spinneret. Because of this, the roller electrospinning method is a continuous and efficient process for fabricating NFs. [191] Besides fluctuations in the conductivity of the polymer solution, [192] variations in the shape of the spinneret play a vital role in the morphology and diameter of the formed fibers. [193-196] Generally, variations of the first described roller electrospinning method differ in the architecture and geometry of the free-surface spinneret. Within the roller electrospinning derivation, a roller can be in the shape of a cylinder, disc, or ball. [197] To better control the energy distribution, polymer layer thickness, and solvent exposure time, which are essential to obtain morphologically consistent fibers, spinnerets of wire and spiral configurations have been designed. [45] These were inspired by work conducted by Zhou et al. in 2014, who designed a spinneret consisting of two metal wires aligned parallel and near each other, capable of forming compound Taylor cones out of polyacrylonitrile (PAN)/isophorone diisocyanate (IPDI), ultimately producing the first core/shell nozzle-less electrospun fibers at a high production rate. [198] At present, Nanospider (Elmarco, Ltd., Czech Republic) has developed a commercially available industrial-scale electrospinning device based upon this concept, in which a high-voltage potential (up to 80 kV) facilitates the formation of fibers out of a polymer-layered thread at a defined rate. In recent years, the device has seen great commercial success through its production lines, Infinity and Linea, with research groups using it to report high-throughput fiber production.
Table 11. Advantages and disadvantages of free-surface roller and wire-based electrospinning.
Advantages:
1. Free-surface electrospinning based on the described configurations can attain high production rates through a continuous process, making it a viable approach for industrial production. [202]
2. Increasing the polymer concentration increases productivity based on the weight of dry fibers collected. [203] Increasing the conductivity of the polymer solution has a direct effect on the number of Taylor cones forming, and thus the incorporation of salts as additives is a common practice for further increasing fiber output.
3. Higher fiber production can be achieved while bypassing issues associated with nozzle-based setups, such as clogging and neighboring-needle jet repulsion and deviation.
Disadvantages:
1. The fiber diameters produced are usually larger than those produced by conventional electrospinning, while the process requires a higher voltage for jetting. [191]
2. Low controllability. Optimizing the parameters for consistency is much more complex than in conventional electrospinning, primarily because free-surface electrospinning is guided by random Taylor cone organization across the openly exposed polymer surface, rather than by a well-controlled individual Taylor cone as in needle-based electrospinning. [202] This, in most instances, is associated with much higher solvent and polymer wastage. Further optimization of the process should focus on reducing the proportion of un-spun polymer solution.
3. Difficulties obtaining consistent fibers and advanced fiber configurations, such as multicomponent composite structures. This is primarily due to the simple design of the spinneret, problems associated with solvent evaporation, and stricter solution requirements for successful electrospinning. [203]
Recent developments of free-surface apparatuses have successfully produced binary and ternary composite fibers incorporating synthetic (PVP, polyglycerol sebacate [PGS], and PCL) and naturally derived (silk fibroin) polymers, which presented improved surface chemistry, good fibroblast adherence and proliferation in vitro, and superior mechanical properties for skin tissue engineering applications. Earlier this year, a roller electrospinning setup was used to produce 3D electrospun polyvinylidene fluoride-co-trifluoroethylene (PVDF-TrFE) fibers presenting intrinsically enhanced piezoelectric properties; integrating the high-throughput-produced NFs into a mechanical energy harvester yielded a higher instantaneous output power than similar state-of-the-art devices. [199] Although roller electrospinning presents a high-volume output and is easy to operate once the appropriate solution and electrospinning parameters have been established, it can be challenging to maintain a consistent solution concentration and viscosity. Furthermore, due to the high electric force, incomplete solidification of the fibers can allow residual solvents to be incorporated into the scaffolds, which may affect the biocompatibility of the resulting constructs; nevertheless, post-fabrication treatments may resolve this issue in most cases. In addition, a major concern is that, as the polymer solution is openly exposed to ambient conditions, highly volatile solvents may rapidly evaporate, leading to fluctuations in the conductivity and viscosity that can negatively affect fiber uniformity and the consistency between experiments. This can be partially regulated by restraining the exposure of the polymer solution to the open air, the solvent system selection, regulating the ambient conditions, and the configuration of the spinneret (e.g., using a double-motion cartridge system to deposit the polymer solution and take up the excess polymer on the way back). Thus, it is necessary to accurately tune all solution, electrospinning, and ambient parameters to achieve a consistent fiber production output.
Bubble Electrospinning
Liu et al. invented bubble electrospinning in 2007. [39] As the name suggests, this innovative method facilitates free-surface jetting out of an open polymer surface by gassing the polymer solution, causing polymer bubbles to form near the surface. The spontaneous formation of bubbles on the liquid surface reduces the surface tension of the electrospinnable solution, an advantage over other free-surface electrospinning configurations. Liu et al. showed that the process could yield ultrafine NFs at a 7.5 g h−1 production rate out of a single bubble by applying voltages ranging from 16 to 35 kV. [39] Figure 11 illustrates a typical bubble electrospinning setup, consisting of a solution reservoir with a submerged gas tube and a metal electrode fixed at the bottom of the reservoir, a gas pump, a high-voltage power supply, and a collector plate (Table 12).
Figure 11. Bubble electrospinning. a) Schematic diagram of a bubble electrospinning apparatus. b) A proposed method of producing core/shell NFs via co-axial bubble electrospinning: i) schematic of the process; ii) schematic of the mechanism, depicting hybrid polymer bubbles forming at the surface between the two individual polymer solutions at the interface; iii) TEM image of the attained core/shell PVA and nylon-6 hybrid fiber structure. Adapted with permission from ref. [204]. Copyright 2021, Springer Nature.
Table 12. Advantages and disadvantages of bubble electrospinning.
Advantages:
1. The process can attain high production rates. [39] This has been demonstrated by SNC Fibers (Stellenbosch, South Africa), which has employed bubble electrospinning for commercial production.
2. The breakage of large bubbles and the subsequent formation of daughter bubble cascades lower the surface tension that must be overcome for Taylor cone formation, thus requiring lower working voltages compared to other high-throughput methods. [211]
Disadvantages:
1. The constant evaporation of large amounts of solvent from the open surface area makes the process less safe for the operator and less environmentally friendly when harmful organic solvents are used for production.
2. The process is more susceptible to ambient conditions: the viscosity of the polymer solution, the solvent volatility, the rate of bubble formation (gas input), and the electrospinning parameters must remain at consistent levels to obtain homogeneous, reproducible fibers. These factors are affected by the pressure difference between the bubbles that have not yet reached the surface and the external environment, which directs the surface tension of the bubbles. [39]
Initially, the reservoir is filled with the polymer solution. Gas pushed from the bottom of the polymer liquid generates bubbles at the reservoir's surface: air bubbles of assorted sizes emerge from the bottom of the reservoir and rise to the surface of the aerated working solution. An electric field is applied by wiring the solution to a high voltage, causing the meniscus bubbles to rupture. [204] Upon rupture, microscopic charged droplets form at the surface, which, due to electrostatic repulsion, become finer in size and break into smaller bubbles. The force induced by the surface is much greater at a smaller bubble radius. [204] Once the critical surface tension is overcome, the microbubbles at the solution's surface become unsteady, forming individual Taylor cones. Once the electrical force overcomes the surface tension, a jet is discharged from the conically shaped microbubble toward the grounded collector.
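The note above that the surface-induced force grows as the bubble radius shrinks is the familiar Young-Laplace effect; for a gas bubble of radius r in a liquid of surface tension γ, the excess internal pressure is

```latex
% Young-Laplace excess pressure inside a gas bubble of radius r
% in a liquid of surface tension \gamma (single gas-liquid interface):
\Delta p = p_{\mathrm{in}} - p_{\mathrm{out}} = \frac{2\gamma}{r}
```

so halving the bubble radius doubles the pressure jump, consistent with the dependence of the threshold voltage on bubble size and internal gas pressure noted below.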
During bubble electrospinning, bubble collapse and the wrinkling of the liquid sheet are responsible for Taylor cone formation and jetting. [205] Building on work by Oratis et al. [206] on bubble collapse dynamics, which mathematically showed that surface tension drives bubble collapse and initiates wrinkle formation, He et al. earlier this year used this principle to evaluate the maximal wrinkle angle for bubble electrospinning at 49°-50°. [207] It is worth mentioning that the threshold voltage needed to overcome the surface tension is influenced by the size of the bubbles and the gas pressure inside them.
Bubble electrospinning has been successfully employed to electrospin a range of synthetic polymers. Li et al. fabricated polymer blends of PVA, PVP, and PAN incorporating ZrCl2 to produce high-temperature-resistant adsorption and separation membranes. [208] Liu et al. successfully produced PVDF/FeCl3·6H2O composite NFs, which were subsequently calcined to create Fe2O3 NFs for catalysis. [209] Toward naturally derived polymers, Zhao et al. successfully electrospun silk fibroin/chitosan blends via bubble electrospinning. [210] Recently, Ali et al. [204] described the production of core/shell NFs via co-axial bubble electrospinning. The authors illustrated that it is feasible to attain a composite core-sheath NF architecture via bubble electrospinning by incorporating two polymer reservoirs in a parallel configuration, as shown in Figure 11b, thus producing single-polymer and hybrid fibers at the surface during the process. The mechanism is driven by a surface-induced force and geometrical potential. The authors theoretically and experimentally described that polymers mixed in a semi-solid state during the process could form an interface in a single fiber strand. [204] This exciting approach will require further characterization and optimization to facilitate the consistent production of core/shell NFs via this process, rather than only within the fraction of fibers formed at the interface.
Among the several variations of open-liquid electrospinning technologies, Korkjas et al. [99] recently developed a needleless ultrasound-enhanced electrospinning technique (USES) to generate multilayered nanofibrous membranes. Instead of gassing, USES generates an acoustic fountain by applying high-intensity ultrasound to an electrified polymer solution, depositing fibers in an upward motion. In this work, the conventional electrospinning parameters, along with the frequency and amplitude of the ultrasound signal generator, were appropriately adjusted to formulate bilayered PEO nanofibrous mats.
Corona Electrospinning
Corona electrospinning is an advanced high-throughput needleless electrospinning method patented by Molnár, Nagy, Marosi, and Meszaros in 2012. [202] Corona has benefits over other needleless apparatuses, as the process works without an open liquid surface: the solution flows continuously through the unique architecture of the spinneret, significantly reducing problems associated with solution exposure. The setup consists of a corona spinneret, a high-voltage power supply, a circular electrode with a sharp edge, a grounded collector, and a feed supply unit. A schematic drawing of the procedure is depicted in Figure 12 (Table 13).
The main working principle of this setup is to allow jets to generate from the edges of the circular electrode. The feed pump delivers the working fluid from the bottom to the top of the spinneret, and the polymer solution is continuously fed through a long, narrow gutter bound to a metallic electrode with sharp edges. Due to the rotating spinneret, the liquid is evenly dispersed and homogeneously distributed toward the edge of the circular electrode, forming cones and jets along the circular gutter. The sharp edge of the electrode carries the highest electrical charge density, which promotes the formation of Taylor cones and allows many of them to self-assemble simultaneously along the sharp edges of the spinneret. When the electric field strength increases, multiple jets eject from the tips of the Taylor cones. After solvent evaporation, fibers are collected in an upward motion. The initial prototype design of the spinneret reached production rates of up to 300 mL h−1. [46]

Recent work by Farkas et al. [212] has managed to further increase the production rate of the process, reaching 1200 mL h−1 via corona alternating current electrospinning (C-ACES), a variation of corona that combines the intense forces of an alternating electrostatic field with corona's sharp-edged spinneret configuration. The approach is conceptually similar to corona but uses an alternating current power supply rather than direct current high voltage. During electrospinning, the authors used an annular orifice spinneret 110 mm in diameter, rotating at 100 rpm, applying a 100 kV voltage at a 50 Hz frequency at feeding rates ranging from 100 to 1200 mL h−1, and collecting fibers in an upward motion at a 75 cm distance between the spinneret and the collector's surface. The authors employed this technique to produce PVP K90 NFs loaded with spironolactone (an aldosterone receptor antagonist). [212]

Figure 12. Corona electrospinning. a) Schematic drawing of the procedure; legend item (7): traction of the collector textile. Reproduced from ref. [202]. b) CAD depicting the design concept of the spinneret. c) Schematic drawing of the C-ACES method coupled with AC high voltage. Reproduced from ref. [212]. d) Photograph indicating multiple Taylor cone formations along the edges of the 100 mm diameter spinneret. Reproduced from ref. [46]. Copyrights: (a, b) Reproduced from patent; [202] (c) Reproduced with permission. [212] Copyright 2019, Elsevier; (d) Reproduced with permission. [46] Copyright 2016, Elsevier.

Table 13. Advantages and disadvantages of corona electrospinning.

Advantages:
1. As it combines a nozzle-free spinneret configuration, where fibers can be generated from the spinneret's edges while the polymer solution is continuously fed into the system at a high flow rate, high-throughput production is achievable. [202]
2. Because the method does not constantly expose the polymer solution to an open liquid surface, it minimizes morphological inconsistencies caused by solvent evaporation affecting the electrospinning process parameters. It is therefore possible to use volatile and low boiling point solvents to fabricate NFs, making it an especially interesting approach for producing NFs for pharmaceutical use. [46]
3. The process can be more economical since there is minimal wastage; the entirety of the polymer solution added to the system can be electrospun. In contrast, in free-surface setups, whatever is not electrospun must be discarded due to the exposure to ambient conditions. [46]

Disadvantages:
1. To achieve high-throughput production, it is essential to rotate the spinneret at a certain speed to prevent overflowing and to match the flow rate of the polymer solution with the rotation speed of the spinneret. [46]
2. The process requires extremely high voltage (as high as 100 kV), which can increase the purchasing and operating costs of the power supplies and make the process less safe for the operator. [212]
3. The process has not been extensively studied or replicated by other groups, with only four papers reporting the use of this technology on Scopus (Elsevier's abstract and citation database). Further research is required to optimize and reduce some processability parameters (such as the high voltage) and to attempt to produce more intricate structures.

High-Speed Electrospinning

In 2015, Nagy et al. first described high-speed electrospinning to produce co-polyvidone (Kollidon VA 64) NFs loaded with a poorly water-soluble antifungal drug, itraconazole. [49] High-speed electrospinning combines electrostatic and high-speed rotational jet generation and fiber elongation, resulting in a significant increase in fiber production output. [213] The setup (Figure 13 and Table 14) consists of a stainless-steel disc-shaped spinneret equipped with 36 equidistantly distributed orifices on the wheel's side wall. The spinneret rotates via a high-speed motor, and its rotational speed can be increased up to 50 000 rpm. This high-speed rotation exerts a centrifugal force on the solution, which is forced through the orifices of the spinneret, allowing jet formation to occur in the presence of a high voltage (greater than 40 kV). The fibers are collected within a cyclone rather than on a collector surface after jetting. This way, no free liquid surface is present, and solvent evaporation from the solution is minimized before fiber formation starts, which helps maintain the solution's concentration and viscosity during the electrospinning process. Using this method, the authors evaluated the production rate at ≈450 g h−1 (at a feeding rate of 1500 mL h−1), with the possibility of each electrospinning reactor producing about 10.8 kg d−1 (a back-of-the-envelope check of these figures is sketched after Table 14). [213] Such a scaled-up, continuous, flexible manufacturing process can meet the capacity requirements of the pharmaceutical industry.

Figure 13. High-speed electrospinning. a) Schematic diagram of the high-speed electrospinning method. b) Photograph of the device with a continuous cyclone sample collector. Reproduced from ref. [213]. SEM images of c) β-galactosidase-containing 2-hydroxypropyl-beta-cyclodextrin-based fibers and d) Kollidon VA 64 loaded with itraconazole. Reproduced from refs. [213] and [49], respectively. Copyrights: (a, b) Reproduced with permission. [213] Copyright 2015, Elsevier; (c, d) Reproduced with permission. [49] Copyright 2020, Elsevier.

Table 14. Advantages and disadvantages of high-speed electrospinning.

Advantages:
1. Combining electrostatic and high-speed rotational jet generation yields very high production rates (≈450 g h−1). [213]
2. As the process does not present a free liquid surface, it can be used with volatile and low boiling point solvents, similar to corona electrospinning.
3. Fibers are collected in a cyclone rather than a collector, a new approach that can produce fragmented fibers, which helps their downstream processing for pharmaceutical applications.

Disadvantages:
1. This process cannot produce complex structures (e.g., core-shell).
2. The process requires an extremely high rotational speed and high voltage. The fibers produced via this process present secondary morphologies (e.g., beads) and lack homogeneity, with a large standard deviation in fiber diameter.
3. More research is required to evaluate this relatively new process's advantages and limitations and to better understand the processing parameters.
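As a sanity check, the throughput figures above reduce to simple conversions. The following minimal sketch assumes, hypothetically, that essentially all dissolved polymer ends up as dry fiber; the 10% w/v concentration used in the corona example is an illustrative value, not a reported process parameter.

```python
# Back-of-the-envelope throughput arithmetic for scaled-up electrospinning.

def fiber_output_g_per_h(feed_ml_per_h: float, conc_g_per_ml: float) -> float:
    """Dry-fiber output (g/h), assuming all dissolved polymer converts to fiber."""
    return feed_ml_per_h * conc_g_per_ml

# High-speed electrospinning: 450 g/h at a 1500 mL/h feed implies an
# effective solids content of 450 / 1500 = 0.30 g per mL of solution.
print(450 / 1500)                        # 0.3

# Daily output of one reactor running continuously at 450 g/h:
print(450 * 24 / 1000)                   # 10.8 kg/day, matching the text

# C-ACES corona at its top feed rate, with a hypothetical 10% w/v solution:
print(fiber_output_g_per_h(1200, 0.10))  # 120.0 g/h of dry fiber
```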
Discussion and Limitations
Despite the apparent advantages and broader applicability of electrospun fibers over other fiber preparation techniques, as shown in Table 15, each of the electrospinning methods described in this review carries its own limitations.
The most recognized limitations of conventional electrospinning are its low production rate and the fact that not all materials are suitable for electrospinning. For example, because of their low molecular weights, many conductive polymers have relatively low solubility in commonly used solvents and lack sufficient chain entanglement to maintain a stable jet; their highly conductive nature makes forming a steady jet even more difficult. Electrospinning them into fibers is therefore challenging. A commonly used approach is mixing them with other, electrospinnable polymers; in this instance, however, the compatibility of the two materials must be taken into consideration.
When it comes to tissue engineering, one of the main concerns with conventional electrospinning is the limited cell infiltration into the electrospun scaffolds and restricted tissue ingrowth due to their highly dense 2D structure and small-diameter pores. The compact and superficially porous structure of these scaffolds results in low cell penetration. The high porosity and small pore size associated with conventional electrospun fibers are dictated by the diameter of the fibers and fiber interconnectivity, and this phenomenon becomes more pronounced as the diameter of the NFs decreases. The fenestrations provided by the inherently porous structure of electrospun scaffolds are insufficient for most cell types to pass through, yet sufficient for nutrient and cytokine transport. While cellular expansion in the form of a monolayer is not problematic for the development of wound dressings, as the construct is merely meant to facilitate cell migration and proliferation, 3D structures can further enhance fluid retention and exudate absorption. Further research in post-fabrication technologies such as freeze-drying and gas-foaming to expand the scaffold's 2D structure, or advancements in technologies such as 3D electrospinning, can in due course resolve these issues.
The possibility of residual toxic chemicals remaining within electrospun fibers is another limiting factor. The properties of the selected solvent are a critical electrospinning parameter, as they can determine whether a specific polymer can be electrospun and directly affect the morphology of the produced fibers. Highly volatile solvents are generally used in electrospinning to produce dry fibers. However, in some cases, residuals remain on the surface of the electrospun fibers, which can lead to cytotoxicity when the fibers are used in medical and pharmaceutical applications. Moreover, many post-processing procedures rely on harsh substances that can pass through to the final product. Considering the significant amounts of solvent needed for electrospinning and their negative environmental and health impact, the use of organic solvents requires further investigation into green alternatives.
Co-axial and multi-axial electrospinning present similar limitations. The principal constraint of the process is that all the polymer solutions used must be compatible; otherwise, electrical forces cannot draw them together without coagulation. To ensure the development of a compound Taylor cone at the tip of the spinneret, it is essential to maintain its concentric structure throughout the jet expansion and for the entire process duration. The second limiting factor is that each solution must present similar physicochemical properties. A suitable viscous force is needed to maintain the Taylor cone and prevent its separation during the bending instabilities stages of jetting. This is a valid concern when one of the solutions dries faster than the other. When the working solutions are compatible, the co-axial and triaxial electrospinning methods tend to be successful upon appropriate parametric assessment. The third limiting factor is the balance between the flow rates of different solutions. Variations in the flow rate will affect the final compound fiber quality. A low flow rate would disrupt fiber formation, while a high flow rate would break the structure. Finally, the fourth limiting factor is the design of the complicated concentric spinneret, which plays an essential role in this method. The spinneret provides a suitable working environment for attaining NF of the desired configuration, and it can positively or negatively influence the composite droplet's behavior under the electric field.
3D electrospinning is the only reported technology capable of producing 3D microstructures at the micro/nanoscale in a single-step process. It benefits from unifying a fused deposition modeling 3D-printing architecture with electrospinning. The technology has been successfully employed to produce woven electrospun structures by directing the deposition of the fibers via a Cartesian coordinate system. As a relatively new fabrication approach, research should focus on building more mechanically stable structures. Although cryo-electrospinning and wet electrospinning can produce non-woven, randomly oriented 3D structures, both processes are limited by the availability of compatible polymer-solvent systems and, in most instances, require post-fabrication processing to obtain the final 3D form.
Portable electrospinning is a fascinating technology that illustrates the feasibility of electrospinning using a battery-powered handheld apparatus, producing NFs at a lower voltage in an in situ manner. This technology has potential in wound healing management, as it allows for the direct deposition of fine fibrous layers in open wounds, which can protect the wound bed from infection and promote healing while reducing patient discomfort. Although the process has been successfully used to incorporate pharmaceutical compounds into a range of synthetic polymers and a clinical trial assessing the effect of portable electrospinning in wound healing is ongoing, further research is needed to address operational safety concerns, solvent limitations (due to residual solvents) and the stability of the process toward fiber morphology.
Table 15. Advantages and disadvantages of the needle-based and needleless electrospinning methods discussed in this review.

Needle-based

Conventional
Disadvantages: 1) Low productivity. 2) Simple fiber architecture (monolithic). 3) Single fiber configuration. 4) Compact 2D structure of high density and small pore size.

Co-axial
Advantages: 1) Creates novel core-shell or hollow structures. 2) Can produce NFs from otherwise unspinnable materials.
Disadvantages: 1) Difficult to implement and balance the flow rates of different fluids in a composite jet.

Tri-axial
Advantages: 1) Creates a novel trilayer structure. 2) Can produce composite fibers of enhanced mechanical stability and biocompatibility (e.g., incorporating synthetic polymers in the multiple cores and naturally derived material in the sheath). 3) Can produce complex drug-release systems. 4) Can produce NFs from unspinnable materials.

3D
Advantages: 1) The only single-step method capable of producing 3D fibrous structures. 2) Woven or nonwoven structures are obtainable with processes such as 3D electrospinning or cryo-electrospinning, respectively. 3) The shape and macromorphology of the 3D structure can be directed.
Disadvantages: 1) Relatively new method. 2) Polymer systems with higher conductivity are necessary for 3D assembly. 3) Increasing the height of the 3D structure reduces the precision of the process. 4) Poor mechanical stability.

Portable
Advantages: 1) The only method capable of in situ electrospinning. 2) Mostly applicable for wound healing, with the potential of being used for on-site wound management. 3) Cordless, handheld electrospinning setup powered by a battery. 4) Can incorporate pharmaceutical compounds or other active substances and nanomaterials (e.g., nanoparticles).
Disadvantages: 1) Extremely new technology. 2) Poor stability during electrospinning and safety concerns (e.g., residual solvents). 3) Predominantly used with water- and ethanol-soluble polymers; issues associated with solvent evaporation need to be addressed.

Needleless

Roller
Advantages: 1) High-throughput production of micro/nanostructured fibers. 2) Continuous fabrication can be implemented for industrial-level production. 3) Easy to manipulate the production rate and fiber diameter. 4) The most researched needleless method in the literature.
Disadvantages: 1) Predominantly produces large fiber diameters with a high standard deviation and less morphologically consistent fibers than needle-based setups. 2) Requires a higher voltage to initiate jetting compared to needle-based setups. 3) Susceptible to ambient conditions; solvent volatility due to the exposed open surface can affect fiber homogeneity.

Bubble
Advantages: 1) High production rates are attainable; it has been successfully employed for mass production. 2) Can be operated with a low voltage compared to other needleless methods.
Disadvantages: 1) A newly described and not yet thoroughly researched technology. 2) The large exposed area poses safety issues for the operator and environment when toxic solvents are used. 3) Susceptible to ambient conditions and air pressure, which reduce fiber homogeneity when electrospinning is prolonged.

Corona
Advantages: 1) Spinneret with a low free liquid surface.
Disadvantages: 1) Requires a certain rotating speed to avoid overflow. 2) Requires extremely high voltage. 3) A relatively new process needing further research to understand its advantages and limitations.

High-speed
Advantages: 1) Continuous fiber production is possible. 2) Fiber fragmentation in the collector cyclone can help downstream processing.
Disadvantages: 1) Production of complex fiber structures (core-shell) is not possible. 2) Process requires extremely high rotational speed and high voltage. 3) A relatively new process requiring further research to understand its advantages and limitations.

The primary limitation of the needleless electrospinning techniques is mainly related to the relatively sizeable free liquid surface exposure during the process. Bubble electrospinning, in particular, has immense free liquid surface exposure due to bubbling. Alongside the formation of polymer jets, solvent evaporation can be detrimental to the surrounding environment and
harmful to the operator. In extreme cases, when the concentration of the combustible solvent accumulates to a critical value, it can cause ignition. Another concern is that a large liquid surface enhances water absorption from the air, diluting the spinning solutions and thus affecting fiber consistency and quality. In addition, needleless electrospinning technologies are overall associated with poorer fiber homogeneity and reduced consistency among batches. The higher critical potential required to attain Taylor cone formation in needleless electrospinning methods often limits the selection of polymers that can be electrospun and the complexity of the polymer system. Thus, these techniques have limitations in polymer selection, operating costs, and environmental concerns. Additionally, it will be a long time before needleless electrospinning methods can fabricate NFs with complex structures such as side-by-side and core-sheath cross-section configurations. Many approaches have been considered to overcome the problems associated with fiber quality and reproducibility when using needleless setups. First, needleless setups can be enclosed in sealed transparent containers to reduce solvent evaporation. The humidity inside the container can be controlled to prevent significant water absorption from the air. Second, by modifying the design of the spinneret and optimizing process parameters, the ejection speed and jetting can be accurately manipulated. Finally, developing spinneret configurations that limit polymer exposure while retaining a high surface electrode area for high-throughput production is an effective way of improving consistency.
Future Perspectives
Although it is easy to recognize that electrospinning is a fascinating technique for fabricating a large variety of intriguing micro and nanoscaled materials, potential problems still need to be addressed.
A small proportion of the research has focused on modeling; however, a universally accepted simulation model for accurately predicting needle-based or needleless electrospinning parameters has not yet been developed. As a result, the majority of electrospinning experiments rely on an empirical understanding of the process requirements and on parametric studies. For instance, in needleless electrospinning, the challenge remains in producing uniform fibers at high output while simultaneously obtaining the desired fiber diameter, structure, and application-specific properties. To overcome these limitations, researchers should be more open to sharing positive and negative results on optimizing the different parameters of the techniques, including solution, process, and ambient conditions. This approach will help to better understand and control the morphology and reproducibility of each technology, and to better predict the Taylor cone formation requirements, jet behavior, and fiber output.
Another critical concern is the economic and environmental aspects of the processes. Over the years, many solvents have been successfully used to produce electrospun fibers through solution electrospinning. However, the predominant number of solvents used to formulate fibers today can significantly impact the environment and human health by being harmful to humans and ecosystems. This is especially true for needleless electrospinning since it has a large liquid surface exposed to the air, and highly volatile solvents are evaporated into the surrounding environment. This is not ideal for the mass production of fibers in the industry. Although a significant amount of work has been carried out using aqueous polymeric solutions as a less harsh alternative to organic solvents, when the processibility of the polymer in water is not feasible, directing the focus toward the use of "green" solvents is essential. Although using melt electrospinning seems like a simple approach to meet these requirements, the process has limitations concerning the complexity of the fibers produced, producing fibers large in diameter, polymer-related thermal degradation, and incompatibility with several high-throughput technologies discussed in this review.
Conclusion
It is universally acknowledged that electrospinning has played a significant role over the past two decades in developing diverse advanced nanostructured materials for almost every conceivable application. Researchers from around the globe have contributed to the evolution of the electrospinning principles, uncovering the process's capabilities and discovering technologies and methods to move forward and push the limits of this technology. Significant progress has been made in understanding its principles and exploring its applications, as has been documented by the exponential and consistent increase in the number of publications and patents filed in the past two decades.
This review focused on comparing the advantages and limitations of needle-based and needleless electrospinning technologies. A brief history and background knowledge of the electrospinning principles were highlighted. Generally, the fundamental problem associated with needle-based techniques is scaling up limitations and operational complexities. For needleless processes, on the other hand, the critical issues are related to the large free liquid surface, which results in economic and environmental issues and difficulties in obtaining morphologically consistent batches between experiments. Many parameters of newly invented techniques still need to be optimized.
The future of each technology and its advancement or dismissal will depend on specific application requirements, including specialized structures, multifunctional hierarchical organizations, and scaling for industrial production. The combination of electrospinning with other fabrication methods (e.g., bioprinting) holds a promising future for numerous applications.
Housing Situation of Holocaust Survivors Returning to Their Hometowns in Poland after the Second World War. Examples from Kraków and Łódź
The topic of this article is the housing situation of Holocaust survivors returning to their hometowns in Poland. The analysis is based primarily on literary personal accounts 2 devoted to Kraków and Łódź -cities which were home to two of the largest prewar Jewish communities in Poland and were not destroyed during the Second World War.
The post-war housing situation of Polish Jews was shaped primarily by: wartime and post-war displacements, deportations, escapes, and migrations; the plunder of Jewish property (by the German occupiers, and by Polish society) during the war, and post-war Polish legislation regarding its restitution; as well as by the post-war housing policy of the Polish state which resulted in the creation of peculiar neighbourly communities full of tensions, misunderstandings and, sometimes, violence. However, before examining these communities described in the personal accounts of Jewish people, it is necessary to discuss the circumstances that led to their creation.
Kraków and Łódź -cities on the move
"A room in Kraków. A room in post-war Kraków. It was such a rare occurrence, a room," writes Alona Frankel. 3 This concise statement fully reflects the consequences of the war and post-war migration. 4 Anna Czocher calls occupied Kraków, the seat of the General Government's administration, a city on the move. 5 In the city, the policy of segregating the residents by race and then separating one from another led to, as put by Andrzej Chwalba, the creation of cities within the city: 6 German, Polish, Jewish 7 and Ukrainian. These divisions created the need to move. The migrations, however, did not only affect the population living in Kraków since the pre-war period. Refugees and displaced persons from other parts of Poland came to the city and, after the defeat of the Warsaw Uprising, there came a mass of those who had lost everything in the ruins of the capital. Deportations, expulsions and escapes became a permanent feature in the rhythm of wartime life, continually filling the train stations with new passengers and apartments with new tenants.
After the end of the war, Kraków, which suffered no major destruction, began to be flooded with new waves of people. Roma Ligocka writes, "Suddenly Kraków is full of people. That's because so many bombs fell on Warsaw that hardly any houses are left standing there. Almost none of the little Jewish towns still exist. Finding an apartment in Kraków is now very difficult." 8 Among the migrants were people returning from hiding, former prisoners of concentration camps, forced labourers who had been snatched from their former homes, and those who did not want to return to their own towns and villages: "It was a time of migration. Entire groups of people changed their places of residence," writes Ligocka. 9

Łódź can certainly also be called a city on the move. During the Second World War, the city was within the borders of the Reichsgau Wartheland (Warthegau), the western and northern areas of Poland that were directly incorporated into the Third Reich. An intense germanisation campaign ran throughout the occupation in Łódź, 10 which had a large German population before the war. 11 Poles were deported to the Third Reich or the General Government and were replaced by Germans from the Reich, the Baltic states, the Eastern Borderlands of Poland and the Białystok area, as well as from Romania (chiefly Bessarabia and Northern Bukovina). 12 Jews were imprisoned in the second-largest (after Warsaw) ghetto in occupied Europe, which was established in February 1940. It existed until August 1944 and was the longest functioning ghetto in occupied Poland. First, the Jews of Łódź were sent there, followed by those deported from Austria, Germany, Luxembourg, the Czech lands and liquidated ghettos in the Reichsgau Wartheland, and then others from different places in central and western Europe. Most of them were murdered in the Chełmno nad Nerem (Kulmhof) or Auschwitz-Birkenau death camps.

3 Frankel 2007: 84. The texts cited in the article were translated from the versions published in Polish by G.
4 For more on the population displacements resulting from the Second World War, post-war border changes, repatriation agreements, and the policies of the communist regime vis-à-vis national minorities, see Eberhardt 2000; Kersten 2005; Sienkiewicz, Hryciuk (eds.) 2008.
5 See Czocher 2011: 15.
6 See Chwalba 2002.
7 After the ghetto was sealed in March 1941, Kraków Jews had to change houses several times within its limits because the Germans reduced the living space intended for them due to the deportations to the death camps. For more on this topic, see Bieberstein 1986; Zimmerer 2004; Löw, Roth 2011; Zimmerer 2017.
8 Ligocka 2003: 114.
9 Ligocka 2004.
10 In April 1940, the German authorities changed the city's name to Litzmannstadt.
11 In 1937, the ethnic makeup of Łódź was the following: Poles numbered 389,500 (58.5%), Jews 207,000 (31.1%), Germans 53,700 (8.0%) and other nationalities 2.4%. See Rzepkowski 2008: 92.
After the defeat of the Third Reich, the majority of the Germans fled Łódź with the retreating army. Those who remained in the city were mainly women, children and the elderly, and they became targets for the anger of Polish society, which placed on them the collective responsibility for the wrongs perpetrated by the Germans. Some of those who had been entered into the German People's List (Deutsche Volksliste) during the war were deported into the interior of the USSR, while others were expelled to the British or Soviet occupation zones in Germany.
Łódź was one of the few large cities in Poland to be nearly unaffected by the destruction of war. Due to its short distance from Warsaw, the most destroyed European capital, covered with approximately 20 million cubic metres of rubble, 13 it was decided that the state administration would be located in Łódź. Former residents - Polish and Jewish alike - returned to the city, but many residents of small towns in central Poland and the Eastern Borderlands also arrived. After the cities in the so-called Recovered Territories, Łódź became the largest gathering point for repatriates returning from the USSR. The character of post-war Łódź is described by authors such as Shimon Redlich:

Lodz retained its character as an industrial and proletarian city in the postwar years, but lost its multiethnic and multicultural flavor. Most of its prewar Germans disappeared. Its Jewish population was drastically diminished, and most of those Jews who settled in Lodz in the postwar years were not original Lodzer Yidn - Lodz Jews. The official postwar Communist image of the city cultivated its 'proletarianism' and its 'Polishness.' Former industrialists' mansions and villas now housed various institutions of the Communist state. Still, at least in the bleak early postwar years, Lodz was a bustling urban centre, attractive to shoppers from all over Poland. For a while it also served as an unofficial 'interim capital,' while Warsaw was being rebuilt. 14

"The houses were obviously not vacant and did not wait for their lawful owners" 15 - post-war Jewish returns home

"they are so many," grumbled a woman at the train station in Kraków at the sight of a group of prisoners returning from the camps. 16 The returning Jews were viewed suspiciously and questioned about their wartime experiences - that they managed to survive was marvelled at. They were subjected to whispers behind their backs and often also openly attacked. Yoram Gross recalls the reception his mother met in Kraków: "My mother had many friends in Krakow who greeted her warmly and were delighted to see her again. There were also others, less friendly, who seemed surprised, who would say: 'Oh! Mrs Gross, so you're still alive?' as if they were somehow disappointed at finding her among the living." 17 To returning survivors, their hometowns seemed to be inhospitable places, populated by crowds of indifferent or hostile people living in a cemetery-city, a city of the shadows of murdered loved ones.
The interactions between the returnees and those who had taken over their former flats and homes during or just after the war were particularly negative. "Our apartment on the top floor was occupied by strangers who would not even let me in over the threshold to look at my own room," 18 writes Halina Nelken of her return to Kraków. She spent the first few nights in her hometown staying with various neighbours. Not all of the returnees could count on such kindness. Luna Kaufman recalls a visit to her family's tenement house, which was built by her grandfather. The pre-war janitor, still working there, greeted her with a bitter remark indicating her dissatisfaction with the survival and return of the rightful heir to the building. The old, non-Jewish tenants still occupied their flats. Only Kaufman's family flat had changed hands. Now it was they, alien and without roots in this place, who called her flat home. 19 It happened that Jewish homes were occupied not by total strangers, but close neighbours - some of them at the request of the owners themselves so they would be protected from confiscation by the German authorities and recovered after the war. Hope for this often turned out to be false, as it was for Józef Bau: "Nothing had changed. The house remained the same house, but my former neighbours who met me on the stairs told me that our flat was occupied by the caretaker who had received the keys from my father when we left our family nest on the orders of the German invaders and moved to the ghetto. They revealed to me, quietly and in secret, that next to the door was a chair, and on that chair lay a knife. I did not ask about anything more." 20 The experiences of Polish Jews returning home after the war and their relations with their Polish neighbours have been analysed by, among others, Jan Tomasz Gross. 21 In his reflections on the causes of post-war violence against Jews by Polish society, he treated antisemitism as the main explanation, not dedicating much space to the Poles' psychological and material conditions. These were carefully examined by Marcin Zaremba, who writes that post-war aggression of Poles towards Jews resulted from many factors. Zaremba claims that for Poles impoverished by the war, the return of the old owners of their homes was a threat to their economic existence. The answer to the question of hostility towards returning Jews could thus be found not only on the basis of antisemitism, but also on the basis of sociobiology and the animal struggle for a nest, which was fuelled by the institutional vacuum in the immediate post-war period and the sense of chaos. Zaremba stresses, however, that the fear of losing one's flat due to the return of its former owners brought about inter-ethnic distance and, in effect, intensified antisemitism. 22

16 Bronner 1991: 164.
17 Gross 2011: 165.
18 Nelken 1999: 267.
19 See Kaufman 2009: 128.
20 Bau 1995.
21 Gross 2008.
"Abandoned property" -restitution of Jewish property
Some of the returnees took their fight for their flats and homes to the courts. The issue of the restitution of property, both movable and immovable, was regulated by an 8 March 1946 decree on abandoned and formerly German property (Dekret o majątkach opuszczonych i poniemieckich). 23 The property of Polish Jews was classified as "abandoned." Pursuant to the decree, this category was to include assets over which the rightful owners lost control as a result of the war starting 1 September 1939, as well as those that were transferred by the owners to a third party in order to avoid confiscation. Immediately after the end of the occupation, "abandoned" property remained in the hands of private persons or local administration units. Control over it was soon taken over by the liquidation offices (urzędy likwidacyjne). According to the law, people who had come into possession of "abandoned" property were obligated to report it to the authorities. In practice, however, many people did not report their possession of "abandoned" property.

The pre-war owners (or their next of kin) could apply to the courts of first instance (sądy grodzkie - district courts) to reclaim possession (przywrócenie posiadania). If successful, the claimant would reclaim possession of the physical property without a title of ownership (tytuł własności). 24 Due to the frequent lack of documents allowing for the unambiguous determination of who had owned property before the war, what had happened to those people and their property during the war, and the identity of their closest relatives, a decisive role was played by witnesses, which created a field for various abuses. Henryk Vogler, a lawyer by education, appearing before the Kraków courts during the post-war period, writes: "Sometimes it happened that the alleged deceased - killed off by the self-proclaimed relatives, planted witnesses, advocates and judges - returned unexpectedly from abroad. But it was too late. Perjury, fraud or ordinary theft were in the meantime sanctioned by legal regulations and the new owners could sleep peacefully." 25

Some survivors gave up the fight for their property or tried to recover it amicably, without the participation of the courts - this particularly concerned the monetary equivalent of the taken-over real estate, as well as the movable property given to neighbours, employees, acquaintances or strangers for safekeeping. Rut Kornblum-
The pre-war owners (or their next of kin) could apply to the courts of first instance (sądy grodzkie -district courts;) to reclaim ownership (przywrócenie posiadania). If successful, the claimant would reclaim ownership of the physical property without a title (tytuł własności). 24 Due to the frequent lack of documents allowing for the unambiguous determination of who had owned property before the war, what had happened to those people and their property during the war, and the identity of their closest relatives, a decisive role was played by witnesses, which created a field for various abuses. Henryk Vogler, a lawyer by education, appearing before the Kraków courts during the post-war period, writes: "Sometimes it happened that the alleged deceased -killed off by the self-proclaimed relatives, planted witnesses, advocates and judges -returned unexpectedly from abroad. But it was too late. Perjury, fraud or ordinary theft were in the meantime sanctioned by legal regulations and the new owners could sleep peacefully." 25 Some survivors gave up the fight for their property or tried to recover it amicably, without the participation of the courts -this particularly concerned the monetary equivalent of the taken-over real estate, as well as the movable property given to neighbours, employees, acquaintances or strangers for safekeeping. Rut Kornblum- The authors of literary personal accounts note the different attitudes of those to whom property, often valuable, was left for safekeeping. For example, Ester Friedman describes the friendly attitude of a neighbour lady who managed to save the objects entrusted to her during the war. Most of them were returned to their rightful owners, who could also count on additional help from the woman. This help, provided without the knowledge of her husband, certainly required empathy, but also courage. It was thus of greater value for those who experienced it: I asked my Mama and we went to our former neighbour, Mrs Pekajowa. She was very pleased. She was doing well and had a shop. I could always bathe at her place, but only when her husband was not at home. He couldn't look at Jews. Mrs Pekajowa gave Mama a diamond ring, carpets and other things that we had left with her. She also gave us some money. The paintings, which her husband had already seen, she was afraid to give back to us though. And there would be no place to hang them.
[…] She gave me a beautiful dress and presents for Mama. She truly liked us.
[…] When she came to our place, she always wiped away tears and crossed herself. 27

Some people with whom possessions had been left during the war were not willing to give them up afterwards. Some claimed, sometimes truthfully, that they had lost everything during the war, that the property was taken away by the occupiers, or that they were forced to sell it to support their families. "[…] They simply did not expect us to return," writes Roma Ligocka. "Most did not. Maybe they sold those things during the war because they themselves had no money…" 28 Some of those who were given items for safekeeping claimed that nothing was stored with them, or they did not remember that they had been entrusted with anything. Roma Ligocka's mother complained about this to her daughter, who would remember the conversation for years: "Imagine, my parents left almost all of our possessions with our neighbours and acquaintances. Silver, carpets, paintings and furs. Even a piano. And now people don't remember it at all." 29 Stella Müller-Madej raises questions as to the fate of pre-war Jewish property. The drama of the situation, where survivors found themselves deprived of any kind of property, resonates: "Where have the belongings of the millions who were killed gone? My father always repeated, 'The most important thing is that we're alive.' He's right, but it is not even a substitute for our pre-war life. Why can we not recover our flat? What has happened so that my parents have nothing from before the war?" 30

In the maze of regulations, in the labyrinth of coteries and personal dependencies - flat allocations

Fearing for their own safety and suffering from health problems and difficult financial situations, few returnees decided to fight in the courts for the return of their property and homes. In light of their inability to return to their own homes, they tried to find a new roof to put over their heads, but this was made difficult by the overcrowding of the cities as the result of migration, and by places being taken over by representatives of the new communist authorities, army and security services. 31 The authorities did not only occupy flats, but also decided on their distribution among those residents with no ties to the state apparatus. 32 Post-war legislation introduced the concept of flat allocations (przydział mieszkania). Housing commissions (komisje mieszkaniowe) and then accommodation offices (urzędy kwaterunkowe) obtained the right to assign new tenants to the flat of the main tenant. The minimum number of people that could occupy a residence as well as the minimum usage area per tenant was also established through a top-down process.
In Polish cities, flooded by successive waves of displaced people, free flats were worth their weight in gold. If one was found, it most often was occupied without waiting for the official allocation. Henryk Grynberg recalls the fight for elegant, well-furnished, formerly German flats in post-war Łódź:

We stayed with the Nusens, the Fryds [friends from the author's hometown], and the Meinemer brothers and their sister Belcia from Minsk Mazowiecki, who were all sharing an apartment with kitchen, bath, and four other rooms laid out in railroad style. Each of the rooms was furnished in a different color: coffee, cream, cocoa, and chocolate. There was a gleaming grand piano in the living room and the floors were a shiny parquet that squeaked slightly. Even the squire in Radoszyna [the place Grynberg hid during the war] had never lived like this. Or had furniture like this. […] The hangings with German lettering had been placed upside down on the floor and were used for wiping your feet. There had been many apartments like this when the Nusens and the Fryds arrived in Lodz. They were almost completely untouched because they had been guarded by soldiers […]. A note on the door would say that the apartment had been taken by corporal, noncom, or sergeant so-and-so, and everyone knew you had to find that corporal or sergeant, pay him a suitable sum, and then you could move in. 33

Grynberg and his mother soon moved out of the Fryd-Nusen flat and got their own room with a kitchen. They quickly found that it was not only necessary to know how to find a flat, but also how to keep it:

A few days after we moved in, somebody who had paid the same soldier my mother had, arrived and demanded we vacate the apartment. Fortunately, both Gaworczyks [the neighbours occupying the front part of the flat] were home. They took the man by the arms and threw him out. My mother went immediately to the Housing Department to get an official allocation. They told her they didn't issue allocations to people who had moved in illegally. Which meant they wanted a bribe, too. My mother went to see Comrade Jasinski, an older man who was the head of the Housing Department, and told him some of what we'd been through during the Occupation. She was given an official allocation, two copies. Jasinski told her to nail one up on the door, and to hide the other one well. 34

In the Grynbergs' case, those who helped them keep their flat were their Polish neighbours and a sympathetic official, who not only circumvented the law but also did so without a bribe. The card with the information about the flat's allocation was soon pulled off the Grynbergs' door and the man who claimed to be occupying their flat came to their room accompanied by a policeman. The second copy of the confirmation of allocation, which Henryk Grynberg's mother had hidden on the advice of the official, discouraged him for good. The danger of losing their home had been averted.
"The charms of life in a comunalka" 35 -Kraków and Łódź Jews and their neighbours
It was safer to live with a group of friendly people, with friends or relatives, than with strangers. It was thus easier to defend one's state of ownership, and sharing living quarters with people one knew and accepted was less troublesome than doing so with strangers. Marcin Zaremba points out how the accommodation offices most often divided the large pre-war flats according to the model of "one room - one family." In this way, some got a room with a bathroom and others with a kitchen. Some had access to a staircase, others came in via the former servants' entrance. These flats were called "comunalkas" [komunałki - communal apartments] or kolkhozes. 36 Salomea Kape recalls one such flat in Łódź: "[…] The new, incompetent city authorities assigned a pair of newlyweds to the room that separated our kitchen from the bedroom. This surreal, senseless division of the flat robbed us all of the sense of freedom in our own home." 37 Everyday living in these conditions required patience, self-control, tolerance and flexibility. Nevertheless, it was sometimes possible to create the atmosphere of a real home. This was the case with the residents of the tenement at 31 Piotrkowska Street, among whom was Włodzimierz Szer:

[The flat] was located in the front part of the building, on the third floor. From the hallway, one entered a large front hall. On the right were doors to two big, nice rooms, overlooking Piotrkowska Street. These were the brothers' rooms, and later the couples': Oskar and […] And further on was an entrance to the very important, large dining room that was the epicentre of all social and other kinds of life's experiences. A long passage led from the dining room to the kitchen; on the left there was a bathroom and a small room called 'the staff room' since in the past it was where the maid used to live. Hanka and her baby lived there; it was convenient, located between the bathroom and the kitchen. In the kitchen there was an entrance to a separate hallway, which led to the backyard, so the lady and the gentleman of the house who were using the main hallway wouldn't, God forbid, run into the servants and suppliers of milk, bread etc. The apartment was typical of the rich upper middle class of the end of the nineteenth and beginning of the twentieth centuries, and it was not remarkable. But for us, at that time, it was the height of luxury and chic. I am not able to name all those who passed through our dining room during the year and a half when 31 Piotrkowska Street was operating (Spring 1945-Fall 1946). Passing through meant: they ate, drank, slept - mainly on the floor - how else, how would one get so many beds and blankets - until they found either relatives or a better place under the sun. This was a warm and a welcoming home created by decent people. 38

34 Ibid.: 60.

Jews returning to Kraków and Łódź searched for friends and relatives with whom they could live, not only to avoid sharing their flats with strangers, but primarily for emotional reasons. They built a substitute family and could feel more comfortable and safer amongst them. Because they were united by their pre-war past and community of war experiences, they felt obligated to mutually support one another. In Włodzimierz Szer's home lived friendly lodgers, some of whom were related, joined by shared history and shared experiences, their fates many times interwoven. Thanks to this, their everyday lives were full of warm relations.
The mother of Yoram, Natan and Klara Gross took in her sister and her children who had been repatriated from the Soviet Union: "Our dear Mama always had a big heart. So, when her sister Mala with her four children returned from Russia - and as a widow - naturally, she took them in. Two days before my delivery date, the lodger in the neighbouring room died and, before the accommodation office could allocate it to someone, Mama transported my aunt and her children there." 39 Creating a community of survivors, a camp family, 40 taking in friends, relatives, and sometimes also complete strangers seeking shelter and met by chance, was a rather common phenomenon. Stella Müller-Madej writes: "[Father] often left. He nearly always brought back some lonely man, usually in concentration camp rags […]. 41 Through the house passed many people, those who had returned from the camps, searching and waiting for their families, and also old friends of my parents. It's cramped; seven people live in two rooms." 42 The flats became increasingly cramped and crowded, but for those who had shared difficult wartime fates there was always room. Halina Nelken writes: "Every apartment was bursting at the seams, and it was normal to take strangers, or people who had shared the same misery, in under one's roof." 43 Rachel Grynfeld recalls that after returning to Łódź she went to the house at 26 Kościuszki Street, which her parents had purchased with a friendly Pole, Mr Wesołowski. Mr Wesołowski welcomed her with open arms - a rare occurrence - and gave her, as the sole inheritor of her family, a three-room flat in the tenement. Unfortunately, it was quickly requisitioned by a Russian officer, so Mr Wesołowski gave her another and equipped it with furnishings and bedding. Rachel Grynfeld writes that she immediately took in friends who, like her, had survived the camps and who after the war could not count on the support that she received. These young, orphaned women created a substitute family, shared their money and jointly ran a household. 44

38 Szer 2016: 136-137.
39 Gross 2006: 33-34.
40 For more on creating a community of survivors, see Dvorjetski 1963: 213-214; Koźmińska-Frejlak 1999: 133.
41 Müller-Madej 2001.

More often than not, Jewish people returning to Kraków and Łódź shared flats with strangers rather than friends and family. In the subdivided flats people of different backgrounds lived side by side, with different levels of education, different pre-war and wartime experiences, and diverse possessions. In observing the shared apartments so widely described in Jewish personal accounts, we can see a cross-section of Polish society of the 1940s. The elegant, spacious apartments belonging before the war to the most affluent social strata became quarters for workers, migrants to cities from the countryside, repatriates, former prisoners of concentration camps and, not infrequently, people from the margins of society. The loss of the right to freely dispose of one's flat - the last symbol of social prestige and position - was a visible sign of the degradation of their owners and an effective tool of the class struggle used by the state. Alona Frankel, whose father was already a communist before the war, writes that thanks to this favouritism her family received a room in an elegant, luxurious flat whose owner - a pre-war landowner - considered them to be the worst of the lodgers because they were Jewish: It finally paid off to be a communist. It […] was a room in a shared flat. A peculiar, post-war community.
A six-room apartment with a shared kitchen and bathroom. In the corner house at 1 Sobieskiego Street. The entrance was once elegant and magnificent.
[…] The flat was huge. Expensive parquet floors, side glass doors and high French windows. In the past, before the war, it was occupied alone by an oldish landowner, Mrs Jarosława Morawska. It was her city residence. She also had a manor house not far from Kraków. The estate, including the manor and all of its furnishings, was nationalised after the war, while in Mrs Morawska's lovely Kraków apartment a variety of strange occupants were quartered in accordance with the decree of the new authorities. We arrived last - and we were Jews. The worst of all. Mrs Morawska, in the name of fairness and equality, must content herself with one room, and not even the largest or most elegant. 45

Alona Frankel mentions that the elegant tenement on Sobieskiego Street had a janitor, Józefowa, who was engaged in various activities in order to feed her ever-expanding family, as her husband was an alcoholic: Her husband hung around all day in an alcoholic haze on a stool on the staircase […]. From his corner belched the odour of alcoholism, sadness and embitterment. He never worked. […] Józefowa, who the communist regime had not yet managed to embrace with its justice, heroically and with great determination attempted to feed her beloved children and husband, the sad alcoholic who drank denatured alcohol and sucked the blood out of her. In addition to working hard cleaning the staircases and the courtyard, she took in rolled up carpets from wealthier families and beat them with a wicker carpet beater. 46

43 Nelken 1999: 268.
44 Grynfeld 2005: 57-58.
45 Frankel 2007.

Poor people, often on the margins of society, populate the social space of most of the analysed literary personal accounts. They are present e.g. in the neighbourhood community described by Eva Hoffman. She writes that people of various backgrounds lived in the tenement at 79 Kazimierza Wielkiego Street, making it an arena for unusual events; it represented the world in microcosm, with all of its absurdities and misfortunes, but also its small joys: The three-story building is full of talk, visits, and melodrama. The dragon caretaker is married to a thin, forlorn man, at whom she shouts perpetually and whom one day she stabs with a knife. After that, he slumps even more sadly than before, avoids everyone, and takes to breeding chickens in the enormous attic under the roof. Their squawks and flying feathers turn the interior into a place of Bruno Schulz surrealism, and I'm drawn there as if it were inhabited by magic. The other downstairs apartment is occupied by a shoemaker, who, in more classic style, gets drunk and beats his wife. [...] Then there are the real neighbours - people between whose apartments there's constant movement of kids, sugar, eggs, and teatime visits. 47

The shared apartments became a space for a constant test of strength between neighbours. The winner was the person with the largest family, better connections and the most cunning. Holocaust survivors, often living alone or in small families and deprived of communal support, were condemned in advance to lose these competitions. They often did not have the strength to fight for the right to use the shared spaces, such as the kitchens or bathrooms. Janina Katz writes: We moved to 22 Zielona Street, to the house where my parents lived before I was born. But we did not live in the same two-room flat […]. We lived together with the Nowak family and Mrs Nowak's mother.
[…] Lola [the mother] and I got the smallest room, but it was rather large because we did not have any furniture. 48 It was Mama who got this flat thanks to her old acquaintance, but from the very beginning it was the Nowak family, which was larger, that decided what we were allowed or not allowed to do. First off, they forbade us access to the kitchen. But we did not really need the kitchen. Lola immediately purchased an electric stove for making coffee and frying eggs. 49 The initial organisational difficulties were eventually overcome, and Mrs Nowakowa even became Janina Katz's godmother when she decided to convert to Catholicism. Mrs Nowakowa even gifted her with a gold chain with a medallion depicting the Madonna. Nothing could have foretold the disaster that ensued. As Janina Katz writes, one day the lady became "sick with anger" 50 and turned against her goddaughter and Lola. The author states that the torment at home caused her to often seek shelter at her friend's house: "Mornings and evenings were the worst. In the morning, she relieved herself in front of our door; in the evenings she took long baths. The bathroom adjoined our room and Mrs Nowakowa drilled a hole in the wall. At night, we were woken by her throwing pieces of the wall on the floor." 51 Katz recalls that she and her mother had nowhere to escape for a long time because they knew no one who could help them obtain another flat. The situation worsened: "Mrs Nowakowa hit her [the author's mother] in the stomach, screaming: 'You damned Jewess!' My godmother! They were supposed to meet in court." 52 The author writes that they finally managed to free themselves from this embarrassing and awful neighbour. They found a new flat: "Once again, we lived in one room, but we were friendly with the family who occupied the second room of the flat, although their life was much more colourful than we would have liked; blood and, more often, vodka, flowed frequently." 53

46 Ibid.: 213-214.
47 Hoffman 1990: 12.
48 Katz 2006: 85.
49 Ibid.: 86.

In personal accounts written by Jews, we can find images of conflict not only between Polish and Jewish neighbours, but also between Jewish neighbours. When an opportunity to take over an entire apartment appeared, close ties sometimes fell apart. Włodzimierz Szer, whose recollections of the shared apartment at 31 Piotrkowska Street are the most positive, did not have many good memories of his next residence. With his family growing larger, he moved to a room on Bandurskiego Street vacated by his cousins. The change of address took place without the participation of the accommodation office. In his memoirs, which are addressed to his children, he describes the conflict with the new neighbour - his cousin Sabina: […] Sabina treated us like intruders. She was furious, hoping to have the entire apartment to herself. She made our lives a misery, especially Felusia's, who spent more time at home than I did. One day your mom, while holding Karusia [the daughter], was cooking something in the common kitchen, which was, because of its function, the main battlefield. Sabina, holding the dog, said to her boyfriend: 'There isn't a child in this world whose eyes are more beautiful than my Fly's [dog's].' Your mom, who wasn't very resilient, fled the kitchen and began to avoid the cousin like a plague.
One day, Sabina told us that she would sue us since we had occupied the apartment illegally and that I would go to jail, but that she, in respect for my parents, whom she had known before the war, would be the first one to bring me food parcels to prison. The woman wasn't altogether sane. According to Felusia [the author's wife], I responded that I would be the first to bring flowers to her grave. Those
Summary
Kraków and Łódź Jews who survived the Holocaust returned to their hometowns where seemingly nothing had changed -the trams still traversed the city, riding on the same tracks; the familiar streets led to the pre-war squares and parks; even their homes still stood on the same streets. But the flats of the survivors were not empty; they were not waiting for their rightful owners. A survivor could attempt to recover a flat occupied by others through the courts, but often had neither the strength nor the resources to do so. As a result, in accordance with the post-war legislation, the survivors found their way to apartments divided by the accommodation offices. Their neighbours were people of all kinds -from pre-war aristocrats to people on the margins of society. Images of these difficult, tension-filled neighbourhoods can be found in Jewish personal accounts. Certainly, it was hard for everyone involved -it is not easy to share space with strangers. It seems, however, that for Holocaust survivors, deprived of communal support and traumatised by war, it was the most difficult. The indifferent, often hostile, world around them took on the form of their neighbours, forced its way into their homes and stripped them of their sense of safety, stability, and hope for a return to normal life. Many survivors could not bear the burden of such a difficult day-to-day existence -they left Poland, looking for a safe haven elsewhere. | 2019-10-24T09:19:26.526Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "8498471def4851bff6e5ffff3879c1e64dedeb62",
"oa_license": null,
"oa_url": "https://www.ejournals.eu/pliki/art/14180/",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "33c2c3bad602b208ad67543fe938fd0032795187",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
253374554 | pes2o/s2orc | v3-fos-license | Development of small intestine and sugar absorptive capacity in goslings during pre- and post-hatching periods
This study was conducted to investigate the development patterns of the small intestine, intestinal morphology, disaccharidase activities, and sugar transporter gene expression in goslings during the pre- and post-hatching periods. The small intestine was sampled on embryonic d 23 and 27, the day of hatch, and d 1, 4, and 7 post-hatching. A total of 18 Jilin White goose eggs were selected at each sampling timepoint for measuring the relevant parameters. Three eggs were considered a group, with 6 groups at each sampling timepoint. Rapid development of the small intestine was observed around hatching, with the jejunum and ileum having relatively higher development rates. Villus surface area in the three intestinal segments started to increase on embryonic d 27, remained relatively stable from the day of hatch to d 1 post-hatching, and subsequently increased until d 7 post-hatching. A high priority of villus enrichment was observed in the duodenum and jejunum. Disaccharidase activity increased before hatching and remained relatively high post-hatching, with the highest activity in the jejunum. The expression of sugar transporter genes increased prior to hatching and then decreased post-hatching, with the jejunum and duodenum being the sites of high sugar transporter gene expression. The rapid development in intestinal morphology, disaccharidase activities, and sugar transporter gene expression around hatching indicated that goslings have a high potential to digest and/or assimilate carbohydrates during their early life, preparing them for the digestion of exogenous feed. This study provides a profile of the development patterns of intestinal morphology, disaccharidase activities, and sugar transporter gene expression in goslings, which is beneficial to understanding the characteristics of nutrient absorption during the early life of goslings.
INTRODUCTION
Geese are widely raised all over the world and are a nutritious and healthy food resource. Because of the short reproductive periods, low hatchability, and high embryo mortality of geese, the goose farming industry is not as prosperous as the broiler industry (Tai et al., 2001; Rosinski et al., 2006a,b). However, it has been reported that the content of protein and trace elements in goose meat is higher than in other poultry products (Xu et al., 2018). Therefore, goose husbandry has great economic prospects. Future goose husbandry will benefit from research on the organ development patterns of goslings. Among these organs, the intestine plays a key supporting role in the growth of animals. It is therefore very important to investigate the morphological development of the small intestine, intestinal disaccharidase activities, and sugar transporter gene expression during the late term of incubation and the early post-hatching period in order to understand the nutrient requirements of goslings during their early life.
Recently, comprehensive investigations have been conducted on the development patterns of the embryonic intestines of chicks, ducks, turkeys, guinea fowls, and pigeons (Dong et al., 2012a,b; Wilson et al., 2018; Araújo et al., 2019; Li et al., 2019; Givisiez et al., 2020). Dong et al. (2012a) observed that the expression of intestinal nutrient transporters increased with age; moreover, they noted that the expression of sugar transporter genes in the pigeon jejunum was higher than in other intestinal segments. They further investigated the post-hatching development of intestinal morphology in pigeons and found rapid development in intestinal morphology and digestive enzyme activities (Dong et al., 2012b). In turkeys, Wilson et al. (2018) reported that the length and width of villi developed with age. Givisiez et al. (2020) reviewed the functional development of the gastrointestinal tract in chicks, noting that morphological changes, digestive enzyme activities, and nutrient transporter gene expression developed with age. However, the development patterns of intestinal morphology, disaccharidase activities, and sugar transporter gene expression in goslings during the pre- and post-hatching periods have not been fully understood.
To understand the development patterns of the small intestine and the sugar absorptive capacity of goslings, we conducted this study to investigate the small intestine development pattern, intestinal morphology, and sugar transporter gene expression in goslings during the pre- and post-hatching periods.
MATERIALS AND METHODS
This study was approved by the Animal Care and Use Committee of Jilin Agricultural University (Changchun, Jilin, China; approval code S83520130804).
Experimental Animals
A total of 150 fertilized eggs (Jilin White geese) used in this study were obtained from the Geese Experimental Center of Jilin Agricultural University. A commercial incubator (Keyu CFZ microcomputer automatic incubator, Dezhou, Shandong, China) was used to incubate the eggs. Eggs were preheated (30°C for 12 h), fumigated with 37% formaldehyde and potassium permanganate at a ratio of 2:1, and then moved into the incubator. The incubation period was divided into 3 phases: phase 1, embryonic d 1 to 14, at a temperature of 38°C and a humidity of 65%; phase 2, embryonic d 15 to 28, at 37.5°C and 55% humidity; phase 3, embryonic d 29 to 31, at 37.2°C and 70% humidity. During the incubation period, eggs were turned every 2 h for 180 s.
On embryonic d 23, eggs were candled to remove unfertilized eggs. A total of 120 fertilized eggs of similar weight were used for further studies. In this study, the sampling timepoints were embryonic d 23 and 27, the day of hatch, and d 1, 4, and 7 post-hatching. At each sampling timepoint, 18 eggs were randomly selected and randomly assigned to 6 groups with 3 eggs in each group.
After hatching, geese were transported to the farm, assigned to cages (25 birds per cage), and immediately received feed (Table 1). All geese were raised under uniform management conditions at 30°C. During the experimental period, birds had free access to feed and water.
Feed Analysis
Feed samples were dried in a thermostatic oven (70°C) for 72 h. Samples were then ground and passed through a 1-mm sieve. The contents of dry matter, crude protein, calcium, and crude fiber in the diet were analyzed according to the methods of the Association of Official Analytical Chemists (AOAC, 2000). The contents of neutral detergent fiber and acid detergent fiber in the diet were analyzed according to the methods of Mertens (2002). Before amino acid analysis, feed samples were hydrolyzed with 6 N HCl at 110°C for 24 h. The amino acid contents of the feed samples were then analyzed with an amino acid analyzer (2690 Alliance, Waters, Inc., Milford, MA).
Sample Collection and Measurement
A pair of surgical scissors was used to open the eggs. The embryo with yolk sac was then weighed and the small intestine was taken out. Ice-cold saline was used to remove adherent materials and/or internal contents from the small intestine. The weight and length of the small intestine were recorded, and it was then divided into duodenum, jejunum, and ileum. Duplicate 1-cm samples were taken from the middle of each intestinal segment and stored individually in 2 tubes. One sample was fixed with 10% neutral-buffered formalin solution for histological measurement, and the other was frozen in liquid nitrogen for measuring disaccharidase activities and the expression of sugar transporter genes.
Small Intestine Parameter Analysis
The following equation was used to calculate the relative weight of the small intestine:

Organ index (%) = (Organ weight / Live body weight) × 100

A graduated ruler was used to measure the length of the small intestine.
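A minimal helper implementing the index defined above is sketched below; the example weights are hypothetical and serve only to illustrate the calculation.

```python
def organ_index(organ_weight_g: float, live_body_weight_g: float) -> float:
    """Relative organ weight as a percentage of live body weight."""
    return organ_weight_g / live_body_weight_g * 100.0

# e.g. a 3.2-g small intestine in a 98.5-g gosling (hypothetical values)
print(f"small intestine index = {organ_index(3.2, 98.5):.2f} %")
```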
Small Intestine Morphology Analysis
Intestinal segment samples were cut into small pieces for morphological measurement according to the method described by Dang et al. (2022). In brief, small pieces of intestinal segment samples were fixed in 10% neutral-buffered formalin for 12 h and then dehydrated with graded alcohol and xylene. The treated samples were used to make paraffin blocks. A cryostat was used to make tissue sections. After removal of the paraffin, sections were stained with hematoxylin and eosin. An optical microscope (Olympus, BX53F, Tokyo, Japan) was used to measure villus height and width at 10× magnification. For each parameter, each slide was measured 5 times and the values averaged. Villus surface area was calculated from the villus height (from the villus tip to the villus-crypt junction) and the width at half height. Values given are averages of 10 adjacent villi, and only vertically oriented villi were measured.
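The text gives the inputs for the area calculation (villus height and width at half height) but not the explicit formula. The sketch below uses the common cylindrical-surface convention, 2π × (width/2) × height, which is an assumption on our part, and the measurements are hypothetical.

```python
import math

def villus_surface_area(height_um: float, width_half_height_um: float) -> float:
    # assumed cylindrical-surface formula: 2*pi*(W/2)*H
    return 2.0 * math.pi * (width_half_height_um / 2.0) * height_um

# hypothetical (height, width-at-half-height) pairs in micrometers
areas = [villus_surface_area(h, w) for h, w in [(520, 95), (560, 90)]]
print(f"mean villus area = {sum(areas) / len(areas):.0f} um^2")
```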
Disaccharidase Activities Analysis
Intestinal segment samples were homogenized in 10 volumes of cold normal saline. The homogenates were then centrifuged at 3,500 × g at 4°C for 15 min to collect the supernatant. According to the method described by Dang et al. (2022), a colorimetric method was used to measure the activities of sucrase and maltase.
Sugar Transport Gene Expression Analysis
Based on the method described by Dang et al. (2022), RNAiso Reagent (TaKaRa, Dalian, Liaoning, China) was used to isolate total RNA from intestinal segment samples. The integrity and concentration of the RNA were then determined. The primer sequences of the test genes were designed according to the sequences in GenBank (Table 2). Total RNA samples were purified with a specific kit and then reverse transcribed for cDNA synthesis. RT-PCR was used to analyze the relative expression levels of sodium/glucose cotransporter protein-1 (SGLT-1), glucose transporter-2 (GLUT-2), and sucrase-isomaltase (SI) mRNA isolated from goose intestinal segment tissues. β-actin was used as the internal reference.
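The quantification formula is not stated in the text; a minimal sketch using the widely used 2^(−ΔΔCt) method with β-actin as the reference is shown below. Both the method choice and the Ct values are assumptions for illustration only.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change via the common 2^(-ddCt) method (assumed, not stated in the paper)."""
    d_ct_sample = ct_target - ct_ref              # normalize target to beta-actin
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # same normalization for the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# e.g. jejunal SGLT-1 on d 1 post-hatching vs. an embryonic-d-23 calibrator (hypothetical Ct values)
print(f"fold change = {relative_expression(24.1, 16.8, 26.5, 16.9):.2f}")
```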
Statistical Analysis
All data were analyzed using SPSS 18.0 software. The Tukey test was used for multiple comparisons among different ages. Values from each group (3 eggs) were pooled to form one sample. Variability in the data is expressed as the standard error of the mean (SEM). Results were considered significant at P < 0.05.
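As a rough illustration of the age-wise comparison described above, the sketch below runs a Tukey test on synthetic placeholder data; the group labels follow the sampling timepoints, but the values and group sizes are assumptions, not the study's results.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
ages = ["E23", "E27", "DOH", "D1", "D4", "D7"]
# synthetic stand-in for, e.g., jejunal sucrase activity per age group (6 samples each)
values = np.concatenate([rng.normal(10 + 4 * i, 1.5, size=6) for i in range(6)])
labels = np.repeat(ages, 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise comparisons among ages
```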
RESULTS AND DISCUSSION
The intestinal tract developed rapidly with the increase of body weight (Table 3). The weight and length of the intestine, and their proportions to embryo weight, are considered important parameters reflecting the development of the small intestine (Chen et al., 2021). The rapid development of the small intestine in the late term of embryonic development has been confirmed in chicks and pigeons (Uni et al., 2003a,b; Dong et al., 2012a,b). Uni et al. (2003b) reported that the proportion of intestine to embryo weight increased rapidly in the last 3 days of incubation in chicks. Dong et al. (2012b) also observed that the intestine of pigeons developed rapidly in the late term of incubation. In the present study, we likewise observed rapid development of the small intestine during the late term of incubation (Table 3), with the increases in duodenal and ileal weight, length, and their proportions to embryo weight occurring on embryonic d 27. Additionally, we observed that the weight of the jejunum and its proportion to embryo weight increased continuously during this period. The rapid development of the small intestine during the late term of incubation provides a preparation for receiving exogenous feed after hatching. Subsequently, the development of the small intestine continued at high speed post-hatching (Table 3). The weight and length of the 3 intestinal segments and their proportions to embryo weight increased continuously during the post-hatching period. A similar development pattern in chicks was reported by Katanbaf et al. (1988), who noted that the intestine developed with age. The rapid development of the small intestine after hatching may be attributed to the stimulating effect of exogenous feed on the small intestine. Jin et al. (1998) reported that the relative weight of the whole intestinal tract in birds that received exogenous feed promptly increased by approximately 20% during the first 5 d post-hatching, whereas there was almost no change in fasting chicks during this period. Good development of the intestinal tract lays a physiological foundation for realizing the maximum growth potential, which is crucial for the growth of goslings. As observed in this study, the intestinal tract of goslings developed rapidly during the late term of incubation and the early post-hatching period, which contributed to supporting the increase in body weight. However, the development patterns of the different intestinal segments differed. Dror et al. (1977) and Uni (1999) reported that the intestinal segment with the highest growth rate in chicks was the duodenum, followed by the jejunum and ileum. In contrast, this study found that the jejunum and ileum were the segments with relatively high growth rates, followed by the duodenum. Additionally, we observed a reduction in the relative weight of the ileum from d 4 to d 7 post-hatching, which probably indicates that the high-speed development of the ileum was arrested or retarded on d 4 post-hatching; however, this interpretation needs to be verified by further studies. To sum up, we consider that the jejunum and ileum have a high priority in the intestinal growth of goslings during early life. Unlike chicks, geese have a large body size and a strong tolerance/adaptability to roughage (Li et al., 2017), which may determine their unique intestinal segment development patterns. 
Moreover, the development of the small intestine is reflected not only in the improvement of its apparent parameters but also in the variation of its morphological structure. The height and width of villi and their surface area are commonly used parameters reflecting the capacity for nutrient absorption (Rajput et al., 2012). The epithelium of the small intestine protrudes into the gut lumen to form long folds, the villi, which are the functional units of the small intestine involved in nutrient absorption. In this study, we observed that the increase of villus absorptive area in the three intestinal segments started on embryonic d 27, remained relatively stable from the day of hatch to d 1 post-hatching, and subsequently continued to increase until d 7 post-hatching (Table 4). Similarly, Sklan (2001) and Uni (2006) reported that the morphology of the small intestine changed rapidly during the early life of chicks. The optimization of villus morphology lays the physiological foundation for nutrient absorption. However, the villi have different ontogenetic timetables in different intestinal segments (Sklan, 2001). Sklan (2001) and Scanes and Pierzchala-Koziec (2014) detailed the small intestinal morphological development of chicks and found that the duodenum was almost completely covered by villi on d 7 post-hatching, but not the jejunum and ileum. Baranyiova and Holman (1976) investigated the development status of villi in chicks immediately after hatching; they observed that the longest villi appeared in the duodenum, with a length twice that of the jejunum and ileum. Additionally, Uni et al. (1998) and Uni (1999) reported that the development of duodenal villi in chicks was completed around d 6 or 7 post-hatching, whereas the development of jejunal and ileal villi was not completed until d 14 post-hatching. Therefore, the results of the above studies seem to indicate that duodenal villi in chicks have a high priority in the development of intestinal villi. In contrast, in the present study, we observed that the duodenum and jejunum had similar villus surface areas, which were higher than those of the ileum (Table 4). Therefore, in goslings, we consider that villus enrichment has a high priority in the duodenum and jejunum, followed by the ileum. (Table note: different superscripts within a row indicate a significant difference, P < 0.05; age refers to embryonic d 23 [E23], d 27 [E27], day of hatch [DOH; after hatch but before feeding], and d 1 [D1], d 4 [D4], and d 7 [D7] post-hatching.)
The activities of intestinal digestive enzymes increased with the development of intestinal morphology (Moosavinasab et al., 2015). In chicks, Uni et al. (2003a,b) and Uni (2006) reported that total intestinal disaccharidase activities started to rise during the last 2 d before hatching and increased rapidly post-hatching. In pigeons, Dong et al. (2012a,b) noted that mucosal and total intestinal disaccharidase activity increased with age. Similarly, in this study, we observed that sucrase activity in the three intestinal segments started to increase on embryonic d 27 and continued to increase with age (Table 5). Maltase activity in the three intestinal segments started to increase on embryonic d 23 and increased gradually with age, but remained relatively stable from d 4 post-hatching (Table 5). Moreover, from d 4 to d 7 post-hatching, we observed a decrease in duodenal maltase activity, which probably indicates that the duodenum is not the primary site for maltose digestion (Uni et al., 1998). Indeed, the regional activity of mucosal enzymes differed among the three intestinal segments. In chicks, it has been reported that the jejunum has the strongest ability to digest disaccharides, followed by the ileum and then the duodenum (Uni et al., 1998). In the present study, we observed the highest disaccharidase activities in the jejunum, followed by the ileum and then the duodenum, the same distribution of disaccharidase activities as in the intestinal segments of chicks (Uni et al., 1998). Therefore, we consider that the jejunum is the main site of disaccharide digestion in goslings.
Digestion occurs when carbohydrates are degraded into disaccharides by enzymes; the resulting nutrients are then absorbed by nutrient transporters and pass through the intestinal epithelial cells (Ashwell, 2009). The nutrient transporters are expressed at a low level before hatching but are upregulated after hatching (Yadgary et al., 2011; Wong et al., 2017). During the late term of incubation, only very small amounts of carbohydrates are present in the intestine, and the increase of SI expression at the apical membrane allows more carbohydrates to be degraded into glucose (De Oliveira et al., 2009; Speier et al., 2012; Dong et al., 2012a). Maintaining high expression levels of SI in the small intestine can provide a sufficient substrate supply for nutrient transporters such as SGLT-1 and GLUT-2 (Dong et al., 2012a). Glucose is a key fuel and an important metabolic substrate; it is absorbed by the intestinal epithelium via apically located SGLT-1 and transported to the blood via GLUT-2 expressed at the basolateral membrane (Mace et al., 2009; Wong et al., 2017). In the present study, we observed that the expression of ileal SGLT-1 and SI genes increased prior to hatching and then decreased around hatching, whereas ileal GLUT-2 expression increased with age (Table 6). Additionally, the expression of SGLT-1, GLUT-2, and SI genes in the duodenum gradually increased until d 1 post-hatching and then continued to decrease (Table 6). However, the expression of jejunal SGLT-1, GLUT-2, and SI genes remained relatively stable during the incubation period; after hatching, jejunal SGLT-1 expression increased with age, whereas jejunal GLUT-2 and SI expression continued to increase until d 4 post-hatching and then decreased (Table 6). In chicks, Gilbert et al. (2007) and Sklan et al. (2003) reported that the expression of SGLT-1, GLUT-2, and SI genes increased with age. Similar results were observed in pigeons and turkeys (Dong et al., 2012a; Weintraut et al., 2016). Additionally, Weintraut et al. (2016) studied GLUT-2 expression in turkeys during the post-hatching period and found that GLUT-2 expression decreased with age. Barfull et al. (2002) observed that SGLT-1 expression in chicks decreased with age during the first week post-hatching. The upregulation of these sugar transporters prior to hatching evidently prepares the gut for the digestion of exogenous feed. However, the reason for the downregulation of sugar transporter expression after hatching is still unclear and seems to be determined by a series of factors. The expression of sugar transporters also differed temporally and spatially in the small intestine. Kaminski and Wong (2018) noted that the expression of sugar transporters in the jejunum of chicks was higher than in the duodenum and ileum. Dong et al. (2012a) observed that the expression of SGLT-1 and GLUT-2 was highest in the jejunum and ileum of pigeons. In the present study, the highest expression of SGLT-1 was observed in the duodenum and jejunum of goslings, followed by the ileum. GLUT-2 expression was highest in the jejunum, followed by the duodenum and then the ileum. SI expression was highest in the duodenum, followed by the jejunum and then the ileum. Therefore, we consider that the jejunum and duodenum are the main sites of sugar transport in goslings.
CONCLUSIONS
We observed that the small intestine developed rapidly throughout the pre- and post-hatching periods, with the jejunum and ileum having a high priority in intestinal development. The villi were preferentially enriched in the duodenum and jejunum, and their surface area increased with age. Disaccharidase activities increased with age, with the highest activities in the jejunum. Additionally, sugar transporter genes were upregulated prior to hatching and downregulated post-hatching; higher sugar transporter gene expression was observed in the jejunum and duodenum. The rapid development in intestinal morphology, disaccharidase activities, and sugar transporter gene expression around hatching indicates that goslings have a high carbohydrate digestion and assimilation potential during their early life, preparing them for the digestion of exogenous feed.
DISCLOSURES
We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome. | 2022-11-07T16:19:12.908Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "11416142696b9f910de67844896433fed966c468",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.psj.2022.102316",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dbcc6d994e1f4aff13dde4950b7e44966986bbc7",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34258365 | pes2o/s2orc | v3-fos-license | A Laboratory Assessment of Factors That Affect Bacterial Adhesion to Contact Lenses
Adhesion of pathogenic microbes, particularly bacteria, to contact lenses is implicated in contact lens related microbial adverse events. Various in vitro conditions, such as the type of bacteria, the size of the initial inoculum, the contact lens material, the nutritional content of the media, and the incubation period, can influence bacterial adhesion to contact lenses, and the current study investigated the effect of these conditions. There was no significant difference in the numbers of bacteria that adhered to hydrogel etafilcon A or silicone hydrogel senofilcon A contact lenses. Pseudomonas aeruginosa adhered in higher numbers than Staphylococcus aureus. Within a genus/species, adhesion of different bacterial strains did not differ appreciably. The size of the initial inoculum, the nutritional content of the media, and the incubation period played significant roles in bacterial adhesion to lenses. A set of in vitro assay conditions to help standardize adhesion between studies is recommended.
Introduction
Contact lenses provide several benefits over spectacles, but their wear remains a risk factor for the development of various adverse events, such as microbial keratitis (MK) [1], contact lens related acute red eye (CLARE) [2], contact lens peripheral ulcer (CLPU) [3] and infiltrative keratitis (IK) [4]. Adhesion to, and colonization of, contact lenses by a variety of microbes, particularly bacteria [1], is implicated as a major factor in the initiation of these adverse events. Pseudomonas aeruginosa and Staphylococcus aureus are the two predominant microorganisms implicated in contact lens related microbial adverse events [1,5]; other microorganisms such as Serratia marcescens [2], coagulase-negative staphylococci [1], fungi [6] and Acanthamoeba [7] are less frequently involved. Depending on the study design and location, P. aeruginosa and S. aureus together account for 44% to 57% of total culture-positive contact lens related microbial keratitis [1,8].
Bacterial adhesion to contact lenses is a complex and multifactorial process, and previous in vitro and ex vivo adhesion data differ widely between studies [9]. This is mainly due to the variety of methodologies used to evaluate bacterial adhesion, as a range of assay conditions have been employed. These conditions have included different strains/types of bacteria, contact lens types, inoculum sizes, the nutritional content of the media and the incubation time allowed for adhesion to occur [9]. Viable plate counts [10-12], numbers of cells adherent in parallel plate flow chambers [13], scanning electron microscopy [14], bioluminescent ATP assay [15], light microscopy [16], and assessment of the number of cells after radio-labeling [17] have been used to quantify microbial adhesion to lenses. Various solutions are used during adhesion experiments, including phosphate buffered saline (PBS) [18,19], which is nutritionally inert, and broths such as Tryptone Soy [20] or Mueller Hinton, which are nutritionally rich. The reported inoculum sizes in bacterial adhesion assays have varied from 1 × 10^3 colony forming units (CFU) mL^-1 up to 1 × 10^9 CFU mL^-1 [10,21], and the incubation period for adhesion has ranged from 10 minutes to 72 hours [16,22].
The wide variety of bacterial assays used in previous studies, and the consequent differences in the numbers of bacteria adhering to lenses, signify a need to develop a set of standardized in vitro assays that allow comparisons within and between studies of the adhesion of different bacterial strains to different contact lenses. This study aimed for a better understanding of the major factors that affect bacterial attachment and, furthermore, to suggest key standard assay conditions that are best suited for laboratory assessment. As biofilm formation on contact lenses during wear is infrequent [23], the primary focus of this investigation was on the initial steps of bacterial adhesion.
Experimental Section
Two of the most widely used contact lens materials, the hydrogel etafilcon A (ACUVUE® 2; Johnson & Johnson Vision Care Inc., Jacksonville, FL; Base curve: 8.7 mm, Diameter: 14.0 mm, Power: -3.00 Diopter) and the silicone hydrogel senofilcon A (Johnson & Johnson Vision Care; Base curve: 8.4 mm, Diameter: 14 mm, Power: -3.00 Diopter), were used [24]. The properties of these materials are described in Table 1.
Bacterial Strains
As the majority of the causative microorganisms of contact lens related microbial adverse events are Gram-negative Pseudomonas aeruginosa and Gram-positive Staphylococcus aureus [1,5], selected strains of these were used. Table 2 details the bacterial strains used in this study [19,25-28]. Table 2. Details of bacteria used in the study.
Incubation Period
Contact lenses were incubated for two hours and 18 hours with the bacterial suspensions.
Adhesion Conditions
Stock cultures were stored in 30% glycerol at -80 °C. Bacteria were grown overnight in TSB at 37 °C with aeration. The harvested bacterial cells were centrifuged for 10 min at 3,000 rpm and the cells washed three times with PBS. All bacteria were then resuspended in one of the four media to an OD660nm of 1.0 (1 × 10^9 CFU mL^-1). The bacterial cell suspensions were then diluted to 1 × 10^6 and 1 × 10^3 CFU mL^-1. The bacterial suspension of 1 × 10^10 CFU mL^-1 was made by centrifuging 10 mL of the 1 × 10^9 CFU mL^-1 suspension and resuspending it in 1 mL of the respective medium. Contact lenses were washed three times in PBS and transferred to 1 mL of bacterial suspension in wells of 24-well tissue culture plates (CELLSTAR®, Greiner Bio-One, Frickenhausen, Germany), concave side up. To allow adhesion of bacterial cells, lenses were incubated for two hours or 18 hours at 37 °C with shaking (120 rpm). Lenses were aseptically removed from the suspension and washed three times with 1 mL PBS in a 24-well plate by shaking at 120 rpm for 30 seconds to remove non-adherent cells. Following washing, contact lenses were stirred rapidly in 2 mL of PBS containing a small magnetic stirring bar. Following log10 serial dilutions in PBS, 3 × 50 µL of each dilution were plated on nutrient agar (NA; Oxoid, Basingstoke, UK). After 24 hours of incubation at 37 °C, the viable bacteria were enumerated as CFU mm^-2 of lens. The inoculum sizes were retrospectively counted by plating and overnight incubation on nutrient agar. Results are expressed as the numbers of adherent viable bacteria from three independent experiments, with three samples evaluated each time.
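A hypothetical worked example of the enumeration step is sketched below: triplicate 50-µL plate counts from one dilution are converted into adherent CFU per mm² of lens. The colony counts and the lens surface area used here are assumptions for illustration only.

```python
counts = [42, 38, 45]          # colonies on three 50-uL spots of one dilution (hypothetical)
dilution = 1e-3                # the 10^-3 serial dilution was plated
plated_volume_ml = 0.05        # 50 uL per spot
resuspension_ml = 2.0          # lens stirred in 2 mL PBS, as described above
lens_area_mm2 = 160.0          # assumed nominal lens surface area (illustrative)

cfu_per_ml = (sum(counts) / len(counts)) / (plated_volume_ml * dilution)
cfu_per_lens = cfu_per_ml * resuspension_ml
print(f"adhesion = {cfu_per_lens / lens_area_mm2:.1f} CFU/mm^2")
```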
Statistics
The adhesion data were log10(x+1) transformed prior to analysis, where x is the number of adherent bacterial colonies per mm². All data were analyzed using the Statistical Package for the Social Sciences for Windows, version 21.0 (SPSS, Inc., Chicago, IL). Interactions between the different factors influencing bacterial adhesion to contact lenses, such as bacterial strain type, assay media, incubation time and inoculum size, were investigated in a nested model of all the variables, from which estimated means adjusted for the other variables in the model were calculated. To evaluate and compare the influence of the tested assay conditions on bacterial adhesion, partial eta squared was estimated. Bacterial adhesion and contact lens parameters were analyzed using the independent two-sample t-test. Differences between groups were analyzed using a linear mixed model ANOVA, which adjusts for the correlation due to repeated observations. Post hoc multiple comparisons were performed with Bonferroni correction. Statistical significance was set at 5%.
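A minimal sketch of two of the steps quoted above, the log10(x+1) transform and a partial eta squared estimate (η_p² = SS_effect / (SS_effect + SS_error)), is given below on synthetic placeholder data; the full nested/mixed-model analysis was performed in SPSS and is not reproduced here.

```python
import numpy as np

def log_transform(raw_counts):
    """The log10(x+1) transform applied to adhesion counts."""
    return np.log10(np.asarray(raw_counts) + 1.0)

# synthetic log-adhesion values for the four media (placeholders, not study data)
rng = np.random.default_rng(4)
groups = {m: rng.normal(loc, 0.3, size=9) for m, loc in
          [("PBS", 2.0), ("TSB", 4.0), ("1/10 TSB", 3.5), ("TSBG", 3.5)]}

all_vals = np.concatenate(list(groups.values()))
grand = all_vals.mean()
ss_effect = sum(len(v) * (v.mean() - grand) ** 2 for v in groups.values())
ss_error = sum(((v - v.mean()) ** 2).sum() for v in groups.values())
print(f"partial eta squared (media) = {ss_effect / (ss_effect + ss_error):.2f}")
```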
Results and Discussion
Figures 1 and 2 show the adhesion of P. aeruginosa and S. aureus, respectively, when incubated in the four different media and at three different bacterial concentrations over time. Analysis of strain differences within a genus/species found that only P. aeruginosa ATCC 9027 showed higher adhesion to etafilcon A than to senofilcon A (p < 0.01); this was not seen for any other bacterial type. P. aeruginosa adhered in higher numbers than S. aureus (p < 0.01).
For each bacterial type and strain there was a significant increase in adhesion from 2 to 18 hours (p < 0.01) when incubated with a 1 × 10^3 CFU mL^-1 or 1 × 10^6 CFU mL^-1 bacterial suspension. For P. aeruginosa strains, adhesion to the contact lenses increased as the initial inoculum increased (p < 0.01). However, for strains of S. aureus, adhesion reached a maximum when 1 × 10^6 CFU mL^-1 bacterial cells were incubated with lenses; addition of bacteria at 1 × 10^10 CFU mL^-1 did not increase adhesion. The differences between the numbers of bacterial cells recovered from the wash solutions of contact lenses incubated with different concentrations of bacteria were less than 0.3 log.
When comparing the effect of the different media on adhesion, there were differences between the bacterial genera/species. For P. aeruginosa, adhesion was significantly lower (p < 0.01) when incubated in PBS after 18 hours for concentrations up to and including 1 × 10^6 CFU mL^-1, but not at 1 × 10^10 CFU mL^-1. At 1 × 10^3 CFU mL^-1, adhesion of P. aeruginosa was significantly higher when incubated with TSB (p < 0.01) compared to all other media, but this difference tended to lose significance at higher bacterial concentrations. For S. aureus, adhesion was significantly lower in PBS (p < 0.01) than in all other media at all bacterial concentrations, at all time points and on both contact lens types. When 1 × 10^6 or 1 × 10^10 CFU mL^-1 of S. aureus was used, there was a reduction in the number of bacteria adhered to lenses when incubated in PBS after 18 hours of adhesion compared to 2 hours; this was not the case with the other media. After adjusting for the effects of incubation time, inoculum size and lens material, incubation in PBS showed significantly (p < 0.01) less adhesion for all the bacteria studied. There were no significant differences in bacterial adhesion (p > 0.05) when incubated with 1/10 TSB or TSBG. Incubation in the nutritionally rich TSB was often associated with higher adhesion (Figures 1 and 2) compared to the other media, especially after 18 hours. Table 3 shows the estimated degree of association between bacterial adhesion and the influencing assay conditions. A higher partial eta squared value implies a greater influence on bacterial adhesion. Variation among S. aureus strains did not influence bacterial adhesion (partial eta squared = 0.00; p = 0.41). All the remaining factors, including the various P. aeruginosa strains, lens types, assay media, incubation period and inoculum size, had a significant influence (p < 0.05) on P. aeruginosa and S. aureus adhesion. In this study, adhesion of P. aeruginosa and S. aureus strains to contact lenses was assessed under several assay conditions. In most cases there was no significant difference in adhesion to hydrogel etafilcon A or silicone hydrogel senofilcon A lenses, which is consistent with some earlier studies [19]. However, this result differs somewhat from studies showing higher P. aeruginosa and S. aureus adhesion to silicone hydrogel contact lenses compared to hydrogel lenses [10,15,29]. Senofilcon A lenses have been shown to result in lower bacterial adhesion compared to other silicone hydrogels such as balafilcon A or lotrafilcon B used in previous studies [30]. Different strains of P. aeruginosa or S. aureus did not show significantly different adhesion to contact lenses. Previous studies have shown considerable variation in adhesion between different strains of P. aeruginosa or S. aureus, ranging up to 2.00 × 10^5 CFU mm^-2 and 1.23 × 10^5 CFU mm^-2, respectively [19,31-33]. Thus it is important to use the same strains across studies for meaningful comparisons to be made. Other strains can be incorporated as well to test for strain differences.
P. aeruginosa adhered at higher levels than S. aureus, in agreement with previous reports [19,34]. However, the reason is not known in any great detail. It is known that cell surface appendages such as flagella and pili aid in the adhesion of P. aeruginosa [35], as does the relatively hydrophobic nature of some strains of P. aeruginosa compared to S. aureus [36]. This has been hypothesized to be one reason why P. aeruginosa is a predominant causative agent in contact lens-induced MK.
Previous studies have shown that initial bacterial adhesion to contact lenses increases with time, peaks at 3 to 18 hours of incubation and then remains steady, suggesting the end point of primary adhesion [22,31,37]. Bacterial adhesion was determined in this study during two phases of the process: 2 hours and 18 hours of exposure of contact lenses to the bacterial suspension. The viable bacterial numbers after 18 hours of adhesion were generally higher than after 2 hours, an observation that agrees with some previous studies [22,38]. Combining our results with those of Tran et al. [35], who showed linear kinetics of bacterial adhesion up to 70 minutes, and Randler et al. [22], who investigated up to 72 hours but found incremental adhesion only up to 24 hours, illustrates that adhesion to contact lenses increases in a time-dependent manner up to 18-24 hours of incubation, after which viability is reduced. Perhaps the reduction in viability is due to the bacteria entering a biofilm mode of growth, which is known to result in lower viability of cells [39,40], or due to biofilm dispersal, which can occur when the environmental nutrients are not favorable for bacteria. In contrast, Stapleton et al. [41] and Andrews et al. [37] reported a plateau in adhesion after 45 minutes and four hours of incubation, respectively, with adhesion remaining at those levels for more than 18 hours. These findings illustrate that investigators need to select the incubation period of a bacterial adhesion assay carefully, depending on the study hypothesis being tested.
Bacterial incubation in the nutritionally rich medium TSB resulted in the highest adhesion of both bacterial types. PBS, being nutritionally inert, resulted in the apparent death of the more fastidious S. aureus strains used in the current study, and so PBS is not recommended as a medium for S. aureus adhesion experiments. This was supported by a test showing that 18 hours of incubation of 1 × 10^3 CFU mL^-1, 1 × 10^6 CFU mL^-1 and 1 × 10^10 CFU mL^-1 in PBS significantly (p < 0.001) reduced mean S. aureus viability to 2.13 × 10^2 CFU mL^-1, 2.55 × 10^3 CFU mL^-1 and 9.13 × 10^4 CFU mL^-1, respectively (data not shown). This study demonstrates that diluted TSB can function as an adequate medium for adhesion experiments. Since there was no significant difference in bacterial adhesion with 1/10 TSB and TSBG, addition of glucose is not recommended.
Since it is difficult to quantify the exposure of contact lenses to microorganisms during wear, a wide range of numbers was selected for testing; 1 × 10^3 CFU mL^-1 represented a low inoculum size, 1 × 10^6 CFU mL^-1 a medium inoculum size and 1 × 10^10 CFU mL^-1 a very high inoculum size. 1 × 10^10 CFU mL^-1 was usually associated with the highest adhesion, especially when incubated for 2 hours. Previous studies have also used higher inoculum sizes when incubation times were short [41-43] and lower inoculum sizes when incubating for longer [20,21]. Contact lenses will rarely be exposed to numbers of bacteria as high as 1 × 10^10 CFU mL^-1 during contact lens wear or even in lens cases. The range of bacterial numbers isolated from contact lens storage cases has been reported to be 1.24 × 10^4 CFU/case to 6.32 × 10^4 CFU/case [44-49]. Therefore, exposing contact lenses to this level of bacteria may be unrealistic. The data from the current experiments suggest that an inoculum size of 1 × 10^6 CFU mL^-1 may offer a more realistic level of bacteria to expose contact lenses to, and results in medium to high levels of bacterial adhesion.
Inoculum size was the greatest influencing factor for P. aeruginosa adhesion, followed by incubation period and assay media. Interestingly, nutritionally variable assay media was the greatest influencing factor determining S. aureus adhesion, confirming that S. aureus is sensitive to the nutritional content. Incubation period and inoculum size were the other major influencing factors. Lens types and bacterial strains had a minor influence.
A limitation of this study is that bacterial adhesion to contact lenses was not evaluated at frequent time intervals, which might have provided a better understanding of the kinetics of bacterial adhesion. Bacterial adhesion after a longer incubation period, such as 18 hours, is a complex process because the bacteria are more likely to be replicating during this time, especially under nutrient-enhanced conditions; it is probably a combination of initial biofilm formation and continued initial adhesion of daughter cells. This study evaluated adhesion at a fixed stirring rate (120 rpm); altering this rate will undoubtedly affect the rate of bacterial arrival at the lenses. Since it is difficult to reproduce in vitro the in vivo blinking motion over contact lens surfaces, we recommend using a constant shaking rate (such as the 120 rpm used in this study) for a particular study design. In addition, the total microbial load cannot be investigated by this type of assay. However, the viable plate count is a vital method for obtaining reproducible counts of viable microbes, which are essential for the development of infection and inflammation, especially in the ocular environment [3]. Based on the results obtained in this study, we suggest 18 hours of incubation of 10^6 CFU mL^-1 of S. aureus or P. aeruginosa in 1/10 TSB or PBS, respectively, to study the attachment of bacteria to contact lenses. A further advantage of this recommended assay is that good results can be achieved with basic laboratory apparatus, without the need for expensive equipment such as confocal or optical microscopes and microtitre plate readers. The bacterial adhesion assay used in this study is best suited to investigating increases or decreases in viable counts, such as in antimicrobial research.
It is important to carefully select assay conditions depending on the study purpose. Adhesion of P. aeruginosa to contact lenses ranged from 1.38 CFU mm^-2 to 4.57 × 10^6 CFU mm^-2 and S. aureus adhesion ranged from 1.37 CFU mm^-2 to 1.13 × 10^5 CFU mm^-2, depending on the assay conditions. If experiments are designed to investigate the effect of materials on bacterial adhesion, or whether antimicrobial lenses can reduce adhesion, it is important to choose assay conditions that give adhesion to control lenses in a medium range, so that increases or decreases in adhesion can be measured. A set of such assay conditions is given in Table 4; these provided moderate adhesion of between 1 × 10^3 CFU mm^-2 and 1 × 10^5 CFU mm^-2 for P. aeruginosa and between 1 × 10^3 CFU mm^-2 and 1 × 10^4.5 CFU mm^-2 for S. aureus for both contact lens types. In conclusion, this study has determined that different strains of P. aeruginosa or S. aureus do not adhere very differently to contact lenses. Adhesion is more affected by the environment and the number of bacteria initially applied to the lenses. At least for etafilcon A and senofilcon A lenses, adhesion was not affected by lens polymer type. A variety of conditions have been used to evaluate bacterial adhesion, and investigators need to select a set of assay conditions depending on the study hypothesis. The proposed conditions, which give intermediate levels of bacterial adhesion to contact lenses, could be used for subsequent evaluations of bacterial adhesion to lenses or of the antibacterial efficacy of contact lenses.
Conflict of Interest
This work is original, has not been published and is not being considered for publication elsewhere. There are no conflicts of interest for any of the authors that could have influenced the results of this work. The first author is supported by the University International Postgraduate Award (UIPA) UNSW, and top-up scholarships from the OVRF Maki Shiobara Scholarship and Brien Holden Vision Institute. | 2014-10-01T00:00:00.000Z | 2013-11-01T00:00:00.000 | {
"year": 2013,
"sha1": "98c34fbb22d55e8ce3326ecd43c31b42ca487b63",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-7737/2/4/1268/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98c34fbb22d55e8ce3326ecd43c31b42ca487b63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
222124860 | pes2o/s2orc | v3-fos-license | Electron acceleration in laser turbulence
We demonstrate that electrons can be efficiently accelerated to high energy in spatially non-uniform, intense laser fields. Laser non-uniformities occur when a perfect plane wave reflects off a randomly perturbed surface. By solving for three-dimensional particle trajectories in the electromagnetic field of a randomly perturbed laser, we are able to generate electron energy spectra with temperatures well above the ponderomotive scaling, as observed in many experiments. The simulations show that high electron energies can be achieved by the laser fields alone, without the need for plasma fields. The characteristic temperatures of the electron spectra are determined by the characteristic features of the laser field turbulence. The process is very rapid, occurring on 10-50 fs timescales, indicating it is likely a dominant acceleration mechanism in short-pulse laser-solid interactions when the intensity is at or below the relativistic threshold. A simple analytic model shows how electrons can reach high energy by undergoing repeated acceleration in laser wavelets for short periods of time.
Intense, short-pulse laser-plasma experiments allow us to readily produce extreme states of matter in the laboratory. Many laboratories around the world, at both the university and national scale, are able to do so by directing very intense, short laser pulses onto specially designed targets. This allows us to study a wide variety of physical processes, from particle acceleration and intense radiation sources to laboratory astrophysics and positron generation. The possibility of observing high-field phenomena, such as quantum electrodynamics processes [1,2], or of utilizing these formidable tools for igniting dense fusion plasmas [9], is an active area of research. The theoretical study of the non-linear and multiple-scale behavior of particles and matter under these conditions has been greatly aided by the particle-in-cell (PIC) simulation technique [3]. Large-scale, multi-dimensional simulations are themselves often so complex that isolating and clearly identifying the dominant physical processes can be challenging, and arriving at unambiguous conclusions is difficult.
One striking feature of intense laser-plasma experiments is the observation of thermal electron energy spectra, i.e. spectra with functional form close to dN/dE ∼ exp(−E/T_e) (E is the electron energy and T_e the best-fit temperature). The thermal form of the spectra is strongly suggestive of a stochastic acceleration process, and many such processes have been proposed, mostly on the basis of one-dimensional simulations [10,12-14]. The most well known heating mechanism is ponderomotive acceleration, which predicts that electrons gain kinetic energy equal to the ponderomotive potential φ_p = (√(1 + Iλ_µm²/1.37 × 10^18) − 1) mc² (with intensity I in W cm^-2 and wavelength λ_µm in µm) by undergoing a single j × B oscillation in the wave close to the critical surface. After crossing the critical surface, electrons retain this energy because they are no longer in the vicinity of strong fields. Robinson [4] has shown that one route to electron energy gain is the "breaking of adiabaticity" by some force of non-laser origin (i.e. not the Lorentz force associated with the laser plane-wave fields). Examples of non-laser forces are the electric field formed in ion channels [4], the electrostatic sheath field [13] and plasma waves [5]. Other processes have been identified which occur in two [15] and three dimensions [6]. These mechanisms are associated with electron motion across large-scale features (such as the ion channel, plasma sheath, laser spot or target dimensions), and operate on relatively long timescales because the anti-dephasing is driven by plasma fields, which are relatively weak in comparison to the laser fields. In contrast, the mechanism we describe in this letter occurs on much shorter timescales, and is therefore likely a dominant acceleration mechanism. For example, by the process described here, electrons can be accelerated to multi-MeV energies within 10 fs at the non-relativistic intensity of I = 4 × 10^17 W cm^-2. The mechanism is based on electrons accelerated by non-uniformities in the laser fields themselves (which are approximately an order of magnitude stronger than the induced plasma fields). We show that the random nature of typical laser non-uniformities ("laser turbulence") causes electrons to undergo stochastic motion and non-adiabatic acceleration, producing electron energy spectra with characteristic energy determined by the length of time electrons spend in the turbulence, as well as the spectral content of the turbulence. In short scale-length plasmas this produces the well-known ponderomotive temperature scaling observed in early PIC simulations [7]; in longer scale-length plasmas it can produce temperatures more than an order of magnitude above ponderomotive, as observed in many experiments. The heating mechanism can be identified unambiguously because no plasma fields are necessary. A simple analytic model complements the simulations by showing how electrons cumulatively gain energy in a series of "kicks" by staying in phase with the wave for short periods of time.
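As a quick numerical check of the scaling quoted above, the short sketch below evaluates φ_p; the 1-µm default wavelength is an assumption for illustration.

```python
import math

def ponderomotive_keV(intensity_W_cm2: float, wavelength_um: float = 1.0) -> float:
    """phi_p = (sqrt(1 + I*lambda_um^2 / 1.37e18) - 1) * m_e c^2, with m_e c^2 = 511 keV."""
    return (math.sqrt(1.0 + intensity_W_cm2 * wavelength_um**2 / 1.37e18) - 1.0) * 511.0

# At I = 4e17 W/cm^2 this gives ~70 keV, far below the ~210-240 keV slope
# temperatures quoted later for the turbulent-field simulations.
print(f"phi_p = {ponderomotive_keV(4e17):.1f} keV")
```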
Since the earliest two-dimensional PIC simulations, irregular, non-plane-wave spatial structure in the laser fields has been noted [7,8,20]. These non-uniformities are referred to as "disturbances", "ripples", "filaments" or "fluctuations". Brady [8] studied this phenomenon in depth, showing that for intensities above I ≈ 5 × 10^17 W cm^-2 the non-uniformities, which have spatial scale on the order of the laser wavelength (λ_0) and localized intensity maxima ≈ 4× the incident intensity, are seeded by the Raman instability. Laser envelope and phase distortions also frequently occur as a result of imperfections in the laser system itself, or in the form of speckles. Whatever the cause, it is clear that when a uniform plane wave reflects off a perturbed surface, the reflected light will contain significant non-uniformities if the surface contains depth perturbations on the order of λ_0, as seen in multi-dimensional PIC simulations. These diffractive non-uniformities, or fluctuations, have spatial scale λ_0, sometimes with a longer length scale in the direction along the laser k-vector (z), depending on the nature of the surface perturbations and the distance from the surface. Filamented light is also sometimes associated with density filaments rather than diffraction, although these filaments do not appear to be a dominant cause of absorption, and electrons do not become trapped inside such filaments [18]. We use the term fluctuations to distinguish the time-dependent non-uniformities discussed here from their time-independent counterpart (speckles), although both phenomena arise from diffraction.
To study the acceleration of electrons in laser turbulence, we have developed the Quartz simulation code, which specifies the laser fields analytically, thereby removing the numerical heating associated with the computational grid, which can be a source of significant error when studying a heating process. In order to obtain unambiguous results, the plasma response is ignored, so that all effects can be attributed to the laser fields only. This has the added advantage of making 3D simulations computationally feasible. The electron equations of motion, dp/dt = −e(E + v × B) and dx/dt = v, are updated numerically with 4th-order relativistic Runge-Kutta integration, which provides excellent energy conservation (p is the momentum, v the velocity, E the electric field, B the magnetic field, e the electron charge and m its mass). The fields of the reflected wave are obtained from a vector potential A(x,t) constructed as a Fourier-optics sum over plane-wave modes with random phases φ_{j,l}, where k = k_z ẑ is the laser wavevector, ω the laser frequency, 2N the number of surface perturbations considered per dimension (typically 2N = 18), R_⊥ the generated mean transverse intensity radius, L_z the generated mean longitudinal intensity length, A_0R the amplitude of the vector potential of the reflected wave, and the φ_{j,l} are phase factors randomly generated with value 0 or π. The total potential is the sum of the reflected wave and the incident wave, corresponding to p-polarization at normal incidence. In this study, the reflected wave intensity is assumed to be 0.8 that of the incident wave, representing an absorption fraction of α = 0.2, typical of short-pulse laser-matter interactions. The above expression for A is the vector potential envelope obtained using Fourier optics for a speckled laser beam [19]. One could envisage calculating the fields from the Kirchhoff integral if the surface were specified; however, this would necessitate a means of specifying the surface analytically. The advantage of the above approach is that the fluctuation turbulence is generated with Gaussian statistics whose mean properties can be specified by the two parameters R_⊥ and L_z. This enables, for example, a spectrum of mostly "short" (R_⊥ ≈ L_z ≈ λ_0) or mostly "long" (L_z ≫ R_⊥ ≈ λ_0) fluctuations to be studied independently. Real systems with long scale-length plasmas are likely to contain both short and long features, while those with short scale-length plasmas will mostly contain short features. In Fig. 1 we plot the instantaneous laser electric field for two cases: (a) R_⊥ = 1 µm and L_z = 1.2 µm (referred to as the "short case"), (b) R_⊥ = 1 µm and L_z = 12.2 µm (referred to as the "long case"), on an arbitrary x-z plane at an arbitrary time. The fields A(x,t) are infinitely periodic in each direction, so that boundary effects do not complicate the analysis.
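A minimal sketch of this simulation approach is given below. It is not the Quartz code: the speckle sum is a simplified stand-in for the paper's Fourier-optics expression (which is not reproduced in the text), and all numerical parameters (mode count, a_0, time step, run length) are illustrative assumptions. It does show the two ingredients described above: an analytically specified incident-plus-reflected field with random 0/π phases, and a 4th-order Runge-Kutta push of the relativistic equations of motion in normalized units (time in 1/ω, length in c/ω, momentum in m_e c, fields in m_e c ω/e).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3                                       # paper uses 2N = 18; 2N = 6 keeps this fast
phases = np.pi * rng.integers(0, 2, size=(2 * N, 2 * N))  # random 0 or pi phases
kperp = (np.arange(2 * N) - N + 0.5) / 6.0  # assumed transverse spectrum, scale ~ R_perp

def em_fields(x, t, a0=0.6):
    """(E, B) of incident + reflected waves at position x, time t (assumed form)."""
    E, B = np.zeros(3), np.zeros(3)
    E[0] += a0 * np.cos(x[2] - t)           # incident plane wave along +z, x-polarized
    B[1] += a0 * np.cos(x[2] - t)
    aR = a0 * np.sqrt(0.8) / (2 * N)        # 80% reflected intensity -> sqrt(0.8) amplitude
    for j in range(2 * N):                  # reflected, randomly phased sum along -z
        for l in range(2 * N):
            ph = -x[2] - t + kperp[j] * x[0] + kperp[l] * x[1] + phases[j, l]
            E[0] += aR * np.cos(ph)
            B[1] -= aR * np.cos(ph)         # B_y = -E_x for a -z propagating wave
    return E, B

def deriv(state, t):
    """d(state)/dt with state = (x, p): dx/dt = p/gamma, dp/dt = -(E + v x B)."""
    x, p = state[:3], state[3:]
    gamma = np.sqrt(1.0 + p @ p)
    v = p / gamma
    E, B = em_fields(x, t)
    return np.concatenate([v, -(E + np.cross(v, B))])

def rk4_push(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = deriv(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = deriv(state + dt * k3, t + dt)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# push one electron, initially at rest at a random position, for ~10 laser periods
state = np.concatenate([rng.uniform(0, 6, 3), np.zeros(3)])
t, dt = 0.0, 0.05
for _ in range(int(10 * 2 * np.pi / dt)):
    state = rk4_push(state, t, dt)
    t += dt
gamma = np.sqrt(1.0 + state[3:] @ state[3:])
print(f"kinetic energy = {(gamma - 1) * 0.511:.3f} MeV")
```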
In this letter we study the acceleration of electrons in fields with short and long scale non-uniformities and obtain a scaling relation for slope temperature as a function of intensity. We first consider in detail the spectra for the short case at a relatively low average incident intensity of I = 4 × 10^17 W cm^-2, in Fig. 2. This intensity is of particular interest because, according to ponderomotive scaling [7], the slope temperature should be non-relativistic (φ_p ≈ 70 keV), since the electron quiver energy in the wave is non-relativistic. However, some experiments [16-18,22] in this intensity range have measured temperatures an order of magnitude above ponderomotive. In our simulations, electrons very rapidly reach a slope temperature of T_e ≈ 210 keV in the first 10 fs, then proceed to increase in temperature less rapidly, reaching T_e ≈ 240 keV by 50 fs. In the case of long fluctuations (Fig. 2), electrons gain energy more rapidly, reaching a slope temperature of T_e ≈ 0.5 MeV in 10 fs, then proceeding to generate spectra that deviate from a simple thermal form (similar to the non-thermal spectra observed in [16,22,23]). To estimate the effective temperature for non-thermal spectra, we introduce the parameter E_tail, defined as the average energy of electrons in the upper 90% of the energy range, i.e. the average energy of all electrons with energy E > 0.9 E_max, where E_max is the maximum particle energy. This provides a convenient means of excluding the low-energy bulk, which accelerates slowly and is unlikely to reach a detector, and gives a result which is approximately independent of the number of computational particles used. For the long case, we find E_tail ≈ 1.1 MeV at 10 fs, rising to E_tail ≈ 3.3 MeV by 50 fs, which is remarkably high given the relatively low average intensity.
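A sketch of how a slope temperature and the E_tail measure can be extracted from a spectrum is shown below; the data are synthetic (drawn from an exponential with T = 240 keV, echoing the short-case result), and the sample size and binning choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T_true = 0.24                                    # MeV, cf. the 240 keV quoted at 50 fs
energies = rng.exponential(T_true, size=20000)   # stand-in for simulated electron energies

# slope temperature: linear fit to log(dN/dE) over the upper half of the spectrum
counts, edges = np.histogram(energies, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = (counts > 5) & (centers > np.quantile(energies, 0.5))
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print(f"slope temperature ~ {-1.0 / slope:.3f} MeV")

# E_tail: mean energy of electrons above 90% of the maximum particle energy
e_max = energies.max()
print(f"E_tail ~ {energies[energies > 0.9 * e_max].mean():.3f} MeV")
```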
This mechanism may explain the thermal nature of ponderomotive scaling. Although ponderomotive scaling [7] is widely cited, there is currently no theoretical understanding of the process that creates the associated quasi-thermal spectrum: according to the basic principle, electrons gain kinetic energy equal to the ponderomotive potential φ_p = [√(1 + Iλ²_µm/1.37 × 10^18) − 1] mc² (with intensity I in W cm−2, wavelength λ_µm in µm) by undergoing a single oscillation in the wave close to the critical surface. After crossing the critical surface, electrons retain this energy because they are no longer in the vicinity of strong fields. However, the original simulations [7] showed thermal spectra, indicating electrons experience a wide range of energy gains, with sub-ponderomotive gains being the most probable and gains above φ_p less probable (but nonetheless observed).
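For reference, the ponderomotive scaling quoted above can be evaluated directly; at I = 4 × 10^17 W cm−2 and λ = 1 µm it reproduces the ≈ 70 keV figure cited earlier. A minimal sketch:

```python
import math

MC2_KEV = 511.0  # electron rest energy in keV

def ponderomotive_potential_keV(intensity_w_cm2, wavelength_um=1.0):
    """Ponderomotive potential quoted above:
    phi_p = (sqrt(1 + I * lambda_um^2 / 1.37e18) - 1) * m c^2."""
    a2 = intensity_w_cm2 * wavelength_um**2 / 1.37e18
    return (math.sqrt(1.0 + a2) - 1.0) * MC2_KEV

# ~70 keV, far below the 210-500 keV slope temperatures in the simulations
print(ponderomotive_potential_keV(4e17))
```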
We summarize a range of simulations in Fig. 3, where we plot E_tail (at 10 fs) as a function of intensity for both the short and long cases, along with ponderomotive scaling. Note that the short and long cases merge to a common energy in the relativistic intensity range. The experimental data at low intensity are taken from [17,18] and [16] at high intensity. We have developed a simple Chirikov map which reproduces the essential features of the particle simulations and gives analytic insight into the acceleration mechanism. This is based on the simple concept of dividing space into a collection of relatively small plane waves ("wavelets") separated by field nulls where the phase changes abruptly. Each wavelet is an isolated plane wave of finite length L_z (along the direction of laser propagation) and constant amplitude A_0, with unique phase φ_0; its field A(z, t) is the product of a plane wave and the unit box function Π(x) (= 0, unless |x| ≤ 1) restricting it to length L_z. The motion of an electron through each wavelet is simply motion in a plane wave for a limited period of time τ_c, equal to the fluctuation crossing time ≈ L_z |γ_0/v_z0|. Yang [21] has obtained implicit analytic solutions for electron momenta p(s) in a plane wave with arbitrary initial momenta (p_0 = γ_0 m v_0) and phase φ_0, in terms of the electron proper time s(t) = ∫_{t_0}^{t} γ(t')^{−1} dt'. However, the relation between laboratory time (t) and proper time (s) is non-trivial; here a = −eE_0/ωmc is the normalized vector potential, and time is normalized using the laser frequency (t → ωt). Making the small angle approximation Rs ≪ 1, which corresponds to short acceleration times t ≪ Δt_max, where 4R²Δt_max ≈ 2(1 + γ_0² + p_x0² − 2γ_0 p_z0 + p_z0²) − 0.32 a p_x0 + 0.016 a², allows us to invert the expression for laboratory time and obtain s ≈ 2Rt/(1 + γ_0² + p_x0² − 2γ_0 p_z0 + p_z0²). This approximation, satisfied for short scale-length wavelets (τ_c ≪ Δt_max), allows us to obtain expressions for the change in momentum of the electron, Δp = p(τ_c) − p(0), when crossing a wavelet. These equations predict that electrons crossing a wavelet can experience a period of acceleration (deceleration) by remaining in phase with the wave during their transit, and exit the wavelet before deceleration (acceleration) occurs. By making repeated transitions of this type, electrons gain energy on average (predominantly in the forward direction), while undergoing dynamic diffusion in momentum space.
Since most electrons move with speed ≈ c, we can re-express the scaling in terms of plasma scale length (L = cτ). This suggests that increasing the plasma scale length is as effective a route to obtaining high temperatures as increasing the intensity, which may explain the high temperatures observed in experiments and PIC simulations in long scale-length plasmas [17,18].
The expressions for Δp_z and Δp_x allow us to form a time-discrete map model in which N electrons, given random initial momenta (−0.11 ≤ p_x,z ≤ 0.11), undergo N_t transitions (i): p^{i+1} = p^i + Δp(p^i, φ_0), with the entrance phase treated as a random variable 0 ≤ φ_0 ≤ 2π. Three example energy spectra using the map are plotted in Fig. 5 for the cases a = 0.47 (I = 3 × 10^17 W cm−2), a = 0.66 (I = 6 × 10^17 W cm−2), and a = 0.94 (I = 1.2 × 10^18 W cm−2), with N = 2.5 × 10^4 and N_t = 8. The map reproduces the essential features of the 3D simulations: approximately thermal spectra, relativistic temperatures despite a ≲ 1, and temperature increasing with intensity. The main source of inaccuracy in this simplified picture is the assumption that electrons enter with random phase and therefore always interact with a wavelet non-adiabatically; this leads to an overestimation of the rate at which energy transfer occurs because in reality electrons do not exit a wavelet abruptly and the energy changes tend to be smaller in magnitude. Although the map model demonstrates how a plausible physical interpretation can give rise to thermal, relativistic spectra, it cannot be used in the case of long fluctuations (because it relies on τ_c ≪ Δt_max) and it should be used with caution at relativistic intensities because electron acceleration has been demonstrated to be chaotic in this regime [10].
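Since the explicit Δp expressions are abbreviated here, the sketch below illustrates the same wavelet-kick map using the exact plane-wave invariants (conservation of transverse canonical momentum p_x − a and of u = γ − p_z) as a stand-in for the map's kick formulas; the random entry phase and random phase slip per crossing are our simplifying assumptions, and, as noted above, random phases overestimate the energy transfer rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def wavelet_kick(px, pz, a0, dphi):
    """One wavelet crossing modeled with plane-wave invariants:
    p_x - a and u = gamma - p_z are conserved inside the wavelet, so a
    net phase slip dphi from a random entry phase leaves a residual
    transverse kick, and p_z follows from gamma^2 = 1 + p^2."""
    phi0 = rng.uniform(0.0, 2.0 * np.pi, px.shape)
    px_new = px + a0 * (np.sin(phi0 + dphi) - np.sin(phi0))
    u = np.sqrt(1.0 + px**2 + pz**2) - pz          # invariant gamma - p_z
    pz_new = (1.0 + px_new**2 - u**2) / (2.0 * u)
    return px_new, pz_new

n, n_t, a0 = 25_000, 8, 0.66        # a0 = 0.66 corresponds to ~6e17 W/cm^2
px = rng.uniform(-0.11, 0.11, n)    # random initial momenta, as in the map
pz = rng.uniform(-0.11, 0.11, n)
for _ in range(n_t):
    px, pz = wavelet_kick(px, pz, a0, dphi=rng.uniform(0, 2 * np.pi))

gamma = np.sqrt(1.0 + px**2 + pz**2)
print(f"mean kinetic energy = {511 * (gamma.mean() - 1):.0f} keV")
```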
We now discuss how this acceleration mechanism compares to other well known mechanisms. According to the literature, the ponderomotive mechanism occurs near the critical surface, where electrons are given a single kick by the j × B force in the forward direction. Unlike motion in a plane wave, the electron retains its energy when crossing the critical surface because the field decays evanescently beyond critical. However, this simplified picture does not account for the so-called ponderomotive spectrum, which contains electrons at much higher energy than the ponderomotive potential φ_p. The existence of higher energy electrons can be explained by the presence of laser turbulence, with spatial scale close to the wavelength, in the vicinity of the critical surface. In the presence of this turbulence, some electrons will undergo more than one kick close to the critical surface, which accounts for the electrons found at energies above φ_p. In longer scale-length plasmas, most electrons are found far away from the critical surface and they have a chance to interact with the laser and plasma fields over long distances, giving rise to a host of acceleration mechanisms that explain the observation of very high energy tails [5,6,12,14,15], and most notably Two-Wave Chaos (TWC) [10,11], which explains how the bulk of the spectrum can exceed ponderomotive scaling. Like laser turbulence, TWC is a rapid, highly non-linear mechanism, but it only occurs at intensities above relativistic (I ≳ I_TWC ≈ 1.2 × 10^18 W cm−2). For intensities below I_TWC, TWC is not active, and we expect laser turbulence to dominate. We have compared acceleration in turbulent waves (as described here) and equivalent-intensity colliding waves (as in TWC) at intensities above the I_TWC threshold and find that turbulence further enhances the characteristic TWC spectrum energy by ≈ 60%, and that both processes operate on similarly rapid time scales. We conclude that TWC and turbulence are approximately comparable mechanisms when I ≳ I_TWC. The transition to TWC for I ≳ I_TWC is the reason for the change in the electron energy scaling with intensity near I ≈ I_TWC.
In summary, we have demonstrated that spatial nonuniformities in short-pulse laser intensity profiles enable electrons to rapidly gain energy, leading to relativistic, thermal electron spectra even when the laser intensity is below the relativistic threshold. This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. | 2020-10-05T01:00:49.611Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "b6fba9c891cd41ea7bdb10b11cd5733a6b63e64b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b6fba9c891cd41ea7bdb10b11cd5733a6b63e64b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
144596722 | pes2o/s2orc | v3-fos-license | Personal Assistance in Sweden and Norway: From Difference to Convergence?
Within the same welfare state model, Norway and Sweden have established very different models for personal assistance. Sweden has developed a model with a strong consumerist profile with extensive rights and choices for users. In Norway, state control of the arrangement has been stronger. Users’ rights have been weaker and decisions are left to the discretion of the professionals in the welfare services. Recent political signals in both countries indicate that the models might converge in the future. In Sweden, authorities are worried that users’ rights have become too extensive. Efforts have been made to restrict users’ rights and to make public control stronger. In Norway, the target group for the arrangement has been extended and stronger individual rights to obtain personal assistance are proposed. The article will clarify the tendencies in the two countries and discuss the consequences for the arrangement and for the users’ control over their assistance.
Introduction
In personal assistance different discourses are unified. Pearson (2000) has characterised personal assistance as a mix between a social justice discourse and a liberalist market discourse. Radical trends among disabled people seeing themselves as suppressed and discriminated against are merged with market-based trends characterised by consumerism and the freedom to choose as fundamental principles. As a consequence, personal assistance is applauded both by the left and the right in the political landscape. However, actors may have very different agendas for supporting the arrangement, as personal assistance is filled with tensions. Different ways of organising personal assistance arrangements can therefore be seen as efforts to unite the different trends in various ways.
Within the same welfare state model, the neighbouring countries Norway and Sweden have established very different models for personal assistance. Sweden has developed a model with a strong consumerist profile with extensive rights and choices for the users. The arrangement is primarily organised as direct payments, at least for the users with extensive need for assistance. Private companies as well as municipalities and user controlled cooperatives offer personal assistance. In Norway, in contrast, state control of the arrangement has been much stronger. Users' rights have been weaker and decisions are left to the discretion of the professionals in the welfare services. The arrangement is presented as an alternative way of organising public social services. As a rule, users should be able to act as manager for his/her assistants to obtain the service. However, recent political signals in both countries indicate that the models might converge in the future. In Sweden, the authorities are worried that the users' rights have become too extensive. Efforts have been made to restrict users' rights and to make public control stronger. In Norway, the target group for the arrangement was recently extended and stronger individual rights to obtain personal assistance have been proposed. This article will clarify the different tendencies in the two countries and discuss the consequences of the arrangement and of the users' control over their assistance. As a background for the discussion, it is necessary first to describe in some detail the different models of personal assistance in the two countries before the new reforms.
Strong Individual Rights in Special Acts versus Integration in the Ordinary Social Services Act
The history of personal assistance in Sweden goes back to 1986 when the user controlled cooperative STIL (The Stockholm Cooperative for Independent Living) was established. Cooperatives in other cities followed in the years to come. In 1994, two particular acts were passed where personal assistance was established as an individual right for persons who qualified for the service (Lag om stöd och service till vissa funktionshindrade (LSS) – "The act concerning support and service to certain groups of disabled people"; and Lag om assistansersättning (LASS) – "The act concerning assistance and compensation").
The primary aim of both acts was to secure persons with extensive impairments better opportunities to live independent lives. Three groups of disabled people obtained individual rights to personal assistance: (1) persons with learning disabilities, people with autism or conditions similar to autism; (2) persons with considerable intellectual disabilities/learning disabilities as a result of a brain injury in adult age (acquired brain injury); and (3) persons with other major and permanent disabilities which cause considerable difficulties in their daily life and as a consequence of this have a considerable need for supporting services and where the disability is not caused by the normal process of ageing (Socialstyrelsen 1997). Persons belonging to either of these categories got an absolute right to personal assistance. No conditions were placed on the users' abilities to manage the service.
In Norway, personal assistance was enacted later. In 2000, the arrangement was included in the Social Services Act. Personal assistance at that time had a 10-year history in the country as the first experiments started in the early 1990s, initiated by the national association of persons with physical disabilities (Norges Handikapforbund 1994). In the commentaries to the Social Services Act personal assistance was described as "an alternative organisation of practical and personal help for people with comprehensive disabilities with need of assistance in their day to day life, both within and outside their own homes" (Ot. prp. no. 8 1999–2000:1). Even if the Social Services Act generally emphasises that the services are to consult the user and attach importance to his/her preference, in the end the municipality has the final word about which services are the most appropriate for the users. In this way, the rights of the users are considerably weaker than in the Swedish acts. Personal assistance is not limited to certain categories of disabled people in Norway. Until 2005, the decisive criterion for being entitled to the arrangement was the user's ability to act as manager for his/her assistants. Before a person is granted the service the user is assessed by representatives for the municipality, who should evaluate whether the user is sufficiently competent as manager of his/her assistants and whether personal assistance is seen as the most appropriate service to cover his/her needs for care and help. As a consequence, very few persons with intellectual impairments were granted personal assistance.
Different Financial Solutions
The main reason for the authorisation of personal assistance in two different acts in Sweden is a separation of the financial responsibility for the arrangement. LSS services are a municipal responsibility, while LASS was originally established as a solution to relieve the municipalities of the costs of users with extensive needs for assistance. Placing the responsibility for these users on the national level, the intention was to ensure that personal assistance should not be dependent on municipal financial priorities. From the start, municipalities were responsible for financing assistance for users with a need for assistance up to 20 hours each week, while the arrangement was fully financed by the national authorities for users with more extensive needs. However, in 1997 this principle was changed, and national authorities now pay for assistance exceeding 20 hours a week.
In Norway, personal assistance is exclusively a municipal responsibility. However, to stimulate the municipalities to implement the arrangement, national authorities offer support for a period of three years to cover additional costs for the municipalities when the arrangement is introduced to a new user. However, the amount is rather low: 100,000 Norwegian Kroner (approximately €12,500) the first year, and NOK 50,000 the next two years.
The Prevalence of Personal Assistance
The different principles for organising personal assistance influence the prevalence of the arrangement. There are more than 10 times as many users in Sweden as in Norway (16,000 and 1500, respectively; see Andersen, Askheim, Begg & Guldvik 2006, SOU 2005). Compared to the total number of inhabitants in the two countries, 1.78 per 1000 inhabitants have personal assistance in Sweden, while the corresponding number for Norway is 0.24 (Edebalk & Svensson 2005). The diversity of impairments is also much wider in Sweden. While 35% of the Swedish users are classified as belonging to category 1 (persons with learning disabilities, people with autism or conditions similar to autism), only 4% of the Norwegian users are classified as persons with intellectual impairments (Guldvik 2003, SOU 2005). The majority of the Swedish users also receive considerably more hours of personal assistance than the Norwegian users do. LASS users, who constitute 75% of the Swedish users, receive on average 97 hours of assistance each week, while the Norwegian users receive 36 hours each week (Guldvik 2003, SOU 2005).
Employers' Responsibility
In both Sweden and Norway, either the municipality, a user-led cooperative, or the user him- or herself can assume employers' responsibilities for personal assistance. In Sweden, private companies can also employ personal assistants, while Norway has not opened up for private, commercial actors. The number of persons who prefer to assume employers' responsibilities is very small in both countries. The municipality is the main employer (Guldvik 2003, SOU 2005). Still, both in Sweden and Norway the position of the municipalities has been weakened, as cooperatives, and in Sweden especially private companies, have strengthened their position. The share of the users who left employers' responsibilities to private companies increased from 14 to 23% in the period 1994–2004 (SOU 2005:100). The cooperatives' share in Sweden remained relatively stable in the same period and in 2004 recruited 12% of the users. In Norway, there is only one user cooperative, and in 2002 it recruited about 25% of the users (Guldvik 2003). Studies from both countries indicate that users regard user control as better in the cooperatives than when the municipalities are the employer (Guldvik 2003, Larsson & Larsson 1998). In Sweden, private companies also score higher on user control. It seems like it is more difficult for the municipalities to hand over the responsibility and control to the users (Andersen et al. 2006).
Direct Payments or Alternative Service?
In Sweden, personal assistance is primarily carried out as direct payments. LASS users receive personal assistance exclusively as direct payment, while personal assistance according to LSS can be delivered both as a service and as direct payments. As mentioned, 75% of the users are offered personal assistance authorised through LASS and these are the users in need of the most comprehensive assistance. The arrangement's primary character as a cash allowance is reinforced as the users can freely choose the employer for their assistants. The character of direct payments was further reinforced when the costs of personal assistance became standardised in 1997 (Riksförsäkringsverket 1999). The costs are covered by the same hourly rate for all the users.
In Norway, personal assistance has primarily been defined as an alternative organisation of services, and it is emphasised that the arrangement should be seen in combination with other municipal services. The guidelines of the Social Services Act state that the municipality has the right to choose the employer in each case. Finally, the municipality is responsible for making provisions for the basic training of the assistants, irrespective of who is the employer.
Different Solutions – Remaining Dilemmas
Comparing the different solutions of personal assistance in the two countries, the first impression is that the Swedish model seems to fulfil the goals of user control and empowerment better than the Norwegian version does. More users are granted personal assistance in Sweden and most of them receive more hours of assistance than do the Norwegian users. More hours give better opportunities to utilise the flexibility of the arrangement. The possibility of adjusting the assistance to personal needs will improve and user control will be easier. Studies of Norwegian users indicate that users with the most hours of personal assistance are more satisfied and have the best control of the arrangement (Guldvik 2003). Further, the consumerist profile of the Swedish arrangement is consistent with how disabled activists and organisations of disabled people want personal assistance organised (Oliver & Barnes 1998, Barnes, Mercer and Shakespeare 1999). The users have strongly advocated that personal assistance be given as an individual right for persons who prefer the arrangement and that it should be organised as direct payments in order to liberate them from dependence on the service.
However, in practice it is an open question as to whether the Swedish model always attends to user control in the best way. A more precise description would probably be that the different solutions in the two countries in different ways illustrate dilemmas and tensions of the personal assistance arrangements. One dilemma is the strength of the ideological profile of personal assistance. When great importance is attached to users' rights to control the arrangement, the users' ability to act as employers is fundamental. At the same time this will restrict the target group for personal assistance. On the other hand, more pragmatic solutions would include a wider group of users, and one consequence might easily be that the ideological basis of personal assistance will fade and be eroded. One important consequence for personal assistance in Sweden seems to be that pragmatic solutions are gaining ground, especially in cases where the municipality is the employer (Lewin 1998, Socialstyrelsen 1997). As a consequence, the difference between the models and the traditional home-based services has been reduced (Larsson & Larsson 1998). The user organisations see such a tendency as a serious threat to the arrangement (Bengtsson 1998).
On the other hand, one consequence of the Norwegian requirement of users to be able to act as a manager for the assistants has been that several groups are excluded from an arrangement that seems to give much better opportunities than ordinary services for the users to gain influence and autonomy. A Norwegian study of personal assistance among people with intellectual disabilities shows that personal assistance offered much better opportunities for user influence than the ordinary services (Askheim 2001a). It offered better opportunities for flexibility, predictability and individual solutions. The capability to act as manager for the assistants, being a condition to qualify for personal assistance, could accordingly lead to the arrangement becoming a solution for the "élite" among disabled people, while other disabled persons are left with the routines of the ordinary services.
The Swedish arrangement has been criticised for being based on the idea that the user is always a competent and rational actor. The user is seen as having the qualifications and competence to choose the best solutions for her/his needs. This conception of the assistance users as always competent and well informed is criticised as illusory. The critics are especially concerned that the special needs of weaker groups among the disabled are ignored and made invisible (Caruso 1999, Sundran 1994). In other words, the question is whether there is a discrepancy between the basic assumptions of the model requiring user competence, and the competence that users actually possess. From such a position it would be relevant to ask whether the Swedish arrangement favours disabled people who are able to successfully present their interests, to exert a "rebellious influence" (Barron et al. 2000, Lewin 1998). Since personal assistance is authorised as an individual right, one consequence is that the disabled person has to be active to get the service (LSS §8). Consequently, public reports have raised the question of whether persons who are not able to speak for themselves or do not receive assistance from their relatives receive less satisfactory public support than they ought to (Riksförsäkringsverket 2002:8).
If so, the question is whether some users are denied the qualified help they need (Askheim 2001b). Some users might need treatment or rehabilitation rather than personal assistance. Critics are worried that some users might become passive and lose functional skills because they are not granted the services they need, or that demands are not made of them by the assistants for fear of undermining the users' right to self-determination. If users do not express wishes for leading a different or more active life, the assistants might find it difficult to intervene even when they wish to do so. Because there is such a multitude of users who are granted the arrangement, the question has been asked whether "more paternalism to strengthen the individual's autonomy" is needed in some cases (Lewin 1998:226). A public report concludes that 40% of the users who have been granted personal assistance according to LASS have difficulties in managing their assistance (Riksrevisionen 2004). Therefore, more limitations of the target group are suggested, so that only persons who can act as managers should have the right to personal assistance. However, the suggestion was met with strong resistance from organisations recruiting people with intellectual impairments, and the Government quickly withdrew the proposal.
Public control of personal assistance is more extensive in the Norwegian model than in the Swedish one. The principle that the user always is seen as competent and in a position of knowing what is the best for him/her is not absolute. As mentioned before, the professionals in the municipality have to make an assessment of the users' competence in managing the arrangement before personal assistance is granted. In this way, the Norwegian model appears much more paternalistic than the Swedish one.
However, paternalism is an ambiguous concept. Christensen & Nilssen (2006) make a distinction between ''weak and strong paternalism''. Weak paternalism deals with restrictions in the right to self-determination for people who for different reasons are not able to present reflection or judgements in making voluntary and deliberate choices. It deals with collective obligations of the welfare state to citizens who are not capable of living an autonomous life, for instance people with limited cognitive abilities. From such a position the power which is exercised is seen as individualised care, since it contributes to prevent hardship, or is a contribution to improve the individual's ability to make autonomous choices. The authors emphasise that this kind of paternalism requires empathy and the involvement in the user's situation and a strong ethical consciousness from professionals. If not, weak paternalism could easily distort into guardianship. On the other hand, the way Christensen & Nilssen define strong paternalism, restrictions are not set up because the individuals are unable to make autonomous decisions, but to prevent choices which are seen as unacceptable.
The paternalism in the Norwegian personal assistance model can be classified as weak paternalism. The public control can be explained as a means to make sure that the interests of weaker groups among the disabled do not become invisible and are ignored in the name of user involvement, as critics maintain is a consequence of the Swedish arrangement. Since access to personal assistance is so closely linked to the users' ability to act as a manager of the service, the paternalism in the Norwegian system could be explained as a way of ensuring that users are actually able to manage it. Stronger public control could further be seen as a way to secure good quality of assistance.
However, the classification of personal assistance into different kinds of paternalism should not be taken too far. Strong paternalism can be obscured by a rhetorical support of weak paternalism. For instance, lack of ability to act as manager for the assistant can be overcome by training and practice. Many of the Norwegian personal assistance users criticise municipalities for not taking their responsibility as employers seriously by not training the users in the role as managers of their assistants (Andersen et al. 2006). In this way, one consequence of the municipalities' right to take the final decision with regards to personal assistance being the most suitable solution for the individual user could be that some users who qualify for the arrangement are excluded.
Towards Convergence?
Within the same welfare state model, personal assistance in Norway and Sweden has developed very differently. However, recent observations in both countries indicate that the arrangements will converge in the time to come. In Sweden, efforts are made by the authorities to limit the arrangement and make public control stronger. In Norway, the development takes the opposite direction. The tendency there is an extension of the arrangement and stronger individual rights for the users.
In 2004, a parliamentarian committee was set up in Sweden to give a broad overview of personal assistance. Among other things, the commission was mandated to discuss the formal requirements for assuming employers' responsibility for personal assistance, and to discuss ways to implement stronger public control. One reason involved reports showing considerable variation in quality among employers. Further, direct payments were made in different ways, and some companies were suspected of using the money for other purposes than for which it was intended. As a consequence, the committee in its first report proposed a more thorough and active public inspection (SOU 2005:100). Also, the national authorities, financing personal assistance, should be placed in a better position to control the spending of the money to secure appropriate use of resources. The committee proposed that more specific criteria should be developed to define what good personal assistance is. These criteria should then constitute the guidelines for inspection of the personal assistance arrangement.
The committee stated that in future reports proposals would be presented to restrict or stabilise the costs for personal assistance and improve cost control. A main reason for the eagerness to restrict personal assistance in Sweden is that public costs have been much higher than expected. Just one year after LASS had been passed in 1994, the expected costs were increased by 900 million SEK (€130 million), and the expenses have continued to escalate (Socialstyrelsen 1997). In the period 1994–2004, costs increased by an average of 15% each year. At the end of the period, the expenses were 12.7 billion SEK (€1.9 billion) (SOU 2005:100). The parliamentary committee was thus just one of many efforts by the government to gain better control of the public expenses, which by far have exceeded the official expectations.
In Norway, the target group for personal assistance was, late in 2005, extended to persons who are not able to act as managers of the arrangement on their own (Helse- og omsorgsdepartementet 2005). Another person than the user can now be the manager instead of or jointly with the user. The person can be one of the user's parents, his/her guardian or an administrative assistant appointed by the user. The government especially mentions adults with intellectual disabilities and families with children with impairments as groups that could profit from having their services organised as personal assistance. In 2007, the Ministry of Health and Care issued a Green Paper proposing that personal assistance should be authorised as an individual right for disabled people in need of extensive services (Helse- og omsorgsdepartementet 2007). More exactly, the right should come into force when the need for services extends to 20 hours a week. The ministry further proposes that the users have the right to decide who should take on employers' responsibilities for the assistant. As mentioned above, this has so far been the responsibility of the municipality. The new proposal represents a transition of personal assistance to a direct payments model. The municipalities are to grant the users a certain number of hours calculated on the basis of fixed hourly rates. Users may spend the money on more expensive assistance if they want to, but will then have fewer hours at their disposal. Or they can administer the arrangement themselves and get more hours at their disposal. The ministry explicitly emphasises that one consequence of the proposal will probably be that more employers will be interested in entering the personal assistance market.
The consequences of the changes in the two countries are still uncertain. The proposals in Sweden to strengthen control and inspection have received support from the users. Since 1995, user organisations (Interesseföreningen för Assistansberättigade, IfA) have set up procedures for the approval of employers. The intention has been to assist the users in selecting serious employers, who offer assistance of good quality. However, suggestions to reduce the costs by narrowing down the arrangement will probably be met with strong resistance. The national authorities have also earlier made efforts to reduce costs, which were met by strong resistance from the users, and the authorities have been forced to retreat (Askheim 2001b).
At first glance the implication of the extension of the Norwegian arrangement seems to indicate a democratisation. More people will get personal assistance and the rights for at least some of the user groups will be stronger. However, whether this will be the only consequence will depend on different circumstances.
Firstly, the consequences of the extension will probably depend on how the reform is supported financially. As mentioned above, the national authorities at present contribute for a period of three years to cover the municipalities' additional costs when the arrangement is introduced to a new user. Otherwise, personal assistance is exclusively financed by the municipal budgets. According to the new proposals, the national transfers in the future will not be linked to the individual user, but will be directed towards more general information and guidance about the arrangement. At the same time, nine out of 10 users have received increased hours of service after they were granted personal assistance, compared to other services. Many of them have attained considerable increases (Guldvik 2003). The main reason for this is the low number of persons in each municipality who are granted personal assistance. The municipalities appear to use different criteria when allocating resources to personal assistance compared to other services. However, if personal assistance becomes more common and more people claim the service, a more modest number of hours to each user could easily be the result. Due to scarce municipal resources and need for help from other groups, the municipalities will probably become more restrictive when personal assistance is allocated and thus adapt to the level of other services. In the city of Trondheim, the third biggest municipality in Norway, the authorities quickly indicated that there would be fewer resources available to personal assistance as a consequence of the extension of the service (Handikappnytt 2006).
A consequence of fewer resources to personal assistance might therefore easily be that the users get less assistance and that their needs are not met.
A further consequence is that the users' opportunities for self-determination will be weakened. As mentioned above, the average number of hours of personal assistance is much lower in Norway than in Sweden. At the same time, users who are allocated the most hours experience the best opportunity for user influence. If the hours allocated to personal assistance are further reduced, assistants will only have time to carry out duties that are strictly necessary like personal care and practical tasks in the house. The user's influence will then be further reduced. The opportunities for active participation and social integration in society will be considerably reduced.
If the intention is to protect the personal assistance users against municipal financial priorities, a divided financial responsibility between the state and the municipalities, like in Sweden, will probably be necessary also in Norway. However, in the Green Paper from 2007 the Government makes it clear that the financial model of personal assistance will not be changed. A divided solution, i.e. partly national, partly local funding, will change the principles of responsibility between the different administrative levels as they were established in Norway in the 1980s. There has been a wide political consensus about these principles (Hagen & Sörensen 1997). They imply that the administrative level with the authority and responsibility for making decisions should also assume responsibility for financing the costs of the decision. In addition, there is no doubt that the very high state costs for personal assistance in Sweden have also influenced the Norwegian authorities to modesty.
The proposal to extend the individual right to personal assistance can in a similar way undoubtedly have unexpected consequences. The rights are limited to persons with extensive need for assistance (minimum 20 hours a week). An extension can make the day-to-day situation more secure and predictable for these persons. However, the consequences for persons with less extensive needs are uncertain. About 25% of the Norwegian users today have less than 15 hours of personal assistance per week (Guldvik 2003). An unforeseen consequence of the reform may be that the municipalities claim that the users must be in need of at least 20 hours each week to get the service. In other words, the extension of rights to users who need most assistance could easily turn into a limitation of eligibility. For users in need of less assistance the result can be diminished chances of getting their services organised as personal assistance.
The consequences of extended rights combined with organising personal assistance as direct payments are also unclear. As mentioned above, such a model receives wide support from disabled activists and organisations of disabled people. However, critics point out that the dangers and risks connected to such a model have also been neglected by the organisations of disabled people (Glasby & Littlechild 2002). They are criticised for not considering seriously the special needs and problems of persons with learning disabilities. Also, there seems to be a growing concern in Sweden that the special needs of the weaker groups among disabled people are ignored and made invisible as a consequence of their personal assistance model. Similar worries might appear in Norway as the target group for personal assistance is extended and as persons with extensive needs are given an individual right to the arrangement within a direct payments model.
Old Dilemmas in New Costumes
The different models for personal assistance in Norway and Sweden illustrate fundamental dilemmas with personal assistance as a welfare service. Important dilemmas refer to whether personal assistance should be an arrangement for a limited group, and maintain a strong ideological profile, or whether the user group should be extended, at the risk of more pragmatic solutions at the cost of weak user groups. Additionally, user control seems to depend on how the financial situation of the arrangement will develop. Few hours of personal assistance will limit the assistance to strictly necessary tasks and diminish the user's opportunities for activity and participation in society. On the other hand, different treatment of personal assistance users compared to other people dependent on public services could quickly result in a strong increase in other public expenses. Thus, assistance users might gain a privileged position compared to other needy groups. In turn, this might lead to a conflict of interests between different groups, who are all dependent on services from the welfare state. It is interesting to note that the result of Swedish and Norwegian efforts to solve the dilemmas may be a closer convergence between the models in the two countries. Sweden is approaching the Norwegian model while Norway is moving towards the Swedish solution. It may well be that this convergence will not solve the dilemmas, but that they will reappear in new clothing. | 2019-05-05T13:05:52.881Z | 2008-08-19T00:00:00.000 | {
"year": 2008,
"sha1": "56432550eab4bfe290cdc10c48a0410a95d652bb",
"oa_license": "CCBY",
"oa_url": "http://www.sjdr.se/articles/10.1080/15017410802145300/galley/455/download/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6137bd7da566e928356d99ca76bbd0e80f9bafa9",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Business"
]
} |
69219564 | pes2o/s2orc | v3-fos-license | Active Fault-Tolerant Control of Timed Automata with Guards
: In this paper, an approach for active fault-tolerant control of discrete event systems modeled by timed automata with guards is proposed. Time is essential to detect some faults, and will be used as a criterion to select the control law. A model representing the behavior of the whole system that respects time constraints is first constructed. Hence, given a diagnosis result, a reconfigured control law is extracted from the previous model on the basis of the fastest execution time of desired tasks
INTRODUCTION
Availability of industrial processes within a company is a constant concern, with significant economic implications. It depends, among other things, on the ability of the systems to adapt to faults before they can have a negative impact on production. Fault-Tolerant Control (FTC) is a means of dependability that allows interaction with the system controller, in order to adapt the control to a faulty behavior of the plant. The production strategy can be accommodated before the productivity of the system is reduced. Basic definitions of FTC are presented in (Blanke et al. 2016).
Concerning FTC of Discrete Event Systems (DES), the different methods can be separated into two categories.
Passive FTC approaches generally consist of a single controller model that can be used for both nominal and faulty behavior. In (Seong-Jin Park and Jong-Tae Lim 1999), the controller is designed to respect the nominal specification with and without the occurrence of a fault. Some approaches allow degraded modes of operation (Wen et al. 2008) (Wittmann, Richter, and Moor 2012). An extension of the latter introduces a module that hides the fault from the controller (Wittmann, Richter, and Moor 2013).
On the other hand, active FTC methods use several models of the controller that can be switched. In (Shu and Lin 2014), the controller model is selected in a bank of precomputed models according to the diagnosis result, while in (Paoli, Sartini, and Lafortune 2011), only the current state of the controller is adapted. Recently, approaches based on tracking controller reconfiguration were proposed, for both unambiguous (Schuh and Lunze 2016b) and ambiguous diagnosis (Schuh and Lunze 2016a).
In a previous work (Niguez, Amari, and Faure 2015), it has been shown that passive approaches require explicit models of faults, which is not feasible for an industrial application. Furthermore, there is no method for FTC of DES taking physical time into account. This is particularly limiting, as it is not possible to treat faults that are only detectable thanks to the measurement of time, and which result in most cases in a system failure.
This paper proposes a method for active FTC of DES modeled by timed automata with guards. This formalism has been selected because it allows representing the execution date of an event with an interval. This reflects the fact that, in practice, an event does not occur at the exact same time, and a task does not have an exact duration. Fig. 1 details the architecture of the system considered. A faulty plant is controlled by a controller through controllable events, and reacts by generating uncontrollable events. The diagnoser is in charge of detecting the occurrence of a fault and of computing a diagnosis result. This result is sent to the reconfiguration block, which consists of two units. The reconfiguration model can be seen as a database of acceptable behaviors. The reconfigurator ℛ must select and extract a reconfigured control law from the reconfiguration model based on the diagnosis result. Then this new control law is sent to the controller in order to accommodate the fault.
The main contribution of this paper is the construction method of the reconfiguration block. For this reason, it has been chosen to use an existing solution for the diagnoser. Since time was a major criterion, the diagnoser proposed in (Schneider, Litz, and Danancher 2011) was selected.
Fig. 1 -A fault-tolerant control loop
The paper is organized as follows: section 2 details the formalism of timed automata with guards and the hypotheses of this work. In section 3 the construction of the reconfiguration model is detailed. Section 4 exposes the different cases of reconfiguration. Finally, an example of application on a sorting case is provided in section 5.
Timed automata with guards
Definition 1 (Cassandras and Lafortune 2008): a timed automaton with guards, denoted by G, is a 6-tuple G = (X, Σ, x_0, X_f, C, T) where: • X is the set of states; • x_0 ∈ X is the initial state; • X_f ⊂ X is the set of final (or marked) states; • Σ is a finite set of events; • C is the set of clocks c_1, …, c_n, with c_i(t) ∈ ℝ+, t ∈ ℝ+; • T is the set of timed transitions of the automaton, with T ⊆ X × Φ(C) × Σ × 2^C × X, where Φ(C) is the set of admissible constraints for the clocks in the set C.
The set T of timed transitions is to be interpreted as follows: if (x, g, e, C_r, x′) ∈ T, then there is a transition from x to x′ with the complete label (g, e, C_r), where g ∈ Φ(C), e ∈ Σ and C_r ⊆ C.
The set of admissible clock constraints Φ(C) is specified as follows: • If I ⊆ ℝ+, then all conditions of the form c(t) ∈ I, with c ∈ C, are in Φ(C); • If g_1 and g_2 belong to Φ(C), then g_1 ∧ g_2 belongs to Φ(C);
Remarks:
• There is no need for the bounds of admissible clock constraints to be integer. • All clocks are set to 0 when the system is initialized.
• C_r corresponds to the subset of clocks that will be reset when the transition is fired. This mechanism allows modeling systems in which a duration is stated for sequences of events.
An example of graphical representation of Timed Automata with Guards (TAG) is presented in part 2.3.
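As an aid to the discussion, a minimal sketch of how such an automaton can be represented in code is given below; the class and field names are ours, and the restriction to a single clock (argued to be generally sufficient in the hypotheses of section 2.2) is an illustrative assumption.

```python
from dataclasses import dataclass, field

Interval = tuple[float, float]  # guard [lower; upper] on the single clock

@dataclass(frozen=True)
class Transition:
    source: str
    guard: Interval   # admissible clock values for firing
    event: str
    reset: bool       # whether the clock is reset when firing
    target: str

@dataclass
class TAG:
    states: set[str]
    events: set[str]
    initial: str
    final: set[str]
    transitions: list[Transition] = field(default_factory=list)

    def outgoing(self, state: str) -> list[Transition]:
        return [t for t in self.transitions if t.source == state]
```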
The determinism of timed automata with guards can be defined in two ways: • Time-determinism: an automaton is deterministic if for all events in all states, the guards of the outgoing transitions are mutually exclusive. • Event-determinism: an automaton is deterministic if for all states, there is at most one outgoing transition triggered with the same event.
It can be noted that any event-deterministic TAG is also time-deterministic.
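Continuing the sketch above, event-determinism can be checked mechanically; since it implies time-determinism, this single check suffices:

```python
def is_event_deterministic(tag: TAG) -> bool:
    """At most one outgoing transition per (state, event) pair. A single
    outgoing transition per event makes guards trivially mutually
    exclusive, hence the automaton is also time-deterministic."""
    seen: set[tuple[str, str]] = set()
    for t in tag.transitions:
        key = (t.source, t.event)
        if key in seen:
            return False
        seen.add(key)
    return True
```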
Hypotheses
Several hypotheses and limitations can be stated: • For small systems, only one clock is generally sufficient to operate the system. Concerning larger systems, they can be handled by using decentralized approaches, in which each sub-system is modeled with a single clock. In that specific case of single clock systems, the parallel composition could be simplified since the conjunction of two guards would become equivalent to the intersection of the intervals. If the result of that intersection is the empty set, then the guard can never be validated, and the associated transition can be deleted. • All models will be event-deterministic.
• The distribution of occurrence dates of an event within a given interval will be modeled with a normal distribution.
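As noted in the hypotheses, guard conjunction in the single-clock case reduces to interval intersection; a minimal helper, continuing the sketch above:

```python
def intersect(g1: Interval, g2: Interval) -> Interval | None:
    """Conjunction of two single-clock guards. Returns None when the
    intersection is empty, in which case the composed transition can
    never be validated and may be deleted."""
    lo, hi = max(g1[0], g2[0]), min(g1[1], g2[1])
    return (lo, hi) if lo <= hi else None
```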
Graphical representation and notations
Fig. 2 depicts an example of a system modeled with a TAG. It consists of two processes 𝒜 and ℬ. Each process can be started with its own controllable event, respectively followed by a sequence of uncontrollable events. The objective of the system is achieved when a particular uncontrollable event is generated. Both processes end with the occurrence of this event, which means that process ℬ can be seen as a redundancy of the process 𝒜. The system can be restarted with a controllable restart event. State 0 is considered initial (shown with an incoming arrow). State 5 is considered as final (shown with an outgoing arrow). Each time the clock is reset, it is stated in the transition label (for example, in the transition from state 5 to state 0). Otherwise, it is indicated with −. In this example, every uncontrollable event is expected to occur before an upper bound expressed in time units (t.u.), while every controllable event is considered as occurring instantly at the current clock value c(t) when entering a new state (the interval of these transitions should be [c(t); c(t)]). However, for the sake of clarity, the notation c(t) is used instead of [c(t); c(t)] in the graphical representations of TAGs. Controllable (resp. uncontrollable) events are represented by uppercase (resp. lowercase) letters.
CONSTRUCTION OF THE RECONFIGURATION MODEL
The objective of this part is to provide a construction method for the reconfiguration model of Fig. 1. This step of the approach must be done offline.
Problem statement
The main idea is to construct a reconfiguration model of the system that describes the entire behavior complying with a set of timed rules. Two kinds of models can be used to obtain this result: Plant models and Specification models. These models are then composed in order to obtain the reconfiguration model. Every succession of states that leads from the initial state to the final state correspond to a sequence of operations that meets the time constraints and performs the expected tasks.
Plant models
Plant models are used to represent the components of the system. They correspond to their logical behavior, without taking time constraints into account. The TAG of Fig. 2 can be considered as a plant model. We will consider that: • Controllable events are generated as soon as they are expected, which corresponds to the current clock value when entering a new state. This is represented by the interval c(t) in the associated transitions. • Uncontrollable events are expected to occur at some date between 0 and an unknown upper bound. The corresponding interval depicts the fact that the date of the occurrence of the event is not constant. This is represented by the interval [0; ∞[ in the associated transition.
The TAG of Fig. 2 depicts the two sequences of events that include the objective event from the initial state.
Specification models
Specification models are graphical representations of the timed rules that the system must satisfy to operate in its nominal conditions. They are used to specify the intervals of the transitions associated with uncontrollable events. Fig. 3 shows the first and third of the specifications that ensure timed rules on the system of Fig. 2. The first specification states that the uncontrollable event of process 𝒜 must occur between 1 and 3 t.u. after the process is started, and that the system is reinitialized through the restart event before any other cycle of process 𝒜. It can be noted that it is not necessary to reset the clock on the occurrence of the start event, since it is supposed to occur instantly when the transition from S1 to S2 is fired.
The third specification describes the fact that the objective event must occur between 1 and 2 t.u. after the occurrence of either of the preceding uncontrollable events. The second specification (not presented here) is similar to the first in that it ensures that the uncontrollable event of process ℬ occurs between 2 and 5 t.u. after the start of ℬ. For the specifications of Fig. 3, all states are considered as final, but the outgoing arrows were deleted for the sake of readability.
Reconfiguration model
Given the plant and specification models determined as explained above, the following algorithm is proposed to compute the reconfiguration model.
Result: the reconfiguration model, obtained by parallel composition of all the plant and specification models.
If there is no final state in the reconfiguration model, this means that the specifications are too restrictive. One or more restrictions must be relaxed in order for the system to perform its expected behavior. The TAG of Fig. 4 presents the reconfiguration model obtained by composing the plant model of Fig. 2 with the three specifications, and represents all the evolutions of the components that respect the time constraints of the specifications. Both state sequences 0-1-2-5 and 0-3-4-5 lead from the initial state 0 to the final state 5. However, it can be noted that the first sequence is on average faster to execute than the second one for a normal distribution of occurrence dates (resp. 3,5 t.u. and 5 t.u.).
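To make the construction concrete, the sketch below composes two single-clock TAGs, synchronizing on shared events, intersecting guards with the helper above, and dropping unsatisfiable transitions. This is our illustrative formulation, which glosses over clock-reset subtleties; it is not claimed to be the paper's exact algorithm.

```python
def compose(g1: TAG, g2: TAG) -> TAG:
    """Parallel composition of two single-clock TAGs synchronizing on
    every shared event; composed guards are interval intersections."""
    shared = g1.events & g2.events
    rm = TAG(states=set(), events=g1.events | g2.events,
             initial=f"{g1.initial}|{g2.initial}", final=set())
    stack = [(g1.initial, g2.initial)]
    while stack:
        s1, s2 = stack.pop()
        name = f"{s1}|{s2}"
        if name in rm.states:
            continue
        rm.states.add(name)
        if s1 in g1.final and s2 in g2.final:
            rm.final.add(name)
        for t1 in g1.outgoing(s1):
            if t1.event in shared:
                for t2 in g2.outgoing(s2):
                    if t2.event != t1.event:
                        continue
                    g = intersect(t1.guard, t2.guard)
                    if g is None:
                        continue  # guard can never be validated
                    rm.transitions.append(Transition(
                        name, g, t1.event, t1.reset or t2.reset,
                        f"{t1.target}|{t2.target}"))
                    stack.append((t1.target, t2.target))
            else:  # private event of g1: g2 stays put
                rm.transitions.append(Transition(
                    name, t1.guard, t1.event, t1.reset, f"{t1.target}|{s2}"))
                stack.append((t1.target, s2))
        for t2 in g2.outgoing(s2):
            if t2.event not in shared:
                rm.transitions.append(Transition(
                    name, t2.guard, t2.event, t2.reset, f"{s1}|{t2.target}"))
                stack.append((s1, t2.target))
    return rm
```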
RECONFIGURATION OF THE CONTROLLER
The objective of this part is to detail the method of reconfiguration of the controller given the reconfiguration model and the diagnosis result, which will be performed by the reconfigurator unit ℛ of Fig. 1. Since the reconfiguration step is done according to the diagnosis result, it must be done online.
Reconfiguration cases
For the nominal behavior, the control law is directly extracted from the reconfiguration model by selecting the path with the fastest average execution time from the initial state to the final state. In the example of Fig. 4, it corresponds to the sequence of states 0-1-2-5, with an average execution time of 3,5 t.u.
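Under the normal-distribution hypothesis of section 2.2, the mean occurrence date on a transition is the midpoint of its guard interval, so the fastest-on-average path can be found with a shortest-path search. A minimal sketch, assuming interval midpoints add along a path (i.e. the clock is reset transition by transition):

```python
import heapq

def fastest_average_path(tag: TAG) -> list[str] | None:
    """Dijkstra over states, weighting each transition by the midpoint
    of its guard interval. Controllable events, which fire instantly,
    can be encoded with a zero-width guard (0.0, 0.0)."""
    best = {tag.initial: (0.0, [tag.initial])}
    queue = [(0.0, tag.initial)]
    while queue:
        cost, state = heapq.heappop(queue)
        if state in tag.final:
            return best[state][1]
        if cost > best[state][0]:
            continue
        for t in tag.outgoing(state):
            new_cost = cost + 0.5 * (t.guard[0] + t.guard[1])
            if t.target not in best or new_cost < best[t.target][0]:
                best[t.target] = (new_cost, best[state][1] + [t.target])
                heapq.heappush(queue, (new_cost, t.target))
    return None  # no final state reachable: reconfiguration impossible
```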
Concerning the behavior in case of a fault, we will distinguish cases based on the three types of diagnosis results considered in (Schneider, Litz, and Danancher 2011). The two types treated below are a residual of the form {e}, meaning that event e was expected but did not occur, and a residual indicating that an event occurred at an unexpected date.
In the first case (the expected event did not occur), the reconfigured control law is computed in two steps (Algorithm 2): first, every transition labeled with the faulty event is deleted from the reconfiguration model; then, the control law is extracted by selecting the remaining path with the fastest average execution time. If it is not possible to reach a final state after step 1, it means that it is not possible to reconfigure the system. In practice, this means that the system possesses no redundancy for the component associated with the faulty event. Let us consider a diagnosis residual stating that the uncontrollable event of process 𝒜 did not occur. In Fig. 4, the transition from 1 to 2 must be deleted. However, it is still possible to reach the final state through the sequence of states 0-3-4-5. The submodel extracted from this sequence corresponds to the reconfigured control law, with an average execution time of 5 t.u. In the second case (Algorithm 3), the expected event did occur, but late, with a date of execution of 5. In order to compute the new control law, the guard of the transition from 1 to 2 is modified to [1; 5]. The average execution time of the sequence 0-1-2-5 becomes 4,5 t.u., which is still faster than the average execution time of the sequence 0-3-4-5. Hence, the reconfigured control law can be extracted from states 0-1-2-5.
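Both reconfiguration cases can then be sketched by transforming the reconfiguration model and re-running the path selection; again our formulation of the two algorithms, reusing the helpers above:

```python
import copy

def reconfigure_missing(tag: TAG, faulty_event: str) -> list[str] | None:
    """Algorithm-2-style case: the event was expected but did not occur.
    Drop every transition labeled with it, then re-select the fastest
    average path; None means no redundancy exists for that component."""
    rm = copy.deepcopy(tag)
    rm.transitions = [t for t in rm.transitions if t.event != faulty_event]
    return fastest_average_path(rm)

def reconfigure_delayed(tag: TAG, event: str, observed: float) -> list[str] | None:
    """Algorithm-3-style case: the event occurred later than its guard
    allowed. Widen the guard to include the observed date, then
    re-select; a slower redundant path may or may not take over."""
    rm = copy.deepcopy(tag)
    rm.transitions = [
        Transition(t.source, (t.guard[0], max(t.guard[1], observed)),
                   t.event, t.reset, t.target) if t.event == event else t
        for t in rm.transitions]
    return fastest_average_path(rm)
```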
Case of ambiguous diagnosis
The case of ambiguous diagnosis corresponds to the situation where the diagnoser proposes a set of possibly faulty events instead of a single one. It is possible to treat this case by successively applying step 1 of Algorithms 2 and 3 for each possibly faulty event and then applying step 2.
Case of multiple final states
It is necessary to distinguish two cases: • All final states have the same signification for the system (e.g. two processes that produce the same pieces; one of the processes can be seen as a redundancy).
In this case, it is sufficient to find a path from the initial state to any of the final states, since they all share the same physical meaning.
• Final states have different meanings for the system (e.g. a machine producing different pieces depending on the input raw piece). It is then necessary to compute a sub-control law for every set of final states that holds a different signification, and the control law is obtained by composition of all the sub-control laws (a structural sketch is given after this list). An example of this kind of system is treated in section 5.
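When final states fall into classes with different meanings, the selection above can simply be repeated once per class and the results combined. The sketch below assumes hypothetical class names and final-state sets; only the structure is intended, reusing the helpers from the earlier sketches.

def control_law_per_class(tag, initial, final_classes):
    """One fastest-average path per class of final states; the control law keeps their union."""
    return {name: fastest_average_path(tag, initial, finals)
            for name, finals in final_classes.items()}

# Hypothetical classes, in the spirit of the turntable of section 5.
laws = control_law_per_class(tag, "s0", {"class_A": {"s5"}, "class_B": {"s4"}})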
APPLICATION: SORTING SYSTEM
In this section, the reconfiguration method is applied for illustration purposes. The example is a turntable from a sorting system (Fig. 5), whose purpose is to separate packages of two different sizes arriving from conveyor B: small packages are sent to the right, large packages to the left. This system is inspired by the one proposed in the ITS PLC software and has been modified to highlight the interest of the method, with the addition of a second controller to rotate the table.
Presentation of the system
The turntable is composed of a table (C) and a set of rollers (D), which can both rotate in two directions. This specificity makes it possible to distribute the packages to each side in two different ways, so the system can be reconfigured in case of faults. Conveyors B, E and G are not considered in this paper.
The events used to model the system are listed in a table (not reproduced here). In the case of controllable events, an event (resp. its barred counterpart) means that the actuator is set to 1 (resp. 0).
Construction of the reconfiguration model of the turntable
For simplicity, only the reconfiguration model of the turntable is presented, in Fig. 6. It was built using two plant models (one for the table and one for the rollers) and three specification models: one ensuring that large (small) packages are delivered to the left (right), one for the delay of loading/unloading of the rollers, and another one for the delay of the table rotation. The state sequences of Fig. 6 (among them 24→25→26→27→28) describe the only two ways of distributing a large package (resp. a small package) to the left (resp. right): + followed by −, or − followed by + (resp. + followed by +, or − followed by −). States 9, 14, 23 and 28 are final. However, 9 and 14 mean that a small package has been successfully transferred to the right, while 23 and 28 have the same meaning for a large package delivered to the left. Hence, for the step of control law selection, it is necessary to keep one of the states 9 and 14 and one of the states 23 and 28, as well as the sequences leading to these states.
Reconfiguration scenarios
In this part, the selection of the control law for the controller will be detailed in different cases of reconfiguration.
First scenario: faultless case
In the case where no fault has occurred, it is possible to extract the control law directly from the reconfiguration model. Since the execution time is not a discriminating criterion here, the control law can be chosen arbitrarily as long as it contains exactly one of the states 9 and 14 and one of the states 23 and 28. A possible solution is obtained from the reconfiguration model of Fig. 6 without states 10, 11, 12, 13, 14, 24, 25, 26, 27, 28, 32, 33 and 34.
Second scenario: faulty case 1
In this case, the actuator allowing the counterclockwise rotation of the rollers cannot be activated. The corresponding diagnosis result emitted by the diagnoser indicates that the counterclockwise rotation event was expected but did not occur. According to Algorithm 2 of section 4.1.1, the first step consists in the suppression of all transitions labeled with the faulty event. In Fig. 6, the transitions from 7 to 8 and from 26 to 27 must be deleted. The consequence is that the final states 9 and 28 cannot be reached anymore, but states 14 and 23 are still accessible. Hence, the only possible solution for the reconfigured control law corresponds to the reconfiguration model of Fig. 6 without states 5, 6, 7, 8, 9, 24, 25, 26, 27 and 28.
Third scenario: faulty case 2
In this case, the sensor is subject to activation delays of 0.5 t.u. According to Algorithm 3 of section 4.1.2, the transitions from 5 to 6 and from 19 to 20 are both adjusted to the guard [2.9; 3.6], resulting in a difference in the average execution time of the sequences leading to the final states: namely, it is now faster to reach states 14 and 28. Hence, the reconfigured control law corresponds to the reconfiguration model of Fig. 6 without states 5, 6, 7, 8, 9, 19, 20, 21, 22, 23, 29, 30 and 31.
6. CONCLUSION
A method of fault-tolerant control of timed automata with guards has been presented, based on the diagnosis obtained with timed residuals. The reconfiguration is performed in two steps. First, the reconfiguration model is computed, representing the entire system behavior that respects the timed rules. Second, this model and the diagnosis results are used to search for the fastest paths from the initial to the final states. Finally, these paths are used to compute the control law of the system for each case of operation. An example of application has been provided on a simple system.
In future work, it would be interesting to use a linear representation of TAG (Niguez, Amari, and Faure 2016) in order to search for the fastest path during the reconfiguration step.
Thermodynamics of Near BPS Black Holes in AdS$_4$ and AdS$_7$
We develop the thermodynamics of black holes in AdS$_4$ and AdS$_7$ near their BPS limit. In each setting we study the two distinct deformations orthogonal to the BPS surface as well as their nontrivial interplay with each other and with BPS properties. Our results illuminate recent microscopic calculations of the BPS entropy. We show that these microscopic computations can be leveraged to also describe the near BPS regime, by generalizing the boundary conditions imposed on states.
Introduction
Physicists have recently made significant advances towards a microscopic understanding of black hole entropy in AdS spacetimes [1][2][3]. Nearly all progress has relied heavily on supersymmetry, such as using the supersymmetric index to count states or supersymmetric localization to compute the effective action. These methods are powerful and quite rigorous, but they also have obvious limitations. For example, some physical black hole properties change discontinuously in the strict supersymmetric limit [4].
In this paper we study nearly supersymmetric black holes in AdS. Such black holes are important because they have many physical properties in common with generic black holes, yet they inherit some of the technical advantages held by their strictly supersymmetric relatives. More precisely, our specific goal is to develop properties of nearBPS black holes in AdS$_4$ and AdS$_7$ and compare with analogous results previously established in AdS$_5$ [5]. Some details differ between these settings, of course, but several aspects are so similar that they may be described by the same effective theory. This agrees nicely with the understanding of universality emerging in the context of the nearAdS$_2$/nearCFT$_1$ correspondence, another research direction with rapid progress over the last few years [6][7][8].
SUSY or Not?
Given the central importance of supersymmetry in nearly all current work in the area we now address, before getting to further details, how any progress can be made at all. To do so, recall the microstate counting of asymptotically flat black holes which has a long history and is understood incomparably better. Many precision agreements were established, not just at the leading order but also for higher derivative corrections, quantum corrections, and far beyond [9][10][11]. Moreover, in most cases it has been understood why these agreements hold with the precision they do. The reasons vary according to the setting and, although they often involve supersymmetry, that is not always the case. In particular, it has proven fruitful to study black holes that are solutions to theories with supersymmetry without themselves preserving any supersymmetry.
Specifically, experience with asymptotically flat black holes suggests that small deformations away from the BPS limit are under good control. One avenue is revealed geometrically by the near horizon AdS$_2$ being enlarged to an AdS$_3$. In this situation a combination of anomaly arguments and modular invariance ensures agreement at leading and subleading order, even when supersymmetry is broken [12,13]. This success is not a feature of AdS$_3$ alone; the entropy of extremal but non-BPS black holes can be accounted for correctly even at the four-derivative level, as shown by application of the entropy extremization formalism in AdS$_2$ [14]. Such successes for asymptotically flat black holes motivate studying nearly supersymmetric black holes in AdS.
nAdS$_2$/CFT$_1$ Correspondence
As we mentioned tangentially already, a somewhat complementary motivation for studying near BPS black holes is presented by recent progress on their holographic description through the nearAdS$_2$/nearCFT$_1$ correspondence [7,15]. A central aspect of this duality is a nontrivial symmetry breaking pattern which coincides between the two sides. It is realized in melonic quantum theories such as the SYK model and its avatars [16][17][18], novel settings that have justifiably generated much interest. Importantly, the symmetry breaking pattern is also realized in gravitational theories such as the Jackiw-Teitelboim model and its relatives [6][7][8]. This sets the scene for a holographic duality.
However, the nearAdS$_2$/nearCFT$_1$ correspondence is not a straightforward equivalence; it is an IR duality where the effective theories in the bulk and on the boundary are dual to one another only at large distances. The significance of the near BPS AdS black holes we study is that they offer a UV completion of the description on both sides of the duality. Thus, for each of the black holes, there is a specific dual theory that is well-defined in the UV.
The most precise studies of black holes focus on supersymmetric ground states and their ability to describe the entropy of BPS black holes microscopically. This is very interesting, of course, but BPS states are relatively inert ground states, unsuitable as proxies for physical black holes. It is therefore important to explore the low lying excited states as well [5]. That is one of our motivations.
The low energy effective theory is usefully summarized as a theory of Schwarzian type, characterized by one or more dimensionful coupling constants. These coefficients are response parameters such as the specific heat of the black hole. They are arbitrary inputs from the effective field theory point of view. However, in the context of a UV complete theory the response parameters can be determined from microscopic principles [19][20][21][22]. One goal of this article is to do so explicitly.
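For orientation, we record the standard Schwarzian summary that the preceding paragraph refers to; the action and thermodynamics below are the textbook nearAdS$_2$ expressions, and the identification of the coupling $C$ with the black hole response coefficients (such as $C_T$) is an assumption about normalization rather than a statement taken from this paper:
$$ I_{\rm Sch} = - C \int_0^{\beta} du \, \{ f(u), u \} , \qquad \{ f, u \} \equiv \frac{f'''}{f'} - \frac{3}{2} \left( \frac{f''}{f'} \right)^2 , $$
which leads to the energy and entropy $E(T) = 2\pi^2 C T^2$ and $S(T) = S_0 + 4\pi^2 C T$, so that the linear-in-$T$ specific heat is controlled by the single dimensionful coupling $C$.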
Black Holes in AdS
The bosonic symmetry groups of AdS$_4 \times S^7$, AdS$_5 \times S^5$, and AdS$_7 \times S^4$ all have rank 6. Therefore, in each case the BPS condition derived from supersymmetry becomes a linear relation between the 6 conserved charges that arise as eigenvalues of the respective Cartan generators. In other words, it expresses the mass $M$ as a linear combination of the 5 remaining conserved charges. For example, BPS black holes in gauged supergravity in five dimensions are characterized by 5 charges that satisfy the nonlinear constraint (1.2), where $N^2$ refers to the dual $\mathcal{N}=4$ SYM theory with $SU(N)$ gauge group. This type of constraint may at first appear novel and special to AdS but actually it is not; it is just more complicated in AdS than in the more familiar settings with asymptotically flat spacetime. For example, within the family of Kerr-Newman black holes in four-dimensional Einstein-Maxwell theory (supersymmetrized as minimal ungauged $\mathcal{N}=2$ supergravity), the BPS black holes have mass given in terms of charge as $M = Q$ and they have angular momentum $J = 0$. There are certainly regular extremal Kerr-Newman black holes that rotate, including the extremal Kerr black hole that is neutral under the gauge field. Alas, supersymmetry demands more than vanishing temperature: it imposes the constraint that the angular momentum $J = 0$. The recognition of this constraint plays an important role in this article.
We study the thermodynamics of near BPS black holes, i.e. parameter values that differ slightly from those of a reference BPS black hole. Importantly, there is a two-dimensional space of deformations. The most obvious is to increase the black hole mass beyond its BPS value while keeping charges fixed. This is equivalent to increasing the temperature to $T > 0$. The alternative is to maintain extremality ($T = 0$) but modify the charges so they violate the constraint.
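Returning to the flat-space Kerr-Newman example above, the statement about $J=0$ follows from a one-line check (in units $G_4 = 1$; standard Kerr-Newman formulae, not new input): extremal Kerr-Newman black holes obey
$$ M^2 = Q^2 + \frac{J^2}{M^2} \qquad (\text{extremality}) , $$
so imposing the BPS relation $M = Q$ forces $J = 0$; rotating extremal Kerr-Newman black holes are therefore never supersymmetric.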
The two directions away from the BPS surface both have a preferred sign: stability imposes not only T ≥ 0 but also h ≥ 0. Thus the parameter space is a quadrant of the plane with the BPS point at the origin. Moreover, the interplay between the two directions is quite nontrivial. It is characterized by 3 response functions, each of which depend on all the conserved charges (subject to the constraint). A successful microscopic theory of the near BPS black holes must account for all these nontrivial functions.
Microscopic Description of near BPS AdS Black Holes
In this article we do not attempt to derive a microscopic theory of the AdS black holes we study ab initio but we explain how recent progress on the strict BPS limit can be leveraged towards that goal.
The understanding of AdS black hole entropy from microscopic principles has admittedly proven quite subtle even in the BPS limit. However, even though significant technical questions remain, the outline is now generally agreed upon. Accordingly, we take it as a given that the entropy of BPS black holes in AdS can be interpreted in terms of a dual field theory. We think of the required estimate of the asymptotic density of states as a two-step process: enumeration of all states in a "large" Hilbert space, followed by identification of the physical states as those satisfying the constraint (1.2), or one of its analogues in other dimensions. Our strategy for addressing near BPS black holes is to take the "large" Hilbert space established in the course of investigating BPS black holes as a given starting point. We then identify physical states by imposing a constraint that has been relaxed to accommodate a departure from the BPS limit. The less demanding constraint permits more physical states, and this allows computation of the excess entropy enjoyed by near BPS black holes. We find that the entropy computed from this microscopic reasoning agrees with the one found from gravitational thermodynamics.
Our prescription describing near BPS thermodynamics is intuitive and physically reasonable but it goes against conventional wisdom on what quantities can be reliably computed. It suggests that agreements are justified not by preserved supersymmetry alone but also by broken supersymmetry and/or anomalies. We stress again that we do not claim fully principled comparisons between nonsupersymmetric black holes in AdS and their holographically dual boundary theory. It would in fact be premature to expect such results since the BPS agreements themselves remain beset by questions. However, the agreements we report are quite impressive as they involve many parameters and apply in each of the dimensions we develop. This persuades us that they are accurate and we expect they will ultimately acquire a solid justification.
Organization of This Article
This paper is divided into two parts: section 2 on AdS$_4$ black holes and section 3 on AdS$_7$ black holes. We have purposely written the sections on AdS$_4$ and AdS$_7$ so they are largely independent and can be studied in any order.
Within each section, we first discuss black hole thermodynamics generally and then consider the nature of the BPS limit. This sets up the development of several distinct near BPS regions and their interplay, all from the gravitational point of view. Our discussion of the microscopic description is collected in subsection 2.7 for AdS$_4$ and subsections 3.6-3.8 for AdS$_7$.
The Kerr-Newman-AdS$_4$ Black Hole
In this section we study the thermodynamics of Kerr-Newman AdS$_4$ black holes. We discuss the constraint on charges or potentials that is required for supersymmetry and consider the nearBPS black holes that have small temperature and/or fail to satisfy the constraint. We show that the partition function that accounts for the BPS black hole entropy microscopically also describes the nearBPS regime.
The Kerr-Newman AdS$_4$ Black Hole
The 6 quantum numbers of the maximally supersymmetric black holes in AdS$_4$ are the mass $M$, the angular momentum $J$, and four R-charges $Q_I$ ($I = 1, 2, 3, 4$) that correspond to the Cartan generators of the $S^7$ isometry group $SO(8)$. We specialize to the Kerr-Newman AdS$_4$ black holes where the four R-charges are identical, so the solution depends on just three quantum numbers: $M$, $J$, and $Q$. The only other parameters that enter are the asymptotic AdS$_4$ radius $\ell_4$ (related to the coupling $g = \ell_4^{-1}$ of gauged supergravity) and the gravitational coupling $G_4$.
The explicit solution (first presented in [23]) is fairly elaborate, as expected for a rotating black hole, so we will not present the geometry and its associated matter here. The only feature needed in our study is the radial function
$$ \Delta_r(r) = (r^2 + a^2)(1 + g^2 r^2) - 2mr + q^2 , \qquad (2.1) $$
which appears prominently in the metric. The event horizon of the black hole is located at the coordinate $r = r_+$, the largest real root of the quartic equation $\Delta_r(r) = 0$. The parameters $(m, a, q)$ in $\Delta_r$ are related to the physical variables $(M, J, Q)$ of the black hole through (2.2) [24][25][26], where $\Xi = 1 - a^2 g^2$. The parameters $m, q$ are positive while $0 \le ag < 1$. The charges are normalized so that $2Q\ell_4$ and $J$ are integral (for bosons) or half-integral (for fermions).
In black hole thermodynamics a central role is played by the potentials conjugate to the three quantum numbers $(M, J, Q)$: the temperature $T$, the angular velocity $\Omega$, and the electric potential $\Phi$, collected in (2.3). In particular, the angular velocity reads
$$ \Omega = \frac{a\,(1 + g^2 r_+^2)}{r_+^2 + a^2} . $$
As usual, the size of the quantum configuration space is encoded in the black hole entropy, given by the area law
$$ S = \frac{\pi (r_+^2 + a^2)}{G_4\, \Xi} . \qquad (2.4) $$
We also record the on-shell Euclidean action of the black hole (2.5); it satisfies the quantum statistical relation (quoted below), as it should.
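For definiteness, the quantum statistical relation referred to here is the standard identity relating the Euclidean on-shell action to the thermodynamic functions; we quote it in its usual form, with signs convention dependent:
$$ I = \beta G = \beta \left( M - T S - \Omega J - \Phi Q \right) = \beta M - S - \beta \Omega J - \beta \Phi Q . $$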
The BPS Bound
The supersymmetry algebra realized by the theory demands that the black hole mass satisfies the BPS bound
$$ M \ge M_* = 2Q_* + gJ_* , \qquad (2.6) $$
with the inequality saturated (i.e. satisfied as an equality) precisely when the black hole preserves a fraction of the supersymmetry. The $*$ designates variables that refer to the BPS black holes. In this subsection we take the view that the BPS bound is a hypothesis that we seek to validate through explicit computation, by showing that it is satisfied by all the aforementioned black hole solutions. As a first step we rewrite the BPS bound (2.6) using our parametric formulae (2.2). The inequality on physical variables is then equivalent to the bound on parameters
$$ m \ge (1 + ag)\, q , \qquad (2.8) $$
with BPS saturation corresponding to equality in both cases.
To make further progress we differentiate the BPS equality $M_* = 2Q_* + gJ_*$ and find the potentials $\Phi_* = 2$, $\Omega_* = g$ for BPS configurations. We can invert the parametric formulae (2.3) for these potentials to find the corresponding BPS relation between the parameters, and also the coordinate position of the horizon $r = r_*$. It is a consistency check that the temperature $T = 0$ for these values of the parameters, as expected in the BPS limit. It follows from the facts established so far that the radial function $\Delta_r(r)$ (2.1) must vanish at $r = r_*$ for black hole parameters such that $q = q_*$ and $m = m_* = (1 + ag)q_*$. We can make this feature manifest by rewriting the general formula for the radial function accordingly. The location of the horizon $r = r_+$ is the largest solution to $\Delta_r(r) = 0$ so, for any value of the black hole parameters, it satisfies an exact identity (2.12) of the schematic form
$$ 2\big[m - q(1+ag)\big]\, r_+ = \big(\text{sum of two manifestly non-negative squares built from } r_+ - r_* \text{ and } q - q_*\big) . $$
The terms on the right hand side are manifestly non-negative, so we conclude that the black hole parameters must satisfy $m - q(1+ag) \ge 0$. This agrees with the parametric bound (2.8), so we have established by explicit computation that the physical BPS bound $M \ge M_*$ is satisfied for all the black hole solutions, as we wanted to show.
This result was expected from supersymmetry of the theory. However, there is a less obvious corollary of the computation. The identity (2.12) shows that the BPS bound is saturated if and only if both of the squares on its right hand side vanish. This is clearly the case for "the" BPS black holes with $q = q_*$ and $r_+ = r_*$ that we have already identified, but it is not difficult to check that this is the unique solution. In other words, we have shown by explicit computation that the BPS bound on the black hole mass (2.6) is saturated if and only if $q = q_*$ and $r_+ = r_*$. We will see below that these conditions on the parameters correspond to vanishing temperature $T = 0$ and an additional constraint on the physical potentials, $\Phi - \Omega \ell_4 = 1$, or equivalently on the charges $Q, J$.
Formulae for BPS Black Holes
We will discuss general black holes with frequent reference to the BPS limit. Therefore, we collect formulae for this special case in this short subsection. We label the one-parameter family of BPS black holes by $ag$ and express the other dimensionless parameters through (2.13). Inserting these values into (2.2) we find the electric charge (2.14). This formula can be inverted, as in (2.15), and we can use it to eliminate the parameter $ag$ in favor of the charge when considering any physical variable of a BPS black hole. As an important example, after inserting the BPS parameters (2.13) into (2.2) for the angular momentum, we can eliminate $ag$ and find the relation (2.16) between the physical conserved charges; this is the constraint that must be satisfied by all BPS black holes. We can similarly find the BPS black hole entropy by substituting the parametric formulae (2.13) into the general equation for the entropy (2.4) and then eliminating $ag$ using (2.15). However, because of the constraint (2.16), the dependence of the BPS entropy on the conserved charges $Q_*, J_*$ is not unique; it can take many different forms. Our "preferred" formula (2.17) eliminates the angular momentum entirely and expresses the black hole entropy in terms of the charge alone, where the omnipresent dimensionless ratio $k$ (2.18) is a large pure number that sets the scale for the conserved charges. It quantifies that the black hole is much bigger than the Planck scale, with a precise value that is characteristic of the microscopic theory. The second equality in (2.18) applies when the AdS$_4$ background arises from $N$ M2-branes, or from their dual description by ABJM theory.
NearBPS Thermodynamics
In this subsection we initiate our study of thermodynamics in the nearBPS regime. The parametric representation of the BPS limit is $q = q_*$ and $r_+ = r_*$, so we define nearBPS black holes as those where the deviations $q - q_*$ and $r_+ - r_*$ are small, of order $\epsilon$ (2.19). The identity (2.12) shows that this is possible only if $m$ is such that $m - q(1+ag) \sim \epsilon^2$.
The black hole temperature is small at the same order. It will prove advantageous to introduce a "nearBPS potential" $\varphi$, defined in (2.23) as a linear combination of the potentials that vanishes in the BPS limit. Then the conditions (2.19) on the parameters of nearBPS black holes are equivalent to physical potentials of order $T \sim \varphi \sim \epsilon$.
When black holes depart from the supersymmetric limit, their mass exceeds the BPS mass $M_*$. We showed earlier that the excitation energy $M - M_*$ is proportional to $m - q(1+ag)$, and the identity (2.12) established that this quantity is positive definite, by casting it as a sum of two squares. We now observe that the linear combinations of parameters that appear in those two squares coincide, at linear order, with the temperature $T$ and the nearBPS potential $\varphi$. Therefore, the nearBPS mass is given by a quadratic mass formula (2.24), with a dimensionless coefficient (2.25) that can be recast, using (2.15) and (2.18), in a remarkably economical form suited for later comparison with microscopic results. The notation $C_T$ in (2.24) refers to the specific heat of the nearBPS black hole. The specific heat is proportional to the temperature $T$ in the nearBPS regime, so the coefficient of interest is the ratio $C_T/T$. Recall that general perturbations away from the BPS locus are characterized by two variables: the temperature $T$ adds energy with charges kept fixed, while the nearBPS potential $\varphi$ parametrizes deformations along the extremal surface $T = 0$ that violate the constraint (2.16) on the conserved charges. The specific heat refers to the first of these; the addition of energy through the nearBPS potential $\varphi$ is physically quite distinct. To the extent that $\varphi$ can be identified with an electric potential, the corresponding coefficient in the mass excess $M - M_*$ is the black hole capacitance. Interestingly, our mass formula (2.24) indicates that, for the black hole studied here, the capacitance is identical to the specific heat. An analogous equality between two physically distinct linear response coefficients was previously noticed for nearBPS black holes in AdS$_5$ [5], and in Section 3 we will establish it also in AdS$_7$. The common feature of these settings is the supersymmetry breaking pattern. The gravitational theory has (at least) $\mathcal{N} = 2$ supersymmetry which is mildly broken by an excess energy (conjugate to temperature) or R-charge (conjugate to the nearBPS potential). The reasonable expectation that the corresponding symmetry breaking scales are themselves related by supersymmetry is borne out in the $\mathcal{N} = 2$ version of the SYK model, which realizes the analogous symmetry breaking pattern in a nongravitational setting [27]. The nearBPS black holes developed in this paper offer an appropriate setting for this physical mechanism on the bulk side of the nAdS$_2$/CFT$_1$ correspondence. It would be interesting to further study the supersymmetry breaking pattern in supergravity.
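Although we have not displayed the normalization of (2.24), its structure can be summarized schematically; the form below is an assumption modeled on the analogous AdS$_5$ result of [5], with the precise factors of $2\pi$ convention dependent:
$$ M - M_* \simeq \frac{1}{2} \frac{C_T}{T} \left[ T^2 + \left( \frac{\varphi}{2\pi} \right)^2 \right] , $$
so that a single coefficient $C_T/T$ controls both the thermal response and the capacitance, as stated in the text.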
The constraint on the conserved charges (2.16) can be presented as the vanishing of a "height" function $h$ (2.26). This form of the constraint makes it manifest that nonrotating ($J = 0$) and uncharged ($Q = 0$) black holes are both inconsistent with supersymmetry in AdS$_4$. In the nearBPS region we can relax the condition $h = 0$. Any surface with constant height function $h$ is characterized by the vanishing of the differential $dh$ (2.27). The constraint $h = 0$ defines a line in the two-dimensional space of conserved charges $(Q\ell_4, J)$, and we can interpret nonzero values of $h$ as a coordinate along the normal to this line, quantifying the departure from the BPS line. However, we have already introduced the nearBPS potential $\varphi$ such that $\varphi = 0$ on the constraint surface, and $\varphi$ is an equally good measure of the distance from the BPS surface. Indeed, the geometry of embedded surfaces guarantees that, for small values, these coordinates must be proportional. In the following we calculate their constant of proportionality.
In the nearBPS regime, the parameters $m$ and $q$ are proportional up to second order in $\epsilon$, as noted after (2.19). Therefore, at linear order the general physical charges $Q, J$ given in (2.2) are both proportional to $q$, with distinct proportionality factors depending on $a$. A rescaling of $q$ with $a$ held fixed therefore changes $Q$ and $J$ by a common factor. It modifies the height function as in (2.28), where we simplified using the constraint $h = 0$. However, the nearBPS potential $\varphi$ (2.23) essentially measures the scale of $q$ via $d\varphi = 2 q_*^{-1} dq$, so this calculation determines the constant of proportionality that we seek (2.29).
The First Law of Thermodynamics
We can further illuminate the nearBPS regime by explicitly verifying the first law of thermodynamics, written in the form (2.30), whose left and right hand sides we consider in turn. The quadratic formula for the nearBPS mass (2.24) gives (2.31), with the coefficient $C_T/T$ given in (2.25). The entropy $S - S_*$ in excess of its BPS value will turn out to involve a subtlety, as we discuss below. For now we compute the difference between the general area law (2.4) evaluated at the respective horizon positions $r_+$, $r_*$, each at the same value of $a$. This procedure gives (2.32) in the nearBPS regime. The linear-in-$T$ term has the correct coefficient to cancel the analogous term in the mass formula (2.31), so the left hand side of (2.30) yields an expression (2.33) proportional to $d\varphi$. The right hand side of the first law (2.30) involves the BPS entropy $S_*$. In our "preferred" expression (2.17) it is a function of the electric charge $Q$, with differential (2.34). We also need the potentials (2.21-2.22) recast in terms of the temperature $T$ (2.20) and the nearBPS potential $\varphi$ (2.23), as in (2.35-2.36). These expressions quantify the linear changes as we move off the BPS line, so the terms on their right hand sides are equivalent to derivatives with respect to the temperature $T$ and the potential $\varphi$. In this subsection the formulae in terms of the intrinsic coordinate $a$ are sufficient, but we record these results also in microscopic units for later reference. Returning to our ongoing verification of the first law (2.30), we combine the potentials (2.35-2.36) with the differential of the BPS entropy (2.34) and find (2.37). In the second line we took advantage of the fact that the particular linear combination of $dJ$ and $dQ$ that appears is proportional to $dh$ given in (2.27). Thus the relative change in the conserved charges preserves the "height" function (2.26), $h = \text{constant}$, for example by remaining within the constraint surface $h = 0$. Because of this property, we can invoke (2.29) and rewrite the differential in terms of the nearBPS potential $\varphi$ (2.38). The first law of thermodynamics (2.30) demands that this expression agrees with (2.33). The fact that both are proportional to $d\varphi$ shows that the temperature changes $dT$ match, as they should. The coefficients of $\varphi\, d\varphi$ also coincide, but the terms proportional to $T\, d\varphi$ do not agree. The reason, previously uncovered in the analogous 5D setting [5] (and alluded to as a subtlety earlier in this subsection), is a dependence on the reference point on the BPS surface.
The BPS entropy $S_*$ is defined only modulo the constraint $h = 0$ on the charges. Therefore, expressions that are equivalent when $h = 0$ is imposed may have differentials that differ by $dh$. Indeed, the BPS entropy $S_*$ employed as reference when computing $S - S_*$ in (2.32) is the general area law (2.4), evaluated at the BPS point. In contrast, the differential of $S_*$ was derived in (2.34) from the "preferred" form of the entropy (2.17), expressed in terms of the physical charge $Q$. The former amounts to a formula depending entirely on the coordinate $ag$ along the BPS line, but the latter also takes into account that $Q$ is proportional to $q$. This amounts to an additional contribution. Adding this expression to (2.38) we recover (2.33), as required by the first law.
The quantitative output of this subsection is the evaluation of the entropy due to the violation of the constraint. Its value read off from (2.32) at $T = 0$ defines a third response coefficient (2.39), above and beyond the two implied by the quadratic mass formula (2.24). In view of the ambiguity discussed above, we must specify that the differentiation in its definition is taken at a fixed value of the intensive parameter $a$ which, as for Kerr-Newman black holes in asymptotically flat space, equals the ratio of physical variables $J/M$.
Stability and Physical Conditions
The potential $\varphi$ was introduced in (2.23) as a linear function of $\Phi$ and $\Omega$ that vanishes for BPS black holes and measures departures from the BPS line that preserve extremality $T = 0$. We see from (2.32) that it was defined such that the entropy increases for $\varphi \ge 0$. This inequality suggests that the physical configuration space is restricted to $\varphi \ge 0$. An equivalent statement is that the constraint relating the physical charges $Q, J$ can be violated, but only such that the height function introduced in (2.26) is positive, $h \ge 0$. Yet another version of the inequality is that the charge parameter $q \ge q_*$. For a perspective on these conditions we analyze Gibbs' free energy. Starting from the on-shell action (2.5) we can write it as (2.40), where the horizon position $r_+$ and the angular momentum to mass ratio $a$ are interpreted as functions of the potentials $T, \Omega$, determined implicitly through the defining relations (2.3) for $\Omega$ and $T$. We want to examine the range of parameters that corresponds to physical black holes. We first demand that the extensive variables mass $M$, angular momentum $J$, charge $Q$, and entropy $S$ are finite and non-negative. This restricts the parameters so that $0 \le ag < 1$. The inequality $a \ge 0$ (and so $\Omega \ge 0$) does not limit generality, because $a \to -a$ leaves all thermodynamic formulae (and the entire geometry) invariant, except for a flip of parity. Our second physical requirement is that $\Omega \ell_4 \le 1$, in order that the speed of light is not exceeded in the dual boundary theory. Since we already took $|ag| < 1$ this condition amounts to (2.41). Interestingly, the first inequality in (2.41) can be recast as $r_+ \ge r_*$, so the nonBPS black holes are all larger than their BPS relatives when measured in the conventional $r$ coordinate. Our goal is ultimately to describe nearBPS black holes as excitations of BPS black holes. In the grand canonical ensemble considered here such states can be reached by deforming the potentials $(\Phi, \Omega)$ away from their BPS values $(\Phi_*, \Omega_*)$ while staying at extremality $T = 0$, followed by raising the temperature while keeping $(\Phi, \Omega)$ fixed. This motivates our third physical condition: the potentials $(\Phi, \Omega)$ must be consistent with extremality $T = 0$. Given the general restrictions (2.41) already imposed, vanishing temperature is possible only in a subregion, leaving a physical domain defined by (2.42). Gibbs' free energy (2.40) is automatically negative semidefinite in this entire region. It vanishes only for BPS black holes where $\Omega \ell_4 = 1$ and $\Phi = \Phi_* = 2$. Therefore the nonBPS black holes in the entire region (2.42) are stable, with the proviso that the BPS black holes are only marginally stable; they can be in equilibrium with a gas of BPS particles.
The conditions we impose may be overly strict for some purposes and there can be good reasons to relax them. On the other hand, the region (2.42) is natural for microscopic studies. For fixed (Φ, Ω) the temperature T can be increased from zero (extremality) all the way to the high temperature conformal regime without any phase transitions being encountered. Specifically, the deconfined phase reigns in the entire domain, the entropy is of O(k) throughout.
As an example that is widely studied in the literature, consider Gibbs' free energy for nonrotating black holes ($\Omega = 0$). In this case it is elementary to solve the equations above explicitly, yielding a closed-form expression. This function is manifestly smooth everywhere in the interior of the domain (2.42). For large temperature (at fixed $\Phi$) it takes the conformal form $G \sim -\frac{4k}{27}(2\pi T \ell_4)^3$, with the numerical coefficient familiar from studies of large AdS-Schwarzschild black holes. In particular, it appears in the hydrodynamic description that applies at large temperature [28][29][30]. The opposite regime of small temperature is more delicate. For example, the quadratic dependence on the potential along the extremal surface $T = 0$ describes small nonrotating black holes.
The BPS region can be reached by tuning the potentials towards their BPS values with the ratios among $\Phi - \Phi_*$, $1 - \Omega \ell_4$, and $T$ kept fixed. In this limit Gibbs' free energy becomes (2.43), where the dimensionless parameter $ag$ is defined implicitly by the equation (2.44). This equation determines $a$ as a homogeneous function of $\Phi - \Phi_*$, $1 - \Omega \ell_4$, and $T$. It is a quartic in the variable $\sqrt{a}$ and its general solution in terms of radicals is not illuminating. We can "solve" (2.44) for $1 - \Omega \ell_4$ and $\Phi - \Phi_*$ by expressing each of these quantities as a linear combination of $T$ and $\varphi$ with coefficients that depend on $a$. Such equations were already found in (2.35-2.36) using other thermodynamic arguments, and we can verify that they satisfy (2.44). This is a useful consistency check.
In the Cardy limit $1 - ag \ll 1$ we can solve the constraint (2.44) explicitly and find the free energy in closed form. Its derivatives with respect to $T$, $\Phi$, $\Omega$ yield the BPS values $S_*$, $Q_*$ and $J_*$ that agree with our previous results (2.17, 2.14, 2.16) in the Cardy limit. It is interesting that the free energy approaches the BPS limit linearly in the temperature, because this reflects a BPS remnant of the familiar deconfinement transition [31][32][33][34][35][36].
The Gibbs' free energy (2.43), with $a$ given implicitly by (2.44), actually describes the entire BPS surface, not just the Cardy limit. For example, we can determine the mass and recover the exact BPS relation without assumptions on the black hole parameters. This computation is possible without knowing $a$ in detail because $G$ is homogeneous of degree one in the variables $(T, \Phi - \Phi_*, \Omega - \Omega_*)$ on which the function $a$ depends homogeneously. Similarly, the general derivatives of $G$ with respect to $T$, $\Phi$ and $\Omega$ depend on the unknown derivatives $\partial_T a$, $\partial_\Phi a$, $\partial_\Omega a$, but only in combinations that follow from parametric differentiation of (2.44). This procedure recovers the general expressions for $S_*$, $Q_*$ and $J_*$ without imposing the Cardy limit.
Microscopic and Macroscopic Black Hole Entropy
The comparison between the microscopic and macroscopic thermodynamics of BPS black holes in AdS$_4$ can be implemented conveniently by considering the free energy $\ln Z_{\rm BPS}$ given in (2.45) [37,38]. On the microscopic side the partition function is identified with an index that can be found by methods such as supersymmetric localization. These computations are comparatively rigorous, but the extraction of the black hole entropy from the free energy has some heuristic aspects, as we review below. Additionally, it is unclear why the index yields the black hole entropy rather than just a lower bound. The focus of our study is the macroscopic thermodynamics. In order to facilitate comparisons with microscopic ideas, we will repackage our gravitational results into the free energy (2.45). Importantly, we will do so not just for the BPS limit but for the entire nearBPS regime. Therefore, any microscopic computation that yields (2.45) accounts for the nearBPS entropy as well.
The Entropy Function
The free energy (2.45) defines the BPS partition function (2.46). This expression applies only on the supersymmetric locus, where the potentials are complex and satisfy the constraint (2.47). We seek to compute the black hole entropy by Legendre transform of the free energy (2.45) to the ensemble specified by conserved charges rather than potentials. In view of the constraint (2.47), the entropy follows from extremization of the entropy function $\mathcal{S}(\Delta_I, \omega, \Lambda)$ of [39], where $\Lambda$ is a Lagrange multiplier; a schematic form is given below, and the extremization conditions are (2.49-2.50). The prescription for computing the BPS entropy demands that $\Lambda$ be purely imaginary. This requirement is motivated by the saddle point approximation extremizing the entropy function over complex parameters, but the detailed reasoning is somewhat mysterious. As we show shortly, it has a satisfactory implication: the relation between the conserved charges it imposes is equivalent to the BPS constraint (2.16). Moreover, after this prescription is enforced, there is a unique solution with negative imaginary part, corresponding to positive entropy.
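Schematically, and with signs and normalizations that differ between references, the constrained Legendre transform has the structure
$$ \mathcal{S}(\Delta_I, \omega, \Lambda) = \ln Z_{\rm BPS}(\Delta_I, \omega) - 2\pi i \Big( \sum_I \Delta_I Q_I + \omega J \Big) - \Lambda \Big( \sum_I \Delta_I - \omega \mp 2\pi i \Big) , $$
where the particular constraint displayed, $\sum_I \Delta_I - \omega = \mp 2\pi i$, is the one commonly quoted for AdS$_4$ and is our assumption about the conventions of (2.47); the extremization conditions follow from $\partial_{\Delta_I} \mathcal{S} = \partial_\omega \mathcal{S} = \partial_\Lambda \mathcal{S} = 0$.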
BPS Solution to the Extremization Conditions
In view of the prescription that $\Lambda$ be purely imaginary, it is perfectly manageable to solve the quartic equation (2.52) for general values of the conserved charges $(Q_I, J)$ with $I = 1, 2, 3, 4$. This computation yields expressions for BPS black holes that are so general that they have yet to be constructed as solutions in supergravity. Interesting as genericity may be, for our purposes there is some value in keeping expressions simple. We therefore focus on "pairwise equal" charges, $Q_1 = Q_3$, $Q_2 = Q_4$. This is more general than our gravitational considerations, which correspond to all charges equal. We take the branch $\sqrt{\Delta_1^2 \Delta_2^2} = \Delta_1 \Delta_2$ for the simplified charges and simplify the extremization conditions (2.49) and (2.50). Consistency between these conditions gives a quadratic equation (2.53) for $\Lambda$. Its imaginary part yields the entropy (2.54). We picked the overall sign of the free energy (2.45) so that this entropy is positive (for positive charges).
Recalling that $\Lambda$ is purely imaginary, the real part of the quadratic equation (2.53) gives (2.55), where we chose the solution of the quadratic that gives positive entropy. For comparison, we recall the gravitational BPS entropy (2.17), in the case where $Q_1 = Q_2 \equiv Q$ (2.56). The equality between the two forms of this gravitational formula expresses the constraint (2.16) satisfied by the conserved charges. The gravitational results for the BPS entropy and the constraint imposed by supersymmetry agree with (2.54-2.55) found from the extremization principle, as advertised.
To the extent the free energy (2.45) was derived from microscopic principles this provides the last step needed to arrive at the black hole entropy. Alternatively, the computation in this subsection shows that the free energy provides a convenient packaging of the gravitational results.
The Potentials
The potentials $\Delta_I$ and $\omega$ introduced via the free energy (2.45) and the BPS partition function (2.46) are related to the gravitational potentials $\Phi$ and $\Omega$. We now proceed to compare them in detail.
Combining the result for $\Lambda$ given in (2.55) with the constraint on the potentials (2.47), we find (for pairwise equal charges) the real and imaginary parts of the potential conjugate to the angular momentum (2.58-2.59), and similarly the real and imaginary parts of the potentials conjugate to the charges (2.60-2.61). The potentials $\Delta_I$, $\omega$ were introduced as the independent variables of the BPS partition function (2.46) and determined here from extremization of the entropy function (2.48). They cannot be identified with their supergravity analogues, which take the values $\Phi_* = 2$ and $\Omega_* \ell_4 = 1$ identically, due to the BPS mass relation $M = 2Q + gJ$. To make progress we consider the general (non-BPS) partition function, whose potentials match the gravitational ones, at least when all charges $Q_I$ are identical. This establishes a natural map between the macroscopic $(\Phi, \Omega)$ and microscopic $(\Delta, \omega)$ potentials. However, this cannot be the entire story. Physical potentials in the gravitational solution are real, while the fugacities introduced in the microscopic partition function can preserve supersymmetry only if they acquire an imaginary part. The missing ingredient is the one we stress throughout this paper: the BPS surface is co-dimension two; it can be approached from two distinct directions.
As discussed earlier, the real part of the microscopic potentials is related to increases in the temperature $T$. We expect that their imaginary parts correspond to violation of the constraint, expressed in terms of the potentials as $\varphi \ne 0$. Indeed, comparison between the expressions for $\omega$, $\Delta_I$ above and their analogues in gravity (2.35-2.36) confirms this identification. Note that while these expressions for $\omega$ and $\Delta$, expressed as functions of $a$ and $g$, exactly match the gravitational results (2.35-2.36), they are ambiguous as functions of $Q$ and $J$, being defined only modulo the constraint (2.56). This is equivalent to demanding that the height above the BPS surface vanishes, $h = 0$.
The nearBPS Regime
The microscopic discussion of nearBPS black holes is necessarily less rigorous than for their BPS relatives but some progress can be made nonetheless.
A good starting point is the relation between the potentials $(\Delta_I, \omega)$ in the microscopic description and their gravitational analogues $(\Phi_I, \Omega)$. Comparing the partition functions (or the first law) at vanishing temperature gives the provisional identification (2.62-2.63), but supersymmetry additionally imposes the boundary conditions (2.47) on the microscopic potentials. A natural generalization is the modified boundary condition (2.67). For $\varphi = 0$ this boundary condition is equivalent to the BPS requirement (2.47) in the extremal limit $T \to 0$. However, for $T$ and/or $\varphi$ nonvanishing it breaks supersymmetry. Motivated by the success in the BPS limit, we identify the real part of the potentials $(\Phi_I, \Omega)$ in (2.67) with their gravitational counterparts. As a practical matter, once the physical parameter $\varphi$ is turned on, the full symmetry breaking pattern is easily implemented by the substitution $\varphi \to \varphi + 2\pi i T$. The BPS free energy (2.45) is common to all recent discussions of microscopic entropy for AdS$_4$ black holes [40][41][42]. A minimal framework for nearBPS statistical physics applies the modified boundary condition (2.67) to the BPS free energy. This proposal can be presented efficiently as an extremization principle for the nearBPS entropy at linear order away from the BPS surface. We do not derive our prescription ab initio, but it is arguably a corollary of previously accepted microscopic considerations. It is thought that the BPS index can be continued freely from weak to strong coupling and, additionally, that the index and the partition function have the same asymptotic behavior in the gravitational regime. Any agreement in the strict BPS limit relies on these features, and the only additional ingredient we invoke is smoothness of gravitational thermodynamics as the nearBPS regime approaches the BPS limit. An even more conservative view is that the agreements we establish in the following show that our nearBPS extremization principle provides an efficient packaging of gravitational data. It is straightforward to make our proposal explicit for four generic charges but, as in subsection 2.7.2, we prioritize transparency over generality and focus on the case where the charges are equal in pairs and expressions are more illuminating. Then the values of the potentials at the extremum differ from the BPS results (2.58-2.61) only by some simple substitutions.
After multiplication by $\varphi + 2\pi i T$ on both sides of the equations, the real part of each potential becomes a linear combination of $\varphi$ and $T$. These expressions agree with the analogous results computed from the black hole solutions (2.35-2.36). This result streamlines the identifications we already reported in (2.64-2.67) by incorporating them in a systematic computation.
The nearBPS extremization conditions are identical to their BPS counterparts (2.49-2.50), except for simple substitutions of variables. We do not need the details, because the quartic equation in the Lagrange multiplier $\Lambda$ (2.52), with coefficients depending on the charges $(Q_I, J)$, is not modified. The important new feature is that solutions to the quartic where $\Lambda$ is purely imaginary are insufficient. We insert the more general root in the on-shell nearBPS entropy function and identify the resulting real part as the physical entropy. This gives a corrected value for the entropy.
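The prescription just stated — keep the root of the polynomial in $\Lambda$ that connects to the purely imaginary BPS root, and read off the real part of the on-shell entropy — is easy to mimic numerically. The sketch below is purely illustrative: the polynomial coefficients and the on-shell entropy function are arbitrary placeholders, not the actual coefficients of (2.52).

import numpy as np

def near_bps_entropy(poly_coeffs, entropy_on_shell):
    """Select the root with negative imaginary part (the positive-entropy branch)
    and return the real part of the on-shell entropy evaluated there."""
    roots = np.roots(poly_coeffs)
    lam = next(r for r in roots if r.imag < 0)
    return entropy_on_shell(lam).real

# Placeholder quadratic and on-shell entropy; in the paper both are fixed
# by the conserved charges (Q_I, J) and by the coefficient k.
coeffs = [1.0, 0.3, 2.0]
print(near_bps_entropy(coeffs, lambda lam: 2j * np.pi * lam))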
Since we consider charges that are equal in pairs, the quartic equation satisfied by $\Lambda$ (2.52) reduces to the quadratic (2.53), which we recast in terms of a "height" function that generalizes $h$ introduced in (2.26) to permit two distinct charges. This form of the equation for $\Lambda$ makes it manifest that, when the constraint on the charges $h = 0$ is imposed, $\Lambda$ is purely imaginary with the value given in (2.54), modulo any rewriting using $h = 0$. Conversely, when we allow violation of the constraint between the charges by taking nonzero $h$, the Lagrange multiplier is shifted: at linear order we find a shift $\delta\Lambda \propto k\, h$ (2.72). Since this result is already proportional to $h$, we freely applied the constraint $h = 0$ to eliminate $J$ from the coefficient. For comparison with gravity, we relate the height function $h$ to the potential $\varphi$ through the generalization of (2.29) to two independent charges. The prescription $\varphi \to \varphi + 2\pi i T$ then gives an entropy that agrees precisely with the gravitational result (2.32), after specialization to equal charges $Q = Q_1 = Q_2$ and trading $Q$ for the gravitational parameter $a$ via (2.14). The two variables being compared are both linear response coefficients, related to the specific heat (2.25) and the electric field (2.39), respectively. Thus our result relates parameters of nonsupersymmetric black holes to microscopic concepts. The agreement reported in this subsection goes against expectations from rigid versions of indexology that demand strict adherence to supersymmetry. However, it is less surprising from an effective quantum field theory point of view. It is expected that the UV theory accounts for the supersymmetric ground state entropy, i.e. the size of the classical phase space at very low energy. The leading excitations above the ground state are described by a low energy effective field theory, with gravitational/QFT aspects encoded in the nAdS$_2$/CFT$_1$ correspondence [15,43]. It depends on just a few symmetry breaking parameters that generally must be determined by matching to the UV theory. Although we have not developed the effective theory systematically, it is not unreasonable that we can recover these effective parameters quantitatively by studying collective modes on the agreed-upon classical phase space.
Asymptotically AdS$_7$ Black Holes
In this section we study the near BPS thermodynamics of black holes in AdS$_7$. The maximally supersymmetric theory results from eleven-dimensional supergravity reduced on $S^4$. As a result, the quantum numbers of a generic black hole solution in this theory are the mass $M$, three angular momenta $(J_1, J_2, J_3)$ that correspond to rotations in AdS$_7$, and two charges $(Q_1, Q_2)$ that correspond to momenta along $S^4$. A completely general solution has not yet been constructed, but a special case with three identical angular momenta and two independent charges was first presented in [44]. In the gravitational part of our calculations we consider a further simplification to the Kerr-Newman AdS$_7$ black hole, i.e. the special case where the two charges are equal. This geometry is also a solution to minimal supergravity in AdS$_7$.
The Black Hole Geometry
The conserved charges $(M, J, Q)$ are encoded in a mass parameter $m$, an angular momentum parameter $a$, and a charge parameter $q$ (which we occasionally trade for the "boost" parameter $\delta$ introduced through $q = m \sinh^2\delta$). The geometry presented in [44] takes the form (3.1), where
$$ H = 1 + \frac{2q}{\rho^4} , \qquad \rho^2 = r^2 + a^2 , $$
and the functions $f_1$, $f_2$ and $Y$ are given in (3.2). Here $d\Sigma_2^2$ is a metric on $\mathbb{CP}^2$, which together with the $U(1)$ fibre $\sigma$ forms a five-sphere within AdS$_7$. The omnipresent constant $g$ is the coupling constant of gauged supergravity, related to the radius of the asymptotically AdS$_7$ spacetime as $g = \ell_7^{-1}$. For $ag$ outside the range $(-1, 1)$, the coefficient of $d\Sigma_2^2$ is nonpositive (or divergent), so the spacetime signature is not Lorentzian.
The event horizon of the black holes is located at the coordinate $r = r_+$ where the function $Y(r)$ has its largest root. The thermodynamic potentials characterizing the solution are all evaluated at this value $r = r_+$; they are collected in (3.3). The conserved charges of the black holes are given in terms of the parameters $(m, a, q = m \sinh^2\delta)$ through (3.4).
Supersymmetric Black Holes
Extremal black holes have zero temperature. Referring to the expression for the temperature (3.3), this is equivalent to the derivative $\partial_r Y = 0$ at the event horizon $r = r_+$. Since the polynomial $Y(r)$ also vanishes there, it develops a double root. The extremality condition $\partial_r Y(r_+) = 0$ gives a simple equation for the mass parameter
$$ m = 2 g^2 \rho_+^6 + \tfrac{3}{2}\, \Xi\, \rho_+^4 + 4 g^2 q \rho_+^2 - a^2 g^2 q . \qquad (3.6) $$
While this procedure is straightforward in principle, in practice it is unwieldy and not terribly illuminating. To make progress we therefore take supersymmetry into account. For the theory to admit supersymmetric solutions, all physical configurations must satisfy the BPS bound (3.7), with equality for BPS black holes. The linear combination of conserved charges that appears can be recast parametrically as (3.8).
The expression in parentheses on the second line of (3.8) is strictly positive in the entire physical range $0 \le ag < 1$, so the BPS bound amounts to (3.9). An alternative expression follows from the identity (3.10), which yields the parametric form of the BPS bound (3.11). The equivalence of this inequality and (3.9) is easily verified using the definition $q = m \sinh^2\delta$. Both inequalities are saturated if and only if the black hole is supersymmetric. BPS saturation is possible only for extremal black holes, but it is a stronger condition: it imposes a constraint on the black hole parameters in addition to vanishing temperature. Some authors impose this "second" condition by demanding the absence of closed timelike curves, a requirement that makes reference to a detailed analysis of the geometry. Later in this subsection we show that BPS saturation automatically gives both vanishing temperature and the additional constraint on the black hole parameters. Thus the latter does not require appeal to an independent physical principle.
In preparation for this argument we temporarily impose the BPS formula for the mass and independently set the temperature to zero. Accordingly, we assume that $m$ and $q$ are related by equality in (3.11) and additionally require that $m(r_+)$ is the function given in (3.6). In this situation the horizon equation $Y(r_+) = 0$ factorizes, so it is manifest that the largest root is a double root, as expected, and it locates the event horizon at $\rho_+^{*2}$ (3.13). Here and in the following we use the superscript $*$ to denote quantities that take their BPS values. The location of the event horizon (3.13) gives the BPS values
$$ m_* = \frac{12\, a^4 (1 + ag)^3 (2 + 3ag)}{(1 + 3ag)^2} , \qquad (3.14) $$
together with the BPS value $q_*$ (3.15). Our BPS values for these parameters agree with those found in [44]. They correspond to the physical quantum numbers (3.16-3.17), and the BPS black hole entropy becomes (3.18). In these BPS formulae we have opted to express the overall normalization in microscopic units via (3.5).
The physical variables satisfy the BPS mass formula $M_* = 3gJ_* + 4Q_*$, as they should. In our conventions the first law of thermodynamics takes the form (3.19), so on the extremal surface $T_* = 0$ the BPS mass formula gives $\Phi_* = 2$ and $\Omega_* = g$. The general thermodynamic potentials (3.3) do in fact simplify to these constants when they are evaluated on the BPS surface. It is an additional consistency check on our formulae that the first law (3.19) is satisfied after expressing each of the BPS quantities as a function of $a$.
The BPS expressions given above were all computed assuming both saturation of the BPS bound (3.11) and vanishing temperature, as implemented through (3.6). However, with the benefit of hindsight we can now do better. We begin by rewriting the exact metric function $Y$ as a power series in $\rho^2$ around the position of the BPS horizon (3.20). The horizon equation $Y(r_+) = 0$ then gives, near the BPS limit,
$$ 2 r_+^2 \big[ m - 3ag(2 + 3ag)\, q \big] = \frac{g^2 (3 + ag)(1 + 3ag)^3}{(1 + ag)^2 (3 + 10ag + 19 a^2 g^2)} \, (q - q_*)^2 + \big[ \cdots \big]^2 . \qquad (3.21) $$
(Our computations here are valid up to quadratic departures from the BPS limit. The analogous formulae for AdS$_4$ and AdS$_5$ black holes can be made exact without much additional effort. We assume that the result is exact in AdS$_7$ as well, but we have not worked out the details, as they are more elaborate in this case.) The left hand side of this expression is non-negative and vanishes exactly when the BPS bound (3.11) on the mass is saturated. Since the right hand side is manifestly the sum of two non-negative terms, we see that BPS saturation implies two conditions on the black hole. Moreover, the second square bracket is proportional to the temperature at linear order. Therefore, one of the two conditions is extremality $T_* = 0$. The other requirement is the constraint on the potentials for the conserved charges, identified here in the form $q = q_*$. In view of the two independent conditions on the black hole parameters that follow from supersymmetry, BPS black holes form a co-dimension 2 "surface" within the space of general black holes parameterized by $(m, a, q)$. At any point along the resulting BPS line, the quantum numbers $J_*$ and $Q_*$ are dependent variables: they are both expressed in terms of a single dimensionless parameter, which we take to be $ag$. For this reason, the expression of the black hole entropy as a function of conserved charges (3.18) is not unique. For example, the translation of the parameter $ag$ to the physical variables $J_*$ and $Q_*$ may equally well yield an alternate form. In the statistical ensemble specified by the quantum numbers $J_*$ and $Q_*$, it is convenient to characterize the BPS line as the vanishing locus of the "height" function $h$ (3.24).
As an example of the constraint, non-rotating black holes obviously have angular momentum J = 0. These are the AdS-Reissner-Nordström geometries. They include an extremal black hole with T = 0, the lightest in the family that is regular. However, a generic extremal AdS-Reissner-Nordström black hole is not supersymmetric, because h ≠ 0 when J = 0. We conclude this subsection by noting an important corollary of supersymmetry imposing two conditions: it can be broken in two independent and complementary ways: • Near-extremal BPS black holes have non-vanishing temperature but they satisfy the constraint h = 0.
• NearBPS extremal black holes have vanishing temperature but they are not supersymmetric because their charges violate the constraint.
In the following three subsections we first study each of these cases separately and then examine their interplay.
Near-Extremal Thermodynamics
In this subsection we consider black holes that depart from BPS by having elevated temperature. This perturbation necessarily increases the mass M from its BPS value M * (J * , Q * ). However, it does not modify the conserved charges from their reference values Q * and J * so they will still be related by the constraint (3.24) that is satisfied on the BPS-line.
The specific heat $C_T = \delta Q/\delta T = T\,dS/dT$ is the response coefficient that characterizes the increased temperature. At leading order away from extremality the specific heat is linear in temperature, so the derivative $\partial_T S = C_T/T$ (taken with conserved charges held fixed) is a constant.
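As a small worked consequence (normalizations assumed, not the paper's own display): a specific heat linear in temperature integrates to

```latex
S(T) \simeq S^* + \frac{C_T}{T}\,T ,
\qquad
M(T) \simeq M^* + \frac{1}{2}\,\frac{C_T}{T}\,T^2 ,
% with C_T/T constant; the two expansions are consistent with
% dM = T\,dS at fixed conserved charges.
```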
The resulting expression (3.30) is positive in the physical regime $0 < ag < 1$, as required for a stable system. As a consistency check, we can instead derive the specific heat by considering (3.8) for the mass $M$ above its BPS value $M^*$. Inserting the parametric form of the mass excess at second order (3.21) and the temperature $T$ (3.22), this procedure again gives the expression (3.30) for $C_T/T$, as demanded by the first law of thermodynamics applied with charges kept fixed.
Yet another way to calculate the specific heat is via the nAttractor mechanism [46,47]. In this simple and illuminating construction the elevated temperature is taken into account geometrically through the outward displacement of the horizon, without ever deforming away from the BPS geometry. This method yields a more concise expression, which again evaluates to the result in (3.30). We can exploit the nAttractor mechanism, taking temperature into account via a simple radial derivative, also when analyzing other physical variables. By first specializing the potentials $\Phi$, $\Omega$ given in (3.3) to the BPS geometry, and only then taking the radial derivative, we easily calculate the thermal derivatives $\partial_T\Phi$ and $\partial_T\Omega$. Both of these quantities are negative. It is also noteworthy that they are nearly identical; the physical interpretation of this relation will be discussed in Section 3.6.
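Schematically, the nAttractor shortcut amounts to the chain rule below (a sketch of the logic, not the paper's exact display):

```latex
% Temperature enters only through the horizon displacement, so for any
% quantity X evaluated on the BPS geometry,
\partial_T X = \left.\frac{dX/dr_+}{\,dT/dr_+\,}\right|_{r_+ = r_+^*} ,
% applied to X = S (giving C_T/T) and to X = \Phi, \Omega (giving
% \partial_T\Phi and \partial_T\Omega).
```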
Extremal Near-BPS Thermodynamics
In this subsection, we perturb a BPS configuration by changing charges J and Q so the constraint (3.24) is no longer satisfied but extremality is maintained. Thus the temperature remains zero: the mass is at its minimum possible value, albeit for the "new" charges. This perturbation is complementary to the temperature/mass deformation studied in the previous subsection.
The starting point is a BPS state characterized by rotation parameter $a$ and the BPS assignments $r_+^{*2}$ (3.13) and $q^*$ (3.15) corresponding to that value of $a$. Variations $\delta r_+^2 = r_+^2 - r_+^{*2}$ and $\delta q = q - q^*$ away from the BPS values generally modify the temperature $T$ to the value given in (3.22). It remains zero at linear order exactly when these perturbations are correlated as in (3.36). Furthermore, for variations correlated in precisely this manner, (3.21) yields the greatly simplified formula (3.37) for the parametric mass, which expresses the amount by which the energy exceeds the BPS mass. The reason for this excess is that the BPS bound cannot be saturated when the constraint is not satisfied. A convenient measure of the distance from the BPS line along the extremal surface is the combination of potentials (3.38), which manifestly vanishes on the BPS line where $\Phi = \Phi^* = 2$ and $\Omega = \Omega^* = g = \ell_7^{-1}$. Using the formulae (3.27) and (3.28) for $\Phi$ and $\Omega$ we expand to linear order in $\delta r_+^2$ and $\delta q$, followed by simplification using the relation (3.36). This yields the simple expression (3.39): the composite potential $\varphi$ measures the relative change of $q$ as it departs from $q^*$ along the extremal surface. Moreover, the physical parameters $M, J, Q$ given in (3.4) are all proportional to $q$ at linear order, since (3.37) equates $m$ and $q$ up to terms of quadratic order. We therefore interpret the potential $\varphi$ as the generator of a scale transformation that acts on the entire black hole geometry, as implemented through rescaling of $M, J, Q$ by a common factor. The numerical factor 2 in (3.39) shows that $\varphi$ is 2 times the relative rescaling of these physical parameters.
The change of the parameter q as we depart from the BPS line while maintaining zero temperature inevitably changes both the electric potential Φ and the rotational velocity Ω.
We express this dependence through the derivatives (3.40)-(3.41). These relations satisfy the combination expected from (3.38). Scale transformations with $\varphi > 0$ are preferred because they decrease the rotational velocity below $\Omega^* \ell_7 = 1$, which corresponds to the speed of light in the dual boundary theory. They increase $\Phi$ above its critical value $\Phi^* = 2$.
As we have already stressed, motion away from the BPS line with temperature fixed necessarily increases the energy. In analogy with electrodynamics, the capacitance is the response coefficient measuring energy as $\varphi$ increases. We introduce a coefficient $C_\varphi$ that is proportional to the temperature $T$ (but evaluated as $T \to 0$) through the mass formula (3.43). The expression (3.37) for the parametric mass $m$ gives (3.44). Alternatively, the changes in potentials (3.41) and the scaling transformations $dQ = \frac{1}{2}Q\,d\varphi$, $dJ = \frac{1}{2}J\,d\varphi$ give the same expression for the capacitance $C_\varphi$ as before, as demanded by the first law of thermodynamics (3.19).
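The electrodynamics analogy can be summarized as follows; this is a schematic form, where the factor of 1/2 and the finite $T \to 0$ limit of $C_\varphi/T$ are assumptions consistent with the description of (3.43)-(3.45):

```latex
% Capacitance-like response within the extremal surface T = 0:
M - M^* \simeq \frac{1}{2}\,\frac{C_\varphi}{T}\,\varphi^2 ,
\qquad
\frac{C_\varphi}{T}\ \text{finite as}\ T \to 0 ,
% mirroring the electrostatic energy U = \tfrac{1}{2} C V^2 .
```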
We have been careful to define the coefficient $C_\varphi$ entirely through properties within the extremal surface $T = 0$. It is therefore interesting that numerically it is simply related to $C_T$, where $C_T$ is as given in (3.30).
Near-BPS Thermodynamics
Having explored the two independent deformations of a BPS configuration, in this subsection we put them together to explore the entire near-BPS region of parameter space. Taking advantage of (3.21) we expand the mass $M$ around its BPS value and find the quadratic expression (3.47). This is simply a sum of the independent contributions from $T$ (3.31) and $\varphi$ (3.43), with no interplay between the two deformations. We want to understand why the increase in mass takes this form. We begin by introducing another thermodynamic coefficient, $C_E$. Consider the amount by which the black hole entropy $S$ exceeds its BPS value $S^*$. At linear order, the difference between the general form of the entropy $S(q, r_+, a)$ (3.25) and its BPS limit $S^*(a)$ (3.17) yields terms proportional to $q - q^*$ and $r_+ - r_+^*$. These perturbations are equivalent to the small physical potentials $T$ and $\varphi$ via (3.22) and (3.39). Therefore, the differential change in entropy can be expanded as in (3.48). Explicit computation shows that the coefficient $C_T$ introduced here agrees with its namesake in (3.47), as demanded by the first law of thermodynamics. The coefficient of $\varphi$ is a new response coefficient that takes the value (3.49). Now, we must be careful because $C_E$ is subject to an ambiguity. In the preceding paragraph we specified for definiteness $S^*$ as the function of $a$ given in (3.17). Its differential $dS^*$ is proportional to $da$, which is along the BPS surface. However, it may be more appropriate to specify the BPS entropy $S^*$ as a function of the charges $J, Q$. The resulting differentials $dJ, dQ$ do not generally respect the constraint between charges $h = 0$. Therefore, they may include a contribution normal to the BPS surface, in the direction of $d\varphi$. Thus the value of $C_E$ depends on the reference point $S^*$.
There is a simple method to take this dependence into account. In the near-BPS regime the parameters $m$ and $q$ are proportional up to quadratic corrections (3.21). Therefore, in this regime $J$ and $Q$ (3.4) are both functions of $a$, except for an overall factor of $q$. We already noted that the differential $da$ is entirely within the BPS surface. However, $q$ and $\varphi$ are closely related (3.39), so the overall factor $q$ yields simple additional terms
$$dQ = \tfrac{1}{2} Q^*\, d\varphi + \dots\,, \qquad dJ = \tfrac{1}{2} J^*\, d\varphi + \dots\,, \qquad (3.50)$$
where the dots refer to the differential $da$ within the BPS surface. These formulae determine the dependence of $dS^*$ on $d\varphi$ when $S^*$ is presented as a function of the charges $Q, J$ rather than just $a$.
The resulting coefficient (3.52) is always positive in the entire regime $0 < ag < 1$. This designation of the reference $S^*$ assigns positive entropy to all perturbations with $\varphi \geq 0$. The method for analyzing $dS^*$ can be applied to other functions of $(J, Q)$ as well. An important example is the height function $h$ itself. Any surface $h = \text{constant}$ is characterized by the one-form $dh$, and the value of the "constant" measures its distance from the BPS surface $h = 0$. The potential $\varphi$ is another such measure so, according to the general theory of surfaces, $h$ must be proportional to $\varphi$ at linear order. The constant of proportionality follows from the rule (3.50): $h = \alpha\varphi$, with the constant $\alpha$ (3.53) strictly positive in the entire regime $0 < ag < 1$.
We are now in a position to better understand the first law of thermodynamics in the near-BPS region (3.55). The departures of the potentials $\Phi$ and $\Omega$ from their BPS values are given by the sum of their extremal near-BPS (3.40-3.41) and BPS near-extremal (3.33-3.34) contributions. Alternatively, $\Phi - \Phi^*$ and $\Omega - \Omega^*$ can be calculated from their general forms (3.27-3.28) by expanding to linear order in $r_+ - r_+^*$ and $q - q^*$ and then trading these variables for $T$ and $\varphi$. We already presented a convenient rule (3.50) that relates the differentials $dQ$ and $dJ$ to the potential $d\varphi$, so we can present the result in that form. The values of the linear response coefficients $C_E$ and $C_\varphi$ that follow from the computation here agree with those given earlier. As we stressed previously, the precise value of $C_E$ depends on the reference value for the BPS entropy $S^*$. We added the term $T\,dS^*$ so the equality holds no matter the reference value $S^*$, as long as it is consistently applied.
Combining the partial result above with (3.48), we find that the terms with the coefficient $C_E$ cancel precisely and the remaining terms satisfy the first law, with the mass term as expected.
The final physical variable we will study is the Gibbs free energy, particularly its dependence on $T$ and $\varphi$ in the near-BPS limit. Since the excitation energy (3.47) is quadratic in $T$ and $\varphi$, the first term is negligible. Moreover, since $\Phi - \Phi^*$ and $\Omega - \Omega^*$ are first order in $T$ and $\varphi$, we can replace $Q$, $J$ and $S$ by their BPS values when computing the free energy at first order. The full expression then shows that the coefficient of $\varphi$ is always negative, while the coefficient of $T$ switches sign depending on the value of $a$.
We will now proceed to study the free energy from a microscopic point of view.
BPS Entropy Extremization
The partition function $Z$ of a black hole is defined in Euclidean quantum gravity as the on-shell action. It depends on potentials that are specified as asymptotic boundary conditions on spacetime. While it is a thermodynamic quantity in gravity, it is identified in the dual microscopic description as a trace over quantum states that, in the context of AdS7, we introduce below, where the BPS mass is $M^* = 2\Phi^* Q + 3\omega^* J$. Here, and in all microscopic considerations that follow in the subsequent subsections, we simplify units so $M$ and $Q$ are dimensionless. This amounts to taking $\ell_7 = g^{-1} = 1$.
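A conventional way to write this trace and its reduced potentials is the following sketch; the Boltzmann-weight normalization is an assumption, with only the identifications $\mathrm{Re}\,\Delta = \partial_T\Phi$, $\mathrm{Re}\,\omega = \partial_T\Omega$ and the constraint (3.61) taken from the text:

```latex
% Grand canonical trace and reduced potentials (assumed conventions):
Z = \operatorname{Tr}\, e^{-\beta\left(M - 2\Phi Q - 3\Omega J\right)} ,
\qquad
\Delta = \beta\,(\Phi - \Phi^*) , \quad \omega = \beta\,(\Omega - \Omega^*) ,
% so that, as \beta \to \infty at extremality,
% Re(\Delta) -> \partial_T\Phi and Re(\omega) -> \partial_T\Omega .
```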
However, as we have stressed repeatedly, the BPS limit is stronger than extremality. In the microscopic theory supersymmetry is conveniently implemented as an index, which restricts the potentials to the complex locus
$$2\Delta - 3\omega = 2\pi i\,. \qquad (3.61)$$
The BPS partition function therefore becomes a function of two complex potentials, and only their real parts can be identified with gravitational potentials, through $\mathrm{Re}\,\Delta = \partial_T\Phi$ and $\mathrm{Re}\,\omega = \partial_T\Omega$. For the Kerr-Newman AdS7 black hole we study in this section, microscopic considerations give the BPS partition function [38,48-51]
$$\ln Z(\Delta, \omega) = \frac{N^3}{24}\,\frac{\Delta^4}{\omega^3}\,.$$
The black hole entropy is defined in the microcanonical ensemble, where conserved charges are specified. The Legendre transform from the canonical ensemble is conveniently implemented by the entropy function, where $\Lambda$ is a Lagrange multiplier that enforces the constraint (3.61) on the potentials. The extremization conditions of the entropy function (3.64-3.66) simplify the entropy function at its extremum so that
$$S = -2\pi i \Lambda\,, \qquad (3.67)$$
and show that the Lagrange multiplier $\Lambda$ satisfies a quartic equation (3.68) with coefficients $A$-$D$; the last of these, in its $ag$-form, reads
$$D = \frac{N^{12}\,(2ag)^{12}\left(3 + 21ag + 54a^2g^2 + 66a^3g^3 + 15a^4g^4 + a^5g^5\right)}{(1+3ag)^6\,(1-ag)^{12}}\,. \qquad (3.72)$$
For each coefficient the second expression introduces the dimensionless parameter ag by rewriting the conserved charges using (3.16).
All the coefficients in the quartic equation (3.68) are real so, for the entropy (3.67) to be real, the polynomial must have at least one pair of purely imaginary conjugate roots. Therefore, it must take the form
$$(\Lambda + \alpha)(\Lambda + \beta)\left(\Lambda^2 + \gamma\right) = 0\,. \qquad (3.73)$$
Comparing the coefficients in this quartic polynomial with those of (3.68) gives $\alpha + \beta = A$ and $\gamma(\alpha+\beta) = C$ from the odd powers of $\Lambda$. The even powers similarly yield the product $\alpha\beta$ and the consistency condition (3.75). These identifications determine the BPS entropy: the formula $\gamma = C/A$ gives the simplest expression, $S = 2\pi\sqrt{C/A}$, and the consistency condition (3.75) gives an interesting alternate form. The coefficients $A$-$D$ defined in (3.69-3.72) are such that the two expressions agree precisely with (3.18) and (3.23). Moreover, the consistency condition (3.75) is exactly the constraint on charges (3.24) that is required for supersymmetry.
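The algebra just described is mechanical enough to automate. The sketch below implements it, assuming only the relations stated above; the quartic coefficients A-D, defined in (3.69)-(3.72), are treated as numerical inputs:

```python
# BPS entropy from a real quartic p(L) = L^4 + A L^3 + B L^2 + C L + D that
# factorizes as (L + alpha)(L + beta)(L^2 + gamma) = 0 when it admits a purely
# imaginary conjugate root pair. Matching powers gives gamma = C/A,
# alpha*beta = A*D/C, and the consistency condition C/A + A*D/C = B.
# With Lambda = i*sqrt(gamma), the entropy is S = -2*pi*i*Lambda = 2*pi*sqrt(C/A).
import math

def bps_entropy(A: float, B: float, C: float, D: float, tol: float = 1e-9) -> float:
    """Return S = 2*pi*sqrt(C/A) if the quartic admits purely imaginary roots."""
    gamma = C / A                  # from the odd powers of Lambda
    alpha_beta = A * D / C         # from the constant term
    # Consistency condition from the quadratic term (the BPS charge constraint):
    if abs(gamma + alpha_beta - B) > tol * max(1.0, abs(B)):
        raise ValueError("charges violate the BPS constraint h = 0")
    if gamma <= 0:
        raise ValueError("no purely imaginary root pair; entropy not real")
    return 2.0 * math.pi * math.sqrt(gamma)
```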
The derivation of the intricate formulae for the entropy and the constraint from a simple free energy is very suggestive but not entirely satisfying. The requirement that the entropy be real is essential for reaching these results but there is no clear reason that the Legendre transform cannot be dominated by a complex saddle-point. The reality condition on the entropy is a prescription that is evidently correct but it still awaits a principled physical explanation.
NearBPS Potentials
The entropy extremization principle for BPS black holes can be leveraged to describe nearBPS physics as well. For example, in the following subsection, we will be able to derive the specific heat C T and the linear response coefficient C E defined from gravity in (3.92). These parameters pertain unambiguously to physical properties away from the BPS surface.
The extremization of the BPS entropy function in the preceding subsection determines the potentials $\Delta$ and $\omega$ at the extremum, given in (3.77-3.78). Here the purely imaginary value of the Lagrange multiplier $\Lambda = i\sqrt{C/A}$ is understood. The BPS potentials $\Delta, \omega$ are genuinely complex: they have nontrivial real and imaginary parts. As we discussed below (3.61), in the extremal (but not necessarily BPS) limit $\beta \to \infty$ we identify their real parts with the thermal derivatives $\partial_T\Phi$ and $\partial_T\Omega$, respectively. A more general departure from the BPS surface that allows temperature as well as violation of the constraint is described by the complex variable $\varphi + 2\pi i T$. This suggests identifying the complex potentials $\Delta, \omega$ as derivatives of $\Phi$ and $\Omega$ with respect to $\varphi + 2\pi i T$. This rule yields the real parts
$$\mathrm{Re}\,\Delta = -\,\frac{12\pi a g (1-ag)}{3 + 10ag + 19a^2g^2}\cdot\frac{1+3ag}{3+ag} = \partial_T\Phi\,, \qquad (3.79)$$
$$\mathrm{Re}\,\omega = -\,\frac{8\pi a g (1-ag)}{3 + 10ag + 19a^2g^2}\cdot\frac{1+3ag}{3+ag} = \partial_T\Omega\,, \qquad (3.80)$$
consistent with $2\,\mathrm{Re}\,\Delta = 3\,\mathrm{Re}\,\omega$. The reasoning leading us to the real and imaginary parts of $\Delta, \omega$ takes advantage of the complex formulae (3.77-3.78), which are cast in terms of the conserved charges $Q$ and $J$. We have refrained from employing the charges $Q, J$ in those formulae, opting instead for the coordinate $ag$ on the BPS locus that facilitates comparisons with gravity. Conceptually, it may seem preferable to retain conserved charges in all final results. This inclination is equivalent to finding a function $a(J, Q)$, with the understanding that the result is non-unique due to the BPS constraint $h = 0$. It is not difficult to do so, at least in principle. For example, we can rearrange (3.81) as a quadratic equation for $ag$, with the solution
$$a = \frac{2\pi - 5\,\mathrm{Im}\,\Delta - 2\sqrt{\pi^2 + 16\pi\,\mathrm{Im}\,\Delta - 8\,(\mathrm{Im}\,\Delta)^2}}{\left(28\pi - 19\,\mathrm{Im}\,\Delta\right) g}\,, \qquad (3.83)$$
where $\mathrm{Im}\,\Delta$ follows from (3.77). While this is a closed form for $a$, which can in principle give us all BPS potentials and conserved charges in terms of $J$ and $Q$, the expressions are messy and do not seem illuminating. We will arrive at a more concise expression below using the first law of thermodynamics.
NearBPS Entropy
In this subsection we introduce a near-AdS extremization principle that will account for the entropy in the near-BPS region. We follow the AdS4 discussion in subsection 2.7.4. Thus we take the configuration space identified by BPS considerations for granted. However, noting that the physical BPS states result from a larger phase space upon imposing a constraint, we consider the additional states that result by relaxing the constraint from its strict BPS version (3.61) to
$$2(\Phi - \Phi^*) - 3(\Omega - \Omega^*) = \varphi + 2\pi i T\,. \qquad (3.84)$$
The primary change from the BPS case is that now it is the boundary conditions (3.84) that are imposed by the Lagrange multiplier $\Lambda$. Accordingly, the extremization equations from the BPS considerations (3.64-3.65) are unchanged, except for the constraint (3.66), which is modified to (3.84). The extremal value of the free energy is then
$$T S = -(\varphi + 2\pi i T)\,\Lambda\,. \qquad (3.86)$$
The significant difference between the BPS and near-BPS cases is that the latter does not guarantee a purely imaginary root of the quartic equation (3.68), so the factorized form (3.73) does not apply in general. Without making any assumptions on the coefficients, we can rewrite the quartic equation (3.68) in a form adapted to the deformation. In the extremal case the height function $h$ is related to the symmetry breaking parameter as $h = \alpha\varphi$, where the proportionality constant $\alpha$ was previously defined in (3.53). Nonvanishing temperature can be taken into account by the complexification $\varphi \to \varphi + 2\pi i T$, so that the black hole entropy above and beyond the BPS contribution follows, where the BPS entropy $S^*$ used for reference is (3.18). The excess entropy $S - S^*$ we find here takes the same form as the gravitational formula (3.48). Moreover, the linear response coefficients $C_E/T$ (3.51) and $C_T/T$ (3.30) agree exactly. | 2020-10-12T01:01:05.256Z | 2020-10-09T00:00:00.000 | {
"year": 2020,
"sha1": "e779c3288057525d20de13c0e7f8bb18e99e97dd",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP10(2021)198.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "e779c3288057525d20de13c0e7f8bb18e99e97dd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
167208932 | pes2o/s2orc | v3-fos-license | Structures and mechanism of transcription initiation by bacterial ECF σ factors
Abstract Bacterial RNA polymerase (RNAP) forms distinct holoenzymes with extra-cytoplasmic function (ECF) σ factors to initiate specific gene expression programs. In this study, we report a cryo-EM structure at 4.0 Å of Escherichia coli transcription initiation complex comprising σE—the most-studied bacterial ECF σ factor (Ec σE-RPo), and a crystal structure at 3.1 Å of Mycobacterium tuberculosis transcription initiation complex with a chimeric σH/E (Mtb σH/E-RPo). The structure of Ec σE-RPo reveals key interactions essential for assembly of E. coli σE-RNAP holoenzyme and for promoter recognition and unwinding by E. coli σE. Moreover, both structures show that the non-conserved linkers (σ2/σ4 linker) of the two ECF σ factors are inserted into the active-center cleft and exit through the RNA-exit channel. We performed secondary-structure prediction of 27,670 ECF σ factors and find that their non-conserved linkers probably reach into and exit from RNAP active-center cleft in a similar manner. Further biochemical results suggest that such σ2/σ4 linker plays an important role in RPo formation, abortive production and promoter escape during ECF σ factors-mediated transcription initiation.
INTRODUCTION
Bacterial σ factors are key components of the bacterial RNAP holoenzyme. During transcription initiation, the σ factors associate with the RNAP core enzyme, guide the transcription machinery to promoter regions of genes, unwind double-stranded promoter DNA, and facilitate de novo RNA synthesis (1-4). The genomes of bacteria comprise one primary σ factor (or group-1 σ factor; σ70 in Escherichia coli) maintaining expression of the majority of genes, and a collection of alternative σ factors in control of subsets of genes responding to certain intracellular and environmental signals (5,6).
The alternative σ factors comprise three groups of σs belonging to the σ70 family (group-2, 3 and 4 σs) and one group of σs belonging to the σ54 family (1). The group-2 σ factors (σ38 in E. coli) contain all σ domains except σ domain 1.1 and recognize promoters very similar to those of the group-1 σ factor (primary σ factor). The group-3 σ factors (σ32 or σ28 in E. coli) lack σ domains 1.1 and 1.2 and recognize promoters distinct from those of the group-1 σ factor. The group-4 σ factors (also known as extra-cytoplasmic function σ factors; ECF σ factors) only retain the conserved domains σ2 and σ4. ECF σ factors are the most abundant, compact and divergent σ factors (1,3). They are important for stress adaption of most bacteria and are associated with virulence and drug resistance of pathogenic bacteria (6,23-26). ECF σ factors recognize promoters with stringent specificity and have been engineered into orthogonal transcriptional elements for constructing gene circuits (27-29).
Escherichia coli σE (σ24) is an essential ECF σ factor. It maintains cell envelope integrity both under stress conditions (heat-shock, acid or oxidative stresses) and during normal growth (30); it also participates in biofilm formation and drug resistance of pathogenic E. coli (31,32). The activation of σE is induced by mis-folded proteins in the periplasm under cell envelope stress, which triggers a cascade of protease cleavage resulting in release of σE into the cytoplasm (33). σE subsequently forms a holoenzyme with RNAP and directly upregulates expression of ∼100 protein-encoding genes that are involved in transport and assembly of outer membrane proteins and lipopolysaccharide to relieve stress. It also indirectly downregulates expression of outer membrane proteins by activating transcription of their small regulatory RNAs (MicA, RybB and MicL) to reduce protein load (34,35).
Escherichia coli σE contains two conserved domains (σE2 and σE4) and a non-conserved σE2/σE4 linker, as do other bacterial ECF σ factors. Escherichia coli σE recognizes promoters with consensus sequences at the −35 and −10 elements of 'GGAACTT' and 'GTC', respectively (36-38). Structures of transcription initiation complexes comprising two other ECF σ factors were reported recently (41,42). The structures together revealed interactions among ECF σ factors, RNAP core enzyme and promoter DNA, and surprisingly showed that the σ2/σ4 linkers of the two ECF σ factors interact with RNAP core enzyme in an analogous way as the σ3.2 of the primary σ factor does: the linker inserts into the active-center cleft and exits out through the RNA-exit channel (43,44). As the σ2/σ4 linkers of ECF σ factors are highly divergent in length and sequence, it is intriguing to know whether the σ2/σ4 linkers of other ECF σ factors interact with RNAP in a similar manner, how RNAP manages to accommodate such extremely variable structural modules using one binding site, and more importantly what role the linkers play during transcription initiation.
In this study, we determined a cryo-EM structure at 4.0 Å of an E. coli transcription initiation complex comprising E. coli σE, and a crystal structure of an M. tuberculosis transcription initiation complex comprising a chimeric M. tuberculosis σH/E factor. The structures reveal protein-protein interactions essential for RNAP holoenzyme assembly, and protein-DNA interactions critical for promoter recognition and unwinding by E. coli σE. More importantly, the structures show that the σ2/σ4 linkers of E. coli σE and M. tuberculosis σE insert into the active-center cleft of RNAP and interact with template single-stranded DNA as do the σ2/σ4 linkers of M. tuberculosis σH and σL, despite no sequence similarity of the linker regions. Structure prediction of 27,670 bacterial ECF σ factors shows that the σ2/σ4 linkers of ECF σ factors retain similar secondary structures at the end regions, indicating that the σ2/σ4 linkers, albeit highly divergent in sequence, probably follow the same path to enter and exit the active center of RNAP. We demonstrated that the σ2/σ4 linker is essential for ECF σ factor-initiated transcription, probably by facilitating several steps including RPo formation, synthesis of initial short RNA transcripts, and promoter escape.

MATERIALS AND METHODS

Plasmids

Plasmids used in this study are listed in Supplementary Table S1. The pEASY-prpoH, pEASY-prpoE, pEASY-psigM, pEASY-psigW, pEASY-psigB and pEASY-pClpB were constructed by inserting the promoter region (−50 to +50) of the respective genes amplified from genomic DNA into the pEASY-blunt vector (Transgen biotech, China).
Proteins
The wild-type or derivative bacterial ECF σ factors were over-expressed in E. coli BL21(DE3) cells (NovoProtein), and purified from soluble fractions using Ni-NTA (SMART, Inc.) and Heparin columns (GE Healthcare). The Mtb σE2 and Bs σE2 were obtained from the inclusion body. The E. coli, M. tuberculosis, and B. subtilis RNAP core enzymes were over-expressed in E. coli BL21(DE3) and sequentially purified on a Ni-NTA affinity column, a Mono Q ion-exchange column, and a Superdex S200 size-exclusion column.
Crystallization and structure determination of Mtb σH-RPo and σH/E-RPo
The Mtb σH-RPo and σH/E-RPo complexes for crystallization were prepared by reconstitution. The Mtb RNAP core enzyme, Mtb σH (or σH/E), and nucleic-acid scaffolds (Figure 3A) were mixed at a 1:4:1.2 molar ratio and incubated at 4 °C overnight. The RPo complexes were purified using a Hiload 16/60 Superdex S200 column (GE Healthcare, Inc.) and stored in 20 mM Tris-HCl pH 8.0, 0.1 M NaCl, 1% (v/v) glycerol, 1 mM 1,4-dithiothreitol (DTT) at a concentration of 7.5 mg/ml. Crystals of Mtb σH-RPo were obtained from 0.08 M magnesium acetate, 0.05 M sodium cacodylate pH 6.5, 15% PEG400; and crystals of Mtb σH/E-RPo were obtained from 0.2 M sodium acetate, 0.1 M sodium citrate pH 5.5, 10% PEG4000. The X-ray diffraction data were collected at Shanghai Synchrotron Radiation Facility (SSRF) beamlines 17U and 19U, and the structures were solved by molecular replacement with Phaser MR using the structure of M. tuberculosis RNAP holoenzyme (PDB ID: 5ZX3).
Cryo-EM structure determination of E. coli σE-RPo
The E. coli σE-RPo was obtained by reconstitution with E. coli RNAP core enzyme, E. coli σE, and a nucleic-acid scaffold as above (Figure 1A). The E. coli σE-RPo was concentrated to ∼15 mg/ml and stored in 10 mM Hepes pH 7.5, 50 mM KCl, 5 mM MgCl2, 3 mM DTT. The E. coli σE-RPo was mixed with CHAPSO (Hampton Research Inc.) to a final concentration of 8 mM prior to grid preparation. The complex (3 μl) was subsequently applied onto glow-discharged C-flat CF-1.2/1.3 400 mesh holey carbon grids (Protochips, Inc.), and plunge-frozen in liquid ethane using a Vitrobot Mark IV (FEI). The grids were loaded into a 300 keV Titan Krios (FEI) equipped with a K2 Summit direct electron detector (Gatan) and a dataset was collected. The electron density map was obtained by single-particle reconstruction with RELION2.1. Gold-standard Fourier-shell-correlation analysis indicated a mean map resolution of 4.02 Å. The structure model was built in Coot and refined in Phenix.
Stopped-flow assay
The promoter for the stopped-flow assay was prepared as in Supplementary Figure S5A. To monitor the efficiency of RPo formation by E. coli RNAP holoenzymes comprising wild-type or derivatives of E. coli σE, 60 μl σE-RNAP holoenzyme (200 nM) and 60 μl Cy3-PrpoE (4 nM) in 10 mM Tris-HCl, pH 7.7, 20 mM NaCl, 10 mM MgCl2, 1 mM DTT were rapidly mixed, and the change of Cy3 fluorescence was monitored in real time on a stopped-flow instrument (SX20, Applied Photophysics Ltd, UK) equipped with an excitation filter (515/9.3 nm) and a long-pass emission filter (570 nm). The data were plotted in SigmaPlot (Systat Software, Inc.) and the observed rates kobs,1 and kobs,2 were estimated as described in the Supplementary Materials and Methods.
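The two observed rates can be extracted by fitting each trace to a double-exponential rise. The following Python sketch illustrates one such fit; the model form, initial guesses and parameter names are assumptions, since the exact fitting protocol is given in the Supplementary Materials and Methods:

```python
# Two-phase fit of a stopped-flow Cy3 fluorescence trace to estimate
# k_obs,1 (fast phase) and k_obs,2 (slow phase).
import numpy as np
from scipy.optimize import curve_fit

def two_phase(t, f0, a1, k1, a2, k2):
    """Baseline plus fast and slow exponential phases."""
    return f0 + a1 * (1.0 - np.exp(-k1 * t)) + a2 * (1.0 - np.exp(-k2 * t))

def fit_trace(t, fluorescence):
    p0 = [fluorescence[0], 0.5, 1.0, 0.5, 0.1]       # rough initial guesses
    bounds = ([-np.inf, 0, 0, 0, 0], np.inf)          # amplitudes, rates >= 0
    popt, _ = curve_fit(two_phase, t, fluorescence, p0=p0, bounds=bounds)
    f0, a1, k1, a2, k2 = popt
    return {"k_obs_1": max(k1, k2), "k_obs_2": min(k1, k2)}
```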
Fluorescence polarization (FP) competitive assay
The E. coli σE was labeled with fluorescein at residue C165. The affinity of E. coli RNAP core enzyme for wild-type E. coli σE was first determined as ∼53 nM by an FP assay. An FP competition assay was further employed to compare the affinities of wild-type and derivatives of E. coli σE for the RNAP core enzyme. Label-free E. coli σE (0, 2.5, 5, 10, 20, 40, 80, 160, 320, 640, 1280, 2560 or 5120 nM; final concentration) pre-mixed with fluorescein-labeled E. coli σE (5 nM; final concentration) was incubated with E. coli RNAP core enzyme (100 nM; final concentration) in FP buffer at room temperature for 20 min. The FP signals were measured using a plate reader (SPARK, TECAN Inc.) equipped with an excitation filter of 495/10 nm and an emission filter of 520/20 nm. The data were plotted in SigmaPlot (Systat Software, Inc.) and the IC50 values were estimated as described in the Supplementary Materials and Methods.
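IC50 values from such competition curves are conventionally estimated with a four-parameter logistic fit. The sketch below is one plausible implementation; the function form and the placeholder data are assumptions, not the paper's exact procedure:

```python
# Four-parameter logistic fit of an FP competition curve to estimate IC50.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, top, bottom, ic50, hill):
    # FP signal (mP) as a function of competitor concentration (nM)
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

competitor_nM = np.array([2.5, 5, 10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120])
fp_signal = logistic4(competitor_nM, 200.0, 60.0, 150.0, 1.0)  # placeholder data

popt, _ = curve_fit(logistic4, competitor_nM, fp_signal,
                    p0=[fp_signal.max(), fp_signal.min(), 100.0, 1.0])
print(f"estimated IC50 ~ {popt[2]:.0f} nM")
```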
RESULTS

The cryo-EM structure of E. coli σE-RPo
To obtain a structure of E. coli σE-RPo, we reconstituted the E. coli σE-RPo complex with E. coli RNAP core enzyme, E. coli σE and a nucleic-acid scaffold (Figure 1A). We obtained a cryo-EM map at 4.0 Å for the E. coli σE-RPo complex, with local resolution at the active-center cleft of RNAP of around ∼3 Å (Figure 1C; Supplementary Figure S1B-E and Table S3). The map shows clear density for residues of σE2 (residues 5-87) and σE4 (residues 131-190) and all residues of the σE2/σE4 linker (residues 88-130) (Figure 1E and Supplementary Figure S2B). The map also shows clear density for the upstream dsDNA, the template and non-template ssDNA of the transcription bubble, the RNA/DNA hybrid and the downstream dsDNA (Figure 1F and Supplementary Figure S2D). The RNAP clamp in the structure of σE-RPo adopts a closed conformation as in other bacterial transcription RPo complexes (Supplementary Figure S3A) (10,45-48). The template ssDNA and non-template ssDNA follow the same path as in the structure of M. tuberculosis σH-RPo (Supplementary Figure S3B) (41).
In the structure of σE-RPo, the domains σE2 and σE4 are located on the surface of RNAP (Figure 1D). σE2 attaches to the clamp helices of the RNAP β′ subunit via a polar surface, as the σ2 of other σ70-family σ factors does (Figure 1D and Supplementary Figure S3C) (48,49). The residues on the interface are conserved (Supplementary Figure S4A). Intriguingly, σE4 uses a distinct hydrophobic surface to bind the tip helix of the RNAP β flap domain (FTH). The interface residues include V131, F132, I135, L151, I181, V185 and I189 of σE4 and E898, L901, L902, I905 and F906 of the FTH. Such interaction induces a 90° rotation of the FTH, where the FTH is further stabilized by the extended hydrophobic surface created by residues I121, L123 and L127 of the σE2/σE4 linker (Figure 2A and B).
Promoter recognition and unwinding by E. coli σE
The structure of E. coli σE-RPo is superimposable on the binary structure of E. coli σE4/−35-element promoter dsDNA (Figure 2C), supporting the previous conclusion that the σ4 of bacterial ECF σ factors reads the sequence and shape of the −35 dsDNA (12). The structure of E. coli σE-RPo is also superimposable on the binary structure of E. coli σE2/−10-element promoter ssDNA (Figure 2D). In particular, the T−10 and C−9 of the non-template strand are inserted into two protein pockets (Figure 2E) in exactly the same manner as in the structure of E. coli σE2/−10 ssDNA. The DNA-protein interactions are sequence specific, as swapping the 'specificity loop' of E. coli σE altered the specificity for the −10 element (39).
The structure suggests that N80 might serve as a wedge to separate the base pair at position −10. To explore the contribution of this residue to promoter unwinding, we modified a stopped-flow assay to monitor RPo formation by E. coli σE-RNAP, in which the fluorescence of a Cy3 fluorophore at the +1 position on the non-template strand DNA increases upon RPo formation (Figure 2F and Supplementary Figure S5A). Similar assays have been used to measure the kinetics of RPo formation by the primary σ factor (50-52). As shown in Figure 2F, the fluorescence rapidly increases and reaches a plateau within 5 s after mixing the σE-RNAP with promoter DNA, while RNAP core enzyme induces no change of fluorescence, validating the assay. The kinetics of RPo equilibration is two times slower for the σE(N80A)-RNAP holoenzyme compared with wild-type σE-RNAP, suggesting a role of N80 during RPo formation, probably by facilitating promoter unwinding (Figure 2F).
Interestingly, mutations of the protein pockets on σE for T−10 and C−9 (F64A or W73A) also exhibited slowed RPo equilibration (Figure 2F), indicating that RPo equilibration could be accelerated by securing the unwound nucleotides. It is worth noting that all curves could be perfectly fitted with typical two-phase kinetics (a fast phase and a slow phase), suggesting the existence of a significant intermediate (RPi) on the path toward RPo (Figure 2F and Supplementary Figure S5C-E). Alanine substitutions of N80, F64 or W73 slow down the kinetics of both phases (Supplementary Table S5).
The above evidence supports the conclusion from a previous study that E. coli σE unwinds the promoter at the −11/−10 junction (39), similar to M. tuberculosis σH (41), but different from E. coli σ70 (13), which unwinds promoter DNA at a position 1 bp downstream of that used by the ECF σ factors (Supplementary Figure S3E-H) (9,41,42). Structure superimposition (M. tuberculosis σH-RPo, E. coli σ70-RPo, and E. coli σE-RPo) reveals that the melting residues of the primary σA (W433 and W434 for E. coli σ70) and of ECF σ factors (N80 for E. coli σE) are located at slightly different positions (Supplementary Figure S3E-H). Tryptophan substitution of the residues of E. coli σE located at the positions corresponding to the W-dyad of σA (R76W, I77W or R76W/I77W) resulted in a substantial decrease of promoter unwinding efficiency, confirming that σE opens the promoter through a different mechanism than the primary σ factor (Figure 2F and Supplementary Table S5).
The σE2/σE4 linker interacts with the active-center cleft of RNAP
We discovered that the σE2/σE4 linker dives into the active-center cleft of RNAP and emerges out through the RNA-exit channel (Figure 2G). The path inside RNAP of the σE2/σE4 linker is remarkably similar to that of the σ3.2 of the group-1 σ factor and also to those of the linker regions of two other ECF σ factors (M. tuberculosis σH and σL) (41,42); therefore, we designated the σE2/σE4 linker as the σE 3.2-like linker (Figure 2G and Supplementary Figure S6). The σE 3.2-like linker region could be further divided into three sub-regions: the head (residues 88-98), middle (residues 99-118) and tail (residues 119-130) (Figure 2G). The head sub-region extends the helix of σE2 and enters the active-center cleft through the template-ssDNA channel created by the RNAP β′ lid and rudder motifs. The middle sub-region passes underneath the lid domain and makes a turn toward the RNA-exit channel; it resides in the RNAP active-center cleft and contacts the T−6 nucleotide (Figure 2H). The tail sub-region forms a continuous helix with the first helix of σE4 and exits the RNAP active-center cleft through the RNA-exit channel (Figure 2G).
The σ3.2-like linker of M. tuberculosis σE also inserts into the active-center cleft of RNAP
Considering the fact that there is no similarity in the primary sequences of the σ2/σ4 linker regions of bacterial ECF σ factors (1), we were interested to know whether the linker regions of other bacterial ECF σ factors follow the same path in RNAP. Initial attempts to obtain additional structures of RNAP complexed with ECF σ factors failed. Inspired by the observation that chimeric σ factors with the linker region swapped function normally, and by the idea of determining crystal structures of transcription initiation complexes containing chimeric σ factors (41,42), we sought to obtain crystal structures of Mtb RPo complexes with chimeric σH factors. We first took advantage of the high-resolution crystal form of the Mtb σH-RPo complex. By using the same fork scaffold, we determined a crystal structure at 3.1 Å of Mtb σH/E-RPo comprising the same nucleic-acid scaffold and a chimeric σH/E with the σ2/σ4 linker of σH replaced by that of Mtb σE (Figure 3C). In the structure of Mtb σH/E-RPo, the σ2/σ4 linker region of Mtb σE follows a similar path through the RNAP active-center cleft and makes interactions with the template ssDNA as in other bacterial ECF σ factors, providing further evidence for the conserved interaction mode of the linker region with RNAP (Figure 3D). The σ3.2-like linkers of bacterial ECF σ factors show no conservation in primary sequence (Supplementary Figure S4; Supplementary Files 1 and 2). However, structural comparison of the σ3.2-like linkers of the four available RPo structures comprising ECF σ factors exhibits similar secondary structures for the head and tail sub-regions. Namely, the head sub-regions contain a short helix followed by a short β strand or a coil, while the tail sub-regions are mainly composed of a helix (Figure 4B and C).
The head and tail of σ3.2-like linkers retain conserved secondary structures
To explore whether other bacterial ECF σ factors also retain similar secondary-structure folds for the σ3.2-like linker regions, we performed secondary-structure prediction of the σ3.2-like linker regions of the 27,670 bacterial ECF σ factors using RaptorX-Property and calculated the probability score of secondary structures for each position (53,54). The predictions agree very well with the secondary-structure pattern of the four available structures (Supplementary Figure S7); 85% of residues adopt exactly the same secondary structures as predicted, validating the predictions. More importantly, the predictions show a strikingly conserved pattern of secondary structures for the head and tail sub-regions of σ3.2-like linkers. Namely, ∼80% of ECF σ factors are predicted to contain a short helix followed by a coil in the head sub-region and a short helix in the tail sub-region of the σ3.2-like linkers (Figure 4D).
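A per-position probability score of this kind can be computed by simple tallying over the predicted label strings. The sketch below is a minimal illustration, assuming the RaptorX-Property output has already been parsed into per-residue labels ('H' helix, 'E' strand, 'C' coil) aligned to a fixed number of positions; the parsing itself is not shown:

```python
# Per-position secondary-structure probabilities across a set of linkers.
from collections import Counter

def position_probabilities(predictions, length):
    """predictions: list of per-residue label strings of length `length`."""
    preds = [p for p in predictions if len(p) == length]
    scores = []
    for i in range(length):
        counts = Counter(p[i] for p in preds)
        scores.append({ss: counts.get(ss, 0) / len(preds) for ss in "HEC"})
    return scores

# Example: fraction predicted helical at each of 11 head positions
head = ["HHHHCCCCCCC", "HHHHHCCCCCC", "HHHCCCCCCCC"]
print([round(pos["H"], 2) for pos in position_probabilities(head, 11)])
```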
The σ3.2-like linker plays a pivotal role during transcription initiation
The above evidence suggests that the σ3.2-like linker of most bacterial ECF σ factors probably follows a similar path to enter and exit the active-center cleft of RNAP, implying that the σ3.2-like linker is an indispensable domain that probably plays an essential function. We next explored the functional importance of the σ3.2-like linker. We prepared wild-type and derivatives of well-studied bacterial ECF σ factors (including E. coli σE, B. subtilis σW, B. subtilis σM, M. tuberculosis σE and M. tuberculosis σH) and performed in vitro transcription experiments. The results in Figure 5A clearly showed that deleting the σ3.2-like linker, or replacing it with a disordered sequence, completely abolished the transcription activity of all tested bacterial ECF σ factors. The results suggest that the σ3.2-like linker region is indeed essential for the transcription activity of bacterial ECF σ factors.
To further dissect the steps in which the σ3.2-like linker might be involved during transcription initiation, we studied the assembly of RNAP holoenzyme, the formation of RPo, and the synthesis of abortive and productive transcripts by using wild-type or derivatives of E. coli σE. We developed a competitive FP assay (in which unlabeled wild-type or derivatives of σE compete with [C165-FAM]σE for binding to RNAP core enzyme) to compare the binding affinities of various σ factors. E. coli σA exhibited the strongest inhibition, with an IC50 ∼5-fold lower than that of E. coli σE, which is consistent with the previous finding that σA has a higher affinity than other ECF σ factors (red in Figure 5B and Supplementary Table S4) (55). Deletion of the σ3.2-like linker of σE substantially decreases the binding affinity, with an IC50 ∼20-fold higher than that of E. coli σE (Figure 5B and Supplementary Table S4). However, replacing the head or the tail sub-regions of the σ3.2-like linker of σE with random sequences caused no significant change in the affinity of E. coli σE, while replacing the entire linker with a disordered acidic loop instead slightly increased the binding affinity. The results suggest that the presence of a physical linker (regardless of its protein sequence) between σ2 and σ4 is necessary for maintaining the high affinity of E. coli σE for the RNAP core enzyme (the linker physically ties σE2 and σE4 together and thus greatly increases the affinity of the two domains for RNAP), but the interactions of the linker with RNAP play little role in assembly of the RNAP holoenzyme. The results are also consistent with the fact that bacterial ECF σ factors show the highest conservation scores for RNAP-contacting residues on σ2 and σ4, but show no conservation of any residues on the σ3.2-like linkers (Figure 4A and Supplementary Figure S4). The results also explain why the identities of the −10 element are exclusively recognized on the non-template strand of promoter DNA (10,41,42).

Figure 5 (legend, in part): The data points were recorded every 0.1 s and the data were fitted as described in the 'Materials and Methods' section. The σE3.2 head region (residues 88-98) was replaced by 'GGSSGSGGSSS', resulting in Ec σE(head); the σE3.2 tail region (residues 119-130) was replaced by 'GGSSGSGGGSSS', resulting in Ec σE(tail); the σE3.2 head region (residues 88-98) and tail region (residues 119-130) were replaced by 'GGSSGSGGSSS' and 'GGSSGSGGGSSS', respectively, resulting in Ec σE(head/tail). (D) The in vitro transcription assay with WT or derivatives of E. coli σE. 'Abortive' represents abortive transcripts and 'T' represents terminated transcripts of 82 nt. The in vitro transcription and stopped-flow experiments were repeated three times and representative data are shown. The FP competitive experiments were repeated three times and the data are presented as mean ± S.E.M.
The chimeric E. coli σE factors serve as good materials for subsequent experiments, as they showed affinity for the RNAP core enzyme similar to that of wild-type σE. Therefore, any effects can be attributed to the altered conformation of the σ3.2-like linker or to interactions between the linker and RNAP. We next studied the potential effect on RPo formation using the chimeric E. coli σE factors in a stopped-flow fluorescence assay as described above. All the chimeric E. coli σE factors showed slowed RPo equilibration (Figure 5C and Supplementary Table S6), suggesting a role of the σ3.2-like linker during RPo formation.
To explore the potential role of the σ3.2-like linker of σE in the steps following RPo formation, we performed in vitro transcription assays. As shown in Figure 5D, RNAP holoenzymes comprising chimeric E. coli σE factors produce substantially smaller amounts of abortive as well as full-length products. Intriguingly, RNAP holoenzymes with σE(DL) (the whole linker replaced by a disordered loop), σE(head/tail) (the head and tail regions of the σ3.2-like linker replaced by disordered loops), or σE(R2/R4) (disconnected σE2 and σE4; the σ3.2-like linker completely truncated) still produced abortive transcripts, albeit less efficiently, but produced no full-length products (lanes IV, V and VI in Figure 5D), suggesting that the σ3.2-like linker probably also affects a later step of transcription initiation (i.e. promoter escape).
DISCUSSION
In this work, we have solved a cryo-EM structure of E. coli σE-RPo at 4.0 Å, a crystal structure of M. tuberculosis σH-RPo at 2.9 Å, and a crystal structure of M. tuberculosis σH/E-RPo at 3.1 Å. We included a 5-nt RNA primer (complementary to nucleotides of the template ssDNA at positions −4 to +1) to stabilize the complexes, a strategy that has been used previously for determination of bacterial RPo complexes (10,13,56). The conformation of the 5-bp hybrid in our structures is indistinguishable from that of bona fide bacterial transcription initiation complexes with 5-nt RNA (16,48), although it is not an on-pathway state of transcription initiation.
The structure of E. coli σE-RPo reveals protein-protein interactions essential for σE-RNAP holoenzyme assembly, and protein-DNA interactions essential for promoter recognition and unwinding. More importantly, the four structures of transcription initiation complexes comprising ECF σ factors, together with secondary-structure prediction of the available 27,670 ECF σ factors, show that the σ3.2-like linkers of most bacterial ECF σ factors retain a conserved pattern of secondary structures in the head and tail sub-regions and strongly suggest that the σ3.2-like linkers follow the same path into and out of the active-center cleft of RNAP.
Our study explains how bacterial RNAP manages to accommodate such divergent σ3.2-like linkers and why the primary sequences of σ3.2-like linkers became so divergent during evolution. The head sub-region of σ3.2-like linkers comprises a short helix followed by a coil. The short helix extends the last helix of σ2 and helps guide the σ3.2-like linker into the channel that enters the active-center cleft of RNAP. The short coil forms a β-sheet with the lid domain of the RNAP β′ subunit in three of the four available structures of ECF σ-RPo (Figure 4B). Such an interaction mode explains the poor conservation of primary sequence in this region, as a β-sheet is typically stabilized through main-chain interactions. The tail sub-region of σ3.2-like linkers in the RNA-exit channel forms a long intact helix (occasionally with a kink) with residues of σ4 (Figure 4B). It seems that the channels for entry and exit of the σ3.2-like linkers of ECF σ factors put some evolutionary pressure on the head and tail sub-regions, and consequently certain secondary-structure patterns in the two sub-regions are retained. The middle sub-region of σ3.2-like linkers is located mainly in the active-center cleft, a wide channel for accommodating the DNA/RNA hybrid, which puts much less restraint on indels in this sub-region during evolution; it thereby exhibits varied lengths in primary sequence and diverse secondary structures.
In the case of primary σ factors, the σ3.2 plays an essential role during transcription initiation (10,16,17,21,44,57). It inserts into the active-center cleft of RNAP, where it mimics an RNA molecule, pre-organizes the template ssDNA into a helical conformation, and increases the binding affinity of initiating NTPs. After showing that the σ3.2-like linkers of bacterial ECF σ factors bind to the active-center cleft of RNAP and to the template ssDNA in the transcription bubble in a similar manner to the σ3.2 of primary σ factors (Figures 2G and 3B-C), we demonstrated that the σ3.2-like linker of bacterial ECF σ factors is as crucial to transcription initiation as the σ3.2 of primary σ factors. Deletion of the σ3.2-like linker of bacterial ECF σ factors completely abolished production of full-length transcripts (Figure 5A). We further showed that multiple steps of transcription initiation require proper engagement of the σ3.2-like linker in the active-center cleft of RNAP, as disrupting such interactions resulted in impaired RPo formation, reduced synthesis of abortive transcripts, and defective promoter escape (Figure 5).
Transcription machineries from all three domains of life retain essential structural modules similar to the σ domain 3.2 (Supplementary Figure S6) (65,66). Apparently, distinct multi-subunit DNA-dependent RNAPs have evolved non-homologous but functionally equivalent structural modules for efficient transcription initiation, implying a unified mechanism of transcription initiation for multi-subunit DNA-dependent RNAPs.
DATA AVAILABILITY
Atomic coordinates and structure factors of the Ec σE-RPo complex, Mtb σH-RPo complex, and Mtb σH/E-RPo complex have been deposited into the Protein Data Bank with accession codes 6JBQ, 6JCX and 6JCY, respectively (https://www.wwpdb.org/).

ACKNOWLEDGEMENTS

We acknowledge the gift of pTolo-EX vectors. We thank the staff at beamlines BL18U1/BL19U1 of the National Center for Protein Science Shanghai (NCPSS), and at beamline BL17U1 of the Shanghai Synchrotron Radiation Facility, for assistance during data collection. We thank Shenghai Chang at the Center of Cryo-Electron Microscopy for help with cryo-EM sample preparation and data collection. We thank the State Key Laboratory of Bio-organic and Natural Products Chemistry at the Shanghai Institute of Organic Chemistry, CAS, for sharing the stopped-flow fluorescence spectrometer. | 2019-05-28T13:10:10.536Z | 2019-05-27T00:00:00.000 | {
"year": 2019,
"sha1": "96fc6503731bd191f4501b92c716c1feca4001ab",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/47/13/7094/28981291/gkz470.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96fc6503731bd191f4501b92c716c1feca4001ab",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261493607 | pes2o/s2orc | v3-fos-license | Association of interpregnancy interval and risk of adverse pregnancy outcomes in women by different previous gestational ages
Abstract Background: With an increasing proportion of multiparas, proper interpregnancy intervals (IPIs) are urgently needed. However, the association between IPIs and adverse perinatal outcomes has always been debated. This study aimed to explore the association between IPIs and adverse outcomes in different fertility policy periods and for different previous gestational ages. Methods: We used individual data from China's National Maternal Near Miss Surveillance System between 2014 and 2019. Multivariable Poisson models with restricted cubic splines were used. Each adverse outcome was analyzed separately in the overall model and stratified models. The stratified models included different categories of fertility policy periods (2014–2015, 2016–2017, and 2018–2019) and infant gestational age in previous pregnancy (<28 weeks, 28–36 weeks, and ≥37 weeks). Results: There were 781,731 pregnancies enrolled in this study. A short IPI (≤6 months) was associated with an increased risk of preterm birth (OR [95% CI]: 1.63 [1.55, 1.71] for vaginal delivery [VD] and 1.10 [1.03, 1.19] for cesarean section [CS]), low Apgar scores and small for gestational age (SGA), and a decreased risk of diabetes mellitus in pregnancy, preeclampsia or eclampsia, and gestational hypertension. A long IPI (≥60 months) was associated with an increased risk of preterm birth (OR [95% CI]: 1.18 [1.11, 1.26] for VD and 1.39 [1.32, 1.47] for CS), placenta previa, postpartum hemorrhage, diabetes mellitus in pregnancy, preeclampsia or eclampsia, and gestational hypertension. Fertility policy changes had little effect on the association of IPIs and adverse maternal and neonatal outcomes. The estimated risk of preterm birth, low Apgar scores, SGA, diabetes mellitus in pregnancy, and gestational hypertension was more profound among women with previous term births than among those with preterm births or pregnancy loss. Conclusion: For pregnant women with shorter or longer IPIs, more targeted health care measures during pregnancy should be formulated according to infant gestational age in previous pregnancy.
Introduction
The interpregnancy interval (IPI) is thought to be a modifiable factor for preventing adverse effects on perinatal and maternal health in subsequent pregnancies. The World Health Organization (WHO) suggested that at least 2 years of birth spacing and 6 months of post-abortion spacing could reduce the risk of adverse maternal, perinatal, and infant outcomes. [1] However, previous research based on a population study showed that pregnant women have the lowest risk of adverse perinatal outcomes, such as low birthweight, preterm birth, and small for gestational age (SGA), at an IPI of 18-23 months after a previous live birth. [2,4-6] A longer interval (longer than 4 years) may increase the risk of preeclampsia recurrence. [7] The "maternal nutritional depletion" hypothesis, which posits inadequate recovery from the previous pregnancy, was thought to be a mechanism of the association between a short interbirth interval (the month gap between two consecutive live births) and increased adverse neonatal outcomes. [8,9] Moreover, some researchers believe that the effect of the IPI is confounded by maternal health status and by social, economic, and demographic factors. [10,11] However, fully controlling for all possible confounding factors is difficult to carry out in one study.
To date, most relevant studies have been published in developed countries, with little research performed in developing countries. Thus, the applicability of previous research conclusions to developing countries is unclear. With the gradual relaxation of China's fertility policy at the end of 2013 and the introduction of the "universal two-child" policy in 2016, the proportion of second children has increased. [12] Although some research on the association of IPIs and adverse outcomes in Chinese women has been published, all these studies were partial studies from a single province or a particular level of hospitals without adequate representativeness of China. All these studies included the IPI as a categorical variable without an adequate scientific or clinical basis. In addition, most of these studies focused on adverse neonatal outcomes. [15,16] A baby boom was observed shortly after the "universal two-child" policy, and this increase in births was assumed to be driven by the fertility policy change. [17] We noticed that previous studies have analyzed the association between IPIs and adverse outcomes after pregnancy loss. [20] In addition, we also found a study that compared the risk of preterm birth in women with long and short IPIs among those with previous preterm births and term births. However, this study was conducted in high-income countries. [21] The conclusion may not be suitable for developing countries that are restricted by health care services. In addition, the effect of previous gestational age on other adverse health outcomes is unknown. Thus, we built models stratified by infant gestational age in previous pregnancy to further explore the associations between IPIs and adverse maternal and perinatal outcomes.
Ethical approval
This study was approved by the Ethics Committee of the West China Second University Hospital (No. 2012008).
Data collection
Individual maternal data were collected through China's National Maternal Near Miss Surveillance System (NMNMSS) from January 2010 to December 2019. The NMNMSS was first established in 2010. Currently, it covers 441 member hospitals that manage more than 1000 deliveries annually. Since there is no National Maternal Near Miss surveillance hospital in Tibet, the included member hospitals were located in 326 districts or counties throughout 30 provinces, autonomous regions and municipalities in the Chinese mainland. Quality control was performed for all the collected data at the provincial, municipal, and county levels at least four times a year. [22-25] The NMNMSS collects the sociodemographic and obstetric information of pregnant and postpartum women from the obstetric departments of surveillance hospitals. The collected data include the name and code of the hospital, the date of delivery, the number of antenatal visits, maternal educational level and marital status, maternal age, delivery mode, fetal sex, parity, and the number of fetuses. The sampling strategy, data collection, and quality control procedure have been detailed elsewhere. [22,24,26] The analyzed population comprised women who had identical ID numbers, had at least two consecutive singleton pregnancy records, and became pregnant again between 2014 and 2019.
Definition
The IPI was defined as the gap in months between the end of a pregnancy (including abortion, stillbirth, or live birth) and the date of the last menstrual period of the subsequent pregnancy.
We categorized the analyzed time period into three phases: the "selective one-child" policy period, the "universal two-child" policy period, and the "cooling-off" period, which were defined as 2014-2015 (Phase 1), 2016-2017 (Phase 2), and 2018-2019 (Phase 3). The "universal two-child" policy began in January 2016, [27] allowing every couple to have a second child. Thus, we set the first time point as January 2016. After the relaxation of the "universal two-child" policy, the national birth rate peaked in 2016, reaching 12.95‰. However, the national birth rate fell from 12.43‰ in 2017 to 10.94‰ in 2018, as the Statistical Bulletin reported. [28,29] We inferred that the birth rate change may have been driven by the fact that a proportion of women who were not permitted to have a second child before the relaxation of the policy gave birth intensively during 2016 and 2017. However, the incentive of the fertility policy was temporary. After that time, enthusiasm for the two-child policy entered the cooling-off period as the birth rate decreased. Thus, the second time point was set at January 2018.
In addition, we also wanted to explore factors at the individual level. Given the different recommended IPIs after live birth and abortion, we categorized infant gestational age in the previous pregnancy preceding the interval as <28 weeks, 28-36 weeks, and ≥37 weeks.
Perinatal outcomes, including preterm birth by cesarean section (CS), preterm birth by vaginal delivery (VD), low Apgar scores, SGA, and large for gestational age (LGA), were defined through different inclusion and exclusion criteria. As information on the onset of labor was inaccessible in this research, we stratified preterm birth by delivery mode, i.e., preterm birth (VD) and preterm birth (CS). Details of the criteria are listed in Supplementary Table 1, http://links.lww.com/CM9/B661. [30,31] Gestational age in China is generally ascertained based on the last menstrual period, or on ultrasound examination when the date of the last menstrual period is not known. [26] Maternal outcomes focused on some of the most common maternal complications in China, including placenta previa, postpartum hemorrhage, gestational hypertension, preeclampsia or eclampsia, and diabetes mellitus in pregnancy. Postpartum hemorrhage, including soft birth canal lacerations, uterine atony, retained placenta, or other postpartum hemorrhage, was defined as an obstetric hemorrhage greater than 500 mL during VD or 1000 mL during CS, occurring in or after the third stage of labor. [32] Gestational hypertension was defined as new-onset hypertension (≥140/90 mmHg) after 20 weeks of gestation with normalization of blood pressure at 12 weeks postpartum. Preeclampsia was defined as hypertension (≥140/90 mmHg) and proteinuria after 20 weeks of gestation, or hypertension plus the involvement of one organ or system, in women with previously normal blood pressure. Eclampsia was diagnosed as the presence of new-onset grand mal seizures in women with preeclampsia. [33] Since women usually do not receive screening for diabetes mellitus before pregnancy, it can be challenging to distinguish gestational diabetes mellitus from pre-existing diabetes. However, gestational diabetes mellitus accounts for approximately 90% of diabetes mellitus in pregnancy. [34,35] Thus, in this research we defined diabetes mellitus in pregnancy as comprising both pre-existing diabetes mellitus and diabetes mellitus arising in pregnancy.
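For instance, the postpartum hemorrhage threshold above can be encoded as a simple predicate (a sketch; the function name and inputs are ours):

```python
# Postpartum hemorrhage per the definition above: blood loss >500 mL for
# vaginal delivery (VD) or >1000 mL for cesarean section (CS); the timing
# condition (in or after the third stage of labor) is assumed checked upstream.
def is_postpartum_hemorrhage(blood_loss_ml: float, delivery_mode: str) -> bool:
    threshold_ml = 500 if delivery_mode == "VD" else 1000
    return blood_loss_ml > threshold_ml

print(is_postpartum_hemorrhage(650, "VD"))  # True
print(is_postpartum_hemorrhage(650, "CS"))  # False
```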
Other variables, including region, prenatal examination, maternal educational level, maternal marital status, maternal age, previous delivery mode, parity, and pregnancy loss in previous pregnancy (including stillbirth and early neonatal death; fetuses with unknown birth vital signs and a gestational age of less than 28 weeks were also classified as pregnancy loss), were used as covariates. Based on the hospital's location, we classified the region as urban or rural. The hospital level (from level 1 to level 3) was certified by the administrative department of health and classified according to the number of beds, categories of clinical departments, number of medical personnel, type and quantity of equipment, and hospital funding, with level 3 hospitals providing more advanced care. All covariates were measured at the time of the previous delivery preceding the birth interval.
Statistical analysis
The IPI was used as a continuous variable. To estimate the risk of adverse health outcomes at each IPI, an IPI of 24 months was used as the reference. [4,5,36] To assess the impact of fertility policy changes on IPIs, a single-group interrupted time series analysis (ITSA) design with multiple treatment periods was used. [37] The outcome was the IPI in months. The two time points (January 2016 and January 2018) were used as the start of the two treatment periods. Variables that may change over the years, including region (urban or rural), hospital level, prenatal examination, advanced maternal age, maternal education level, delivery mode in previous pregnancy, parity, pregnancy loss, and maternal complications in previous pregnancy, were used as covariates.
Poisson regression analysis is regarded as an appropriate approach to analyzing the risk of rare events. However, the error can be overestimated when estimating relative risks (RRs) for binomially recorded outcomes. This can be overcome by employing a robust error variance procedure in Poisson regression models. Therefore, we performed a Poisson regression analysis with a robust variance estimator to examine the association between IPIs and perinatal or maternal outcomes. Crude relative risks and adjusted relative risks (aRRs) with 95% confidence intervals (CIs) were estimated separately. In the multivariable model, rural area, hospital level, inadequate prenatal examination, fertility policy period, gestational age in previous pregnancy, maternal educational status, maternal age, delivery mode in previous pregnancy, parity, pregnancy losses, maternal complications, and the interactions of the fertility policy period category and previous pregnancy category were used as covariates. Restricted cubic splines (RCSs) with five knots placed at the 5th, 25th, 50th, 75th, and 95th percentiles were also included in the model to allow for non-linear relationships between the IPIs and each adverse outcome. Each adverse outcome was analyzed in the overall model and in stratified models separately. Because the relationship between IPIs and adverse neonatal and maternal outcomes might vary as a function of the change in fertility policy period or the length of a previous pregnancy, we analyzed the potential modifying effect of fertility policy changes and of the length of a previous pregnancy by fitting a Poisson model with interactions of the fertility policy period category and previous pregnancy category in the overall model. The stratified models were carried out in different categories of fertility policy periods (2014-2015, 2016-2017, and 2018-2019) and infant gestational age in previous pregnancy (<28 weeks, 28-36 weeks, and ≥37 weeks). A test for interaction for each outcome at each interval length in separate subgroups was performed. [38] Considering the possible collinearity between age and the IPI, we also conducted a sensitivity analysis that excluded adjustment for maternal age; differences between the age-adjusted and unadjusted models were found to be minor. To assess the potential bias of unmeasured confounding variables, we also calculated the E value, which represents the minimum strength of association that unmeasured confounding would need to have with both IPIs and adverse perinatal and maternal outcomes to fully explain the observed association. [39] STATA (version 16.0; Stata Corp., TX, USA) and SAS (version 9.4; SAS Institute Inc., NC, USA) were used to conduct the analysis. A two-sided P <0.05 was considered statistically significant.
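The analysis itself was run in STATA and SAS; purely as an illustration of the modeling idea, here is a minimal Python sketch on simulated toy data (all variable names and the data-generating process are ours). It fits a Poisson model with a robust (sandwich) variance estimator and includes the E-value formula of VanderWeele and Ding.

```python
# Sketch: Poisson regression with robust variance for a binary outcome,
# plus the E-value E = RR + sqrt(RR * (RR - 1)) for RR > 1.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
ipi = rng.integers(3, 120, n).astype(float)     # toy IPI in months
outcome = rng.binomial(1, 0.05 + 0.0005 * ipi)  # toy binary adverse outcome

X = sm.add_constant(ipi)                        # spline basis columns could be added here
fit = sm.GLM(outcome, X, family=sm.families.Poisson()).fit(cov_type="HC0")
rr_per_month = float(np.exp(fit.params[1]))
print(f"RR per month of IPI: {rr_per_month:.4f}")

def e_value(rr: float) -> float:
    """E-value for a relative risk; RRs below 1 are inverted first."""
    rr = max(rr, 1.0 / rr)
    return rr + np.sqrt(rr * (rr - 1.0))
```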
Results
There were 781,731 pregnancies enrolled in this study. Approximately half of the pregnancies (51.86%) had an IPI of less than 24 months. The enrolled women were mainly primiparas aged less than 30 years with an educational level of college or above. Women who were from urban areas, had adequate prenatal examinations, had a live singleton birth through CS, and did not have any maternal complications tended to have a longer IPI [Supplementary Table 2, http://links.lww.com/CM9/B661]. Rare differences in adverse perinatal outcomes were observed in the fertility policy period stratified analysis [Figure 1, Supplementary Table 5, Figures 1 and 2, http://links.lww.com/CM9/B661]. In the previous gestational age stratified analysis, the increased risk of preterm birth (VD) and low Apgar scores in women with short IPIs was more pronounced among those who became pregnant again subsequent to a previous term birth. Women who became pregnant again after delivering an infant with a previous gestational age of 37 weeks or more had a notably higher risk of preterm birth (CS) at short IPIs than women with a previous gestational age of less than 28 weeks [Figure 2, Supplementary Table 6, Figures 3 and 4, http://links.lww.com/CM9/B661].
Regarding adverse maternal outcomes, the risk of most adverse maternal outcomes increased with an increasing IPI; for example, the risk of preeclampsia or eclampsia increased from 0.68 (95% CI, 0.62, 0.75) to 2.16 (95% CI, 2.02, 2.32). However, the risk of uterine rupture was significantly higher at the 6-month interval (1.31 [95% CI, 1.08, 1.59]) [Table 2]. In the fertility policy period stratified analysis, rare notable differences in adverse maternal outcomes were observed across categories of fertility policy periods [Figure 3, Supplementary Table 5, Figures 5 and 6, http://links.lww.com/CM9/B661]. In the previous gestational age stratified analysis, rare differences were observed across IPIs among women who became pregnant after delivering an infant with a gestational age of less than 28 weeks. The decreased risk of diabetes mellitus in pregnancy in women with short IPIs was more pronounced for those who became pregnant again after a term birth than for those who became pregnant again after delivering an infant with a gestational age of less than 28 weeks or between 28 and 36 weeks. The increased risk of gestational hypertension in women with long IPIs was higher for those who became pregnant again after a term birth than for those who became pregnant after delivering an infant with a gestational age of less than 28 weeks [Figure 4, Supplementary Table 6, Figures 7 and 8, http://links.lww.com/CM9/B661]. In sensitivity analyses, the calculated E values indicated that confounding was unlikely to entirely explain the observed results [Supplementary Table 7, http://links.lww.com/CM9/B661].
Discussion
In the context of the rising proportion of second children and multiparas, there is an increasing need to understand the association between IPIs and adverse maternal and neonatal outcomes. In our study, we found that approximately half of the women (51.86%) became pregnant again within an IPI of less than 24 months. A short IPI (≤6 months) was associated with an increased risk of preterm birth (VD), preterm birth (CS), a low Apgar score at 1 min, SGA, and uterine rupture, and a decreased risk of diabetes mellitus in pregnancy, preeclampsia or eclampsia, and gestational hypertension. A long IPI (≥60 months) was associated with an increased risk of preterm birth (VD), preterm birth (CS), LGA, placenta previa, postpartum hemorrhage, diabetes mellitus in pregnancy, preeclampsia or eclampsia, and gestational hypertension. The fertility policy change had an effect on the increase in IPIs but little effect on the association between IPIs and adverse maternal and neonatal outcomes. The risks of preterm birth (VD), preterm birth (CS), low Apgar scores, and SGA in women with short IPIs were higher among those with a previous term birth, while the decreased risk of diabetes mellitus in pregnancy was more pronounced among those who became pregnant again after a term birth. Rare notable differences were observed in the association between IPIs and other birth outcomes by infant gestational age in previous pregnancy.
Previous studies have also explored the association between IPIs and adverse maternal and neonatal outcomes in China. One of them was a multicenter retrospective study of 21 hospitals from 14 provinces. [15] The IPI distribution of that study was different from ours: over half of the women in that study became pregnant within an IPI of 24-59 months, whereas in our research approximately half of the women became pregnant within an IPI of less than 24 months. All the participants in that study were from level 3 hospitals, which in China have advanced medical facilities and staff and specialize in difficult miscellaneous diseases and near misses. Our study included women from level 1 to level 3 hospitals, and we found that the more advanced the hospital, the higher the proportion of women with a long IPI.
Previous research explored the effect of IPIs on preterm birth, SGA, and LGA in South China from 2000 to 2015, before the implementation of the "universal two-child" policy. [4] Although adjusted for different covariates, the curves were consistent with our study. Through this comparison, we determined whether these associations were confounded by fertility policy changes. A significant IPI increase immediately after the announcement of the "universal two-child" policy was found among primiparas. However, the association between IPIs and adverse maternal and neonatal outcomes seemed rarely affected by the fertility policy: the curve of the association between IPIs and each adverse outcome was similar across fertility policy periods.
In addition to fertility policy changes, we also wanted to explore the effect of infant gestational age in previous pregnancy. For neonatal outcomes, previous studies noticed that the estimated risk of preterm birth for women with short and long IPIs was higher among those with a previous term birth than among those with a previous preterm birth. [3,21] The risk of preterm birth in women with short IPIs is consistent with the maternal depletion hypothesis. [8,40] Through further exploration in our study, we found that among women with previous term births, the risk of preterm birth (VD) was higher than that of preterm birth (CS) among those with short IPIs. However, in women with long IPIs, the risk of preterm birth (VD) was lower than that of preterm birth (CS). The delivery mode may be affected by various factors, and research has shown that, compared with VD, CS is associated with increased odds of maternal intensive care unit admission, maternal near misses, and neonatal intensive care unit admission. [41] In our research, although we could not separate iatrogenic preterm birth from spontaneous preterm birth, the associations between IPIs and preterm birth (VD) and between IPIs and preterm birth (CS) were similar to those reported between IPIs and spontaneous and iatrogenic preterm births. [42] As hypertension and placenta previa have been confirmed as key factors in iatrogenic preterm births, [43] we found, interestingly, that the trend for preterm birth (CS) was consistent with that for placenta previa, preeclampsia or eclampsia, and gestational hypertension in women with long IPIs. For maternal outcomes, our results for all participants were consistent with another study of Chinese women, except for the association between IPIs and preeclampsia. [14] In that research, a short IPI (<12 months) was associated with an increased risk of preeclampsia. However, two limitations were observed in that study. First, the increased risk of preeclampsia in women with short IPIs may be contrary to the "physiological regression hypothesis": during pregnancy, the maternal cardiovascular and metabolic systems gain growth-supporting capacities to support fetal growth, and the increased blood volume and insulin level gradually decline if the woman does not become pregnant again. [44,45] In our results, the risk of cardiovascular or metabolic system-related complications, such as diabetes mellitus in pregnancy, preeclampsia or eclampsia, and gestational hypertension, was generally decreased when the IPI was short. We also found that the longer the previous gestational period, the greater the decrease in the risk of these complications. Moreover, the results in that study were not adjusted for covariates such as preeclampsia in previous pregnancy, while research has demonstrated that women with prior preeclampsia have a higher risk of recurrent preeclampsia, [46] and the degree of abnormal glucose metabolism in previous pregnancy is important in the recurrence of diabetes mellitus in pregnancy. [35] Our result was consistent with another meta-analysis that concluded that shorter intervals (less than 2 years) are not associated with an increased risk of recurrent preeclampsia, but longer intervals appear to increase the risk. [7]
We further found that the estimated risk of preeclampsia or eclampsia, gestational hypertension, and diabetes mellitus in pregnancy for women with long IPIs increased more among women with a previous term birth than among those with a previous preterm birth or an infant with a gestational age of less than 28 weeks. These differences need to be further explored.
There were some limitations to our research. First, the cases enrolled in this study were women with at least two consecutive pregnancies documented in the national database who became pregnant again between 2014 and 2019. Thus, there was little chance for us to enroll women with long IPIs, especially IPIs above 60 months. Furthermore, there were confounders, such as maternal economic status and fertility desire, that were not available in this research even though they are considered to affect the association between IPIs and adverse maternal and neonatal outcomes. [47]
The association between IPIs and adverse outcomes may not be affected by factors such as fertility policy changes, while gestational age in previous pregnancy, especially a preceding term birth, may be a more important factor in this association. In future clinical practice of pregnancy health care, for pregnant women with shorter or longer IPIs, more targeted health care measures during pregnancy should be formulated according to the infant's gestational age in their previous pregnancy.
Figure 2: aRR of neonatal outcomes according to IPI, stratified by gestational age in previous pregnancy. Adjusted for: urban or rural areas, hospital level, inadequate prenatal examination, fertility policy period, maternal educational status, maternal age, delivery mode in previous pregnancy, parity, pregnancy losses, and maternal complications. aRR: Adjusted relative risk; CS: Cesarean section; IPI: Interpregnancy interval; LGA: Large for gestational age; SGA: Small for gestational age; VD: Vaginal delivery.
Figure 3: aRR of maternal outcomes according to IPI, stratified by fertility policy period. Adjusted for: urban or rural areas, hospital level, inadequate prenatal examination, gestational age in previous pregnancy, maternal educational status, maternal age, delivery mode in previous pregnancy, parity, pregnancy losses, and maternal complications. aRR: Adjusted relative risk; IPI: Interpregnancy interval.
Figure 4: aRR of maternal outcomes according to IPI, stratified by gestational age in previous pregnancy. Adjusted for: urban or rural areas, hospital level, inadequate prenatal examination, fertility policy period, maternal educational status, maternal age, delivery mode in previous pregnancy, parity, pregnancy losses, and maternal complications. aRR: Adjusted relative risk; IPI: Interpregnancy interval.
Figure 1: aRR of neonatal outcomes according to IPI, stratified by fertility policy period. Adjusted for: urban or rural areas, hospital level, inadequate prenatal examination, gestational age in previous pregnancy, maternal educational status, maternal age, delivery mode in previous pregnancy, parity, pregnancy losses, and maternal complications. aRR: Adjusted relative risk; CS: Cesarean section; IPI: Interpregnancy interval; LGA: Large for gestational age; SGA: Small for gestational age; VD: Vaginal delivery.
"year": 2023,
"sha1": "cc8dd577b9278fdbb154d6b000038a8a1dbde1bb",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/cm9.0000000000002801",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "c3edb2bc50b702741389cc8885364672b94b5271",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fair Enough: Searching for Sufficient Measures of Fairness
Testing machine learning software for ethical bias has become a pressing current concern. In response, recent research has proposed a plethora of new fairness metrics, for example, the dozens of fairness metrics in the IBM AIF360 toolkit. This raises the question: How can any fairness tool satisfy such a diverse range of goals? While we cannot completely simplify the task of fairness testing, we can certainly reduce the problem. This paper shows that many of those fairness metrics effectively measure the same thing. Based on experiments using seven real-world datasets, we find that (a) 26 classification metrics can be clustered into seven groups, and (b) four dataset metrics can be clustered into three groups. Further, each reduced set may actually predict different things. Hence, it is no longer necessary (or even possible) to satisfy all fairness metrics. In summary, to simplify the fairness testing problem, we recommend the following steps: (1) determine what type of fairness is desirable (and we offer a handful of such types); then (2) look up those types in our clusters; then (3) just test for one item per cluster.
INTRODUCTION
A journal on software engineering methodologies needs to concern itself not just with single applications, but also with general methods that hold across multiple applications. Recently the authors faced a methodological issue where reviewers challenged the validity of the metrics they used to assess that work. Prompted by that experience, we examined how the current SE research community selects metrics for assessing the fairness of algorithmic decision making.
On reading the literature, we found a general pattern: while the literature proposes a plethora of metrics 1, we could not find a principled argument (across a large space of known metrics) that it was necessary/unnecessary to report some metric X. This raises various methodological questions: • Should we reject papers that "only" use (e.g.) five metrics? Or should researchers always use dozens of metrics?
• When we use automatic tools to optimize for fairness, should we optimize for dozens of goals? Or is optimizing for a smaller set sufficient?
To resolve these methodological concerns, we made the following conjecture: given the large space of known metrics (such as the 30 studied in this paper), perhaps many of these metrics are measuring the same thing. As shown by the experiments of this paper, this is indeed the case, since we can cluster these 30 metrics into around half a dozen. While our results pertain to a particular domain, there is nothing in principle stopping this methodology from being applied to any domain where researchers keep proposing new metrics without first checking whether the new metric is not just "old wine in new bottles". (Footnote 1: e.g., the Fairlearn [20] tool lists 16 metrics; the Fairkit-learn tool [64] comes with its own 16 metrics; and the IBM AIF360 toolkit [25] offers dozens.) As to the specifics of our domain, this paper concerns itself with measures of algorithmic fairness. Increasingly, software is being used for critical decision-making processes, such as patient release from hospitals [15,85], credit card applications [50], hiring [83], and admissions [19]. According to guidelines from the European Union [13] and IEEE [16], software cannot be used in real-life applications if it is found to be discriminatory toward an individual based on any sensitive attribute such as gender, race, or age. Hence "fairness testing" is now an open and pressing problem in software engineering.
As shown in Table 1, researchers have proposed a plethora of fairness metrics, and that number is growing (e.g., see all the metrics proposed in [20,25,64]). Given that trend, it is somewhat strange to report that researchers in this area use only a few metrics in their papers [41,55,61,66,77,94]. For example, in our literature review of papers from the last three years, we see only a handful of papers (13 out of 60, to the best of our knowledge) using more than five fairness metrics to evaluate their method. This is surprising, since all the others ignore more than half the known metrics. Is that wise?
The conjecture tested by this paper is that there are too many spurious metrics that all measure very similar things. If that were true, then it should be possible to simplify fairness assessment as follows: Run metrics on real-world data. Find clusters of correlated metrics. Prune "insensitive" clusters. Only use one metric per surviving cluster.
This paper experiments with seven datasets and finds that (a) 26 classification fairness metrics can be clustered into just seven groups; (b) four dataset metrics can be clustered into three groups; and (c) these clusters actually predict different things. That is, it is no longer necessary (or even possible) to satisfy all these fairness metrics. Hence, to simplify fairness testing, we recommend (a) determining what type of fairness is desirable (and we offer a handful of such types); then (b) looking up those types in our clusters; then (c) testing for one item per cluster. This paper is structured around the following research questions. RQ1: Do current fairness metrics agree with each other? Often, they do not, which motivates the rest of this work. RQ2: Can we group (cluster) fairness metrics based on similarity? We find sets of similar metrics using agglomerative clustering [5].
RQ3: Are some fairness metrics more sensitive to change than others? While most are sensitive, some are not.
RQ4: Can we achieve fairness based on all the metrics at the same time? It is challenging to do so, since some of them are competing goals and some are contradictory by definition.
In terms of research contributions, this study is important since the art of software fairness testing is evolving rapidly.
Studies like this one are essential to documenting which methods are "best" (as opposed to those that might distract from core issues). Accordingly: • This paper proposes a novel metric assessment tactic that can clarify and simplify future research reports in this field (run metrics on real-world data; find clusters of correlated metrics; prune "insensitive" clusters; only use one metric per surviving cluster).
• This paper tests that tactic in an extensive case study applying 30 fairness metrics and grouping them into clusters (RQ1 & RQ2). We say this study is extensive since it is far more detailed than prior work: all our empirical results were repeated 25 times, and our study explores multiple bias mitigation algorithms on seven datasets (prior work [40,42-44,60] was tested on far fewer metrics and far fewer datasets).
• To the best of our knowledge, this study is the first to perform such a sensitivity meta-analysis of fairness testing and to warn that some metrics are unresponsive to data changes (RQ3). • This study also presents a meta-analysis of the metrics' ability to achieve fairness after the application of bias mitigation techniques (RQ4).
• In order to support replication and reproduction of our results, all our datasets and scripts are publicly available at https://github.com/Repoanonymous/Fairness_Metrics.
Preliminaries
Before beginning, we digress to make four points.
Firstly, mitigating the untoward effects of AI is a much broader problem than just exploring bias in algorithmic decision making (as done in this paper). The general problem of fairness is that influential groups in our society might mandate systems that (deliberately or unintentionally) disadvantage sub-groups within that society. An algorithm might satisfy all the metrics of Table 1 and still perpetuate social inequities. For example: • Its license fees might be so expensive that only a small minority of organizations can boast they are "fair"; • The skills required to use a model's API might be so elaborate that only an elite group of programmers can use it, even if the model is fair.
More generally, Gebru et al. [21,35] argue that inequities arise from the core incentives that drive the organizations building an AI model; e.g., tools funded by the Defence Department have a tendency to support damage to property or life. She argues that "There needs to be regulation that specifically says that corporations need to show that their technologies are not harmful before deploying them". In terms of her work, this paper addresses the technical issue of how to measure "harm". As we show in Table 1, there are dozens of ways we might call software "biased" (and, hence, harmful). But we can also show that many of those measures are relatively uninformative. Hence, if some organization wishes to follow the recommendations of Gebru et al., then with the methods of this paper, they can make their case of "harmlessness" via a smaller and simpler report.
Secondly, Table 1 lists dozens of metrics currently seen in the SE fairness testing literature. This paper makes an empirical argument that this list is too long since many of these metrics offer similar conclusions. One alternative to our empirical argument is an analytical argument that metric X (e.g.) is equivalent to metric Y. Later in this paper (see §5.1), we make the case that to reduce the space of metrics to be explored, that kind of analytical argument may actually be misleading.
Thirdly, to be clear, while we can reduce dozens of metrics down to ten, there will still be issues of how to trade-off within this reduced set. That said, we assert our work is valuable since debating the merits of, say, ten metrics is a far more straightforward task than trying to resolve all the conflicts between 30. Further, and more importantly, our methods could be used as a litmus test to prune away spurious new metrics that merely report old ideas but in a different way.
Fourthly, even after our mitigation algorithms, some fairness metrics still can contradict each other regarding the presence of bias. Hence, in §5.3, we offer an extensive discussion on what to do in that situation.
The Problem of Algorithmic Fairness
As software developers, we cannot turn a blind eye to the detrimental social effects of our software. While no single paper can hope to fix all social inequities, this paper shows how to reduce the complexity involved in assessing one particular kind of unfairness (algorithmic decision making bias). There is much evidence of machine learning (ML) software showing biased behavior. For example, language processing tools are more accurate on English written by Anglo-Saxons than written by people of other races [33]. An Amazon hiring tool was found to be biased against women [12]. YouTube makes more mistakes while generating closed captions for videos with female voices than males [73,86]. A popular risk-score predicting algorithm was found to be heavily biased against African Americans showing a higher error rate while predicting future criminals [8]. Gender bias is also prevalent in Google [36] and Bing [64] translators.
Due to so many undesirable events, academic researchers and big industries have started giving immense importance to ML software fairness. Microsoft has launched ethical principles of AI where "fairness" has been given the topmost priority [18]. IBM has built a toolkit called AI Fairness 360 [11] containing the most noted works in the fairness domain.
In recent years, the software engineering research community has also started exploring this topic actively. ICSE'18 held a special workshop for "software fairness" [14]. ASE'19 held another workshop called EXPLAIN, where fairness and explainability of ML models were discussed [17]. Johnson et al. have created a public GitHub repository for data scientists to evaluate ML models based on quality and fairness metrics simultaneously [64].
As to technology developed to detect and fix these issues of fairness, we can see three groups: fairness testing, model bias mitigation, and fairness metrics.
Fairness Testing: The idea here is to generate discriminatory test cases and determine whether the model shows discrimination. The first work on this was THEMIS, by Galhotra et al. [59]. THEMIS generates test cases by randomly perturbing attributes. AEQUITAS [88] makes test case generation more efficient.
Aggarwal et al. combined local explanation and symbolic execution to generate a better black-box testing strategy [22].
Model Bias Mitigation:
There are three techniques used to remove bias from model behavior. The first is "pre-processing", where bias is removed from the training data before model training. Popular prior work includes optimized pre-processing [37], Fair-SMOTE [43], and reweighing [67]. The second is "in-processing", where the model is optimized for fairness during training. Popular prior work includes the prejudice remover regularizer [70] and the meta fair classifier [39]. The last is "post-processing", where the model output is changed at prediction time to remove discrimination. Noted works include reject option classification [69] and calibration [77]. Some work combines two or more of these techniques, such as Fairway [44], a combination of "pre-processing" and "in-processing".
While the fairness testing and model bias mitigation are important areas, we note that before we can declare success in those two areas, we first need some way to measure that success.
Accordingly, this paper focuses on the third area: Fairness Metrics. Early work in this area was done by Verma et al. [91], who divided 20 fairness metrics into five groups based on their theoretical definitions. Hinnefeld et al. made a comparative analysis of four fairness metrics [62].
Wang et al. did a user study to find a relation between fairness metrics and human judgments [95]. There are also some papers coming from industry on the topic. LinkedIn has created a toolkit called LiFT for scalable computation of fairness metrics as part of large ML systems [90]. Recently, Amazon internally published an empirical study based on 18 fairness metrics [54].
While all that research is certainly insightful, in some sense that work has been too successful. As mentioned in the introduction, the above work has now generated a plethora of metrics. Hence, for the rest of this paper, we check if we can simplify the current space of metrics.
Metrics Used in this Study
In our work, we collected all the metric definitions from the IBM AI Fairness 360 GitHub repository. Table 1 lists the metrics studied in this paper. The Fairkit and Fairlearn columns in Table 1 show the metrics that are common between the IBM AIF360 metrics and those from the Fairkit [64] (16 of its 16 available metrics) and Fairlearn [20] (7 of its 16 metrics) toolkits. Before explaining fairness metrics, we need to understand some terminology. Table 2 contains seven binary classification datasets. A binary outcome is favorable if it gives an advantage to the receiver (e.g., being hired for a job, getting a credit card approved). Each of these datasets has at least one protected attribute that divides the population into two groups (privileged & unprivileged) that differ in terms of benefits received; "sex", "race", and "age" are examples of protected attributes. The goal of group fairness is that, based on the protected attribute, the privileged and unprivileged groups be treated similarly, while individual fairness tries to provide similar outcomes to similar individuals.
A fairness metric is a quantification of unwanted bias in training data or models. Table 1 shows a sample of such metrics. When selecting these particular metrics, we skipped over: • Metrics for which we could not access precise definitions and implementations in IBM AIF360 toolkit [25]; • Metrics for which we could not find publications to use as baselines in this paper.
These two selection rules resulted in the 30 metrics of Table 1, which divide as follows: Classification Metrics: These measure fairness based on classification results and are labeled in Table 1 using a Metric Id beginning with C. Two inputs are needed to measure them: the first is the original dataset with true labels and the second is the predicted dataset. In the case of binary classification, classification metrics can be calculated from the confusion matrix. Table 3 shows a combined confusion matrix where every cell is divided based on the protected attribute.
Dataset Metrics: While classification metrics relate to predictions made by models, dataset metrics capture learner-independent properties of the data. These are labeled in Table 1 using a Metric Id beginning with D. Only one input is needed to compute them: the original dataset or a dataset transformed by some bias mitigation algorithm. They can be applied to both group and individual fairness.
Distortion Metrics: For completeness, we note that AIF360 includes a third set of metrics called distortion metrics.
While these metrics are not seen extensively in the current literature, they would be a worthy target for future research.
In Table 1, each metric has an ideal value representing the best-case scenario. This means that, at the ideal value, the metric regards the privileged and unprivileged groups as treated equally. For most of the metrics, the ideal value is zero, while in some cases where the metric is a ratio, the ideal value is one. If the ideal value for a metric is zero, a positive value denotes an advantage for the unprivileged group, while a negative value denotes an advantage for the privileged group.
On the other hand, if the ideal value for a metric is one, a value below one denotes an advantage for the privileged group and a value above one denotes an advantage for the unprivileged group. To use these metrics, some threshold must be applied to report "fair" or "unfair": • For metrics with ideal value 0, the IBM AIF360 toolkit [25] treats values between -0.1 and 0.1 as "fair" (so "unfair" means values outside that range). • For metrics with ideal value 1, the toolkit treats values between 0.8 and 1.2 as "fair" (so "unfair" means values outside that range).
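These fair ranges can be expressed as a small helper (a sketch mirroring the thresholds just listed; the function name is ours):

```python
# "Fair" per the IBM AIF360 thresholds quoted above: [-0.1, 0.1] for metrics
# with ideal value 0, and [0.8, 1.2] for metrics with ideal value 1.
def is_fair(value: float, ideal: float) -> bool:
    if ideal == 0:
        return -0.1 <= value <= 0.1
    if ideal == 1:
        return 0.8 <= value <= 1.2
    raise ValueError("ideal value must be 0 or 1")

print(is_fair(0.05, ideal=0))  # True  ("fair")
print(is_fair(0.70, ideal=1))  # False ("unfair", favoring the privileged group)
```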
Models
This paper analyzes the 30 fairness metrics in Table 1 using the seven datasets described in Table 2. In this work, we use one baseline model and two models tuned by pre-processing and in-processing algorithms:
• Baseline: We used a logistic regression model for creating baseline results. Logistic regression is widely used in the fairness domain as a baseline model [38,44-46,70]. We used the scikit-learn implementation with 'l2' regularization (which helps to prevent over-fitting), the 'lbfgs' solver (a quasi-Newton optimization algorithm), and a maximum of 1000 iterations.
• Reweighing: A pre-processing method proposed by Kamiran and Calders [67] that weights the training instances so as to remove bias from the training data before model training.
• Meta Fair Classifier: An in-processing method proposed by Celis et al. [39], which is a widely used meta algorithm [28,40,60,76]. Its optimization algorithm is designed to improve 11 fairness metrics with minimal loss in accuracy.
Both bias mitigation algorithm implementations are taken from IBM AIF360 [27]; a minimal sketch of this three-model setup is shown below.
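The sketch below uses scikit-learn and the AIF360 implementations; the protected attribute name ("sex") and the dataset object are placeholders, so the fit/transform calls are shown but commented out.

```python
# A minimal sketch of the three-model setup described above; dataset_orig is a
# placeholder for an AIF360 BinaryLabelDataset (1 = privileged group).
from sklearn.linear_model import LogisticRegression
from aif360.algorithms.preprocessing import Reweighing
from aif360.algorithms.inprocessing import MetaFairClassifier

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# (a) Baseline: logistic regression exactly as specified in the text.
baseline = LogisticRegression(penalty="l2", solver="lbfgs", max_iter=1000)

# (b) Pre-processing: Reweighing transforms the training data, after which the
# same logistic regression is trained on the transformed data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
# dataset_transf = rw.fit_transform(dataset_orig)

# (c) In-processing: the Meta Fair Classifier optimizes for fairness directly.
mfc = MetaFairClassifier(sensitive_attr="sex")
# mfc = mfc.fit(dataset_orig)
```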
Agglomerative Clustering
Our metrics selection strategy requires a clustering algorithm. Two classes of such clustering algorithms are (a) partitioning clustering and (b) hierarchical clustering. Here we are grouping fairness metrics based on similarity, not on distance, and we have no prior idea about the number of clusters; thus, the ideal choice is hierarchical clustering. Agglomerative clustering [5] is a hierarchical bottom-up clustering approach that is widely used in the ML community [24, 51-53, 56, 75, 79, 84, 97]. In this approach, the closest pairs of items are grouped together; the closest of these groups are then grouped into a higher-level group, and this repeats until everything falls into one group. We used the average pairwise dissimilarity between objects in two different clusters as the linkage method between groups. This process creates a dendrogram, a hierarchical structure of the groups/clusters ordered by between-cluster distance or dissimilarity. From this tree of groupings, we use the within-cluster similarity from the dendrogram, look for the largest distance that we can travel vertically without crossing any horizontal line [1,49,87], and extract the clusters at the largest change in dissimilarity (which is similar to using the SSE, the Sum of Squared Errors). Figure 1 shows the dendrogram created for the classification metrics using the method described above (its x-axis shows the classification metric Ids from Table 1 and its y-axis shows the dissimilarity measure between clusters). Table 4 shows that we get seven clusters from the 26 classification metrics. Following a similar process for the dataset metrics, we get three clusters, as shown in Table 5.
Spearman Rank Correlation
To build these clusters and dendrograms, we measure the similarity of two metrics. In this paper, by "similarity" we mean that the metrics measure similar bias in the models/dataset. Such similar metrics will show a similar pattern of changes in bias when models are built using different parts of the data or when different bias removal algorithms are used. To compute this similarity, we sample from our model training procedure (see §3.4.2), which computes our metrics 25 times, each time using different train/validation/test samples of the data. Next, for each dataset, we use correlation over those 25 numbers to assess similarity.
Two widely used definitions of correlation [47, 51-53, 63, 78, 84, 97] are the (a) Pearson correlation (which evaluates the linear relationship between two continuous variables) and the (b) Spearman rank correlation (which is a nonparametric measure of rank correlation that evaluates the monotonic relationship between two continuous or ordinal variables). We choose Spearman rank correlation, as it measures the monotonic relationship between two variables and is less affected by outliers.
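As a toy illustration of this similarity computation (the 25 values here are simulated, not taken from our study):

```python
# Spearman rank correlation between two metrics, each measured over the
# 25 repeats (5x5 cross-validation) described in Section 3.4.2.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
metric_a = rng.normal(size=25)                        # 25 values of one metric
metric_b = metric_a + rng.normal(scale=0.1, size=25)  # a highly similar metric

rho, p = spearmanr(metric_a, metric_b)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")      # rho close to 1
```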
Experimental Setup
We summarize our experimental setup as follows.
3.4.1 Data Pre-processing: Three different pre-processing steps are performed before using the data [58,74,82] for model building. First, each categorical value in the dataset is converted using either a label encoder or a one-hot encoder, as most ML algorithms cannot handle categorical values directly. Then the protected attributes are changed from their original values into ones and zeros; here we denote the privileged group as one and the unprivileged group as zero.
Finally, we use min-max normalization in the datasets to normalize the data before building the models.
Model Training:
We used five-fold cross-validation repeated five times with random seeds to build the training/test sets (as recommended by [68,82,89,92]). This step divides the data into multiple subsets with various degrees of bias. We train three models in each iteration: (a) baseline model: here we use the training data to build a logistic regression model; (b) Reweighing model: here we first apply the reweighing method to transform the training data to achieve group fairness, and then, using the transformed data, we train a scikit-learn logistic regression with 'l2' regularization, the 'lbfgs' solver, and a maximum of 1000 iterations; and (c) Meta Fair Classifier model: here we train the Meta Fair Classifier directly on the training data. Finally, to decide whether a model or dataset is fair or unfair according to a metric, we selected a threshold for each of the metrics. As mentioned in §2.2, that threshold is the fair range: if a metric value falls in that range, we say it is "fair"; otherwise, "unfair".
Building Clusters:
One of the main goals of this study is to group together sets of metrics that perform similarly and measure similar kinds of bias. We use the 26 classification metrics calculated on seven datasets with three different methods to compute metric-to-metric correlations based on the Spearman rank correlation coefficient. We do the same for the four dataset metrics as well. This provides us two correlation matrices: one 26x26 and one 4x4. After that, to build the clusters using agglomerative clustering, we convert the similarity matrix into a dissimilarity matrix [51,63] using Equation 1, and use this dissimilarity matrix to create the clusters. The agglomerative clustering process creates a dendrogram, as shown in Figure 1. To select the number of clusters, we cut the dendrogram at a height where the clusters remain unchanged over the largest increase/decrease of the cutting threshold. For the classification metrics, we cut the dendrogram (Figure 1) at 0.57, as the clusters remain unchanged between cutoff values of 0.49 and 0.64. Finally, we get the clusters containing classification metrics that measure similar kinds of bias.
We perform the same process for dataset metrics and cut the dendrogram at a height of 0.4.
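A compact sketch of this pipeline is shown below on a toy 3x3 correlation matrix. Note that we use dissimilarity = 1 − |correlation| purely as an illustrative stand-in for the paper's Equation 1, which is not reproduced here.

```python
# Agglomerative (average-linkage) clustering of metrics from a dissimilarity
# matrix, with the dendrogram cut at 0.57 as described above.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

corr = np.array([[1.0, 0.9, 0.1],
                 [0.9, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])  # toy metric-to-metric correlations
dissim = 1.0 - np.abs(corr)         # assumed conversion (stand-in for Eq. 1)

Z = linkage(squareform(dissim, checks=False), method="average")
labels = fcluster(Z, t=0.57, criterion="distance")
print(labels)                       # e.g., [1 1 2]: the first two metrics cluster
```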
Calculating Sensitivity:
Research question three asks about the consistency of the metric values in three cases: (a) raw data, (b) after applying Reweighing (RW), and (c) after applying the Meta Fair Classifier (MFC). As we use five-fold cross-validation with five repeats for all the datasets, we get 25 results for each dataset and report, for all seven datasets: • the median value: the 50th percentile (or Q2); • the IQR: the 75th minus the 25th percentile (or Q3 − Q1). A sketch of these summary statistics is shown below.
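```python
# Median (Q2) and inter-quartile range (IQR = Q3 - Q1) over the 25 repeats;
# the values here are simulated stand-ins for 25 metric measurements.
import numpy as np

values = np.random.default_rng(2).normal(size=25)
q1, q2, q3 = np.percentile(values, [25, 50, 75])
print(f"median = {q2:.3f}, IQR = {q3 - q1:.3f}")
```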
RESULTS
Our results are organized based on four research questions.
RQ1: Do current fairness metrics agree with each other?
First, we need to verify our motivation: in real life, do the fairness metrics contradict each other? Table 4 contains results for the 26 classification metrics; Table 5 contains results for the four dataset metrics. The learner here is logistic regression. In both tables, the last row contains the percentage of metrics marking the specific dataset as unfair. If we combine the last rows of Table 4 and Table 5, we see that the metrics frequently disagree about whether a given dataset is fair. This means that researchers and practitioners will be spending much effort trying to understand their systems using disagreeing oracles (a result that motivates this entire paper).
RQ2: Can we group (cluster) fairness metrics based on similarity? Table 4 shows that 26 classification metrics can be divided into seven clusters. Table 5 shows that four dataset metrics can be divided into three clusters. More importantly, we note that: • RQ1 reported intra-project disagreement on "fair"-vs-"unfair"; • We note that there is much intra-cluster agreement for each data set in Table 4 and Table 5.
As evidence, we note that the majority fairness decision is always the same within the clusters for each dataset. In Table 4, the row Percentage of agreement comments on the uniformity of decisions within each cluster (for each dataset).
Note that uniformity is very high (often 100%), meaning the metrics inside each cluster agree with each other for every dataset. Among the seven clusters, six (all except cluster two) show 100% agreement considering the median value across the seven datasets. For example, in the case of cluster zero, the percentage of agreement is 100% for five datasets, 75% for one, and 50% for one, so the majority is 100%; the same holds for clusters 1, 3, 4, 5, 6, and 7. We see a similar agreement pattern inside the clusters in Table 5 as well.
For reference purposes, the last column of Table 4 and Table 5 offers names for those clusters: • Misclassification (clusters 0, 3): these metrics try to measure the difference or ratio of misclassification errors between groups; • Differential fairness (cluster 1): these metrics try to measure whether the probabilities of the outcomes are similar regardless of the combination of protected attributes [57].
Table 4. Cluster-based results for 26 classification metrics on seven datasets. For a metric with an ideal value of zero, anything below -0.1 or above 0.1 is "unfair". For a metric with an ideal value of one, anything <0.8 or >1.2 is "unfair".
From these clusters, we observe the following: • The clustering reduces the confusion of having too many metrics with unknown similarity.
• As the metrics inside the same cluster measure the same kind of bias and behave in the same manner, we can choose just one metric from each cluster. Thus we measure a few metrics but can cover a much more comprehensive range of fairness notions. • If we see agreement among all the metrics inside a cluster for a particular dataset, then one metric can be chosen as representative of the whole cluster.
• In case of intra-cluster conflicts, choosing only one metric can be risky. In these cases, practitioners need to do a proper risk assessment before selecting metrics. That said, if there is intra-cluster conflict among metrics, we can choose one from the 'fair' group and one from the 'unfair' group to mitigate that risk.
As part of this study, we further analyzed each cluster mathematically to verify if our cluster of metrics and their mathematical definitions coincide. A detailed analysis of these clusters and their mathematical analysis has been discussed in §5.1.
RQ3: Are some fairness metrics more sensitive to change than others?
An ideal metric is responsive to the dataset it examines. An "insensitive" metric is one that delivers the same conclusions, no matter what data is being examined. An "insensitive" cluster is one containing mostly insensitive metrics. Such insensitive clusters could be ignored since they are not informative.
We measure sensitivity by looking at the variability of our metric scores using the inter-quartile range (IQR = Q3 − Q1).
For each dataset, we found the IQR across all clusters. Next, we highlight the sensitive results, i.e., those with an IQR greater than d times the standard deviation. The remaining, unhighlighted results are the insensitive metrics.
As to what value of d to use in this analysis, we take the advice of a widely cited paper by Sawilowsky [81] (this 2009 paper has 1100 citations), which asserts that "small" and "medium" effects correspond to d = 0.2 and d = 0.5, respectively.
Table 6. This table shows the sensitivity of the classification metrics for the three different models used in this study: (a) Baseline; (b) Reweighing (RW); and (c) Meta Fair Classifier (MFC). The table shows the median and IQR values for three datasets. Cells in the IQR columns are marked in red when they change by more than a small amount (the 35th percentile of the standard deviation of the IQR values). The insensitive metrics are those that usually have white IQR values.
Turning now to Table 6 and Table 7, we see that most clusters have highlighted IQR results. However, in Table 6, we see that the clusters formed by metrics C16, C18, C20 (individual fairness) and C17, C18, C21, C22, C23, C24 (between-group individual fairness) are insensitive. This, in turn, means that we should not criticize a fairness analysis that ignores these metrics.
RQ4: Can we achieve fairness based on all the metrics at the same time?
To answer this, we applied the two bias mitigation algorithms, Reweighing [67] and the Meta Fair Classifier [39]. Table 8 shows those results, collected for the seven datasets after applying the RW and MFC algorithms. For every dataset (row-wise), we show the number of metrics that changed towards or away from the ideal value. In that table: • UF denotes the metrics that moved towards the ideal value; • FU denotes the metrics that moved away from the ideal value; • NC means the metrics that did not change (this bookkeeping is sketched in code after Table 8's note below).
Note that the majority of the metrics move towards "fair", but some metrics move towards "unfair". For Reweighing, some metrics show "no change", but we have verified that they always remain in the fair range.
Table 8. This table shows the number of classification metrics that move towards or away from the ideal value when either Reweighing or the Meta Fair Classifier is used to remove bias from the models. Here "UF" shows the number of metrics that moved towards the ideal metric value, while "FU" shows the opposite. Finally, "NC" shows the number of metrics that did not change at all.
The main takeaway here is that it is no longer necessary (or even possible) to satisfy all these fairness metrics. While our analysis can reduce dozens of metrics down to ten, there will still be issues of how to trade off within this reduced set.
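The FU/UF/NC bookkeeping above can be sketched as follows (the function and argument names are ours):

```python
# Classify how a metric moved after bias mitigation, relative to its ideal
# value (0 or 1): UF = towards the ideal ("unfair to fair"), FU = away from
# it ("fair to unfair"), NC = no change, matching the Table 8 legend.
def movement(before: float, after: float, ideal: float) -> str:
    d_before, d_after = abs(before - ideal), abs(after - ideal)
    if d_after < d_before:
        return "UF"
    if d_after > d_before:
        return "FU"
    return "NC"

print(movement(before=0.25, after=0.05, ideal=0))  # UF (moved towards ideal)
```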
Even after applying bias mitigation approaches, some metrics still conflict with others. This finding is similar to the claim made by others: • Berk et al. [29] offer an "Impossibility Theorem" that says there is no way to satisfy all kinds of fairness together.
• As Yuriy Brun said at his keynote at ICSSP'2020 "we need to work the system in a biased way sometimes" [34].
DISCUSSION
Having described all of our results, we now summarize them to reach a stable conclusion. The main idea of this work is to reduce the complexity of measuring fairness, so it is imperative that we present our conclusions plainly. We discuss here three major concerns that arise from §4 and simplify fairness measurement as best we can.
Why Not Group Metrics via their Analytical Structure?
This paper has offered an empirical analysis that many of the metrics in Table 4 are synonymous since, when clustered, they fell together into just a few similar groups. In this section, we check if the same conclusions can be achieved from a more analytical analysis that looked at the structure of the equations for the fairness metrics.
Sometimes, a group generated by a formula's analytical structure is similar to the clusters we generated above. For example: • In cluster three (from Table 4), all metrics are based on FDR, which suggests that, from both an empirical and an analytical point of view, they should be similar. • Also, in cluster zero, we see that all the metrics are based on FOR and error rate. Intuitively, this seems sensible since these metrics try to measure the amount of misclassification.
That said, as shown by the following three examples, there are many examples where an equation's analytical structure does not predict for its empirical cluster.
• EXAMPLE #1: If we look at cluster five, all six metrics inside this cluster are related to "between-group individual fairness". These metrics are based on the same benefit function, b_i = ŷ_i − y_i + 1 (Equation 2; for more details, see Table 1, metric id C16). We note that cluster two is also based on Equation 2, but the metrics inside this cluster represent individual fairness for each group separately. That means that although all metrics inside cluster two and cluster five are based on the same benefit function, they measure different definitions of fairness.
That is, a formal analysis of the formulas might combine these clusters, whereas a data-oriented empirical analysis argues for their separation. • EXAMPLE #2: In cluster four from Table 1, the metrics C0, C1, C2, C5, C6, and C9 depend on TPR, FPR, and FNR. Recall that FPR and FNR report type one and type two errors (the effect of misclassification on fairness). Now TPR can be expressed as 1 − FNR, which means that changes in TPR will mirror changes in FNR. In contrast, the other two metrics in this cluster, C14 and C15, are based on the selection rate (the ratio of the number of predicted positives to the number of instances). Although there is not much similarity between the formulas of these two metrics and those of the other metrics in this cluster, we can see that they perform similarly when measuring fairness. That is: an analytical analysis does not always reflect the measurement of fairness in real-world scenarios.
Verma et al. [93] noticed a similar phenomenon: equal predictive parity (a measure they explore) should also imply equal FDR ... but when measured from an empirical point of view, they showed the two are not the same.
• EXAMPLE #3: In cluster one, metrics C10 and C25 have very different mathematical formulas. C10 is based on FPR, while C25 is based on smoothed EDF (empirical differential fairness). EDF is calculated from Dirichlet-smoothed base rates for each intersecting group in the dataset, based on the count of predicted positives. Here as well, we see that two formulas with different analytical structures can perform similarly with respect to fairness.
To summarize the above, we quote Alfred Korzybski, who warned: A map is not the territory.
While the analytical structure of the formula offers intuitions about the nature of fairness, those intuitions had better be checked via empirical analysis.
Is our Empirical Analysis Useful?
We have established the need for empirical analysis, and we have carried that analysis out. We now ask whether this analysis is helpful in real-life applications. Here we describe various scenarios of fairness contradiction and how our study helps to resolve them.
Imagine a college admission decision scenario, where the system might be seen as biased against group B if applicants from group A are accepted more than group B. Here group A and group B are divided based on different values of a protected attribute. The college applies a bias mitigation approach to solve this problem using a group fairness metric by changing group A's or B's scoring threshold. Now, if a member of group A is rejected, while a member of group B has been accepted with an equal or lower score, then the system might be seen as biased against that individual. The main takeaway from this story is that there is a conflict between "individual fairness" and "group fairness" [31].
The concept of fairness is very much application-specific, and choosing the appropriate metric is the responsibility of the policymaker. An ideal scenario would be building a machine learning model that does not show any kind of bias. However, that is too good to be true. Brun et al. found that if a model is adjusted to be fair based on one protected attribute (e.g., sex), in some cases the model becomes more biased with respect to another protected attribute (e.g., race) [14].
Kleinberg and other researchers argue that different notions of fairness are incompatible with each other, and hence it is impossible to satisfy all kinds of fairness simultaneously [72]. One thing to remember while doing prediction is that fairness is not the only concern; prediction performance is the most important goal. Berk et al. found that accuracy and fairness are competing goals [30]. This trade-off makes the job even more complicated, since damaging model performance while making it fair may be unacceptable.
As researchers, we know that satisfying all kinds of fairness together is not possible. A policymaker has to choose which fairness definitions are most important for the particular domain and ignore the rest. Our work of dividing fairness metrics into clusters tries to make that choice easier, as choosing metrics from a group of 10 options is much simpler than choosing from 30. Using our results from Table 4 and Table 5, if group fairness is more important than individual fairness in a specific domain, then cluster four will be given more priority than clusters two and five (Table 4). Once a cluster is given priority, one or two metrics can be chosen to represent the whole cluster. That means our whole work boils down to minimizing the number of metrics to look at while still covering a wide range of fairness. We believe future researchers and industry practitioners will use our work as a guide, and that will be the fulfillment of this study.
What to do when the metrics contradict each other?
We have seen that there are scenarios where fairness metrics contradict each other: according to some metrics the prediction is fair, while other metrics disagree. Fairness metrics assess how critical the errors of a prediction model are. It is the decision of the policymaker or the domain expert to choose appropriate fairness metrics based on what kind of bias matters most for the specific domain. For example, consider the following two scenarios: • Suppose we are predicting whether a patient has cancer, depending on the symptoms. Here, predicting a benign case as malignant is not very dangerous, but predicting a malignant case as benign is extremely dangerous: a wrong diagnosis for an actual cancer patient will delay treatment, and the patient may die. That means the false negative is more important here.
• Suppose we are predicting the future performance of a student based on previous records. Here, if we predict a good student as bad, that is not as harmful. However, if a student who really needs special attention and help from teachers is given a good rating, then it will be misery for that student. That means the false positive is more important here.
If we know which metrics look at what kind of error, it will be easier for the decision-maker to choose. That said, based on the guidance we have provided, in case of contradiction among metrics, one metric can be given priority over another.
THREATS TO VALIDITY & FUTURE WORK
This paper explores machine learning methods for software engineering. One issue with any paper like this is selection and evaluation bias, along with threats to construct and external validity, based on the choice of models, datasets, and methods. In the future, we plan to address the apparent threats to validity that this paper has not fully addressed.
Construct Validity: Here, we have used the popular agglomerative hierarchical clustering approach, as the number of clusters was not known beforehand. In the future, we need to experiment with other clustering techniques to check for conclusion stability. This analysis used logistic regression (LR), as much prior work on fairness has also used LR [25,44]. Nevertheless, in future work, we need to explore other classification models, including DL models.
Also, the metric clusters found in Table 4 and Table 5 were created using the results of our chosen ML models, dissimilarity measures, and cutting point in the dendrogram. Thus, choosing one metric from each cluster may carry some risk, and researchers need to be careful to make informed choices about metric selection.
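For readers unfamiliar with the procedure, the following is a minimal sketch of agglomerative clustering with a dendrogram cut, using SciPy. The synthetic data, the correlation dissimilarity, and the seven-cluster cut are our illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical data: rows = fairness metrics, columns = scores measured
# across many (dataset, model) runs.
rng = np.random.default_rng(1)
scores = rng.random((30, 40))            # 30 metrics x 40 runs

d = pdist(scores, metric="correlation")  # one possible dissimilarity measure
z = linkage(d, method="average")         # agglomerative hierarchical clustering
labels = fcluster(z, t=7, criterion="maxclust")  # cut dendrogram into 7 clusters
for c in np.unique(labels):
    print(f"cluster {c}: metrics {np.where(labels == c)[0].tolist()}")
```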
Evaluation Bias: We have used 30 metrics taken from IBM AIF360 [25]. We have also covered most of the metrics from Fairkit-learn [64] and Fairlearn [20]. There are other metrics and definitions of fairness, thus the results of this study may not generalize to all available metrics. But the 30 metrics covered in this study are widely used in the fairness domain [32,48,58,71,96].
External Validity: We have used seven datasets. In the fairness domain, one big challenge is the availability of adequate datasets. It would be insightful to re-run this study on new datasets and also on other domains.
Sampling Bias: In this work we used the thresholds recommended by IBM AIF360 ("fair" means a score within (-0.1, 0.1) or within (0.8, 1.2), for different kinds of metric). Future work should explore the sensitivity of our conclusions to changes in those thresholds.
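A helper like the following (a hedged sketch; the function and its name are ours, not part of AIF360) shows how such thresholds turn a raw metric score into a fair/unfair label:

```python
def is_fair(value, kind):
    """Label a metric score using the thresholds described above.

    kind="difference": metrics centered on 0 (fair band -0.1..0.1).
    kind="ratio":      metrics centered on 1 (fair band 0.8..1.2).
    """
    if kind == "difference":
        return -0.1 <= value <= 0.1
    if kind == "ratio":
        return 0.8 <= value <= 1.2
    raise ValueError(f"unknown metric kind: {kind}")

print(is_fair(0.05, "difference"))  # True
print(is_fair(0.75, "ratio"))       # False
```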
Another issue with sampling bias is that our analysis is based on the data of Table 2. We recommend that when new data becomes available, the conclusions of this paper be tested against that new data. That would not be an arduous task (and to simplify it, we have placed all our scripts online).
CONCLUSION
Fairness is a rapidly evolving domain, and the number of fairness metrics is increasing exponentially. While performing our literature review we saw that the current practice in this domain is to rely on a handful of metrics and ignore the rest. But which metrics can be ignored? Which are essential?
To answer these questions, this paper has experimented with a metrics selection tactic based on empirical clustering. When applied, this tactic reduced dozens of metrics to just a handful. We found: • RQ1 showed that the metrics do not all agree with each other when labeling a model as fair or unfair.
• RQ2 showed that metrics can be clustered together based on how they measure bias. Each of the resulting clusters measures a different type of bias, and selecting one metric from each cluster should be representative enough to track increases or decreases in bias across the other metrics in the same cluster.
• RQ3 showed that we could ignore at least two of those clusters, since they were not "sensitive". Recall that by "insensitive" clusters, we mean those where changes to the data did not change the fairness scores.
• RQ4 showed that the metrics in this reduced set actually measure different things. That said, it is no longer necessary (or even possible) to satisfy all of these fairness metrics.
From these results, we argue that: • There are many spurious fairness metrics, i.e., metrics that measure very similar things.
• To simplify fairness testing, just (a) determine what type of fairness is desirable (for a list of types, see Table 4 and Table 5); then (b) look up those types in our clusters; then (c) just test for one item per cluster.
• While this approach does not completely remove all issues with fairness testing, it does reduce a very complex problem of (say) 30 metrics to a much smaller and manageable set.
• Also, the methods of this paper could be used as a litmus test to prune away spurious new metrics that merely report the same thing as existing metrics.
ACKNOWLEDGEMENT
The work was partially funded by LAS and NSF grant #1908762. | 2021-10-26T01:17:02.173Z | 2021-10-25T00:00:00.000 | {
"year": 2021,
"sha1": "907ace620850d20c7616e7d60c95e67e54f07160",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "907ace620850d20c7616e7d60c95e67e54f07160",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
271493117 | pes2o/s2orc | v3-fos-license | Vaginal microbiome differences between patients with adenomyosis with different menstrual cycles and healthy controls
Background Adenomyosis is a commonly observed benign gynecological disease that affects the quality of life and social psychology of women of childbearing age. However, because the etiology and incidence of adenomyosis are unknown, its pathophysiological mechanism remains unclear; further, because no noninvasive, accurate, and individualized diagnostic methods are available, treatment and efficacy evaluations are limited. Notably, the interaction between changes in the microecological environment of the female reproductive tract and human immunity, endocrine function, and other factors leads to the occurrence and development of disease. In addition, the vaginal microbiome differs across menstrual cycles; therefore, assessing the differences between the microbiomes of patients with adenomyosis and healthy individuals in different menstrual cycles will improve the understanding of the disease and provide references for the search for noninvasive diagnosis and individualized precision treatment of adenomyosis. This study aimed to explore these differences in individuals in different menstrual cycles. Results Differences in the vaginal microbiome between patients with adenomyosis and healthy individuals were observed. At the phylum level, the relative abundance of Firmicutes in the adenomyosis group was higher than that in the control group, and it contributed the most to the species difference between the two groups. At the genus level, Lactobacillus was the most dominant in both groups. Alpha-diversity analysis showed significant differences between the adenomyosis and control groups during the luteal phase (Shannon index, p = 0.0087; Simpson index, p = 0.0056). The beta-diversity index was significantly different between the two groups (p = 0.018); however, based on Weighted Unifrac analysis, significant differences were only observed during the luteal phase (p = 0.0146). Within the adenomyosis group, differences between women in different menstrual cycles were also observed. Finally, 50 possible biomarkers were screened and predicted based on random forest analysis. Conclusions The vaginal microbiome of patients with adenomyosis and healthy individuals differed across menstrual periods, especially during the luteal phase. These findings facilitate the search for specific biological markers within a limited range and provide a more accurate, objective, and individualized diagnostic and therapeutic evaluation method for patients with adenomyosis, compared to what is currently available. Supplementary Information The online version contains supplementary material available at 10.1186/s12866-024-03339-9.
Introduction
Adenomyosis is a benign uterine myometrial lesion commonly found in women of reproductive age and is characterized by compensatory hypertrophy in the peripheral myometrium, with endometrioid glands and stroma found in the myometrium [1]. Pathological diagnosis after surgery is the gold standard for clinical diagnosis; however, the exact incidence and pathogenesis of adenomyosis remain unknown [2]. Studies have shown that a history of uterine surgery is a high risk factor for adenomyosis; for example, the incidence of adenomyosis in patients with such a surgical history is 1.5 times higher than in patients without one [3,4]. In the treatment of adenomyosis, in addition to surgical treatment, conservative programs are used to regulate endocrine and immune system functions. Diagnostic methods include magnetic resonance imaging (MRI), transvaginal ultrasonography, and the CA125 test; however, no specific, individualized diagnostic method is available. Adenomyosis and other benign gynaecological diseases, such as uterine fibroids, endometriosis, and endometrial polyps, have a high comorbidity rate, and attributing specific symptoms to adenomyosis in clinical diagnosis and treatment is difficult.
The vagina is an important organ of the female lower genital tract and is an important habitat for microorganisms in the human body. Lactobacillus is the predominant bacterial genus and is affected by various exogenous and endogenous factors; furthermore, the species composition of the vaginal microbiome is strongly dynamic [5]. The vaginal microbiome is an important defence mechanism that regulates and maintains reproductive function and relative homeostasis in healthy environments. The stability of the microbiome can prevent the overgrowth of symbiotic microorganisms and the colonization of pathogens [6]. Microorganisms affect the balance of the microenvironment through nutritional competition, intraspecific and interspecific signal transduction, metabolic pathways, and product interactions. The mechanism of microenvironmental imbalance remains unclear; however, this imbalance can disrupt normal homeostasis, resulting in certain pathological signs. The female upper reproductive tract was once considered a sterile environment; however, this theory has been challenged. The presence of microbiota in the endometrium [7] was confirmed by the isolation of microbiota from female endometrial aspirate samples. Studies have shown that bacterial DNA can be detected in 95% of post-hysterectomy samples [8]. Microbial exchange occurs in the female reproductive tract, and the microbiota of the upper and lower reproductive tracts work synergistically to regulate the uterine environment. With increasing age, changes in the microbiomes of the uterus and vagina increasingly converge, showing a mutually parallel relationship. Animal studies have verified the damaging and protective effects of vaginal bacteria on the endometrium using microbiota transplantation techniques [9]. This also indicates that lower reproductive tract bacteria affect, or directly interfere with, the regulation of some benign and malignant diseases, to some extent, through certain mechanisms.
Initial research on vaginal microbes mainly relied on microscopy and microbial culture techniques; however, the vast majority of microorganisms in the physiological or natural environment are difficult to obtain through culture. Using bioinformatics, high-throughput sequencing and analysis were performed to minimise the dependence on the bacterial culture techniques used in the literature and to enhance our understanding of the structure and function of the microbial community, as well as of the association between the bacterial community of this "non-visual organ" and benign and malignant diseases of the female reproductive system.
The 16S rRNA is a subunit of ribosomal RNA. With improvements in sequencing technology, 16S rDNA amplicon sequencing has become an important method to evaluate microenvironment structure and composition [10][11][12][13]. As research progresses, sequencing platforms are updated and iterated. Relying on the upgraded Illumina NovaSeq sequencing platform, we compensated for the inefficiency of single-ended reading and realized paired-end sequencing; that is, small fragment libraries were built according to the characteristics of the amplified regions.
According to our review of the literature, no study has investigated the differences in the vaginal microbiome between adenomyosis patients with different menstrual cycles and healthy individuals. Therefore, this study aimed to elucidate the differences in the vaginal microbiota between women with and without adenomyosis across different menstrual cycles. Our results provide a reference for the subsequent screening of characteristic biological markers, disease diagnosis, non-invasive precision treatment, and efficacy prediction based on microbial detection.
Materials and methods
The case group in this study comprised patients with adenomyosis seen in the gynecological outpatient department of the Affiliated Hospital of Shandong University from November 2021 to October 2022. They were evaluated by professional gynecologists, and adenomyosis was confirmed by ultrasound or magnetic resonance imaging (MRI). The control group comprised healthy individuals. The inclusion criteria were as follows: (1) 18-49 years old; (2)
Sample collection
Individuals who fulfilled the inclusion criteria had a clinical sample collected on the day of the clinical visit, before they received a transvaginal gynecologic examination or gynecologic ultrasound. The posterior vaginal fornix was fully sampled using disposable sterile swabs. During the procedure, contact between the swab head and the speculum, vaginal wall, and other non-sampling sites was avoided. The swab head was cut off with sterile scissors, placed in a sterile centrifuge tube containing Amies culture medium (JINAN BABIO BIOTECHNOLOGY CO., LTD.), and stored at -80 ℃ in the laboratory.
Extraction of genome DNA
The genomic DNA of each sample was extracted using the cetyltrimethylammonium bromide (CTAB) method. DNA concentration and purity were monitored on 1% agarose gels. According to the concentration, DNA was diluted to 1 ng/µL using sterile water. Using the diluted genomic DNA as a template, the V3-V4 region of the 16S rDNA gene was amplified. The primer sequences were as follows: ①F: CCT AYG GGRBGCASCAG; ②R: GGA CTA CNNGGG TAT CTAAT (Phusion® High-Fidelity PCR Master Mix with GC Buffer, New England Biolabs, Inc.). Polymerase Chain Reaction (PCR) was performed using specific primers with barcodes and a high-efficiency, high-fidelity enzyme according to the selected sequencing region, to ensure amplification efficiency and accuracy. All PCR reactions were carried out with 15 µL of Phusion® High-Fidelity PCR Master Mix (New England Biolabs), 2 µM of forward and reverse primers, and about 10 ng of template DNA. Thermal cycling consisted of initial denaturation at 98 ℃ for 1 min, followed by 30 cycles of denaturation at 98 ℃ for 10 s, annealing at 50 ℃ for 30 s, and elongation at 72 ℃ for 30 s, with a final extension at 72 ℃ for 5 min.
Library construction and sequencing
Sequencing libraries were generated using the TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, USA) following the manufacturer's recommendations, and index codes were added. Library quality was assessed on the Qubit@2.0 Fluorometer (Thermo Scientific) and the Agilent Bioanalyzer 2100 system. Finally, the library was sequenced on an Illumina NovaSeq platform, and 250 bp paired-end reads were generated.
Paired-end reads assembly and quality control
Paired-end reads were assigned to samples based on their unique barcodes and truncated by cutting off the barcode and primer sequences. Paired-end reads were merged using FLASH (V1.2.7, http://ccb.jhu.edu/software/FLASH/) [14], which is designed to merge paired-end reads when at least some of the reads overlap the read generated from the opposite end of the same DNA fragment; the spliced sequences were called raw tags. Quality filtering of the raw tags was performed under specific filtering conditions to obtain high-quality clean tags [15], according to the QIIME (V1.9.1, http://qiime.org/scripts/split_libraries_fastq.html) [16] quality control process. The tags were compared with the reference database (Silva database, https://www.arb-silva.de/) [17] to detect chimera sequences, and the chimera sequences were then removed [18]. The Effective Tags were finally obtained.
Results
The study enrolled 43 patients with adenomyosis and 40 healthy people. There were no significant differences in demographic background between the two groups of participants (Table 1).
Vaginal samples were collected from all participants; however, 7 samples in total were excluded from the control group due to poor DNA quality after the library quality check. Therefore, 83 samples were used in the subsequent analysis (Fig. 1).
Next, the vaginal microbiota was analyzed using 16S rDNA sequencing techniques. The raw PE data sequenced by Illumina NovaSeq were spliced and quality controlled to obtain Clean Tags, and chimera filtering was then performed to obtain Effective Tags for subsequent analysis (S1 Table).
Species relative abundances
At the phylum level, the relative abundance of Firmicutes in the adenomyosis group was higher than that in the control group (80.70% and 69.72% in the adenomyosis and control groups, respectively). At the genus level, the relative abundance of Lactobacillus was the highest in both the adenomyosis and control groups (72.10% and 66.08%), but the relative abundances of Gardnerella and Atopobium in the adenomyosis group were lower than those in the control group (9.67% and 1.04% in the adenomyosis group versus 14.95% and 4.69% in the control group). At the species level, the abundance of Lactobacillus_iners in the adenomyosis group was higher than that in the control group (43.74% and 32.14%), and a diversity of Lactobacillus species was observed, including Lactobacillus_delbrueckii and Lactobacillus_jensenii (Fig. 2).
Different menstrual cycles
The top 35 species by average abundance across all samples at each taxonomic level and in each group were selected for clustering, and the heatmap was drawn with the heatmap package in R, which makes it convenient to inspect the number and content of species in each sample (Fig. 3).
Sample complexity analysis
To study the influence of the menstrual cycle on vaginal microecology, samples were additionally grouped by menstrual phase (luteal and follicular). Alpha-diversity analysis showed significant differences between the adenomyosis and control groups during the luteal phase (Shannon index, p = 0.0087; Simpson index, p = 0.0056), but no statistically significant differences in the ACE and Chao1 indices (Fig. 4). It was verified that the amount of sequencing data was progressive and reasonable, and that more data would only produce a few new species, thus suggesting a uniform distribution of species (Fig. 5).
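For reference, the Shannon and Simpson indices reported above can be computed directly from per-sample taxon counts. This is a minimal illustrative sketch with invented counts, not the study's actual pipeline; note also that some tools report the Simpson index as sum(p_i^2) rather than its complement:

```python
import numpy as np

def shannon(counts):
    """Shannon index H = -sum(p_i * ln p_i) over nonzero taxa."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

def simpson(counts):
    """Gini-Simpson index 1 - sum(p_i^2)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - (p ** 2).sum()

otu_counts = [120, 45, 30, 5]  # hypothetical per-OTU read counts for one sample
print(shannon(otu_counts), simpson(otu_counts))
```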
Comparative analysis of multiple copies
The species distributions in the adenomyosis group and the control group were not completely separated, but were similar (Fig. 6).
We analyzed the beta-diversity index using the t-test and found that the species beta-diversity index differed significantly between the adenomyosis group and the control group (p = 0.018). However, based on Weighted Unifrac analysis, significant differences between the disease group and the control group were only observed during the luteal phase (p = 0.0146) (Fig. 7 A, B, C, D).
The ANOSIM R value lies between -1 and 1; here, R was greater than 0, indicating that the difference between groups was greater than the difference within groups, and this was significant (p < 0.05). This supports the reasonableness of the grouping in this study (Table 2).
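To make the ANOSIM procedure concrete, the following sketch computes Clarke's R on Bray-Curtis distances with a permutation p-value. It uses synthetic abundances and is only a teaching example, not the analysis pipeline used in this study:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata

def anosim_bray_curtis(abund, groups, n_perm=999, seed=0):
    """Clarke's ANOSIM R on Bray-Curtis distances, with a permutation
    p-value. R = (mean between-group rank - mean within-group rank) / (M/2),
    where M = n(n-1)/2 is the number of pairwise distances."""
    groups = np.asarray(groups)
    d = squareform(pdist(abund, metric="braycurtis"))
    i, j = np.triu_indices(len(groups), k=1)
    ranks = rankdata(d[i, j])      # rank all pairwise distances
    m = len(ranks)

    def r_stat(g):
        between = g[i] != g[j]
        return (ranks[between].mean() - ranks[~between].mean()) / (m / 2)

    r_obs = r_stat(groups)
    rng = np.random.default_rng(seed)
    hits = sum(r_stat(rng.permutation(groups)) >= r_obs for _ in range(n_perm))
    return r_obs, (hits + 1) / (n_perm + 1)

abund = np.random.default_rng(2).random((12, 50))   # 12 samples x 50 taxa
print(anosim_bray_curtis(abund, ["adeno"] * 6 + ["control"] * 6))
```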
At the phylum level, there were no significant species differences between the adenomyosis group and the control group. At the class level, significant differences were found in Coriobacteriia and Gammaproteobacteria (p < 0.01). At the order level, significant differences were found in Lactobacillales and Coriobacteriales (p < 0.01), and in Pseudomonadales (p < 0.05). At the family level, significant differences were found in Beijerinckiaceae and Listeriaceae (p < 0.05). At the genus level, they were found in Listeria, Ralstonia, Acinetobacter, and Haemophilus (p < 0.01), and in Alloscardovia and Ureaplasma (p < 0.05). Finally, at the species level, there were significant differences in Alloscardovia_omnicolens and Lactobacillus_delbrueckii (p < 0.01) (Fig. 8).
At the phylum level, Firmicutes showed the highest species abundance in both the adenomyosis group and the control group and, at the same time, contributed the most to the species difference between the two groups (Fig. 9).
Random forest is a classical machine learning model based on the classification tree algorithm, used here to screen features (biomarkers) that play an important role in classification or grouping. A default tenfold cross-validation was performed for each model, and Receiver Operating Characteristic (ROC) curves were drawn to select the 50 potential biomarkers shown in Fig. 10.
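As an illustration of this workflow (synthetic data; scikit-learn stands in for whatever software was actually used, and its Gini-based and permutation importances are analogues of the MeanDecreaseGini and MeanDecreaseAccuracy measures of Fig. 10):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((83, 200))                 # samples x taxa relative abundances
y = np.array([1] * 43 + [0] * 40)         # 1 = adenomyosis, 0 = control

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Gini-based importance (MeanDecreaseGini analogue in scikit-learn).
gini_rank = np.argsort(rf.feature_importances_)[::-1][:50]

# Accuracy-based importance (MeanDecreaseAccuracy analogue).
perm = permutation_importance(rf, X, y, scoring="accuracy", n_repeats=10,
                              random_state=0)
acc_rank = np.argsort(perm.importances_mean)[::-1][:50]

# Ten-fold cross-validated ROC AUC, mirroring the evaluation described above.
auc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                      X, y, cv=10, scoring="roc_auc")
print("top-5 taxa (Gini):", gini_rank[:5], "mean AUC:", auc.mean())
```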
Discussion
Species diversity was analyzed using alpha-diversity indices (Shannon, Chao1, ACE, and Simpson), and the number of microbial species and the proportion of each species in a single sample were calculated. The results showed that the species diversity of the two groups did not differ significantly, similar to the results of Chen et al. [19]. Although the species composition of the two groups was similar, species abundance differed significantly. At the phylum level, the relative abundance of Firmicutes was higher in the adenomyosis group than in the control group. At the genus level, apart from the absolute dominance of Lactobacillus in both groups, the relative abundance of Gardnerella in the adenomyosis group was significantly lower than that in the control group, which differed from the results of Kunaseth [20]. Other groups of vaginal bacilli were also detected, second only to Lactobacillus in overall abundance.
Lactobacillus vegetation in the female reproductive tract is critical for the maintenance of genital health. However, the exact pathogenesis of Gardnerella vaginalis remains unclear [21]. Lactobacillus and Gardnerella interact in the female reproductive tract; when the abundance of Lactobacillus decreases to a certain extent, the growth of Gardnerella can decrease or stop [22], and an imbalance of the two bacteria can change the acid-base environment of the vagina and produce mucosal adsorption and biofilm, promoting chronic, persistent infection and inflammation [23,24]. A data analysis using the dominance network analysis framework found that Lactobacillus is not the dominant genus in some healthy African women, and that very few bacteria have a cooperative and mutually beneficial relationship with Gardnerella and Lactobacillus iners [25], contrary to previous views [26]. L. iners cooperates with Gardnerella but is inhibited by other species [27]. A high abundance of Gardnerella genomospecies indicates the presence of gene variants coding for virulence factors, such as the cholesterol-dependent pore-forming cytotoxin vaginolysin and the neuraminidase sialidase [28]. In this study, the abundance of L. iners in the adenomyosis group was found to be significantly higher than that in the control group, which was verified using the MetaStat method. Microbiomes from women diagnosed with Amsel-bacterial vaginosis (BV) were enriched by L. iners for host immune response evasion and colonization functions, and the role of L. iners in the vaginal microbiome has been widely debated. A study has identified a specific set of L. iners genes associated with positive Amsel-BV diagnoses, and its data suggested that certain L. iners strains may adhere to epithelial cells, contributing to the appearance of clue cells and becoming more difficult to displace in the vaginal environment [27]. In conclusion, variation in L. iners and Gardnerella abundance may be a potential factor in adenomyosis, and maintaining the balance of Lactobacillus and Gardnerella in the body may be a self-regulating mechanism that maintains the stability of the vaginal microecology.
However, little is known about how the genital microbiota affects host immune function and regulates disease susceptibility. Lactobacillus imbalance and high ecological diversity may be closely related to the concentration of pro-inflammatory cytokines in the genital organs [29]. Patients with adenomyosis show leukocyte infiltration in the endometrial functional layer, and the numbers of macrophages and natural killer (NK) cells are increased [30,31]. Transcriptional analysis showed that antigen-presenting cells sense gram-negative bacterial products in situ via Toll-like receptor 4 (TLR-4) signalling, promoting genital organ inflammation by activating the nuclear factor kappa-B (NF-κB) signalling pathway and recruiting lymphocytes through chemokine production [29]. Immune dysregulation is present in the ectopic endometrium of patients with adenomyosis and manifests as elevated T Cell Immunoglobulin Domain and Mucin Domain-3/Galectin-9 (Tim-3/Gal-9) expression and differential RNA methylation [32,33]. Therefore, we speculated that vaginal microecological changes affect the important role of Tim-3/Gal-9 in immunosuppression through some mechanism, causing the persistence of infection, affecting the growth environment of the endometrial tissue, and causing adenomyosis. In addition, the expression of Type I interferon (IFN-I) inducers is increased in the ectopic endometrium in adenomyosis. The increased levels of IFN-Is and the expression of IFN-stimulating genes and pro-inflammatory cytokines in tissues may be related to host immunity under the influence of certain microorganisms [34]. Recent literature has suggested that microbiota-induced interferon activation does not require direct host-bacterial interaction but rather the remote transport of bacterial DNA into host cells via bacteria-derived membrane vesicles [35]. Consistent with our finding that the beta-diversity index was significantly higher in the adenomyosis group than in the control group, the increased bacterial diversity in the vagina probably explains the activation of the host's innate immune response in the ectopic endometrium in adenomyosis [5,20]. Endometriosis and adenomyosis are closely related disorders; their pathophysiology and clinical symptoms, such as chronic pain, are extremely similar [36]. There is a correlation between the microbial compositions of the intestinal and cervicovaginal microbial niches, with over 50% overlap in species abundance and cell density [37]. Central sensitisation is known to be significantly involved in endometriosis-associated chronic pelvic pain [38]. Dysbiosis may potentially lead to incorrect immune responses, triggering the development of inflammatory pain [39], such as that seen in endometriosis and adenomyosis. All the patients with adenomyosis included in the study had obvious dysmenorrhea; however, further studies may elucidate the association between microbial changes and chronic pain.
The microbiota of the female reproductive system is influenced by changes in age and system physiology, and the menstrual cycle is a major disruptor of the vaginal microbiome. Different microbiota characteristics are observed in women at different physiological stages [40]. In healthy women of reproductive age, the vaginal microbiome composition changes dramatically before and after menstruation [41]. Menstrual blood flowing through the vagina supplies iron, and the iron necessary for pathogen metabolism [42], otherwise limited by the iron-binding affinity of lactoferrin, is replenished. Additionally, studies measuring oestradiol levels and vaginal microbiome composition in women who use oral contraceptives to inhibit ovulation have shown that the high diversity observed during menstruation is mainly driven by oestradiol withdrawal before menstruation rather than by the dynamic drive of progesterone. Lactobacillus abundance increases during the follicular and luteal phases, gradually normalising the vaginal microecology [41,43]. Under the influence of this periodicity, and combined with our test results, different types of dominant bacterial profiles were observed in patients with adenomyosis in both the luteal and follicular stages, which provides a reference for the detection of biomarkers in patients in specific menstrual cycles or for evaluating their treatment efficacy.
Fig. 5 Rarefaction curve and Rank Abundance curve. In the (A) Rarefaction curve, the horizontal coordinate is the number of sequencing reads randomly selected from a sample, and the vertical coordinate is the number of Operational Taxonomic Units (OTUs) that can be constructed from that number of reads; this reflects the sequencing coverage, and different samples are represented by different colored curves. In the (B) Rank Abundance curve, the horizontal coordinate is the serial number of OTUs sorted by abundance, and the vertical coordinate is the relative abundance of the corresponding OTUs; different samples are represented by different colored lines.
In summary, in this study, an increase in microbial richness was associated with adenomyosis, and the microbiome characteristics of patients with and without adenomyosis differed according to the menstrual cycle. This study has three notable limitations: 1) the final sample size was limited because of coronavirus disease 2019 (COVID-19); 2) a large sample of clinical data for verification was not available; and 3) the different methods used in each study may have led to different conclusions. Furthermore, adenomyosis diagnosis remains unconfirmed without histological assessment. This may have led to misclassification in both cases (false positives) and controls (false negatives). In future research, we plan to develop standardized analysis software and large databases to continue our investigation of the mechanisms behind this association.
Fig. 7 (A) Weighted Unifrac based distance from beta-diversity analysis. (B) Unweighted Unifrac based distance from beta-diversity analysis. The box plots of the beta-diversity between-group difference analysis visualize the median, dispersion, maximum, minimum, and outliers of within-group sample similarity. At the same time, the t-test was used to analyze whether the beta-diversity differences between groups were significant. (C) Weighted Unifrac based distance from beta-diversity analysis during different menstrual cycles. (D) Unweighted Unifrac based distance from beta-diversity analysis during different menstrual cycles.
Table 2 ANOSIM analysis based on the Bray-Curtis distance. ANOSIM is a non-parametric test used to check whether the difference between groups is significantly greater than the difference within groups, so as to determine whether the grouping is meaningful. We conducted the significance test of the difference between groups based on the ranks of the Bray-Curtis distance values.
Fig. 3 Heatmap of species abundance clustering during different menstrual cycles. The top 35 species by average abundance across all samples at the same level and in different groups were selected for clustering at the (A) phylum, (B) class, (C) order, (D) family, (E) genus, and (F) species levels. The heatmap was drawn with the heatmap package in R, which makes it convenient to inspect the number and content of species in each sample
Fig. 6 (A) Weighted Unifrac based distance from Principal Co-ordinates Analysis (PCoA). Horizontal coordinates indicate one principal component, vertical coordinates indicate another principal component, and percentages indicate the contribution of each principal component to the sample variance; each point in the graph indicates a sample, and samples from the same group are indicated using the same color. (B) Unweighted Unifrac based distance from PCoA analysis. (C) Euclidean based distances from Principal Component Analysis (PCA). The horizontal coordinate indicates the first principal component, with the percentage indicating its contribution to the sample differences; the vertical coordinate indicates the second principal component, with the percentage indicating its contribution to the sample differences; each point indicates a sample, and samples in the same group are indicated using the same color. In PCA graphs with clustering circles, the clustering circle is drawn using the grouping information (clustering circles require more than 3 samples in a group)
Fig. 8 MetaStat analysis at the (A) phylum, (B) class, (C) order, (D) family, (E) genus, and (F) species levels. For species with significant differences between study groups, the MetaStat method was used to screen the species with significant differences based on the species abundance tables at the different levels
Fig. 10 (A) MeanDecreaseAccuracy based analysis and MeanDecreaseGini based analysis. (B) ROC curve; abscissa: false positive proportion (1 - specificity), ordinate: true positive proportion (sensitivity). (C) ROC curve of the test pair; abscissa: false positive proportion (1 - specificity), ordinate: true positive proportion (sensitivity). MeanDecreaseAccuracy measures the extent to which the prediction accuracy of the random forest is reduced when the value of a variable is changed to a random number; the greater the value, the greater the importance of the variable. MeanDecreaseGini compares the importance of variables by calculating the effect of each variable on the heterogeneity of the observed values at each node of the classification tree, using the Gini index
Table 1 Demographic data of the subjects | 2024-07-28T13:16:33.275Z | 2024-07-27T00:00:00.000 | {
"year": 2024,
"sha1": "b4d2b7ca44b261b84f3786b575310fd8613f1715",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5fef4fa0a45ce4acc313c4ef6fc4abd24d88eb9f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49865894 | pes2o/s2orc | v3-fos-license | Sensorimotor Synchronization With Auditory and Visual Modalities: Behavioral and Neural Differences
It has long been known that the auditory system is better suited to guide temporally precise behaviors like sensorimotor synchronization (SMS) than the visual system. Although this phenomenon has been studied for many years, the underlying neural and computational mechanisms remain unclear. Growing consensus suggests the existence of multiple, interacting, context-dependent systems, and that reduced precision in visuo-motor timing might be due to the way experimental tasks have been conceived. Indeed, the appropriateness of the stimulus for a given task greatly influences timing performance. In this review, we examine timing differences for sensorimotor synchronization and error correction with auditory and visual sequences, to inspect the underlying neural mechanisms that contribute to modality differences in timing. The disparity between auditory and visual timing likely relates to differences in the processing specialization between the auditory and visual modalities (temporal vs. spatial). We propose that this difference offers a potential explanation for the differing temporal abilities of the two modalities. We also offer suggestions as to how these sensory systems interface with motor and timing systems.
INTRODUCTION
Many behavioral studies have examined human timing ability in tasks of sensorimotor synchronization (SMS) where subjects synchronize their movements to an external rhythm.
Comparisons between auditory metronomes and visual flashing metronomes reveal that movement synchronization is less variable and can occur at faster rates with auditory metronomes (Chen et al., 2002; Repp, 2003; Repp and Penel, 2004; Lorås et al., 2012). However, visuo-motor synchronization greatly improves when synchronizing with a moving periodic visual metronome. Adding a changing velocity profile to the moving visual metronome further reduces variability in SMS tapping (Hove et al., 2013a; Iversen et al., 2015), and Gan et al. (2015) suggest that a more realistic velocity profile can bring visual SMS to be as temporally precise as auditory SMS, at moderate but not fast tempi. While most studies of SMS look at finger tapping, others have included synchronized circle drawing, gait, dancing, and eye movements in the context of modality-specific timing effects (e.g., Repp and Su, 2013).
Studies on auditory and visual interference also suggest auditory timing is more prominent. When concurrent auditory metronomes and visual flashing metronomes are presented out of phase, the auditory sequences interfere with visuo-motor timing, but not vice versa (Repp and Penel, 2002, 2004). The interference effect is considerably reduced with moving visual metronomes and is tied to training and experience, as the auditory dominance is stronger in musicians and weaker in video gamers (Hove et al., 2013a). Similarly, auditory cues can improve visual temporal discrimination (Morein-Zamir et al., 2003; Parise and Spence, 2008). This effect only holds for the temporal domain, however, as the visual system dominates when auditory and visual stimuli conflict in the spatial domain; spatial dominance in the visual modality is apparent in the well-known "ventriloquist effect" (Vroomen et al., 2001).
ROLE OF ERROR CORRECTION IN TIMING
Error correction is a crucial component of any SMS task. By inducing perturbations and errors in SMS, we can gain insight into the underlying timing mechanisms. A common method to induce errors in an SMS task is to occasionally perturb an otherwise isochronous metronome (Repp, 2000, 2001a; Praamstra et al., 2003; Repp and Keller, 2004; Jang et al., 2016; Jantzen et al., 2018). Error correction in SMS can be broken down into two distinct mechanisms: a phase-correction mechanism for correcting errors in relative phase, and a period-correction mechanism that corrects changes to the internal timekeeper period (Repp, 2001b; Repp and Keller, 2004). Period correction requires conscious awareness of the error, as it involves a conscious updating of the internal rhythm; a phase correction, in contrast, can happen even with errors too small for conscious awareness and does not involve updating the central timekeeper period, and so is considered a more peripheral process than period correction (Repp, 2001b, 2005). An error corrected under the phase-correction mechanism is typically a gradual adjustment that occurs over several beats, while an error corrected under the period-correction mechanism will be evidenced by a pronounced correction, usually followed by a more gradual phase-correction-like pattern after the initial large correction (Repp, 2001b).
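A common formalization of the phase-correction mechanism is a first-order linear correction, in which each tap removes a fixed fraction of the preceding asynchrony. The following sketch (with purely illustrative parameter values) simulates the gradual, multi-beat recovery from a phase-shift perturbation described above:

```python
import numpy as np

def simulate_phase_correction(alpha=0.3, n_taps=40, perturb_at=20,
                              shift_ms=50.0, noise_ms=5.0, seed=0):
    """First-order linear phase correction: each tap removes a fraction
    alpha of the previous asynchrony. A phase-shifted metronome onset
    injects an error that then decays geometrically over several taps."""
    rng = np.random.default_rng(seed)
    asyn = np.zeros(n_taps)           # tap-minus-metronome asynchronies (ms)
    for n in range(1, n_taps):
        asyn[n] = (1 - alpha) * asyn[n - 1] + rng.normal(0, noise_ms)
        if n == perturb_at:           # metronome onset arrives 50 ms late,
            asyn[n] -= shift_ms       # so this tap is suddenly 50 ms early
    return asyn

asyn = simulate_phase_correction()
print(np.round(asyn[18:28], 1))  # error appears at tap 20, then shrinks ~30%/tap
```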
While error correction has been well documented in auditory SMS, relatively little work has investigated error correction in visual SMS. In a recent study comparing error correction for auditory and flashing visual sequences, we observed error corrections for perturbations in the auditory condition that were modulated by the direction of the perturbations, but no such modulation was found for perturbations in the visual condition (Comstock and Balasubramaniam, 2017a). This suggests the visual system may not engage the same SMS timing mechanisms as the auditory system. Additional evidence for a discrepancy in error correction for auditory and visual sequences can be gleaned from the autocorrelation structure of adjacent taps: unlike auditory SMS, tapping with visual flashes does not produce a negative lag-1 autocorrelation that can indicate the presence of a robust central timekeeping and error-correction mechanism (Hove and Keller, 2010). However, visuo-motor synchronization with moving and apparent-motion metronomes does produce a negative lag-1 autocorrelation, suggesting that a moving visual metronome may engage error correction (Hove and Keller, 2010); note that a negative lag-1 autocorrelation does not necessarily stem from error correction and can arise from other timing factors (e.g., Wing and Kristofferson, 1973). It remains unclear whether error correction will occur with perturbations in moving visual metronomes or with larger phase perturbations in a flashing visual metronome.
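The cited Wing and Kristofferson (1973) result is easy to reproduce in simulation: a central timekeeper plus independent motor delays yields produced intervals I_n = T_n + M_{n+1} - M_n, whose lag-1 autocorrelation is negative even without any error correction. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def wk_intervals(n=2000, mean_ms=500.0, sd_timer=20.0, sd_motor=10.0, seed=0):
    """Wing-Kristofferson two-level model: produced intervals are
    I_n = T_n + M_{n+1} - M_n, where T is the central timekeeper interval
    and M is the peripheral motor delay."""
    rng = np.random.default_rng(seed)
    t = rng.normal(mean_ms, sd_timer, n + 1)   # timekeeper intervals
    m = rng.normal(0.0, sd_motor, n + 2)       # motor delays
    return t[:n] + m[1:n + 1] - m[0:n]

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# Theory: lag-1 r = -sd_motor^2 / (sd_timer^2 + 2*sd_motor^2) ~= -0.17 here.
print(lag1_autocorr(wk_intervals()))
```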
UNDERLYING PHYSIOLOGY OF THE AUDITORY AND VISUAL TIMING SYSTEM
Brain Networks Involved in Timing Activity
Investigating the neural underpinnings of auditory and visual timing is a massive undertaking due to the many different timing subprocesses and tasks, including SMS, interval timing, rhythm perception, timing recall, and time perception. Excellent reviews of the brain mechanisms involved in various timing activities include: a review of neural activity in music production (Zatorre et al., 2007); a review of neural activity involved in time perception (Wiener et al., 2010); and an overview of neural activation in SMS as part of a larger review of SMS (Repp and Su, 2013). This body of work consistently demonstrates that temporal processing across tasks and sensory modalities relies heavily on the motor system. This motor network includes the supplemental motor area (SMA), primary motor cortex, lateral premotor cortex, anterior cingulate, basal ganglia, and cerebellum (Repp and Su, 2013). Auditory rhythm perception activates the motor system and is closely linked to movement (Janata et al., 2012; Iversen and Balasubramaniam, 2016; Ross et al., 2016a,b). The SMA is also strongly implicated in motor timing (Coull et al., 2016; Merchant and Yarrow, 2016), and along with the pre-SMA could be a hub of motor timing (Schwartze et al., 2012). Subcortical regions are especially active during sub-second time perception (Wiener et al., 2010), sub-second interval timing (Repp and Su, 2013), and rhythm timing (Grahn and Rowe, 2009; Wiener et al., 2010; Coull et al., 2011; Teki et al., 2011; Hove et al., 2013b). There is evidence of a dorsal auditory stream connecting the auditory cortex to the motor cortex through the posterior parietal cortex that plays a role in rhythm perception (Patel and Iversen, 2014; Ross et al., 2018). Interestingly, this dorsal stream is also implicated in visual and tactile rhythm perception (Araneda et al., 2017; Rauschecker, 2017), adding to the idea of a common timing system tied to the motor system. Further evidence of a common timing system is found in a study of auditory and visual synchronization that dissociated modality and tapping stability: putamen activation was highest when synchronizing to auditory beeps, moderate with a frequency-modulated siren and with a moving visual metronome, and lowest with a flashing visual metronome, closely paralleling behavioral performance (Hove et al., 2013b).
While visual SMS activates many of the same motor regions as auditory SMS (Hove et al., 2013b; Araneda et al., 2017), some activations are specific to the visual system. The visual cortex shows activity related to interval timing that follows the expected scalar property, such that the size of timing errors measured in the visual cortex scales in proportion to the size of the interval being timed, as predicted by Weber's law (Shuler, 2016). Additionally, Zhou et al. (2014) found evidence that visual feature processing in the early visual cortex can contribute to duration perception, furthering the notion that at least some timing information is processed independently within the visual cortex. Additionally, in visual rhythm perception, the visual cortex plays a role in predicting rhythmic onsets (Comstock and Balasubramaniam, 2017b, 2018). The additional activations in visual timing tasks, taken together with behavioral results, suggest that the reduced timing accuracy of visual processing compared to the auditory system may be due to the additional computational demands of processing the higher complexity of visual spatial information along with temporal information.
Role of Cortical Oscillations in Timing Encoding and Spreading Information Across the Brain
In addition to looking at the networks and regions involved in temporal processing, a growing body of work shows the role of cortical oscillations in encoding timing across multiple frequency bands. Cortical oscillations play a role in connecting regions across the brain, with higher frequencies utilized for localized interaction and lower frequencies for longer-range interaction (Sarnthein et al., 1998; Von Stein and Sarnthein, 2000). This pattern of oscillations is used to connect and calibrate disparate timing systems in the brain (Gupta and Chen, 2016). Oscillations relating to timing appear to arise from multiple context-specific timing systems in the brain (Wiener and Kanai, 2016). The question is then how these functionally and anatomically disparate systems integrate and interact. It appears that oscillations from different timing systems are coordinated within the striatum (Matell and Meck, 2004; Gu et al., 2015).
Beta band activity (∼20 Hz) is tied to the motor system, and several studies indicate beta's role in predicting the timing of auditory rhythms (Fujioka et al., 2009, 2012, 2015). Additionally, beta activity reflects the top-down imposition of metrical structure on auditory rhythms. Recently, beta activity has also been linked to timing predictions within the visual system in response to visual rhythms (Comstock and Balasubramaniam, 2017b).
With rhythm perception, evidence shows that internal oscillations arise to match the fundamental frequency of the rhythm and the frequency of the meter (Nozaradan et al., 2011), as well as the frequency of imagined rhythms (Okawa et al., 2017). These findings align with Neural Resonance Theory, which posits that neural rhythms synchronize to auditory rhythms, and that these neural rhythms can influence attention, expectancy, and motor planning (Large and Snyder, 2009). As yet, it is unclear whether this same neural resonance to meter would arise with visual stimuli.
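A toy way to picture neural resonance is a single phase oscillator pulled toward a periodic stimulus. The sketch below is only a caricature of the theory, with invented parameters; it is not a model of actual neural dynamics:

```python
import numpy as np

def entrain(stim_hz=2.0, nat_hz=2.1, coupling=1.0, secs=20, fs=200):
    """One phase oscillator nudged toward a periodic stimulus:
    dphi/dt = 2*pi*f_nat + K*sin(phi_stim - phi). With coupling strong
    enough relative to the detuning, the oscillator phase-locks 1:1."""
    t = np.arange(0, secs, 1 / fs)
    phi_stim = 2 * np.pi * stim_hz * t
    phi = np.zeros_like(t)
    for i in range(1, len(t)):
        dphi = 2 * np.pi * nat_hz + coupling * np.sin(phi_stim[i - 1] - phi[i - 1])
        phi[i] = phi[i - 1] + dphi / fs     # Euler integration step
    return (phi_stim - phi + np.pi) % (2 * np.pi) - np.pi  # relative phase

rel = entrain()
print(np.round(rel[-5:], 2))  # settles near a constant relative phase: locking
```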
Neural Underpinnings of Error Correction
The neural correlates of error correction reveal more evidence for multiple interacting and overlapping timing mechanisms. Error detection of timing perturbations in auditory SMS tasks modulates the P1, N1, and N2 auditory ERP components, depending on both the size and direction of the perturbation (Praamstra et al., 2003; Jang et al., 2016). Jantzen et al. (2018) also found a theta response stemming from the pre-SMA and anterior cingulate for error detection, and an increase in theta coupling between the SMA and the motor cortex for late perturbations. In visual error detection, the visual P1 component is reduced in latency only for large late perturbations (Comstock and Balasubramaniam, 2017a). Each of these instances shows cortical activation specific to a type of perturbation, although these effects are generally limited to larger perturbations.
Smaller perturbations that elicit a phase-correction response are believed to be driven primarily by subcortical mechanisms. Applying repetitive TMS to downregulate motor and premotor cortices produced no effect on phase correction (Doumas et al., 2005), whereas phase correction was impaired by repetitive TMS to the cerebellum (Bijsterbosch et al., 2011). This fits with the suggestion that phase correction is primarily subcortical, based on evidence from how rapidly the movement trajectory changes after a perturbation (Hove et al., 2014). A possible network that exhibits the rapid timing required for the phase-correction response is a cortico-striatal circuit connecting the cerebellum to the SMA-striatal network via the thalamus (Kotz et al., 2016).
The data on the neural underpinnings of error correction suggest multiple timing systems, each with specific roles, yet able to coordinate for rapid responses. Commensurate with this idea is work suggesting that the basal ganglia integrate various timing systems through oscillation comparators (Matell and Meck, 2004; Gu et al., 2015). The limited data on visual error correction, however, leave open how well this network can interface with the visual timing systems.
EVIDENCE THE AUDITORY SYSTEM HAS PRIVILEGED ACCESS TO TIMING SYSTEMS
Considering the auditory system's timing advantage along with the prominence of the motor system in timing processing, we suggest that the auditory system's advantage in timing stems from its stronger coupling to the motor system. Auditory timing tasks, compared to visual timing tasks, often yield more activation in motor structures, such as the SMA and premotor cortex (Jäncke et al., 2000). Even when visual SMS tasks employed the modality-appropriate moving visual metronomes, audiomotor synchronization with auditory beeps yielded greater activation in the putamen (Hove et al., 2013b). Likewise, priming a visual rhythm with a similar auditory rhythm resulted in increased putamen activation compared to a visual rhythm alone, while a visual rhythm yielded no priming effect on an auditory rhythm (Grahn et al., 2011). The finding that the increased visual synchronization ability provided by a bouncing ball does not transfer to purely perceptual rhythm perception provides further evidence of the role of motor coupling in timing tasks (Silva and Castro, 2016). Additionally, the privileged link between the auditory and motor systems can be seen in Parkinson's disease, a disorder that impairs movement due to cell loss within the basal ganglia (Davie, 2008). For example, Parkinsonian gait can improve when cued by an external rhythm, and these interventions are more effective when synchronizing with auditory metronomes than with flashing visual metronomes (Rochester et al., 2005; Arias and Cudeiro, 2008).
Visual timing activities recruit timing centers within the visual system that, based on behavioral results, are less precise than the auditory timing system. In Jäncke et al. (2000), visual timing tasks resulted in increased activity in the right superior cerebellum, vermis, and right inferior parietal lobe compared to auditory timing tasks. Visual timing tasks also recruit areas MT, V5, and the superior parietal lobe, tying into the dorsal visual stream (Jantzen et al., 2005), and visual rhythm perception induces increased beta activity at event onsets arising from the visual cortex (Comstock and Balasubramaniam, 2017b). It is unclear whether these timing activations in the visual system are the result of compensating for a weaker connection to the motor timing system. It may be that the temporal processing in the visual system reflects additional processing of visual information required to interface with the motor system.
While differences in coupling strength to the motor system are crucial for modality timing differences, other factors are likely involved. To that end, it is clear that the visual system is able to pick out high-speed temporal information; for example, V1 will phase-lock its input/output to visual flashing stimuli of up to 100 Hz (Williams et al., 2004). This suggests that such entrainment is not easily transferred to the systems involved in time/rhythm perception, especially at the time scales usually involved in rhythm perception, indicating that the issue may be one of translation. A likely place for that translation would be within the dorsal pathway, which has been found to have neurons with high temporal resolution in macaques, with higher temporal resolution in the auditory dorsal stream (Rauschecker, 2017). If the temporal resolution of the auditory dorsal stream is indeed higher than that of the visual dorsal stream, this may explain why the visual system cannot synchronize at the higher frequencies achieved by the auditory system. Of course, it cannot be ruled out that the difference in temporal resolution is due to different levels of timing precision available to the dorsal stream. Reduced timing precision in the visual stream may be caused by the increased processing required by the richer sensory input of the visual system compared to the auditory system. Indeed, greater processing requirements and longer processing times may help to account for the inability of the visual system to support synchronization at the higher tempos allowed by the auditory system.
ROLE OF THE VESTIBULAR-TACTILE-SOMATOSENSORY SYSTEM
Another link between the auditory and motor systems is that auditory rhythm perception may be tied to the vestibular-tactile-somatosensory (VTS) system, which is important for movement and dance, and is therefore closely tied to the motor system and attuned to timing (Todd and Lee, 2015). In addition to its ties to movement, the VTS system is clearly tied to the auditory system with regard to rhythm perception (Phillips-Silver and Trainor, 2005, 2007, 2008; Trainor et al., 2009), and through common neural activation (Araneda et al., 2017). These ties between the auditory and VTS systems may be an additional factor in the dominance of the auditory system in the temporal domain.
Since VTS rhythms are ubiquitous in fetal life through the mother's gait, heart rate, breathing, etc., and since these networks are tied into auditory rhythm systems, it is likely that the VTS system is heavily tied into the timing systems used in auditory rhythm perception and in motor rhythm production (Provasi et al., 2014). This is further strengthened by the fact that movement and rhythms are linked, and proprioception (part of the VTS system) plays a large role in the perception of rhythms that is tied into auditory rhythm perception and production. Interactions between the VTS system and visual rhythm perception remain mostly unexplored at this point, however, so it is unclear how much this system plays a supramodal role in the timing involved in rhythm perception/production, or whether it is only tied to the auditory and motor rhythm timing systems. Further research in this area is needed to answer these questions.
EVOLUTIONARY ORIGINS OF SENSORIMOTOR SYNCHRONIZATION
In an evolutionary context, it makes sense that the auditory and motor systems would be tightly interconnected. First, rhythms in language are critical for both perception and production and may be a driver of SMS ability (Patel, 2006). Beyond language, matching movement to sound is a necessary result of human evolution that allows for the social and cultural inclinations of humanity via music (Hagen and Bryant, 2003; Brown and Jordania, 2013). Dance is also tightly connected with music and culture and can provide a further explanatory account of human SMS capability and the connection between the motor and auditory systems (Fitch, 2016; Iversen, 2016; Laland et al., 2016; Ravignani and Cook, 2016).
Beyond humans, common adaptations appear to increase SMS ability in several non-human species capable of some level of audio-motor entrainment, such as parrots, bonobos (Large and Gray, 2015), and sea-lions (Cook et al., 2013). Although some animals can exhibit rhythmic capabilities, some remarkably well, like Ronan the sea-lion (Rouse et al., 2016), they are in some ways limited compared to humans (Patel and Iversen, 2014; Merker et al., 2015). Even though there are animals that can entrain to auditory rhythms, only humans appear to be naturally inclined to do so (Wilson and Cook, 2016). Finally, there is some evidence that non-human primates are able to synchronize their movements to predictable visual stimuli (Takeya et al., 2017), yet there has been much less research on visual SMS compared to auditory SMS in non-humans.
GENERAL SYNTHESIS AND FUTURE DIRECTIONS
In looking at how the brain processes timing information, it is clear that many context-sensitive mechanisms interact and coordinate to provide optimal timing output. Much of this interaction appears to happen within the motor system and likely involves subcortical systems that coordinate the various mechanisms. Current research suggests that oscillations play a key role in coordinating the interactions among various timing circuits. However, it is not clear whether the various timing systems compute measures of time in the same way. Considering that the auditory and visual systems take in very different kinds of information and use it in different ways (audition has stronger temporal precision, while vision has a strong spatial bias), it seems likely that the timing mechanisms themselves may differ greatly.
Consider the difference between extracting timing information from a moving visual rhythm and from an auditory rhythm. When entraining to auditory stimuli, prediction of the next event involves encoding the interval between two events and using that interval to anticipate the next onset. With a moving visual rhythmic stimulus, that interval information is present, but so is information on position, velocity, and acceleration, which means predictions of the next onset can be made as part of a continuous process. The fact that, even with this additional information, visual SMS is at best equal to auditory SMS except at fast speeds raises the question of why visual SMS is less capable. One possible explanation is that the visual system has to encode much more information, and encoding that information into a form usable by the motor network may require extra processing. This may explain the timing activity found within the visual cortex during visual SMS: even with a simple flashing metronome, a measure of timing activity originates from the visual cortex. Together with the reduced temporal ability observed with visual flashing metronomes, this suggests a translation problem in harnessing a system that is not optimized for temporal processing the way the auditory system is, resulting in a weaker connection to the motor timing network.
Different timing systems likely employ varying mechanisms and computational principles appropriate to the time scale, cellular properties, and general needs of each system. Existing computational models that capture a range of these phenomena across levels include pacemaker-accumulator models, multiple-oscillator models, memory trace models, random process models, ramping activity models, delay line models, and state space trajectory-based models (Addyman et al., 2016; Hass and Durstewitz, 2016). Such models help illustrate the variety of ways to process timing information within a neural network; a minimal sketch of one of them is given below. Evidence also suggests that cells with specific timing mechanisms exist in the basal ganglia and cerebellum (Lusk et al., 2016), yet other areas with multiple functional properties also process timing, such as the prefrontal cortex (Hyman et al., 2012) and hippocampus (MacDonald et al., 2011). Areas with multiple functions, such as the hippocampus and prefrontal cortex, will then likely take a different computational approach than more specialized timing structures.
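As an illustration of the first model family in this list, below is a minimal pacemaker-accumulator sketch in Python: a noisy pacemaker emits pulses, an accumulator counts them, and the count is read out as a duration estimate. The pulse rate and noise level are illustrative assumptions.

```python
import random

def pacemaker_accumulator(duration_s, rate_hz=20.0, rate_jitter=0.2):
    """Estimate an elapsed duration by counting noisy pacemaker pulses.

    Pulses arrive at roughly `rate_hz`; multiplicative noise on each
    inter-pulse interval makes the accumulated count, and therefore the
    duration estimate, vary from trial to trial.
    """
    elapsed, count = 0.0, 0
    while elapsed < duration_s:
        # Each inter-pulse interval is perturbed by Gaussian rate noise.
        noisy_rate = rate_hz * random.gauss(1.0, rate_jitter)
        elapsed += abs(1.0 / noisy_rate) if noisy_rate != 0 else duration_s
        count += 1
    # Reading out the count against the assumed mean rate gives the estimate.
    return count / rate_hz

# Repeated estimates of a 1-second interval scatter around 1.0 s.
print([round(pacemaker_accumulator(1.0), 3) for _ in range(5)])
```

A characteristic feature of this model family is that estimate variability grows with the interval being timed, a qualitative property shared with human interval timing.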
Given that there are multiple ways to process timing, and that many forms of cognition require some form of temporal processing, it would be surprising to find that timing mechanisms are not ubiquitous in the brain. This raises an important question: if many different timing mechanisms are available for a given task, and only one output is possible (through action), how do neural systems arrive at the best timing information to use? A strong candidate explanation would implicate a mechanism that supports integration through an optimal Bayesian process (Hass and Durstewitz, 2016). Evidence from multimodal sensory integration suggests that when timing information is presented from multiple modalities, the modalities are combined and weighted by their reliability in a Bayesian-optimal fashion (Ernst and Banks, 2002). Since most timing-related activity requires motor output, we would expect the source of timing to be utilized to be determined before, or as, that timing information becomes available to the motor system. This makes the case that striatal cells operating as a comparator may be the seat of the Bayesian process that determines the optimal timing source for motor timing.
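A minimal sketch of the reliability-weighted combination reported by Ernst and Banks (2002) is given below: two independent Gaussian estimates of the same interval are fused with weights proportional to their inverse variances. The numeric values are illustrative assumptions, not data from that study.

```python
def fuse_estimates(mu_a, var_a, mu_v, var_v):
    """Bayes-optimal fusion of two independent Gaussian cues.

    Each cue is weighted by its reliability (inverse variance); the fused
    variance is never larger than that of the more reliable cue alone.
    """
    weight_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    fused_mu = weight_a * mu_a + (1.0 - weight_a) * mu_v
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return fused_mu, fused_var

# Illustrative interval estimates in milliseconds: a precise auditory cue
# (600 ms, variance 100) against a noisier visual cue (640 ms, variance 400).
mu, var = fuse_estimates(mu_a=600.0, var_a=100.0, mu_v=640.0, var_v=400.0)
print(mu, var)  # 608.0, 80.0: the fused estimate leans toward the auditory cue
```

The same arithmetic could, in principle, arbitrate among multiple internal timing sources rather than sensory modalities, which is the role the text speculatively assigns to a striatal comparator.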
Since there is some disparity in the amount of work on auditory versus visual SMS error correction, there is a need to further study error correction capabilities within visual SMS. It is currently unknown whether visual error correction can be as fast as auditory error correction when dealing with modality-appropriate stimuli, such as a moving visual sequence or a bouncing ball. Another major area of needed work is understanding the mechanism by which the Bayesian-optimal timing source is chosen in cases where multiple sources are available. If timing mechanisms are as ubiquitous in the brain as the evidence suggests, then there may be a variety of ways these mechanisms interface with the motor timing system to produce a single output. Further imaging and computational work is required to understand this mechanism.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
FUNDING
This work was partially supported by a grant from the National Science Foundation BCS-1460633. | 2018-07-18T13:03:30.349Z | 2018-07-18T00:00:00.000 | {
"year": 2018,
"sha1": "80f786fa4c8fa44ebd617731eeee9f60abfdf67f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncom.2018.00053/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80f786fa4c8fa44ebd617731eeee9f60abfdf67f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
3282912 | pes2o/s2orc | v3-fos-license | Monochorionic diamniotic twins with centrally located and closely spaced umbilical cord insertions in the placenta
Key Clinical Message When should monochorionic diamniotic (MCDA) twins with specific cord patterns be delivered? Although there is no clear evidence supporting an earlier delivery (before 36 weeks of gestation) in MCDA twins, an earlier delivery might prevent intrauterine death or neuromorbidity in MCDA twins with specific cord patterns.
Introduction
The condition in which twins share a single gestational sac is defined as a monochorionic pregnancy. The presence or absence of the amnion is determined after 7 weeks of gestation [1]. A monochorionic diamniotic (MCDA) placenta is defined as a double amniotic cavity with a single placenta and one umbilical cord insertion (UCI) in each cavity. In MCDA twins, vascular anastomoses are nearly always present and are thought to be responsible for the development of complications in pregnancy, such as twin-to-twin transfusion syndrome (TTTS) and damage to the surviving twin in the event of the intrauterine death (IUD) of its cotwin [2,3]. The cords are most commonly located at a distance from each other, unlike what is observed in the monochorionic monoamniotic (MCMA) twin placenta. Here, however, we present a rare type of MCDA placenta with two UCIs that were located centrally and in close proximity, as observed in the MCMA twin placenta.
Case Report
A 34-year-old Japanese woman in her second spontaneous pregnancy was referred to Takeda General Hospital at 17 weeks of gestation and diagnosed with a MCDA twin pregnancy. She was admitted to our hospital at 30 weeks of gestation for management of potential premature delivery. She was regularly monitored by conventional ultrasound to assess growth and amniotic fluid volume, and by Doppler ultrasound of the umbilical artery (Table 1). No TTTS complications were observed during hospitalization. The final routine monitoring before delivery was performed at 35 weeks and 5 days of gestation; the maximum vertical pockets of the MCDA twins were observed to be 4.2 and 3.6 cm, respectively, with cardiotocography showing reassuring fetal status patterns for both. However, she complained of diminished fetal movement at 35 weeks and 6 days of gestation (approximately 12 h after the final confirmation of normal cardiac sounds for both twins by fetal Doppler ultrasonography), and the IUD of one fetus was confirmed by ultrasonography. Emergency cesarean section was performed, and the patient delivered a 2306 g surviving twin male infant and a 1994 g dead twin male infant without any definite anomalies. No autopsy was performed, as consent could not be obtained from the parents. The surviving infant's hemoglobin was 13.9 g/dL, and ultrasonography of the head revealed no abnormal findings at birth. Although he showed no cardiac or renal dysfunction after birth, he was diagnosed with large cystic periventricular leukomalacia (PVL) on the basis of magnetic resonance imaging findings at 13 days after birth (Fig. 1). His placenta was peculiar in that both UCIs were observed to be centrally located and in close proximity on the placenta (Fig. 2A). We did not observe any specific placental or umbilical cord findings during the fetal period. The placenta was 24 × 19 cm and weighed 778 g. The umbilical cords were of unusual thickness and 45 and 48 cm in length, respectively. Both umbilical cords were composed of double arteries and a single vein, with neither wrapped around the fetus's neck. There was no overcoiling or undercoiling of the umbilical cord vessels. After delivery, placental injection studies using milk and indigotindisulfonate sodium were performed. The vein and arteries of both umbilical cords were cannulated successively with a 3.5-mm umbilical catheter, and milk and indigotindisulfonate sodium were injected into the umbilical vein of the surviving infant and the umbilical arteries of the dead infant, respectively. The presence of several dynamic superficial venovenous (VV) and arterio-arterial (AA) anastomoses was confirmed (Fig. 2B). A cross-section of the placenta showed no calcification, hematoma, or infarction.
Discussion
This was our first experience of a MCDA placenta with both UCIs located centrally and in close proximity, as in a MCMA twin placenta. In this case, placental injection studies revealed the presence of several dynamic superficial VV and AA anastomoses. These factors may have resulted in the development of large cystic PVL in the surviving infant at 13 days after birth. It is likely that fetal blood flow through the intertwined vascular anastomoses is dynamically variable, and that the net result of the combination of anastomosis type could well be quite unpredictable. Lewi [3] reported that multiple vascular anastomoses may indeed cause a transitory cardiovascular imbalance that is severe enough to decrease brain perfusion and cause cerebral lesions without resulting in an IUD or clinically evident TTTS. Moreover, Hillman et al. [4], in a systematic review and meta-analysis of the effects on the surviving twin of single fetal death, reported that monochorionic twins were 4.8 times more likely to experience neurodevelopmental morbidity. It is speculated that the transfusional effects associated with single fetal death in monochorionic twins are associated with transient hemodynamic fluctuations leading to a predisposition to ischemic white matter changes [5].
The number of anastomoses in a monochorionic twin placenta has been reported to be correlated with the distance separating the cord insertion sites [6]. Kellow and Feldstein [7] suggested that monochorionic twins face unique potential complications related to their two cord insertions, such as a higher incidence of velamentous insertions and the intertwining of vascular connections in the single shared placenta. On the other hand, Hack et al. [8] reported that no associations existed between mortality and anastomosis type, or between type and distance between the UCIs or placental sharing, in monoamniotic twins. The distance between the cord insertions does not seem to have any particular association with the four types of anastomoses known to form (AA shunt, VV shunt, parenchymal, and component) [6]. Furthermore, it has been speculated that most monochorionic placentas have more than one type of anastomosis [6]. In this case, we considered that VV and AA transfusions in placentas with two centrally located UCIs may reduce the discordance in birthweight between the twins owing to equal placental sharing. These vascular anastomoses have been used to explain the consequences to the surviving twin in cases of the IUD of its cotwin, even if the hemodynamics were balanced up to that point [6]. Ultrasound examination, particularly after 8 weeks of gestation, can reliably determine chorionicity and amnionicity in the first trimester [1], significantly benefiting fetal risk assessment and subsequent management decisions. Kaneko et al. [9] suggested that the MCDA twin score, which is composed of five variables (discordancy in amniotic fluid, discordancy in birthweight, abnormal cord insertion, hydrops fetalis, and abnormal fetal heart rate monitoring), had a higher likelihood ratio for predicting poor outcomes than any single variable or combination of the five variables. Increased surveillance and referral for TTTS have improved outcomes in MCDA pregnancies [10]. However, to the best of our knowledge, there have been no reports describing in detail MCDA placentas with both UCIs located centrally and in close proximity throughout the gestational period. Thus, the importance of these findings remains controversial. If specific cord patterns, as in our case, are observed during early pregnancy, more careful and regular evaluation by Doppler ultrasound may be recommended.
Finally, our experience raises the question of when to deliver MCDA twins with specific cord patterns (velamentous or central). Hack et al. [7] suggested that delivery of MCDA twins with a single placenta and vascular anastomoses should be planned at 36 weeks of gestation, and Cheong-See et al. [11] reported that there is insufficient evidence to recommend routine delivery before 36 weeks of gestation in MCDA twins. Although there is no clear evidence supporting an earlier delivery (before 36 weeks of gestation) in MCDA twins, an earlier delivery might prevent an IUD or neuromorbidity in MCDA twins with specific cord patterns. Clinical expectations can only be formulated as more cases are encountered, making further study essential to the development of effective management strategies for MCDA twins with specific cord patterns during the perinatal period.
Consent
Written informed consent to report this case study was obtained from the parents of this infant. | 2018-04-03T05:36:34.826Z | 2018-01-03T00:00:00.000 | {
"year": 2018,
"sha1": "d9421e3707b8996db337cee09f6aa34297ba133c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.1332",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9421e3707b8996db337cee09f6aa34297ba133c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5984328 | pes2o/s2orc | v3-fos-license | Molecular profiling of advanced solid tumors and patient outcomes with genotype-matched clinical trials: the Princess Margaret IMPACT/COMPACT trial
Background The clinical utility of molecular profiling of tumor tissue to guide treatment of patients with advanced solid tumors is unknown. Our objectives were to evaluate the frequency of genomic alterations, clinical “actionability” of somatic variants, enrollment in mutation-targeted or other clinical trials, and outcome of molecular profiling for advanced solid tumor patients at the Princess Margaret Cancer Centre (PM). Methods Patients with advanced solid tumors aged ≥18 years, good performance status, and archival tumor tissue available were prospectively consented. DNA from archival formalin-fixed paraffin-embedded tumor tissue was tested using a MALDI-TOF MS hotspot panel or a targeted next generation sequencing (NGS) panel. Somatic variants were classified according to clinical actionability, and an annotated report was included in the electronic medical record. Oncologists were provided with summary tables of their patients’ molecular profiling results and available mutation-specific clinical trials. Enrolment in genotype-matched versus genotype-unmatched clinical trials following release of profiling results and response by RECIST v1.1 criteria were evaluated. Results From March 2012 to July 2014, 1893 patients were enrolled and 1640 tested. After a median follow-up of 18 months, 245 patients (15 %) who were tested were subsequently treated on 277 therapeutic clinical trials, including 84 patients (5 %) on 89 genotype-matched trials. The overall response rate was higher in patients treated on genotype-matched trials (19 %) compared with genotype-unmatched trials (9 %; p = 0.026). In a multi-variable model, trial matching by genotype (p = 0.021) and female gender (p = 0.034) were the only factors associated with increased likelihood of treatment response. Conclusions Few advanced solid tumor patients enrolled in a prospective institutional molecular profiling trial were treated subsequently on genotype-matched therapeutic trials. In this non-randomized comparison, genotype-enrichment of early phase clinical trials was associated with an increased objective tumor response rate. Trial registration NCT01505400 (date of registration 4 January 2012). Electronic supplementary material The online version of this article (doi:10.1186/s13073-016-0364-2) contains supplementary material, which is available to authorized users.
Keywords: Molecular profiling, DNA sequencing, Clinical trials, Solid tumors, Precision medicine
Background
Molecular profiling can provide diagnostic, prognostic, or treatment-related information to guide cancer patient management. Advances in next-generation sequencing (NGS) have enabled multiplex testing to overcome the constraints associated with sequential single-analyte testing [1-3]. Large-scale research projects have elucidated the genomic landscapes of many cancers but have provided limited insight into the clinical utility of genomic testing. Our aim was to evaluate whether targeted DNA profiling improves outcomes for patients assigned to clinical trials based on knowledge of actionable somatic mutations.
At the Princess Margaret Cancer Centre (PM), the Integrated Molecular Profiling in Advanced Cancers Trial (IMPACT) and Community Molecular Profiling in Advanced Cancers Trial (COMPACT) are prospective studies that provide molecular characterization data to oncologists to match patients with advanced solid tumors to clinical trials with targeted therapies. Here, we report the frequency of alterations, clinical "actionability" of the somatic variants, clinical trial enrollment, and outcome based upon molecular profiling results.
Patient cohort
For IMPACT, patients with advanced solid tumors treated at PM were prospectively consented for molecular profiling during a routine clinical visit. For COMPACT, patients with advanced solid tumors treated at other hospitals in Ontario were referred to a dedicated weekly clinic at PM for eligibility review, consent, and blood sample collection. Eligible patients had advanced solid tumors, were aged ≥18 years, had Eastern Cooperative Oncology Group (ECOG) performance status ≤1, and had available formalin-fixed paraffin-embedded (FFPE) archival tumor tissue. The University Health Network Research Ethics Board approved this study (#11-0962-CE). Enrollment for IMPACT began on 1 March 2012 and for COMPACT on 16 November 2012 and ended on 31 July 2014 for this analysis.
Specimens
DNA was extracted from sections of FFPE tumor specimens from biopsies or surgical resections. If multiple archival tumor specimens were available, the most recent archival FFPE specimen was reviewed, with a minimum acceptable tumor cellularity of 10 %. Tumor regions were isolated by 1-2 × 1 mm punch from FFPE blocks or manual macrodissection of unstained material from 15-20 slides. FFPE samples were deparaffinized, cells lysed with proteinase K, and DNA extracted using the QIAamp DNA FFPE Tissue Kit (Qiagen, Germantown, MD, USA). DNA was quantified using the Qubit dsDNA Assay kit on the Qubit 2.0 Fluorometer (ThermoFisher Scientific, Waltham, MA, USA).
Participants provided a peripheral blood sample (5 mL in EDTA-coated tubes) as a source of matched germline DNA. DNA was extracted using either standard manual phenol/chloroform extraction methods or automated extraction (MagAttract DNA Mini M48 kit; Qiagen). Patients were offered return of pathogenic germline results at the time of consent and asked to identify a family member delegate who could receive results on their behalf if required.
Molecular profiling assays
All testing was performed in a laboratory accredited by the College of American Pathologists (CAP) and certified to meet Clinical Laboratory Improvement Amendments (CLIA). Three molecular profiling assays were used over the study period: a custom multiplex genotyping panel on a matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass-spectrometry platform (MassARRAY, Agena Bioscience, San Diego, CA, USA) to genotype 279 mutations within 23 genes (Additional file 1: Table S1); the TruSeq Amplicon Cancer Panel (TSACP, Illumina) on the MiSeq sequencer (Illumina) covering regions of 48 genes (Additional file 1: Table S2); and the Ion AmpliSeq Cancer Panel (ASCP, ThermoFisher Scientific) on the Ion Proton sequencer (ThermoFisher Scientific) covering regions of 50 genes (Additional file 1: Table S3). For more in-depth methodology on molecular profiling assays, including sequence alignment and base calling, see Additional file 1: Supplementary Methods.
Variant assessment and classification
Variants were assessed and classified according to the classification scheme of Sukhai et al. [4]. Briefly, a five-class scheme was used to sort variants according to actionability (defined as providing information on prognosis, prediction, diagnosis, or treatment), recurrence of variants in specific tumor sites, and known or predicted deleterious effects on protein function. Interpretation and data integration were performed using Alamut v.2.4.5 (Interactive Biosoftware, Rouen, France). Primary review, assessment, and classification of all variants were independently performed by a minimum of two assessors, followed by a third review prior to reporting; cases where assessors disagreed were resolved by group discussion.
Immunohistochemistry (IHC)
Phosphatase and tensin homolog (PTEN) IHC was performed using rabbit monoclonal Ab 138G6 (Cell Signaling Technology, Danvers, MA, USA) on a Dako platform using a dilution of 1:50 and Flex + 30 protocol. Complete absence of tumor cell staining with positive staining of surrounding tumor stroma fibroblasts/endothelial cells was used to denote PTEN deficiency [5].
Return of testing results
The molecular profiling report was included in the electronic medical record and returned to the treating oncologist. The clinical significance of profiling results was discussed with PM patients during a routine clinic visit by their treating oncologist. A PM oncologist reviewed results by telephone with patients treated at other hospitals. All oncologists were provided with regular summary tables of testing results and mutation-specific clinical trial listings available at PM. A monthly genomic tumor board was convened at PM to establish consensus treatment recommendations for patients with complex profiling results. A committee consisting of a molecular geneticist, medical geneticist, genetic counselor, and medical oncologist reviewed pathogenic germline variants before return of germline testing results. Germline results were disclosed to the patient or designate by a genetic counselor or medical geneticist.
Clinical data collection
For each patient, baseline patient and tumor characteristics, treatment regimen(s), time on treatment(s), and survival were retrieved from medical records and updated every three months. Therapeutic clinical trial enrollment was evaluated from the date of reporting molecular profiling results until 9 January 2015. Genotype-matched trials were defined as studies with eligibility criteria restricted to patients with specific somatic mutations, those with a targeted drug with enriched clinical or preclinical activity in a patient's genotype, or those with a drug that inhibited a pathway directly linked to the somatic mutation. Decisions about trial enrollment were based upon trial availability and patient or physician preference, and did not follow a pre-specified algorithm.
Statistics
Descriptive statistics were used to summarize patient characteristics, profiling results, and anti-tumor activity. Comparisons between patients with profiling results treated on genotype-matched and genotype-unmatched trials were performed using a generalized estimating equation (GEE) model [7]. A multi-variable GEE model for response included trial matching by genotype, gender, trial phase, number of lines of prior systemic therapy, investigational agent class, age, tumor type, and sequencing platform. A mixed model was used to compare time on treatment, defined as the date of trial enrollment until the date of discontinuation of investigational treatment. A robust score test was used to compare overall survival following trial enrolment between genotype-matched and genotype-unmatched groups [8]. These comparisons accounted for individual patients who were included on multiple therapeutic trials [8]. Differences with p values of < 0.05 were considered statistically significant.
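As an illustration of the kind of clustered analysis described here, the sketch below fits a GEE logistic regression for a binary response with patients as clusters, using the statsmodels Python library; the toy data frame and column names are hypothetical and do not come from the study dataset.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical toy data: one row per trial enrolment; a patient may appear
# on more than one therapeutic trial, hence the clustering by patient_id.
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4, 5, 6],
    "response":   [1, 0, 0, 1, 0, 1, 1, 0],  # objective response (1 = yes)
    "matched":    [1, 0, 0, 1, 1, 0, 1, 0],  # genotype-matched trial flag
    "female":     [1, 1, 0, 1, 1, 0, 0, 1],
})

# An exchangeable working correlation accounts for within-patient dependence
# across repeated trial enrolments, mirroring the GEE approach in the text.
model = smf.gee(
    "response ~ matched + female",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```

With the real dataset, the coefficient on the matched-trial indicator (and its robust p value) would correspond to the matching effect reported in the multi-variable model.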
Molecular profiling
Successful molecular profiling was achieved in 1640 patients (87 %): 827 (50 %) had samples tested by MALDI-TOF MS, 792 (48 %) by TSACP, and 21 (1 %) by ASCP (Fig. 1). One or more somatic mutations were detected in 341 (41 %) patients tested by MALDI-TOF MS, 583 (74 %) by TSACP, and 14 (67 %) by ASCP. Median laboratory turnaround time (sample receipt to report) was 32 days (range, 6-228 days). Of patient samples tested by MALDI-TOF MS, KRAS (21 %) was the most frequently mutated gene, followed by PIK3CA (12 %), with additional genes in the range of 1-5 % frequency. Of samples tested by the TSACP, TP53 had the highest mutation frequency (47 % of all identified mutations) (Fig. 2). We attribute the difference in mutation landscape between these two platforms to the inclusion of TP53 in the TSACP assay but not in MALDI-TOF (see Additional file 1: Supplemental Methods). Class 1 and 2 variants are the most clinically significant, with known actionability for the specific variant in the tumor site tested (Class 1) or in a different tumor site (Class 2) [4]. More than 20 % of patients with breast, colorectal, gynecological, lung, or pancreatobiliary cancers had Class 1 or 2 variants detected by TSACP or MALDI-TOF (Fig. 3).
Clinical trials and outcomes
Of the 1640 patients with molecular profiling results, 245 (15 %) were subsequently enrolled in 277 therapeutic clinical trials, including 84 (5 %) treated on 89 genotype-matched trials (Table 2). Patients with pancreatobiliary, upper aerodigestive tract, and other solid tumors were least likely to be treated on genotype-matched trials (Table 3).
The age and sex distribution, as well as the number of lines of prior systemic therapy, were similar between the genotype-matched and genotype-unmatched trial patient cohorts (Table 2). There was no difference in the proportion of trials that were genotype-matched between patients profiled on MALDI-TOF MS (61/176 [35 %]) compared with TSACP (28/101 [28 %]; p = 0.24). A higher proportion of genotype-matched trial patients were treated in phase I studies (81 %) compared with genotype-unmatched trials (46 %; p < 0.001). Genotype-matched trial patients were more likely to be treated with targeted drug combinations without chemotherapy or immunotherapy. The overall response rate was higher in patients treated on genotype-matched trials (19 %) compared with genotype-unmatched trials (9 %; p = 0.026) (Fig. 4). In multi-variable analysis, trial matching according to genotype (p = 0.021) and female gender (p = 0.034) were the only statistically significant factors associated with response (Additional file 1: Table S4). Genotype-matched trial patients were more likely to achieve a best response of any shrinkage in the sum of their target lesions (62 %) compared with genotype-unmatched trial patients (32 %; p < 0.001). There was no difference in the time on treatment (15 months versus 15 months; p = 0.12) or overall survival (16 months versus 13 months; p = 0.10) for patients treated on genotype-matched versus genotype-unmatched trials.
Germline testing
Of the patients who were asked during consent about return of incidental pathogenic germline mutations, 658/698 (94.3 %) indicated that they wished to receive these results. Two patients were identified with TP53 variants in DNA extracted from blood. The first patient was a 36-year-old woman diagnosed with metastatic breast cancer, with a prior papillary thyroid cancer at the age of 28 years, who had a heterozygous germline TP53 c.817C > T (p.Arg273Cys) pathogenic mutation. Her family history was notable for her mother, who died from cancer of unknown primary at the age of 63 years, and a maternal aunt with breast cancer at the age of 62 years. The second patient, a 77-year-old woman diagnosed with metastatic cholangiocarcinoma, had no family history of malignancy. We detected a heterozygous TP53 c.524G > A (p.Arg175His) pathogenic mutation at 15 % allele frequency in the blood that was not present in tumor. This finding is not consistent with inherited Li-Fraumeni syndrome (LFS), but may represent either clonal mosaicism or an age-related or treatment-related mutation limited to blood.
Discussion
We demonstrated that molecular profiling with mass-spectrometry-based genotyping or targeted NGS can be implemented in a large academic cancer center to identify patients with advanced solid tumors who are candidates for genotype-matched clinical trials. The rapid enrolment to our study reflects the high level of motivation of patients and their oncologists to pursue genomic testing, as previously reported by our group [9,10] and others [1, 11-13]. Disappointingly, only 5 % of patients who underwent successful molecular profiling in our study were subsequently treated on genotype-matched clinical trials, consistent with other centers. For comparison, the MD Anderson institutional genomic testing protocol matched 83/2000 (4 %) of patients [1], the SAFIR-01 breast cancer trial matched 28/423 (7 %) [14], and the British Columbia Cancer Agency Personalized Oncogenomics Trial matched 1/100 (1 %) [15]. To facilitate trial accrual, we incorporated multidisciplinary tumor board discussions, physician-directed email alerts with genotype-matched trial listings available at our institution, and individual physician summaries of profiling results. In spite of these efforts, the rate of genotype-matched clinical trial enrolment was low, due to patient deterioration, lack of available clinical trials, and unwillingness of patients to travel for clinical trial participation. There was no difference in the proportion of patients treated on genotype-matched trials between those profiled using MALDI-TOF and those profiled with the larger targeted NGS panel. This highlights how few somatic mutations are truly "druggable" through clinical trial matching, even in a large academic cancer center with a broad portfolio of phase I/II trials.
A key finding of our study is that patients in genotype-matched trials were more likely to achieve response than patients in genotype-unmatched trials. Albeit a non-randomized comparison, this finding comprises an important metric and distinguishes our molecular profiling program from other prospective studies that have not tracked longitudinal clinical outcome [1,16,17]. Von Hoff and colleagues were the first to report clinical outcome from a prospective molecular profiling (MP) study, with 18/66 (27 %) of patients who received treatment guided by MP data, including RNA-expression profiling and immunohistochemistry (IHC) or fluorescence in situ hybridization (FISH) testing for 11 markers, achieving a progression-free survival (PFS) ratio (PFS on MP-selected therapy/PFS on prior therapy) of ≥ 1.3 [18]. This study was performed prior to the era of multiplex mutation testing, and many patients received MP-guided cytotoxic therapy using biomarker data that has not been shown to influence treatment response. An analysis of 1114 patients treated on investigational clinical trials at the Clinical Center for Targeted Therapy at MD Anderson Cancer Center reported that the response rate for patients with ≥1 molecular alteration treated on trials with matched therapy was higher (27 % versus 5 %, p < 0.0001) and the time to treatment failure longer (5.2 versus 3.1 months; p < 0.0001) than for those who received non-matched therapy [19].
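The PFS ratio used as the endpoint in the Von Hoff study is a simple per-patient calculation in which each patient serves as their own control; a minimal sketch with made-up durations:

```python
def pfs_ratio(pfs_on_mp_therapy_months, pfs_on_prior_therapy_months):
    """Von Hoff-style PFS ratio: profiling-guided therapy vs the prior line.

    A ratio of at least 1.3 is taken as evidence that the molecular
    profiling-selected therapy outperformed the patient's immediately
    preceding line of treatment.
    """
    return pfs_on_mp_therapy_months / pfs_on_prior_therapy_months

# Hypothetical patient: 6.5 months on MP-selected therapy versus 4.0 months
# on the prior regimen gives a ratio of 1.625, above the 1.3 threshold.
ratio = pfs_ratio(6.5, 4.0)
print(ratio, ratio >= 1.3)
```

The appeal of this endpoint is that it sidesteps between-patient heterogeneity, although it assumes that PFS on successive lines of therapy would otherwise shorten or stay the same.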
Limitations of this study were that some patients underwent molecular testing after trial assignment and that different sequential molecular tests, such as polymerase chain reaction-based sequencing, IHC, and FISH, were performed based upon the patient's tumor type. The same investigators from MD Anderson recently reported the results of their prospective genomic profiling study that enrolled 500 patients with advanced refractory solid tumors assessed in their phase I program [20]. They utilized the FoundationOne™ 236-gene targeted sequencing panel and standard of care biomarker test results (such as ER, PR, and HER2 IHC for breast cancer) to inform treatment selection for commercially available therapies and clinical trial enrollment. A numerically higher rate of prolonged disease control (complete response, partial response, or stable disease ≥ 6 months) was observed in patients who received matched therapy (122/500) compared with those who received unmatched therapy (66/500) (19 % versus 8 %, p = 0.061). Higher matching scores, calculated based on the number of drug matches and genomic aberrations per patient, were independently associated with a greater frequency of prolonged disease control [21]. In both of these studies, the rate of treatment matching (25 %) was significantly greater than in our study (5 %). This may be due to the use of larger gene panels that include copy number alterations and recurrent translocations and may identify more "druggable" alterations for matched therapy; analysis of patient outcomes beyond therapeutic clinical trials, including off-label treatment matching; and varying definitions of genomic alteration and treatment-matching pairs. For instance, the UC San Diego Moores matched therapy cohort included 11 patients (13 %) with breast cancer who received endocrine therapy based on ER expression and 11 patients (13 %) with breast cancer who received HER2-directed therapy. The only randomized trial that has prospectively assessed the utility of molecular profiling (SHIVA) reported no difference in objective response or PFS for patients treated with genotype-matched versus standard treatments [13]. More than 40 % of patients randomized in the SHIVA trial did not have genomic alterations identified and were included based upon expression of hormone receptors. Patients were matched to a limited range of approved targeted agents following a predefined algorithm that did not include best-in-class investigational agents that are being tested in early phase clinical trials. Despite the negative results of SHIVA, enthusiasm remains strong for conducting genomic-based clinical trials such as NCI-MATCH [12] [NCT02465060] and LUNG-MAP [22] [NCT02154490] to further define the value of precision medicine. The findings of our study, in which the majority of patients treated on genotype-matched trials were enrolled in phase I targeted therapy trials, are consistent with a recent meta-analysis of phase I trials that demonstrated a higher overall response rate (30.6 % versus 4.9 %, p < 0.001) and median PFS (5.7 months versus 2.95 months, p < 0.001) for targeted therapy trials that used biomarker selection compared with those that did not [23].
Measuring the clinical utility of molecular profiling is difficult [3]. We did not comprehensively capture how testing results influenced clinical decisions outside of therapeutic clinical trial enrolment, such as reclassification of tumor subtype and site of primary based on mutation results. For example, we enrolled a patient with an unknown primary cancer with intra-abdominal metastases that was found to harbor a somatic IDH1 p.Arg132Cys variant, leading to its reclassification as a likely intrahepatic cholangiocarcinoma. We also did not fully evaluate the use of testing results to avoid ineffective standard treatments (e.g. KRAS exon 4 somatic variants in colorectal cancer informing the decision not to use EGFR monoclonal antibody treatment) or treatment with approved targeted agents outside of their approved indications. Few patients in our study received targeted treatments based upon profiling results outside of clinical trials, due to limited access to targeted drugs outside of publicly funded standard-of-care indications in Ontario.
New technological advances are being studied in molecular profiling programs, including larger gene panels [2,17]; whole exome [16], whole genome (WGS), or RNA sequencing (RNA-Seq) [24,25]; and integrative systems biology analyses of deregulated cellular pathways [26]. Greater access to clinical trials for genomically characterized patients, such as through umbrella and basket trial designs [27], may also improve the success of genotype-treatment matching. To assess whether decision support tools integrated at the point of care can improve enrollment of patients on genotype-matched trials, we are piloting a smart phone application to help physicians identify genotype-matched trials for their patients with profiling data.
There are several limitations of our study. Only a single archival sample was profiled for each patient, often obtained many years prior to molecular testing. Fresh biopsy of a current metastatic lesion for molecular profiling at the time of study enrolment may have yielded different results due to clonal evolution or tumor heterogeneity [28]. Our genomic testing was limited to hotspot point mutation testing or limited targeted sequencing and did not include gene copy number alterations or recurrent translocations that may be important for the selection of genotype-matched therapy. There were patients identified with potentially "druggable" mutations who were candidates for genotype-matched trials; however, they could not be enrolled because of the constraints of slot allocation in early phase clinical trials across multiple institutions, or were deemed ineligible due to trial-specific exclusion criteria. Our study population also included many patients with heavily pre-treated metastatic disease who were not well enough for further therapy when the results of molecular testing were reported. In addition, tumor response is an imperfect surrogate endpoint for assessing therapeutic benefit in early phase clinical trials and should be interpreted with caution [28]. We did not observe a difference in time on treatment or overall survival for patients treated on genotype-matched versus genotype-unmatched clinical trials. PFS data were not available in our cohort, precluding a comparison of the outcome of genotype-matched therapy with the immediate prior line of treatment, as has been reported by other investigators [13,14,21].
Conclusions
We provide preliminary evidence that genotype-matched trial treatment selected on the basis of molecular profiling was associated with increased tumor shrinkage, although only a small proportion of profiled patients benefitted from this approach. Through this initiative, we have created a valuable repository of data and tumor samples that are amenable to additional research and data sharing initiatives. Greater efforts should be made to expand opportunities for genotype-trial matching and further studies are needed to evaluate the clinical utility of targeted NGS profiling.
Additional file
Additional file 1: Supplementary methods and Tables S1-S4. | 2017-07-08T04:53:21.228Z | 2016-10-25T00:00:00.000 | {
"year": 2016,
"sha1": "6cb9eb56d8bd89b07905e44fffc96c524088f788",
"oa_license": "CCBY",
"oa_url": "https://genomemedicine.biomedcentral.com/track/pdf/10.1186/s13073-016-0364-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56475132f86579e6f909d6b5c79a3700ce2ad100",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9674955 | pes2o/s2orc | v3-fos-license | Compounds From Celastraceae Targeting Cancer Pathways and Their Potential Application in Head and Neck Squamous Cell Carcinoma: A Review
Squamous cell carcinoma of the head and neck is one of the most common cancer types worldwide. It initiates in the epithelial lining of the upper aerodigestive tract, in most instances as a consequence of tobacco and alcohol consumption. Treatment options based on conventional therapies or targeted therapies under development have limited efficacy due to the multiple genetic alterations typically found in this cancer type. Natural products derived from plants often possess biological activities that may be valuable in the development of new therapeutic agents for cancer treatment. Several genera from the family Celastraceae have been studied in this context. This review reports studies on chemical constituents isolated from species of the Celastraceae family targeting cancer mechanisms studied to date. These results are then correlated with molecular characteristics of head and neck squamous cell carcinoma in an attempt to identify constituents with potential application in the treatment of this complex disease at the molecular level.
INTRODUCTION
Cancer presently accounts for approximately 8 million deaths per year worldwide, a number that should escalate in the next two decades according to current projections [1]. Because cancer is considered a global health problem, great scientific efforts are being made to understand its burden and prevent an even worse scenario.
In the past 10 years, the PubMed searchable database alone has registered over a million articles addressing cancer [2]. Results show that, despite improvements in early diagnosis and treatment, most cancer patients are still lacking treatment options and that success rates in drug development are low [3,4].
Head and neck squamous cell carcinoma (HNSCC) is responsible for about 90% of the cancers arising in the epithelial lining of the mucosal surfaces of the head and neck [5]. It is considered the sixth most prevalent cancer type worldwide, with approximately 540,000 new cases annually and 271,000 deaths, mostly due to lack of early diagnostic markers and efficient therapies [6]. Major risk factors include heavy drinking and tobacco consumption [7] and the human papilloma virus for certain HNSCC subsites [8]. Most patients are diagnosed at advanced cancer stages. At this point treatment requires complex surgeries followed by radiotherapy and/or chemotherapy, with severe consequences for speech, breathing and eating abilities [9,10]. Noteworthy is the fact that over 50% of patients will present recurrence less than 2 years after initial treatment, with overall survival between 6 and 12 months [11,12]. The molecular complexity of this cancer type is certainly the major drawback for the development of more efficient therapies. The use of biologically active molecules acting upon distinct cellular processes could be a desirable alternative for therapy.
Compounds isolated from plants have traditionally been considered for their medicinal properties [13]. Several commercially available drugs for cancer were developed using bioactive molecules originally isolated from plant extracts, including Velban® (also known as vinblastine, originally isolated from Catharanthus roseus G. Don), Oncovin® (generically known as vincristine, originally obtained from Catharanthus roseus G. Don), Taxol® (paclitaxel, originally obtained from Taxus brevifolia), Eldisine® (also known as vindesine, originally obtained from Catharanthus roseus G. Don), Navelbine® (known as vinorelbine, obtained from Catharanthus roseus G. Don), Taxotere® (generically known as docetaxel, a semisynthetic compound derived from baccatin III, which was originally isolated from Taxus baccata needles), Vepesid® (also known as etoposide, a semisynthetic analogue of podophyllotoxin obtained from the root of Podophyllum peltatum), Vumon® (known as teniposide, a semisynthetic analogue of podophyllotoxin obtained from the root of Podophyllum peltatum), Camptosar® (known as irinotecan, a semisynthetic analogue of the natural alkaloid camptothecin originally isolated from the bark and stem of Camptotheca acuminata) and Hycamtin® (also known as topotecan, a synthetic analog of the natural chemical compound camptothecin originally obtained from Camptotheca acuminata), as reviewed elsewhere [14,15].
The Celastraceae family is among those most investigated for antineoplastic effects. It comprises around 100 genera and 1300 species, most of them distributed in the tropical and subtropical regions of South America as well as in eastern Asia [16,17]. This review presents a comprehensive collection of articles addressing the antineoplastic effects of Celastraceae plant extracts and/or chemical constituents. Genes and proteins reportedly targeted by these molecules and associated with deregulated signaling pathways in cancer are reported, and special emphasis was given to HNSCC molecular features. The final aim was to assess whether plant extracts or constituents isolated from species of the Celastraceae family could be considered potential new sources for the development of therapeutics for this kind of cancer.
OVERVIEW OF PUBLICATIONS ON CELASTRACEAE AND CANCER
In order to identify research articles possibly associating Celastraceae and cancer, the PubMed searchable database (accessing mostly journals indexed in MEDLINE, the Medical Literature Analysis and Retrieval System Online), the repository PubMed Central (PMC) and the Scientific Electronic Library Online (SciELO) were used. The general terms Celastraceae (all fields) AND Cancer (Title/Abstract) were used in all instances. Publications automatically selected following the criteria described above were manually curated, and only those presenting in vitro reports on molecular mechanisms of action of extracts and/or compounds isolated from species of the Celastraceae family were further discussed in this review. Publications reporting only cytotoxicity and cell proliferation results, reviews, retracted articles, articles published in any language other than English, and studies reporting only results other than antineoplastic-related effects were excluded. Reports found using more than one database were included only once in the total number of publications.
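For readers who wish to reproduce the PubMed arm of this search programmatically, the sketch below uses Biopython's Entrez E-utilities wrapper with the query described above; the e-mail address is a placeholder, and the retrieved counts will differ from those reported here because the database has grown since the original search.

```python
from Bio import Entrez

# NCBI requires a contact e-mail with every E-utilities request.
Entrez.email = "your.name@example.com"  # placeholder address

# Query mirroring the one described in the text: Celastraceae in any field
# AND Cancer restricted to the title/abstract fields.
query = "Celastraceae[All Fields] AND Cancer[Title/Abstract]"

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # total number of matching PubMed records
print(record["IdList"][:5])   # first few PMIDs, ready for manual curation
```

The manual curation steps (screening for in vitro mechanistic reports and excluding reviews and retracted articles) would still need to be applied to the returned records by hand.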
A total of 101 publications associating Celastraceae and cancer were identified using the search engines and electronic databases selected for this review. Most of the literature (61 reports, 83% of all results) was retrieved through PubMed, followed by PubMed Central (14.8%). Of these potentially useful publications, 27 were kept for further discussion in this review, since they matched the final selection criteria. Only 2 publications were identified in SciELO, but they did not qualify for further analysis.
In summary, of the 101 publications found using the terms "cancer" and "Celastraceae", 73% were excluded. Twenty-six of them were reports on the potential use of Celastraceae-related natural compounds for other diseases or evaluations of specific properties of compounds isolated from species of the Celastraceae family not directly associated with cancer. Several publications focused on extraction and purification protocols of bioactive molecules and were therefore dismissed. A total of 27 publications were selected for full-text reading. The selection results are represented in (Fig. 1).
The 27 selected reports were published between 2005 and 2014, but most of them were published in 2008, 2012 and 2013. Prior to 2005, most publications did not report specific mechanisms of action for the studied biomolecules and were not, therefore, included in this review. China is responsible for 48% of the publications, possibly due to the widespread use of traditional medicine, often based on plant extracts, in that country. The USA appears as the second country with the highest number of publications (24%). The remainder of the publications came from Brazil, Korea, Taiwan and Vietnam.
Fig. (1). Summary of screening results for publications associating Celastraceae and cancer.
THE ANTI-CANCER POTENTIAL OF COMPOUNDS ISOLATED FROM SPECIES OF THE CELASTRACEAE FAMILY
Accumulating evidence presented in this review indicates that several species from the Celastraceae family are potential sources of molecules that may interfere with the progression of cancer. The most studied cancer types were prostate and colon cancer, with 17% of the studies, followed by breast cancer, hepatocellular carcinoma and pancreatic cancer, with 13% of the studies each (Fig. 2). One approach for studying the anti-cancer activities of plant extracts and bioactive molecules is to directly address signaling pathways commonly deregulated in cancer. A comprehensive summary of current results using this approach is shown in (Table 1). As shown, ten Celastraceae species were studied in the literature reviewed in this work: Celastrus paniculatus, Celastrus hypoleucus, Salacia cochinchinensis, Maytenus ilicifolia, Tripterygium wilfordii, Tripterygium regelii, Tripterygium hypoglaucum, Euonymus alatus, Microtropis fokienensis and Perrottetia arisanensis. The genus Tripterygium was the most frequently studied, followed by Celastrus and Maytenus. These species can be considered sources of at least three classes of molecules: terpenoids, alkaloids and polyphenols. Most studies (92%) focused on the anti-tumoral activities of terpenoids. Terpenoids constitute a large and diverse group of naturally occurring products. They are essentially lipids, built up of isoprene units, but differ in their carbon skeletons and functional groups, characteristics responsible for their specific effects in biological systems. Three terpenoid classes were most cited in the selected studies: sesquiterpenoids, diterpenoids, and triterpenoids, with 4%, 39% and 57% of the citations, respectively. Among diterpenoids, triptolide, the principal bioactive ingredient of Tripterygium wilfordii, has a unique structure leading to multiple biological activities [18]. According to reports, triptolide directly induces tumor cell apoptosis, but can also enhance apoptosis induced by cytotoxic agents (e.g. TNF-α and TRAIL) and chemotherapeutic agents by inhibiting NFκB activation [19]. The design and synthesis of triptolide derivatives have been motivated by its high potential but limited clinical use due to severe toxicity and water-insolubility. As a matter of fact, PG490-88, a derivative of triptolide, is part of a phase I clinical trial for treatment of prostate cancer in the USA [19].
The most studied triterpenoids are quinone-methides (85% of the studies on triterpenoids). Quinone-methides are common constituents of biological systems and some possess important biological activities, including DNA alkylation and DNA cross-linking [20]. In fact, oxidation to a reactive quinone-methide is the mechanistic basis of many phenolic anti-cancer drugs [21,22]. Quinone-methides obtained from natural sources are among the most promising chemical classes for the development of new drugs against cancer [23,24]. Naturally occurring quinone-methide triterpenoids can only be found as secondary metabolites in plants of the Celastraceae family [25]. Despite their broad pharmacological potential, including anti-cancer effects, these compounds cannot be obtained by chemical synthesis yet. Extraction from plants remains the only feasible strategy and biotechnological techniques, such as in vitro culture of cells, may become an alternative source in the future [25].
Celastrol and its methyl ester, pristimerin, are the most studied triterpenoid quinone-methides in cancer (Table 2). Originally extracted from the root bark of T. wilfordii, an ivy, vine-like plant native to China, Japan and Korea, these compounds are currently obtained from several other species. The chemical structures of the three most studied compounds obtained from species of the Celastraceae family, namely triptolide, celastrol and pristimerin, are depicted in (Fig. 3).
PRISTIMERIN, CELASTROL AND TRIPTOLIDE: MOLECULAR FUNCTIONS AND MECHANISMS IN CANCER
The triterpene quinone-methides celastrol and pristimerin and the diterpenoid triptolide are the most studied molecules isolated from species of the Celastraceae family with regard to molecular mechanisms associated with anti-cancer effects.
Triterpene quinone-methides have been found to actively inhibit choline kinase-α, a critical enzyme in the synthesis of phosphatidylcholine, a major structural component of eukaryotic cell membranes, as reviewed by Estévez-Braun and co-authors [89]. The compounds tested in this study were further found to exhibit anti-proliferative activity against human colorectal adenocarcinoma HT29 cells in vitro, and they also showed in vivo anti-tumoral activity in xenografts of HT29 cells injected into mice [89].
The anti-cancer effect of the quinone-methide pristimerin has been studied in a variety of cells in vitro and cancer models in vivo. Several molecular mechanisms underlying these effects have been described.
In a somewhat different approach, pristimerin was shown to inhibit human telomerase reverse transcriptase (hTERT) expression and activity in human pancreatic cancer cells [90]. The compound inhibited hTERT expression by suppressing the transcription factors Sp1, c-Myc and NF-κB, which are known to control hTERT gene expression, and inhibited the protein kinase AKT, which phosphorylates hTERT and facilitates its nuclear import and telomerase activity [90].
Celastrol and triptolide extracted from the Chinese herb Tripterygium wilfordii Hook F (also known as Lei Gong Teng or Thunder God Vine) were also found to exhibit marked anti-tumoral effects [62]. Triptolide efficiently inhibited cell growth and induced cell death in human prostate cancer LNCaP and PC-3 cell lines in vitro, and inhibited the growth of xenografted PC-3 tumors in nude mice in vivo [62]. Tumor cell apoptosis was induced through the activation of caspases and PARP cleavage, and triptolide reduced SUMO-specific protease 1 (SENP1, a potential biomarker and therapeutic target for prostate cancer) expression in a dose- and time-dependent manner, resulting in enhanced cellular SUMOylation in prostate cancer cells [62]. Meanwhile, triptolide decreased c-Jun expression and suppressed c-JUN transcription activity. On the one hand, silencing of SENP1 or c-JUN in PC-3 prostate cancer cells decreased cellular viability, suggesting that the cytotoxicity of triptolide could result from triptolide-induced downregulation of SENP1 or c-JUN. On the other hand, ectopic expression of SENP1 or c-JUN significantly increased the viability of prostate cancer cells upon triptolide exposure, indicating that rescuing these triptolide-downregulated proteins could inhibit the cell toxicity induced by triptolide [62].
Several other studies have addressed the effects of triptolide on cancer cells. For instance, exposure of BE(2)-C human neuroblastoma cells to triptolide resulted in reduced cell growth and proliferation [92]. Along with cell cycle arrest in the S phase and inhibition of the colony-forming ability of BE(2)-C neuroblastoma cells observed in vitro, reduction of tumor development and growth of tumor grafts was seen in vivo [92]. Triptolide significantly decreased the proportion of regulatory T cells and lowered the levels of the FOXP3 transcription factor (also known as scurfin) in the spleen and axillary lymph nodes of tumor-bearing mice [93]. Production of IL-10 and TGF-β in peripheral blood and spleen was also decreased, and the production of VEGF in tumor-bearing mice was inhibited [93]. Triptolide also attenuated colon cancer growth in vitro and in vivo [94]. Using a proteomic approach, the authors found that triptolide induced cleavage and perinuclear translocation of 14-3-3ε, a cell cycle- and apoptosis-related protein, in human colon cancer cells [94].
Triptolide was shown to enhance cisplatin-induced cytotoxicity in human gastric cancer SC-M1 cells [40]. After low-dose combined treatments with triptolide and cisplatin, a decrease in viability with a concomitant increase in apoptosis was observed in SC-M1 cells but not in normal cells [40]. Apoptosis induced by the combined treatments was accompanied by a loss of mitochondrial membrane potential and release of cytochrome c, and triptolide also increased the cisplatin-induced activation of caspase-3 and -9 and the downstream cleavage of PARP in SC-M1 cells in vitro [40]. The combined treatment completely suppressed the in vivo growth of gastric tumor grafts in a mouse xenograft model [40]. In liver cancer, the combination of triptolide plus chemotherapeutics (cisplatin, 5-fluorouracil) reduced liver cancer cell viability and enhanced apoptosis compared with single treatment in vitro [95]. Furthermore, cells treated with triptolide plus chemotherapeutics exhibited marked production of intracellular ROS and caspase-3 activity, induced BAX expression, and inhibited BCL-2 expression [95].
Celastrol, a known natural 26S proteasome inhibitor, promotes cell apoptosis and inhibits tumor growth [27,55,62,63]. Celastrol inhibited the proliferation of various human tumor cells, including multiple myeloma, hepatocellular carcinoma, gastric cancer, prostate cancer, renal cell carcinoma, head and neck carcinoma, non-small cell lung carcinoma, melanoma, glioma, and breast cancer, at concentrations as low as 1 μM. Celastrol decreased the protein levels of CCND1 and CCNE but increased the CDKN1A and CDKN1B protein levels, activated caspase-8, -9, and -3, and induced cleavage of the BH3 interacting-domain death agonist (BID) and PARP. The apoptotic effects of celastrol were preceded by activation of JNK and repression of AKT signaling [55].
Celastrol induces apoptosis in human cervical cancer cells by targeting the proteasome catalytic subunit β1, endoplasmic reticulum (ER) protein 29 (ERP29) and the mitochondrial import receptor Tom22 (TOM22) [73]. Celastrol was found to induce ER stress and translocation of BAX into the mitochondria, further upregulating BIM and TOM22, possibly involving glycogen synthase kinase-3β in these events [73]. Celastrol could also induce paraptosis-like cytoplasmic vacuolization in cancer cell lines including HeLa, A549 and PC-3 cells, derived from cervix, lung and prostate, respectively [41]. Celastrol directly affects the biochemical properties of the tubulin heterodimer in vitro and reduces its protein level in vivo [74]. At the cellular level, celastrol induces synergistic apoptosis when combined with conventional microtubule-targeting drugs and remains effective against taxol-resistant cancer cells. Celastrol also inhibited cell migration, increased G1 arrest, and induced autophagy and apoptosis in human gastric cancer cells [79].
Celastrol was also found to increase the level of autophagy in the human pancreatic cancer MiaPaCa-2 xenograft tumor model, and the autophagy inhibitor 3-MA improved the therapeutic effect of celastrol in vitro and in vivo [96]. Celastrol could inhibit the proliferation of human osteosarcoma cells, accompanied by G2/M phase arrest, activation of caspase-3, -8, and -9, and triggering of the autophagic pathway, as evidenced by formation of autophagosomes and accumulation of LC3B-II protein [97]. Intriguingly, inhibition of apoptosis enhanced autophagy, while suppression of autophagy diminished apoptosis in osteosarcoma cells upon celastrol exposure. Celastrol also induced JNK activation and ROS generation; a JNK inhibitor significantly attenuated celastrol-triggered apoptosis and autophagy, while a ROS scavenger could completely reverse them [97]. Celastrol induced autophagy in human androgen receptor (AR)-positive prostate cancer cells, AR knockdown resulted in enhanced celastrol-induced autophagy, and autophagy inhibition by a miR-101 mimic was found to enhance the cytotoxic effect of celastrol in prostate cancer cells [98].
Celastrol decreased gastric cancer cell viability via reduced IκB phosphorylation, nuclear p65 subunit protein levels and NF-κB activity [81]. Furthermore, celastrol could increase miR-146a expression, and upregulation of miR-146a expression could suppress NF-κB activity, whereas downregulation of miR-146a expression reversed the effect of celastrol on NF-κB activity and apoptosis in gastric cancer cells. The combination of TRAIL and celastrol induced apoptosis in human pancreatic cancer cells through upregulation and dephosphorylation of the EIF4E-BP1 protein [82]. Celastrol was also found to exhibit anticancer activity in KU7 and 253JB-V bladder cancer cells by inducing apoptosis and inhibiting growth, colony formation and migration in vitro and in vivo [84]. Celastrol was shown to decrease the expression of the specificity protein transcription factors Sp1, Sp3 and Sp4 and several Sp-regulated genes/proteins, including VEGF, survivin, CCND1 and fibroblast growth factor receptor (FGFR)-3 [84].
Suberoylanilide hydroxamic acid (SAHA) is a promising histone deacetylase inhibitor approved by the US Food and Drug Administration, but its clinical application for solid tumors is partially limited by decreased susceptibility of cancer cells due to NF-κB activation [52]. As an NF-κB inhibitor, celastrol exhibits potent anti-cancer effects but has failed to enter clinical trials due to its toxicity [52]. The combination of celastrol and SAHA exerted substantial synergistic efficacy against human cancer cells in vitro and in vivo, accompanied by enhanced caspase-mediated apoptosis [52]. This combination inhibited the activation of NF-κB caused by SAHA monotherapy and consequently led to increased apoptosis in cancer cells [52]. Interestingly, E-cadherin was dramatically downregulated in celastrol-resistant cancer cells, and E-cadherin expression was closely related to decreased sensitivity to celastrol. However, the combination treatment significantly augmented the expression of E-cadherin, suggesting that mutual mechanisms contributed to the synergistic anti-cancer activity [52]. Furthermore, the enhanced anti-cancer efficacy of celastrol combined with SAHA was validated in human lung cancer 95-D xenografts in mice in vivo without increased toxicity [52]. These synergistic anti-cancer effects of celastrol and SAHA appear to be underpinned by their reciprocal sensitization, which was simultaneously regulated by NF-κB and E-cadherin [52].
The mechanistic effects of plant extracts have also been addressed to a certain extent. A spray-dried extract of Maytenus ilicifolia was shown to induce apoptosis in human hepatocellular HepG2 cells and human colorectal carcinoma HT-29 cells via down-regulation of BCL-2 and activation of caspase-3 [46]. Celastrus orbiculatus extract significantly inhibited cell viability and induced apoptosis of human hepatocellular carcinoma LM6 cells in a dose-dependent manner [99]. In this study, apoptosis was accompanied by increased BAX expression and decreased BCL-2 expression, release of cytochrome c, activation of caspase-3, and cleavage of PARP [99]. Furthermore, activation of ERK, p38 MAPK, and JNK phosphorylation, and downregulation of AKT phosphorylation, were observed [99]. An oleanene compound from Celastrus hypoleucus also exhibits antitumor activity toward human cervical cancer cells by increasing the activity of caspase-3, -6, and -7, as well as the proapoptotic protein BIM [39].
Emerging evidence shows that quinone-methide triterpenes exert multiple molecular mechanisms leading to decreased tumor cell viability or even cell death, and they are therefore indeed promising compounds in the context of cancer treatment. However, existing data also highlight the need for more comprehensive, far-reaching approaches and technologies that would lead to a better understanding of the direct and indirect effects of these compounds on molecular processes in tumor cells in vitro and in vivo.
CELASTRACEAE AND POTENTIAL MOLECULAR TARGETS IN HEAD AND NECK SQUAMOUS CELL CARCINOMA
HNSCC arises from premalignant progenitor cells that progress to invasive malignancy due to cumulative genetic alterations [100]. Conventional treatment modalities (surgery, radiation and chemotherapy) are nonselective therapies that not only cause damage to normal tissue but are also associated with systemic toxicities that reduce compliance and, consequently, the success of therapy [9,10]. The past decade has witnessed significant improvements in the knowledge of the complex molecular abnormalities underlying the clinicopathological characteristics of HNSCC, a promising scenario for the development of novel diagnostic markers and therapeutic procedures for the clinical management of patients [8,101]. In theory, once the major molecular mechanisms involved in the pathogenesis of HNSCC are known, a cancer therapy working at the molecular level, targeting deregulated pathways, may be created. In practice, a very limited number of therapeutic agents for the targeted treatment of HNSCC are currently undergoing clinical trials [102], and the only established therapeutic target is the epidermal growth factor receptor (EGFR). EGFR is a cell-surface protein that regulates cell growth and differentiation, and it can be targeted by the monoclonal antibody cetuximab (Erbitux™), resulting in the elimination of signal transduction. EGFR is overexpressed in HNSCC when compared with cancer-free mucosa, with predictive and prognostic value [10, 103-105]. However, despite its abundant expression in HNSCC, only a subset of patients responds to EGFR inhibitors, since alternative downstream signaling pathways may remain activated [10, 11, 106-108]. These results indicate the need for combined therapy approaches and for the continuous search for new active compounds that may target molecular processes in HNSCC.
Upon comparing the literature on the genetic and molecular characteristics of HNSCC with data on the effects of Celastraceae-derived compounds on gene expression and protein levels, considerable overlap can be found. We summarize the molecular alterations of HNSCC that have been addressed in studies on the anti-cancer effects of Celastraceae-derived compounds and extracts (Table 3).
Loss of heterozygosity at the chromosomal region 9p21 is found in 70-80% of HNSCC cases, representing the most common genetic alteration in this type of cancer and in early pre-invasive lesions [109]. The CDKN2A gene locus found within this chromosome encodes the transcript p16, involved in G1/S cell cycle regulation through the inhibition of cyclin-dependent kinases such as CDK4 and CDK6 [110]. These kinases phosphorylate the retinoblastoma protein (pRB), leading to progression from G1 phase to S phase. For instance, pristimerin was previously shown to modulate the activity of CDK-4 and -6, resulting in G1-phase arrest of various human cancer cells [43,51]. Emerging evidence shows that triterpenes affect the expression levels of other genes related to cell cycle progression, including cyclin D1, cyclin E, p21, p27 and c-Myc, which have also been studied in the context of HNSCC [27, 32, 34, 36, 49, 111-113]. Therefore, a therapeutic intervention able to regulate these CDKs, such as that seen upon treatment of tumor cells with pristimerin, could also be an important asset for controlling cell proliferation in HNSCC.
Compounds isolated from Celastraceae also target the hepatocyte growth factor receptor (HGFR, also known as MET, encoded by the c-MET gene), a key player in PI3K/AKT/mTOR signaling regulation. MET was found to be overexpressed in up to 84% of HNSCC cases and is being pursued as a therapeutic target for HNSCC [142]. Additionally, if the antioxidant systems that allow stem-like cancer cells to avoid oxidative stress and resist EGFR inhibition are targeted, this may sensitize the remaining surviving cells, which will become sensitive to treatment [143]. Thus, the effects of triterpenoids on the redox state of cells may also be explored in this context [144].
As an example of how Celastraceae compounds could affect HNSCC, treatment of human tongue cancer cells with triptolide, ionizing radiation, or triptolide plus ionizing radiation was reported to reduce oral cancer cell colony numbers [145]. In the study, triptolide was shown to increase apoptosis and decrease the expression of anti-apoptotic proteins in oral cancer cells in vitro. In addition, the combination treatment (triptolide with radiation) synergistically reduced tumor weight and volume in vivo, possibly via the induction of apoptosis and reduction in anti-apoptotic protein expression, suggesting that this may be a promising combined modality therapy for advanced oral cancer [145].
Celastraceae triterpenoids (dihydrocelastrol and celastrol) were identified as potent inducers of unfolded protein response (UPR) signaling and cell death in a panel of oral squamous cell carcinoma (OSCC) cells [76]. The pharmacological exacerbation of the UPR was suggested to be an effective approach to eliminate OSCC cells [76]. The UPR is executed via distinct signaling cascades, whereby an initial attempt to restore folding homeostasis in the endoplasmic reticulum during stress is complemented by an apoptotic response if the defect cannot be resolved. Moreover, biochemical and genetic assays using OSCC cells demonstrated that intact protein kinase RNA-like endoplasmic reticulum kinase (PERK)-eukaryotic initiation factor 2 (eIF2)-activating transcription factor 4 (ATF4)-DNA damage-inducible transcript 3 (DDIT3, also known as C/EBP homologous protein, CHOP) signaling is required for the proapoptotic function of the UPR and the subsequent death of OSCC cells upon celastrol treatment [76].
Altogether, current data on the anti-cancer effects of compounds isolated from Celastraceae (Table 3) point to a prolific and promising field of research in which such molecules should be able to either inhibit or up-regulate key pathways involved in the HNSCC phenotype. New studies in this area should contribute to bringing about additional molecules of interest to this still scarce treatment scenario.
CONCLUSION
At the clinical and molecular level, HNSCCs are characterized by extensive heterogeneity, a picture that defies their classification as a single disease. HNSCC treatment should undergo substantial changes in the near future due to the present-day exploration of its mutational landscape. However, the development of effective therapy modalities involves not only an increased understanding of the mechanisms involved in HNSCC carcinogenesis but also the identification of new molecules capable of acting upon several molecular mechanisms. Ideally, compounds should distinguish themselves from conventional cytotoxic agents and from drugs that target a single step in signal transduction pathways. This review shows that a variety of compounds isolated from species of the Celastraceae family and, at times, plant extracts have been addressed as multifunctional drugs, interfering in multiple steps of key pathways involved in the development and progression of HNSCC. Few studies have investigated the potential of Celastraceae molecules to target HNSCC features, a scenario that will hopefully change in the next few years.
CONFLICT OF INTEREST
The author(s) confirm that this article content has no conflict of interest.
"year": 2017,
"sha1": "5a63b513dd13c4421e418a0c5e9c94de59b44717",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5321769?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a63b513dd13c4421e418a0c5e9c94de59b44717",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Biofeedback therapy for faecal incontinence: a rural and regional perspective
Introduction: Faecal incontinence is the involuntary loss of liquid or solid stool with or without the patient's awareness. It affects 8-11% of Australian community-dwelling adults and up to 72% of nursing home residents, with symptoms causing embarrassment, loss of self-respect and possible withdrawal from normal daily activities. Biofeedback, a technique used to increase patient awareness of physiological processes not normally considered to be under voluntary control, is a safe, conservative first-line therapy that has been shown to reduce symptom severity and improve patient quality of life. The Townsville Hospital, a publicly funded regional hospital with a large rural catchment area, offers anorectal biofeedback for patients with faecal incontinence, constipation and chronic pelvic pain. The aim of this report is to describe the effect of the biofeedback treatment on the wellbeing of regional and rural participants in a study of biofeedback treatment for faecal incontinence in the Townsville Hospital clinic. Methods: There were 53 regional (14 male) and 19 rural (5 male) participants (mean age 62.1 years) enrolled in a biofeedback study between January 2005 and October 2006. The program included 4 sessions one week apart, 4 weeks of home practice of the techniques learnt, and a final follow-up reassessment session. Session one included documenting relevant history, diet, fibre and fluid intake, and treatment goals; anorectal function and proctometrographic measurements were assessed. Patients were taught relaxation (diaphragmatic) breathing in session two with the rectal probe and balloon inserted, prior to inflating the balloon to sensory threshold. In session three, patients were taught anal sphincter and pelvic floor exercises, linking the changes in anal pressures seen on the computer monitor with the exercises performed and sensations felt. Session four included improving anal and pelvic floor exercises, learning a defecation technique and receiving instructions for 4 weeks of home practice. At the fifth session, home practice and bowel charts were reviewed and anorectal function was reassessed. Symptom severity and quality of life were assessed by surveying participants prior to sessions one and two and following session five. Patients were interviewed after session five.
Introduction
Faecal incontinence (FI), the involuntary loss of liquid or solid stool with or without the patient's awareness, may cause embarrassment, loss of self-respect, psychiatric disorders, and withdrawal from the community [1]. Little systematic research of this socially disabling condition has been conducted to determine either the true burden on individuals and communities or the results of treatment in northern Australia.
Community prevalence of FI has been reported to range between 8% and 11% in South Australia and New South Wales [2-4]. Faecal incontinence is a leading reason for nursing home placement in Australia, where up to 72% of residents have the condition [5]. In studies conducted at the Colorectal and Urogynaecology outpatient clinics of the Townsville Hospital (TTH) in North Queensland, more than one in five patients reported FI [1,6].
Biofeedback is a safe, conservative first-line treatment for FI [7]. The Townsville Hospital, a publicly funded regional hospital with an extensive rural catchment area, operates a nurse-run holistic biofeedback program for patients with FI, constipation or pelvic pain [8,9].
A Cochrane review of biofeedback for the treatment of FI found no evidence that any method of biofeedback or pelvic floor exercises provided better outcomes than any other conservative treatment method [10]. Standard care, including a diary and symptom questionnaire, structured assessment, patient teaching, emotional support, lifestyle modifications, management of FI and urgency control, was a method that provided equivalent results [11]. When telephone-assisted support for remote patients was compared with a face-to-face biofeedback protocol for regional patients, no significant outcome differences were found [12].
This clinical study was designed to assess two exercise regimens, the efficacy of biofeedback program components for FI (L Bartlett, K Sloots; unpubl. data, 2005-2006), and whether treatment outcomes (ie FI severity or quality of life [QOL]) differed between rural and regional participants.
Study procedure
Faecal incontinence patients on the TTH biofeedback waitlist were initially telephoned, had the study explained to them and were invited to participate. An information pack about the study and biofeedback treatment, with appointment dates and a bowel chart, was mailed to them. Treatment included 5 outpatient sessions: 4 at weekly intervals, 4 weeks of home practice of the techniques learnt, then an assessment session. Details of the study procedure are provided (Figure 1).
Participants met with the researcher immediately prior to the initial biofeedback session and completed a self-administered FI questionnaire [1], including the 29-question Fecal Incontinence Quality of Life Scale (FIQL) survey tool [13]. The researcher completed the Cleveland Clinic Florida Fecal Incontinence Score (CCF-FI) [14] with them. Session one with the biofeedback therapist included documenting relevant medical, surgical, obstetric and medication history, and bowel problems and habits. Diet, fibre and fluid intake were discussed, together with the aim of therapy and the establishment of treatment goals, and instructions were given to record food, fluid and supplement intake and medications used in the patient diary. Anorectal function and proctometrographic evaluation were assessed using clinic manometric equipment [15,16]. The therapist presented coping strategies and dietary advice [8]. The pre-treatment bowel chart was reviewed and comprehensive instructions were given to accurately record daily bowel accidents and toileted motions using the Bristol Stool Form Scale [17]. Immediately before session two, participants repeated the FIQL and CCF-FI with the researcher. The biofeedback therapist then reviewed the previous week's diary and bowel chart with the patient, noting the impact of any dietary or coping modifications used, before instructing each patient in slow relaxation (diaphragmatic) breathing. Patients had the rectal probe and the balloon inserted, prior to inflating the balloon to sensory threshold. Lying in the supine position with one hand lightly resting on the upper abdomen to monitor diaphragmatic movement and rate of breathing, each participant practiced relaxation breathing for 5-10 min. Visual biofeedback was provided from the clinic computer monitor, with verbal feedback from the therapist to improve the technique [9]. Patients were instructed to practise relaxation breathing at home at least twice per day and complete the bowel chart for the following week.
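The CCF-FI is the Wexner incontinence severity score. As a brief, hedged sketch of how such a severity total is derived (the five standard Wexner items and the 0 "never" to 4 "always" frequency ratings are assumed here; this is illustrative code, not the clinic's instrument):

```python
# Sketch of CCF-FI (Wexner) severity scoring. The five items and the
# 0 ("never") to 4 ("always") frequency ratings follow the standard
# published scale; this helper is purely illustrative.
WEXNER_ITEMS = ("solid stool", "liquid stool", "gas", "wears pad", "lifestyle altered")

def ccf_fi_score(ratings: dict) -> int:
    """Sum the five item ratings: 0 = perfect continence, 20 = complete incontinence."""
    for item in WEXNER_ITEMS:
        if not 0 <= ratings[item] <= 4:
            raise ValueError(f"rating for '{item}' must be 0-4")
    return sum(ratings[item] for item in WEXNER_ITEMS)

# Hypothetical patient: occasional liquid-stool leakage with pad use
print(ccf_fi_score({"solid stool": 1, "liquid stool": 2, "gas": 3,
                    "wears pad": 2, "lifestyle altered": 2}))  # -> 10
```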
Before session three, the biofeedback therapist was advised of the exercise regimen to which the patient had been randomised, that is, standard exercises (sustained pelvic floor and anal squeeze exercises) or alternative exercises (rapid and sustained pelvic floor and anal squeeze exercises) [18].
In session three, the previous sessions' therapy components were reviewed and amended. Anal sphincter and pelvic floor muscle exercises were taught according to the relevant exercise regimen. Participants were coached to link the changes in pressures seen on the computer monitor with the exercises performed and sensations felt. The aims of the exercises and techniques were to reduce urgency and frequency, and to improve sensitivity, anorectal coordination and continence. Patients were asked to perform their individually prescribed exercises at home (Figure 2). At the fifth session, patients' home practice and bowel charts were reviewed with the biofeedback therapist, anorectal function was reassessed, and suggestions were made for future improvements. Patients who felt they needed further support were able to book a follow-up appointment. At the completion of the fifth session, the researcher reassessed severity of symptoms, the effect of FI on QOL and satisfaction with treatment outcomes, and also conducted a short semi-structured interview to elicit participants' opinions about: the reasons for the delay in seeking treatment for FI; advice they would give fellow FI sufferers; suggestions they could provide to improve FI disclosure; and the usefulness of a home biofeedback device.
In February 2008 all participants were mailed a follow-up survey.
Statistical analysis
Data were analysed on an intention-to-treat basis, and patients who failed to complete the program were treated as missing. Numerical data are given as mean value and standard deviation (SD) or median value and interquartile range (IQR), depending on the distribution. Comparisons between characteristics were undertaken using χ² tests and χ² tests for trend, non-parametric Wilcoxon tests, and t-tests. Statistical analyses were conducted using SPSS for Windows v17 (SPSS Inc; Chicago, IL, USA; www.spss.com). Throughout the analyses, p<0.05 was considered statistically significant.
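As a minimal, illustrative sketch of how the comparisons described above map onto standard library calls (using Python/scipy rather than the SPSS package actually used, and with invented placeholder numbers rather than study data):

```python
# Illustrative sketch of the statistical comparisons described above,
# implemented with scipy instead of SPSS. All data below are invented
# placeholders, not the study's records.
import numpy as np
from scipy import stats

rural_ccf = np.array([14, 11, 16, 9, 12])      # hypothetical CCF-FI scores
regional_ccf = np.array([8, 10, 7, 12, 9, 6])  # hypothetical CCF-FI scores

# Unpaired comparison between locations (Mann-Whitney / Wilcoxon rank-sum)
u_stat, p_between = stats.mannwhitneyu(rural_ccf, regional_ccf, alternative="two-sided")

# Paired comparison of outcome versus baseline (Wilcoxon signed-rank test)
baseline = np.array([14, 11, 16, 9, 12])
final = np.array([6, 5, 9, 4, 7])
w_stat, p_within = stats.wilcoxon(baseline, final)

# Association between two categorical variables (chi-squared test)
table = np.array([[14, 5], [22, 26]])  # e.g. rural/regional vs yes/no
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(p_between, p_within, p_chi2)  # p < 0.05 considered significant
```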
Participants
Of 101 consecutive patients with FI referred for biofeedback, 72 participants (19 male), mean age 62.1 years (95% CI 38.3-85.9), were both eligible and consented to participate. Twenty participants (6 male) had previously undergone bowel surgery, 12 for colorectal cancer (5 male). The surgery performed on these participants was: anterior resection, 11 (9 for low rectal carcinoma, 1 for diverticulitis, 1 for prolapse); segmental colectomy, 5 (carcinoma 1; diverticulitis 2; ischaemia of colon 1; rectal prolapse 1); and total proctocolectomy with ileal J-pouch anastomosis, 4 (carcinoma 2; diverticulitis 1; constipation 1). Eight participants (4 male) reported difficulty with rectal emptying. Of the 53 female patients, 38 (72%) had external anal sphincter defects; 13 had been surgically repaired prior to biofeedback referral, 26 had difficult vaginal deliveries requiring forceps or vacuum extraction, 5 women had vaginal repair surgery only and 10 women had both vaginal repair surgery and difficult vaginal deliveries. Fifty-three participants (14 male) lived within a 30 min drive of the clinic (median 7.8 km, IQR 5.7-12.0), while 19 (5 male) travelled up to 903 km (median 339 km, IQR 136-388) from rural locations (p<0.001) to attend the clinic. Female participants were younger than male participants, significantly so for regional residents (p=0.044, Table 1). Overall, participants had suffered from FI for a median duration of 24 months (IQR 18-48), with rural women reporting FI for a significantly shorter period before seeking treatment than their regional counterparts (p=0.034, Table 2). There were no adverse events as a result of treatment.
Baseline data
Pre-existing medical conditions and prior surgical history known to be risk factors for FI were similar for rural and regional participants. Rural participants reported poorer general health than regional participants (p=0.004) and lower QOL with regard to lifestyle (p=0.028, Table 3). Rural participants also presented with more severe FI than regional participants (CCF-FI, Table 3), significantly so for males (p=0.044).
Participants who failed to complete treatment
Sixty-nine participants completed all 5 treatment sessions (median duration 8 weeks). Three patients (all regional) failed to attend the final session: one with minimal FI (CCF-FI=1 and FIQL=4 for each scale) advised he had acquired sufficient skills in the first 4 sessions and did not need to continue; a second suffered post-surgery bowel dysfunction (following treatment for diverticulitis), found the exercises exacerbated the pain and was not prepared to continue; the third did not provide a reason, but at the 2 year follow up requested further sessions with the biofeedback therapist.
There were no significant differences in any objective measure between rural and regional participants (Table 4). Participants were very satisfied with the treatment program, with their median rating being 9 (7.5-10) out of a maximum of 10. They also rated individual components of the program from very to extremely helpful (Figure 3). While the improvement in rural participants' FIQL and CCF-FI scores over the course of treatment had been marginally better than that of regional participants, there were no significant differences in subjective or objective treatment outcomes between regional and rural participants at the final treatment session.
Final interview
At the session five interviews, at least a quarter of participants (33% rural, 25% regional) reported they had sought help for their bowel leakage as soon as it occurred, while more than a third (45% rural, 40% regional) had sought help within 12 months. However, more than a quarter of participants (22% rural, 35% regional) did not seek help for more than a year. The reasons patients gave for the delay in obtaining treatment included: believing the problem would go away (26 patients, 6 rural); being too embarrassed to seek help (11 patients, 2 rural); being given poor advice by a GP, for example that nothing could be done, or that it was a normal problem after a 10 lb baby (11 patients, 3 rural); just coping with the problem (13 patients, 2 rural); thinking FI was a normal part of aging (6 patients, 2 rural); believing they were the only one with the problem and not knowing it was treatable (5 patients, 2 rural); and experiencing previous unsuccessful treatments such as medication, anal stretching or fistula operations (11 patients, 5 rural).
More than 83% of the participants (15 rural, 45 regional) sought initial help from their GP, 4% (2 rural, 1 regional) from hospital doctors and 7% (2 rural, 3 regional) from their colorectal surgeon. Over 91% were directly referred to the colorectal surgeon; the remainder had colonoscopy or other investigations before referral to the colorectal surgeon. All participants attending the final session reported that they would advise a friend in a similar situation not to wait but to seek help immediately, with 53% specifically citing the biofeedback program, 14% their GP and 2% their specialist.
When asked for recommendations to facilitate patient disclosure of FI to doctors, suggestions included: asking patients directly about FI (54%: 14/19 rural; 22/48 regional, p=0.039); listening to patients (39%: 10/19 rural; 16/48 regional); exhibiting empathy (24%: 8/19 rural; 8/48 regional, p=0.028); providing advice about FI risk factors (24%: 6/19 rural; 11/48 regional); recommending biofeedback (18%: 2/19 rural; 11/48 regional); surveying patients (7%); shortening biofeedback waitlists (6%); providing private FI treatment facilities (6%); GP referral to a specialist (4%); and more education about available treatments for FI for GPs and hospital doctors (12%: 4/19 rural; 4/48 regional). Patients were asked 'Would a confidential survey, completed in the waiting room, that you handed straight to your GP aid discussion of this or other potentially embarrassing problems?' 86% of those asked (15/17 rural; 29/34 regional) said it was a good idea; 5 patients (1 rural) said they would not use it because they had good communication with their GP; one person thought a general consultation was too short to deal with an additional issue, but that it could prompt a future discussion; while another would prefer to fill it in at home for use at a subsequent consultation.
More than 78% of participants had never seen information about FI in the community; those who had seen such information cited their pharmacy, community nurse, speakers at an older women's network, or the internet.
Over 97% of patients reported that the biofeedback program was very/extremely helpful. Five patients mentioned they were confident doing their exercises in the clinic with biofeedback, but were concerned that they were not doing them correctly at home. Of the 49 who were asked if they would be interested in trialling a home biofeedback device (with an anal sensor), 44 said they would because it would 'be motivating', 'be good to see an improvement', or confirm they were doing the exercises correctly. Other qualitative feedback supported the satisfaction scores.
Two year follow up
Fifty-nine participants (12 rural) responded to the February 2008 survey. Thirteen participants were lost to follow up; three were deceased (1 rural) and ten (6 rural) could not be contacted. For regional participants, FIQL and CCF-FI scores continued to improve (Table 3), although these results were not significantly different from their final treatment session, with 44% (19/43) reporting no faecal leakage. In contrast, rural participants' FIQL scores had declined over time, and with the exception of the FIQL lifestyle scale (p=0.033) they were not significantly better than the pre-treatment scores (Table 3). For responding rural women, improvement in FI severity was maintained at the 2 year follow up; however, the three rural men who answered FI severity questions had reverted to pre-treatment levels. Only 18% (2/11) of rural respondents reported no faecal leakage. Of the 33 patients (9 rural) who reported still having some faecal leakage, 14 (2 rural) reported mostly staining, 14 (6 rural) reported moderate faecal losses and 1 (regional) reported loss of a large amount of stool. There were no significant differences in results during the treatment program between the rural patients who responded to the 2 year questionnaire and those who did not.
Since completion of the biofeedback therapy, five survey respondents had sought additional help for their FI. New treatments included silicone anal implants (1 rural, 1 regional), stoma (2 rural) and additional medication (1 rural). Eleven participants (1 rural) requested further biofeedback sessions.
There were no significant differences between rural and regional participants in the number of exercises they performed or their confidence in performing these exercises, although rural participants performed their exercises more frequently. Additionally, stool type for rural participants was looser (p=0.033); they reduced food intake before going out (p=0.005), avoided travelling (p=0.045), particularly by aeroplane or train (p=0.002), had more faecal urgency (p=0.048) and avoided visiting friends marginally more often (p=0.033). When asked directly, they reported feeling more depressed (p=0.048), felt less healthy (p=0.015), enjoyed life less (p=0.031), were more afraid to have sex (p=0.031), and were more likely to avoid going out to eat (p=0.001).
Discussion
The major findings of this study were that the biofeedback treatment program significantly improved continence and QOL for both regional and rural participants. While FI severity and QOL had continued to improve in regional participants 2 years later, for rural participants FI severity and QOL had regressed to pre-treatment levels.
Many people enjoy living in rural locations due to higher general wellbeing, personal safety and community connection [20]. Rural participants reported poorer general health than regional participants prior to treatment, which has been previously described in rural populations [21]. Poorer rural health has been linked to lower levels of education, employment and income, occupational risks, higher levels of hypertension, high cholesterol, asthma, diabetes and risky behaviour such as smoking and alcohol abuse, reduced access to health services, and driving long distances [21,22].
Rural female participants sought help earlier than regional women despite their FI severity scores not being significantly different. This is possibly due to the greater inconvenience to their lifestyle, which involves more planning and the need to travel further, with less access to toilets. In comparison with regional participants, rural participants avoided travelling, going out to eat and visiting friends, were more afraid to have sex, were more depressed and enjoyed life less, all of which could explain their reduced sense of wellbeing.
While significant improvement of FI severity and QOL in both rural and regional participants was achieved during treatment, the QOL of rural participants failed to be maintained over time. As there was no difference in exercise maintenance at the 2 year follow up, poorer rural QOL could be due to other reasons, such as a change in diet, reduced social interaction or lower tolerance of the impact of FI on QOL. Rural diet tends to be very different from urban diet, including more meat, biscuits and cakes [23]. Thus the dietary changes rural individuals needed to make may have been more difficult to maintain over the long term in their rural setting. Further research is required to investigate this issue.
Men and women who reside in rural northern Queensland may be required to perform heavy physical work (eg farmers and cane growers). Heavy lifting has been shown to put stress on pelvic floor muscles [24], which may in turn contribute to FI [25]. Additionally, in the long term, regular heavy physical work or the long working hours of primary producers may reduce the likelihood of performing prescribed exercises at the end of a tiring day, compared with people in more sedentary professions who can perform them at any time [26].
Disclosure of taboo subjects can be seen as socially risky, and people are less likely to disclose embarrassing information, particularly to close friends, relatives or respected associates such as GPs [27], especially if they believe the consequences will be negative [6,28]. By not admitting an urgent need to access toilet facilities to prevent bowel leakage, rural participants' social or informal support networks may fail [22]. To maintain post-treatment QOL improvements, rural participants may require referral to a counsellor at the end of biofeedback treatment, or longer term biofeedback clinic support by way of a home biofeedback device, a telephone helpline, newsletter, or webpage.
Participants reported that disclosure of FI to their doctor was embarrassing, and many delayed seeking help. Most thought that an 'embarrassing topic survey tool' available in their GP's surgery may have assisted them to disclose their FI earlier, or that the GP should ask patients with risk factors whether they had FI, directly and with empathy. They felt this would enable disclosure and facilitate treatment, while maintaining the professional doctor-patient relationship. An embarrassing topic survey tool is currently being assessed.
The short treatment program (5 x 1.5 hour sessions over 8 weeks), which is comparable with other biofeedback programs [7,29], may not be sufficiently supportive for rural patients in the long term. A similar program in Sydney, Australia, with 5 monthly sessions, used telephone-assisted support between the initial and final face-to-face sessions for rural/remote patients and found no difference in results between that method and full clinic attendance for regional participants [12]. The treatment duration of that study was twice the length of this study, even though the number of sessions was equivalent. Advantages of the longer treatment duration may include greater time for patients to practise the techniques learnt, and greater opportunity for patients to present problems to the therapist and for the therapist to customise treatment. However, this may be at the cost of building a strong therapist-client relationship, patient focus and motivation in the short term.
Conclusion
For rural participants to maintain similar long-term improvement in continence and QOL to regional participants, an additional follow-up session with the biofeedback therapist and ongoing local support by continence advisors should be investigated for these patients.
A telephone helpline, newsletter, or webpage may also be beneficial.
Methods (Participants): Clinic patients were eligible to participate in the study if their FI had persisted for at least 6 months and had failed to respond to standard treatment recommended by their GP. Further eligibility criteria included being at least 18 years of age and not pregnant, and having no terminal illness, mental illness or gastrointestinal stoma. Participants were referred by a colorectal surgeon following anorectal physiologic assessment including manometry and endoanal ultrasound. They attended the biofeedback program between January 2005 and October 2006 and signed informed consent forms.
Figure 2: Rapid and sustained anal sphincter squeeze instruction.
Figure 3: Participant rating of treatment components.
Table 1: Participants' ages according to sex and location (n and mean age in years, 95% CI, for male, female and total participants by location). †P value comparing age by sex for rural and regional participants, measured using the Wilcoxon unpaired test. *P value significant.
Table 2: Participants' duration of faecal incontinence (FI) according to sex and location (n and median duration in months, IQR, for male, female and total participants by location). ¶P value comparing duration of FI by sex for rural and regional participants, measured using the Wilcoxon unpaired test. *P value significant.
Table 3: Quality of life and faecal incontinence severity over the study period, according to location [13,19]. CCF-FI, Cleveland Clinic Florida Fecal Incontinence Score; FIQL, Fecal Incontinence Quality of Life Scale; IQR, inter-quartile range; n, number of patients who completed questionnaires. †Mann-Whitney unpaired test; ¶outcome compared with baseline, Wilcoxon signed ranks test. FIQL scales calculated as per Rockwood et al 2000 [13] and Rockwood 2008 [19]. *P value significant.
"year": 2011,
"sha1": "c80b8668e0506e5ed72c46df431c85f42b505b98",
"oa_license": "CCBY",
"oa_url": "https://www.rrh.org.au/journal/download/pdf/1630/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c80b8668e0506e5ed72c46df431c85f42b505b98",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Large Area Silicon Carbide Vertical JFETs for 1200 V Cascode Switch Operation
SiC VJFETs are excellent candidates for reliable high-power/temperature switching as they only use pn junctions in the active device area where the high electric fields occur. VJFETs do not suffer from forward voltage degradation, exhibit excellent short-circuit performance, and operate at 300 °C. 0.19 cm² 1200 V normally-on and 0.15 cm² low-voltage normally-off VJFETs were fabricated. The 1200-V VJFET outputs 53 A with a forward drain voltage drop of 2 V and a specific onstate resistance of 5.4 mΩ cm². The low-voltage VJFET outputs 28 A with a forward drain voltage drop of 3.3 V and a specific onstate resistance of 15 mΩ cm². The 1200-V SiC VJFET was connected in the cascode configuration with two Si MOSFETs and with a low-voltage SiC VJFET to form normally-off power switches. At a forward drain voltage drop of 2.2 V, the SiC/MOSFET cascode switch outputs 33 A. The all-SiC cascode switch outputs 24 A at a voltage drop of 4.7 V.
INTRODUCTION
Wide band gap semiconductors like silicon carbide (SiC) and the III-V nitrides are currently being developed for high-power/temperature applications. SiC is ideally suited for power-conditioning applications due to its high saturated drift velocity, its mechanical strength, its excellent thermal conductivity, and its high critical field strength. For power devices, the tenfold increase in critical field strength of SiC relative to Si allows high-voltage blocking layers to be fabricated significantly thinner than those of comparable Si devices. This reduces device onstate resistance, and the associated conduction and switching losses, while maintaining the same high-voltage blocking capability. Figure 1 shows the theoretical specific onstate resistance of blocking regions designed for certain breakdown voltages in Si and 4H-SiC, under optimum punchthrough conditions [1]. The specific onstate resistance of 4H-SiC is approximately 400 times lower than that of Si at a given breakdown voltage. This allows for high current operation at relatively low forward voltage drop. In addition, the wide band gap of SiC allows operation at high temperatures where conventional Si devices fail. Forward voltage drop versus current density of Northrop Grumman's all-SiC vertical junction field effect transistor (VJFET)-based cascode switch, and those of commercial Si MOSFET, Si IGBT, and Si CoolMOS switches, are shown in Figure 2. The SiC switch has a lower voltage drop at a given current density, even at the elevated temperature of 150 °C. The low-loss and high-temperature operational capabilities of SiC devices can potentially eliminate the costly cooling systems present in today's Si-based power electronics.
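The magnitude of this advantage can be seen from the standard one-dimensional unipolar limit, R_on,sp = 4·BV²/(ε·μ_n·E_c³). The sketch below evaluates it with commonly quoted textbook material constants; these values are assumptions for illustration, and the exact ratio shifts with the constants chosen and with the punchthrough optimization underlying Figure 1:

```python
# Sketch of the unipolar drift-region limit R_on,sp = 4*BV^2/(eps*mu_n*Ec^3)
# (non-punchthrough form). The material constants are commonly quoted
# textbook values and only indicative; Figure 1 uses optimized punchthrough
# designs, so absolute numbers differ somewhat.
EPS0 = 8.854e-14  # vacuum permittivity, F/cm

def r_on_sp(bv_volts, eps_r, mu_n_cm2, ec_v_per_cm):
    """Specific on-resistance of the ideal drift region in ohm*cm^2."""
    return 4.0 * bv_volts**2 / (EPS0 * eps_r * mu_n_cm2 * ec_v_per_cm**3)

bv = 1200.0
r_si = r_on_sp(bv, eps_r=11.7, mu_n_cm2=1350.0, ec_v_per_cm=3e5)  # silicon
r_sic = r_on_sp(bv, eps_r=9.7, mu_n_cm2=900.0, ec_v_per_cm=3e6)   # 4H-SiC

print(f"Si:     {r_si*1e3:.1f} mOhm*cm^2")   # ~150 mOhm*cm^2
print(f"4H-SiC: {r_sic*1e3:.3f} mOhm*cm^2")  # ~0.3 mOhm*cm^2
print(f"ratio:  {r_si/r_sic:.0f}x")          # several hundred x
```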
Presently, several SiC devices are being developed for 600 V (1200 V rating) power switching applications. SiC MOS-based devices show promise as normally-off power switches but suffer from low MOS channel mobility and native oxide issues that limit reliable operation to below 175 °C [2]. Furthermore, several temperature-dependent factors result in a decrease of the SiC MOSFET threshold voltage with temperature. This may lead to unwanted MOSFET turn-on at temperatures over 200 °C.
The SiC bipolar junction transistor is another normally-off power switching candidate. However, as with all SiC bipolar devices, its long term performance deteriorates due to forward bias voltage degradation [3]. Also, the BJT is a current controlled device that can require substantial base drive current [4]. The SiC VJFET is a very promising candidate for high-power/temperature switching as it only uses pn junctions in the active device area, where the high electric fields occur, and can therefore fully exploit the high-temperature properties of SiC in a gate voltage controlled switching device. VJFETs for high voltage applications are typically normally-on devices, and an all-SiC normally-off power switch can be implemented by combining a high-voltage normally-on VJFET with a low-voltage normally-off VJFET in the cascode configuration.
In this paper, we review the reliability and high temperature characteristics of 1.25 × 10⁻³ cm² area unipolar ion-implanted SiC VJFETs. Subsequently, we present the forward current and blocking voltage characteristics of 0.19 cm² area 1200 V normally-on and 0.15 cm² area low-voltage normally-off SiC VJFETs. The 0.19 cm² 1200-V VJFETs have been connected in the cascode configuration with Si MOSFETs and 0.15 cm² low-voltage SiC VJFETs to form normally-off power switches.
SiC VJFET STRUCTURE
A cross-section schematic of a high-voltage p+ ion-implanted 4H-SiC VJFET is shown in Figure 3.
The channel layer is doped to low 10¹⁶ cm⁻³, and the drift layer is doped to mid 10¹⁵ cm⁻³. To ensure >1200 V blocking, a 12 μm drift layer thickness is used. The substrates and epitaxy are grown by commercial vendors. In the onstate, majority carriers (electrons) flow vertically from source to drain. To control the current through the device, the gates are subjected to a voltage, which adjusts the width of the depletion regions between the p-type gates in the n-type channel. In normally-off VJFETs, the p+ implant depletion regions must overlap at 0 V gate bias. Reducing the gate-to-gate spacing leads to higher depletion region overlap, and the VJFET blocks increasingly higher drain voltages. For larger gate-to-gate separations, the 0 V gate bias depletion regions do not overlap and the VJFET is normally-on.
As the normally-off VJFET need only block low voltages in cascode switching operation, its drift layer is approximately 2.5 μm thick to minimize onstate resistance and losses.
To ensure high-voltage operation with minimum associated onstate resistance, a robust, self-aligned, multiple floating guard-ring edge termination was designed and fabricated. The high-voltage VJFETs (12 μm drift layer doped at mid 10¹⁵ cm⁻³) exhibited breakdown voltages of up to 2022 V, which corresponds to a record 93% of the calculated 4H-SiC material limit [5]. The measured specific onstate resistance was 2.1 mΩ cm², a value close to the theoretical limit of the 4H-SiC material (Figure 1) [6].
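For context, the measured value can be compared against a first-order estimate of the ideal drift-layer contribution, R_drift,sp = W_d/(q·μ_n·N_d). The doping and mobility below are assumed, illustrative values (a mid-10¹⁵ cm⁻³ concentration taken as 5×10¹⁵ cm⁻³ and a bulk electron mobility of about 900 cm²/V·s):

```python
# First-order estimate of the drift-layer specific resistance,
# R_drift,sp = W_d / (q * mu_n * N_d). The doping (mid-1e15 taken as
# 5e15 cm^-3) and the mobility are assumed illustrative values.
Q = 1.602e-19   # elementary charge, C
W_D = 12e-4     # drift layer thickness, cm (12 um)
MU_N = 900.0    # assumed bulk electron mobility in 4H-SiC, cm^2/V/s
N_D = 5e15      # assumed drift doping, cm^-3

r_drift_sp = W_D / (Q * MU_N * N_D)       # ohm*cm^2
print(f"{r_drift_sp*1e3:.2f} mOhm*cm^2")  # ~1.7 mOhm*cm^2
```

The gap between this ideal-drift estimate (~1.7 mΩ cm²) and the measured 2.1 mΩ cm² would plausibly be accounted for by channel, substrate and contact contributions.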
Initially, limited by the low 4H-SiC material quality, and the high micropipe defect density in particular, "small" 1.25 × 10⁻³ cm² area VJFETs were fabricated and paralleled to increase current output. The small area VJFET manufacturing was optimized, and high yield with excellent wafer parameter uniformity was achieved [7,8].
SiC VJFET CASCODE RELIABILITY AND HIGH-TEMPERATURE OPERATION
To assess the reliability of the 1.25 × 10⁻³ cm² area VJFETs, the forward voltage drops across the VJFET's gate-to-drain and gate-to-source pn junctions were measured at constant junction current densities of 100 A/cm². After 500 hours of continuous room temperature operation under this DC bias condition, no measurable forward voltage drift was detected [9]. Additionally, 1200 V SiC VJFETs were subjected to short-circuit testing to determine the survivability time prior to the onset of catastrophic device failure. The SiC VJFETs exhibited hold-off times in excess of 1 millisecond, a sixfold improvement over Si MOSFETs of similar voltage rating [9].
To implement all-SiC normally-off power switches, high-voltage normally-on and low-voltage normally-off VJFETs were connected in the cascode configuration. A schematic of the cascode switch and its constituent VJFETs is shown in Figure 4. The switches are voltage-driven, and have exhibited excellent power switching characteristics including low onstate resistance, high speed, and low switching losses [10]. A typical breakdown voltage curve of a normally-off (1250 V at Vgs = 0 V) all-SiC cascode switch is shown in Figure 5.
The cascode switch's internal PiN diode has exhibited a very fast 100 nanosecond reverse recovery time (Figure 6), which can potentially eliminate the need for external diodes in power switching circuits [11]. A half-bridge inverter was demonstrated using SiC cascode switches with no external antiparallel diodes. The inverter consisted of high-side and low-side cascode switches that were pulse-width modulated from a 500 V bus to produce a 60 Hz sinusoid at the output [11].
The high-temperature operational capability of SiC VJFETs is crucial in eliminating costly cooling in power systems. To investigate the effect of temperature on blocking voltage, the blocking characteristics of a normally-off VJFET were measured at 25 °C and 300 °C junction temperatures. At a given gate-to-source bias Vgs, the blocking voltage decreases with temperature, as shown in Figure 7. This is in agreement with theory, as the reverse-bias drain-to-source leakage current increases with temperature due to the higher number of thermally generated carriers. The measurements were performed using a Tektronix 371A curve tracer. As the 300 °C measurement setup required modification of the curve tracer's looping compensation, the 25 °C and 300 °C leakage current levels cannot be directly compared.
The effect of temperature on the onstate drain current of the cascode switch is illustrated in Figure 8 at junction temperatures of 25 °C and 300 °C, for gate-to-source biases of 0 to 3 V in steps of 0.5 V. As VJFETs do not have gate oxides, they reliably operate at junction temperatures of 300 °C. The measured drop in current with increasing temperature in Figure 8 is in good agreement with the theoretical reduction in SiC electron mobility. As the channel and drift regions are designed independently in VJFETs, the onstate resistance can be tuned for maximum current output [12].
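As a rough, hedged estimate of the expected magnitude of this effect: if the on-current were limited purely by bulk electron mobility, a power law μ_n(T) ∝ (T/300 K)^(−α) with α ≈ 2.4 (a value commonly reported for 4H-SiC, assumed here) gives the following scaling between 25 °C and 300 °C:

```python
# Rough estimate of the mobility-limited current reduction between
# 25 C and 300 C, using mu_n(T) ~ (T/T_ref)^(-alpha). The exponent
# alpha = 2.4 is a commonly reported value for 4H-SiC, assumed here.
ALPHA = 2.4

def mobility_ratio(t_kelvin, t_ref_kelvin, alpha=ALPHA):
    return (t_kelvin / t_ref_kelvin) ** (-alpha)

ratio = mobility_ratio(300.0 + 273.15, 25.0 + 273.15)
print(f"mu(300 C)/mu(25 C) ~ {ratio:.2f}")  # ~0.21: on-current would drop to
# roughly one fifth of its room-temperature value if purely mobility-limited
```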
LARGE AREA VJFETs
To meet the current handling requirements of modern power conditioning systems, 1200 V normally-on VJFETs of 0.19 cm² area (4.4 mm × 4.33 mm) were manufactured. Excluding the bonding pads and edge termination region, the active area as defined by the pn junctions is 0.143 cm². A photograph of large area VJFETs fabricated on a 3-inch 4H-SiC wafer is shown in Figure 9.
Large area VJFETs were soldered into packages and wire bonded using thick aluminum wires, Figure 10.
To attain the desirable 1200 V blocking voltage capability, a 12 μm drift layer with a doping concentration in the mid 10¹⁵ cm⁻³ range was used. The blocking voltage characteristics of the 0.19 cm² VJFET were measured with a Tektronix 371A curve tracer and are shown as a function of gate voltage in Figure 11. At a gate-to-source bias of −24 V, the VJFET blocks 1680 V at a drain current density of 1 mA/cm².
Figure 11: Blocking voltage characteristics of the 0.143 cm² active area VJFET (1680 V at 1 mA/cm², Vgs = −24 V).

Figure 12: Onstate drain current versus drain voltage characteristics of a high-voltage 0.143 cm² active area packaged VJFET, at a gate-to-source bias range of 0 to 3 V in steps of 0.5 V. At a gate-to-source bias of 2.5 V (with a gate current of 12 mA), the VJFET outputs 53 A and 100 A at forward drain voltage drops of 2 V and 4.8 V, respectively.

Room-temperature pulsed onstate drain current measurements were performed on packaged 1200-V VJFETs (single chip), at a gate bias range of 0 to 3 V in steps of 0.5 V (Figure 12). At a gate bias of 2.5 V, the VJFET's drain current is 40 A with a forward drain voltage drop of 1.5 V and a specific onstate resistance of 5.4 mΩ cm². The current density is 280 A/cm², and the power density is 420 W/cm². The gate current at Vgs = 2.5 V is 12 mA, which results in a transistor current gain of 3333. At the same gate bias of 2.5 V, the VJFET outputs a drain current of 53 A at a forward drain voltage drop of 2 V. The current density is 371 A/cm², and the power density is 741 W/cm², which is within the heat load capability of advanced water-cooled packages [13]. The specific onstate resistance is 5.4 mΩ cm², and the transistor current gain is 4417. A drain current of 100 A at a forward drain voltage drop of 4.8 V is measured at a gate bias of 2.5 V (gate current of 12 mA). Finally, a record high onstate current of 161 A is measured at a drain voltage drop of 16 V, for a gate bias of 3 V. Although biasing the gate pn junction above its ∼2.7 V built-in potential increases drain current, it can seriously degrade current gain. Therefore, in practical power switching circuits, the gate driver biases the pn junction below its built-in potential.
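The quoted densities, specific onstate resistance and current gain all follow directly from the 0.143 cm² active area and the measured operating points; a minimal sketch of the arithmetic, with Python used purely for illustration:

```python
# Reproducing the figures of merit quoted above from the 0.143 cm^2
# active area and the measured (I_d, V_d, I_g) operating points.
AREA = 0.143  # cm^2, pn-junction active area

def figures_of_merit(i_d, v_d, i_g=None, area=AREA):
    j = i_d / area                    # current density, A/cm^2
    p = i_d * v_d / area              # power density, W/cm^2
    r_sp = (v_d / i_d) * area         # specific on-resistance, ohm*cm^2
    gain = i_d / i_g if i_g else None # transistor current gain
    return j, p, r_sp, gain

# Vgs = 2.5 V, Ig = 12 mA operating points from Figure 12
print(figures_of_merit(40.0, 1.5, 0.012))
# -> (~280 A/cm^2, ~420 W/cm^2, ~5.4e-3 ohm*cm^2, gain ~3333)
print(figures_of_merit(53.0, 2.0, 0.012))
# -> (~371 A/cm^2, ~741 W/cm^2, ~5.4e-3 ohm*cm^2, gain ~4417)
```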
As pointed out earlier, and as is evident from the blocking voltage characteristics presented in Figure 11, the 0.143 cm² 1200-V VJFET is designed normally-on to minimize onstate resistance and maximize current gain. Presently, inherently safe gate-drive circuits are being developed to utilize normally-on SiC VJFETs as power switches [14,15]. However, most circuit designers require a normally-off SiC-based switch as a direct replacement for silicon MOSFETs or IGBTs. Connecting a high-voltage, low onstate resistance SiC VJFET in the cascode configuration with a low-voltage silicon power MOSFET creates a normally-off power switch with a control characteristic similar to a silicon MOSFET or IGBT [16]. The cascode circuit diagram is similar to the one that appears in Figure 4, with the low-voltage normally-off part being a silicon MOSFET.
In the cascode configuration, and with the MOSFET biased in the on state, the 1200-V SiC VJFET and the Si MOSFET operate in series, with the gate of the 1200-V SiC VJFET automatically biased at a voltage equal to the negative of the drain-to-source voltage drop across the low-voltage Si MOSFET. In the offstate, the MOSFET's drain-to-source blocking voltage provides the necessary negative gate bias to pinch off the 1200-V SiC VJFET. After the VJFET is pinched off, further increase in reverse voltage at the drain of the cascode is supported by the 1200-V SiC VJFET.
The 1200-V SiC VJFET, whose onstate characteristics appear in Figure 12, was connected in the cascode configuration with two paralleled commercial 75-V/97-A rated silicon MOSFETs. The onstate drain current characteristics versus drain voltage of the resulting cascode switch were measured at MOSFET gate biases of 0 V, 5 V, 10 V, and 15 V (Figure 13). At a MOSFET gate-to-source bias of 15 V, the cascode switch outputs 33 A at a forward drain voltage drop of 2.2 V. Under these biasing conditions, the gate of the 1200-V SiC VJFET experiences a bias equal to the negative of the measured 0.2 V drain-to-source voltage drop across the MOSFETs. The forward voltage drop across the 1200-V SiC-VJFET component of the cascode switch is 2 V. Its current and power densities are 231 A/cm² and 462 W/cm², respectively.
The SiC-VJFET/Si-MOSFET normally-off power switch exploits the high blocking voltage with low onstate resistance capability of the 1200-V SiC VJFET. However, the Si MOSFET sets an upper limit on temperature operation and introduces gate oxide capacitance.

Figure 14: Blocking voltage versus gate bias characteristics of a 0.13 cm² active area low-voltage normally-off SiC VJFET. At a gate-to-source bias of 0 V and a drain current density of 1 mA/cm², the VJFET blocks 44 V (normally off to 44 V).

To overcome these
limitations and exploit the high-temperature capability of SiC, a 0.15 cm² SiC VJFET was fabricated to be used as the low-voltage normally-off component of the cascode switch (Figure 4). Excluding the bonding pads and edge termination regions, the low-voltage normally-off VJFET's pn-junction active area is 0.13 cm². Its blocking voltage characteristics at different gate biases are demonstrated in Figure 14. The device has a thin 2.5 μm drift layer to minimize onstate resistance and losses, and blocks 44 V at zero gate-to-source bias with a drain current density of 1 mA/cm². Room-temperature onstate drain current measurements were performed on the low-voltage normally-off 0.15 cm² VJFETs at a gate bias range of 0 to 3.5 V (Figure 15).
At a gate bias of 2.5 V, the VJFET's drain current is 28 A with a forward drain voltage drop of 3.3 V and a specific onstate resistance of 15 mΩ cm². The current and power densities are 215 A/cm² and 711 W/cm², respectively. A drain current of 50 A at a forward drain voltage drop of 4 V is measured at a gate bias of 3.5 V. As the low-voltage VJFET is designed for normally-off operation, it is more resistive than the normally-on 1200-V VJFET of Figure 12, and consequently outputs less current under similar drain biasing conditions.
To implement a 1200 V all-SiC normally-off power switch similar to the one shown schematically in Figure 4, a single 0.143 cm² 1200-V SiC VJFET was connected in the cascode configuration with a single 0.13 cm² low-voltage SiC VJFET. Room-temperature onstate drain current measurements were performed at cascode gate biases of 0 to 3.5 V (Figure 16).
At a cascode gate bias of 2.5 V, the all-SiC cascode switch outputs 24 A at a forward drain voltage drop of 4.7 V. At this biasing condition, drain voltages of 2.8 V and 1.9 V are dropped across the 1200-V and low-voltage VJFETs, respectively. The current and power densities are 168 A/cm² and 470 W/cm² for the 1200-V VJFET, and 185 A/cm² and 351 W/cm² for the low-voltage VJFET. In forward cascode operation, the gate of the 1200-V VJFET is biased at a voltage equal to the negative of the drain-to-source voltage drop across the low-voltage VJFET (Figure 4). Thus, the gate of the 1200-V VJFET is biased at −1.9 V when 4.7 V are dropped across the cascode switch.
In the SiC-VJFET/Si-MOSFETs cascode, a current of 33 A passes through the switch at a forward drain bias of 2.2 V (Figure 13). This is higher than the 24 A current of the all-SiC cascode under similar 1200-V VJFET power density biasing conditions. The mature Si wafer technology allows the fabrication of larger MOSFETs, which minimizes their resistance and voltage drop. Consequently, at a power density of about 470 W/cm² on the 1200-V VJFET of the cascode, the gate of the 1200-V SiC VJFET is biased at the −0.2 V MOSFET voltage drop in the SiC-VJFET/Si-MOSFETs case, and at the −1.9 V low-voltage VJFET voltage drop in the all-SiC cascode case. This difference in 1200-V VJFET gate bias is responsible for the disparity in cascode current outputs under similar power density biasing conditions. Paralleling multiple low-voltage SiC VJFETs will minimize the voltage drop across the low-voltage portion of the all-SiC cascode switch and lead to higher cascode current output.
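The effect of paralleling can be sketched with a simple ohmic approximation of the low-voltage device; the linear 1/n scaling of the drop is an assumption, and only the 1.9 V drop at 24 A comes from the measurements above.

```python
# Why paralleling low-voltage devices helps: in the ohmic region the
# low-side drop scales as I*R/n, and that drop is exactly the reverse
# gate bias applied to the 1200-V VJFET. Evaluated at the measured
# 24 A cascode current; ohmic scaling is an assumption.

I_CASCODE = 24.0                 # A, all-SiC cascode measurement
R_LOW_SINGLE = 1.9 / I_CASCODE   # effective single-device resistance, ohms

for n in (1, 2, 4):
    v_drop = I_CASCODE * R_LOW_SINGLE / n
    print(f"{n} low-voltage VJFET(s): drop = {v_drop:.2f} V -> "
          f"HV-JFET gate bias = {-v_drop:.2f} V")
```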
Operating the 1200-V SiC VJFET as a switch in an inherently safe gate-drive circuit eliminates the need for a low-voltage normally-off SiC VJFET cascode component. Moreover, in a 1200-V normally-on VJFET switch, a gate bias of 2.5 V (12 mA gate current) can be applied, which allows for high-current/high-gain operation with low onstate resistance (Figure 12).
In a cascode switch, the gate of the high-voltage VJFET is biased at a voltage equal to the negative of the drain-to-source voltage drop across the low-voltage component. Hence, in forward cascode operation, the high-voltage VJFET's gate is always at a negative bias, which lowers current output and increases onstate resistance. To visualize the impact of negative gate bias on the 1200-V VJFET of the cascode, the onstate drain current characteristics of the 0.143 cm² 1200-V VJFET are plotted at a gate bias range of 0 to −4.5 V in steps of 0.5 V (Figure 17).

Figure 16: Onstate drain current versus drain voltage characteristics of an all-SiC cascode switch consisting of a low-voltage 0.13 cm² active area VJFET and a 1200-V 0.143 cm² active area VJFET. The measurements were taken at cascode gate biases of 0 to 3.5 V, in steps of 0.5 V. At a cascode gate bias of 2.5 V, the switch outputs 24 A with a forward drain voltage drop of 4.7 V.

Figure 17: Onstate drain current versus drain voltage characteristics of a 1200-V 0.143 cm² active area packaged VJFET at a gate bias range of 0 to −4.5 V, in steps of 0.5 V. At a gate-to-source bias of −4.5 V, the VJFET is pinched off and negligible current flows through its drain.
It is evident from Figure 17 that the 1200-V VJFET current output decreases nonlinearly with negative bias on its gate. Thus, the voltage drop across the low-voltage cascode component limits the current output of the entire cascode by reverse biasing the gate of the 1200-V VJFET. At −4.5 V gate bias, the VJFET turns off and negligible current flows through its drain.
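The nonlinear falloff can be illustrated with the textbook long-channel JFET square-law; this is a generic model chosen for illustration rather than a fit to the measured device, with only the −4.5 V turn-off bias taken from the text.

```python
# Square-law JFET model: Id = Idss * (1 - Vgs/Vp)^2 for Vp < Vgs <= 0.
# Idss is an assumed placeholder; Vp = -4.5 V matches the observed
# turn-off bias. Shows why a modest low-side drop costs real current.

IDSS = 100.0   # A, assumed saturation current at Vgs = 0
VP = -4.5      # V, pinch-off voltage (from the text)

def jfet_id(vgs: float) -> float:
    if vgs <= VP:
        return 0.0                       # pinched off
    return IDSS * (1.0 - vgs / VP) ** 2  # nonlinear in Vgs

for vgs in (0.0, -0.5, -1.9, -3.0, -4.5):
    print(f"Vgs = {vgs:5.1f} V -> Id = {jfet_id(vgs):6.1f} A")
```

At −1.9 V (the all-SiC cascode's low-side drop) this toy model already loses about two thirds of the zero-bias current, qualitatively matching the disparity discussed above.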
CONCLUSION
The SiC VJFET is a very promising candidate for reliable high-power/high-temperature switching, as it uses only pn junctions in the active device area where the high electric fields occur. VJFETs do not suffer from forward voltage degradation, and exhibit holdoff times higher than those of their Si counterparts in short-circuit testing. The internal diode of the VJFET-based all-SiC normally-off cascode switch has exhibited a very fast 100 nanoseconds reverse recovery time, eliminating the need for antiparallel diodes in power switching circuits. VJFETs were successfully operated at 300 °C junction temperature. The measured reduction in onstate current is in good agreement with the theoretical reduction in SiC electron mobility.
To meet the current handling requirements of modern power conditioning systems, 1200 V normally-on VJFETs of 0.19 cm² and low-voltage normally-off VJFETs of 0.15 cm² area were fabricated. At a gate bias of 2.5 V, the 1200-V VJFET outputs 53 A with a forward drain voltage drop of 2 V and a specific onstate resistance of 5.4 mΩ cm². The low-voltage VJFET's drain current is 28 A at a gate bias of 2.5 V, with a forward drain voltage drop of 3.3 V and a specific onstate resistance of 15 mΩ cm².
A 1200-V SiC VJFET was connected in the cascode configuration with two commercial Si MOSFETs to form a normally-off power switch. At a MOSFET gate-to-source bias of 15 V, the cascode switch outputs 33 A at a forward drain voltage drop of 2.2 V. To fully exploit the high-temperature capability of SiC in a normally-off power switch, a 0.15 cm² low-voltage normally-off SiC VJFET was connected with a 0.19 cm² 1200-V normally-on VJFET in the cascode configuration. At a forward drain voltage drop of 4.7 V, the all-SiC cascode switch outputs 24 A at 2.5 V cascode gate bias. Operating the 1200 V normally-on SiC VJFET as a switch in an inherently safe gate-drive circuit eliminates the need for a low-voltage normally-off SiC VJFET cascode component, and enables high-current/high-gain operation with low voltage drop and low onstate resistance.
Figure 1: Theoretical specific onstate resistance of blocking regions designed for certain breakdown voltages in Si and 4H-SiC, under optimum punch-through conditions.
Figure 3: Simplified cross-section schematic of a normally-on ion-implanted SiC VJFET. The layer dimensions are not to scale.
Figure 4: Schematic of Northrop Grumman's all-SiC power switch consisting of high-voltage normally-on and low-voltage normally-off VJFETs connected in the cascode configuration.
Figure 6: The very fast 100 nanoseconds reverse recovery time of the cascode switch's internal diode.
Figure 10: A 0.19 cm² area VJFET soldered into a package and wire bonded using 10 mil thick aluminium wires.
Figure 11: Blocking voltage characteristics of a 0.143 cm² active area VJFET at gate biases of −4 V to −24 V, in steps of −2 V. At a gate-to-source bias of −24 V and a drain current density of 1 mA/cm², the VJFET blocks 1680 V.
Figure 13: Onstate drain current characteristics of a switch consisting of a single 0.143 cm² active area 1200-V SiC VJFET connected in the cascode configuration with two paralleled 75-V/97-A commercial silicon MOSFETs. At a MOSFET gate-to-source bias of 15 V, the cascode switch outputs 33 A at a forward drain voltage drop of 2.2 V.
Figure 15: Onstate drain current versus drain voltage characteristics of a 0.13 cm² active area low-voltage normally-off packaged VJFET. Gate biases of 0 to 3.5 V were applied, in steps of 0.5 V. At a gate bias of 2.5 V, the VJFET outputs 28 A at a forward drain voltage drop of 3.3 V. | 2019-04-12T13:55:38.557Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "da872c3bd84fc32f09f9db028940d25144c39b6e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2008/523721.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f30b474ca12145b59095e072d7153b72e91cc650",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
110688 | pes2o/s2orc | v3-fos-license | Inflammation-associated alterations to the intestinal microbiota reduce colonization resistance against non-typhoidal Salmonella during concurrent malaria parasite infection
Childhood malaria is a risk factor for disseminated infections with non-typhoidal Salmonella (NTS) in sub-Saharan Africa. While hemolytic anemia and an altered cytokine environment have been implicated in increased susceptibility to NTS, it is not known whether malaria affects resistance to intestinal colonization with NTS. To address this question, we utilized a murine model of co-infection. Infection of mice with Plasmodium yoelii elicited infiltration of inflammatory macrophages and T cells into the intestinal mucosa and increased expression of inflammatory cytokines. These mucosal responses were also observed in germ-free mice, showing that they are independent of the resident microbiota. Remarkably, P. yoelii infection reduced colonization resistance of mice against S. enterica serotype Typhimurium. Further, 16S rRNA sequence analysis of the intestinal microbiota revealed marked changes in the community structure. Shifts in the microbiota increased susceptibility to intestinal colonization by S. Typhimurium, as demonstrated by microbiota reconstitution of germ-free mice. These results show that P. yoelii infection, via alterations to the microbial community in the intestine, decreases resistance to intestinal colonization with NTS. Further, they raise the possibility that decreased colonization resistance may synergize with effects of malaria on systemic immunity to increase susceptibility to disseminated NTS infections.
To examine the effects of malaria parasite infection on the intestine, C57BL/6 mice were inoculated with P. yoelii. Mice developed parasitemia, measured as the percentage of red blood cells harboring parasites, that peaked at maximal levels between days 10 and 15 after inoculation. We selected this phase of maximal parasitemia to interrogate effects of malaria on the intestine (Fig. 1A). During this time, P. yoelii-infected blood cells could be observed in the intestinal microvasculature, with evidence of sequestration on the vascular endothelium (Fig. 1B). Blinded histopathology analysis of the large intestinal wall at the cecum revealed mild but significant changes, including edema of the lamina propria, focal loss of goblet cells, hyperplasia of undifferentiated enterocytes, and focal infiltration of mononuclear cells into the lamina propria, but no evidence of inflammatory cell exudation into the intestinal lumen (Fig. 1C).
Analysis of the cellular infiltrates by flow cytometry revealed that they consisted primarily of T cells (CD3+), as well as CD11b+ and CD11c+ myeloid cells (Fig. 1D and Fig. S1). Approximately 30% of the infiltrating CD11b+ cells exhibited an inflammatory phenotype, as evidenced by expression of Ly6C (Fig. 1E). Together, these results show that acute malaria parasite infection is associated with inflammatory changes in the wall of the intestine.
Inflammatory changes result from parasite infection, and do not require the endogenous microbiota. We next interrogated whether the mucosal inflammation observed in the P. yoelii-infected mice resulted from the parasite infection itself, or rather from resulting penetration of the endogenous intestinal microbiota into the tissue. To address this question, we compared intestinal responses of conventionally reared or germ-free C57BL/6 mice during maximal parasitemia with P. yoelii (Fig. 2A). Both groups of mice exhibited similar levels of parasitemia at d15 after inoculation, so this time point was used for comparison (Fig. 2A). Histopathology scoring revealed a comparable severity of histologic changes in P. yoelii-infected germ-free mice compared to conventional mice (Fig. 2B and Fig. 1C). Based on our observation (Fig. 1) that inflammatory infiltrates in the intestine of conventional mice contained T cells and macrophages, we analyzed mucosal expression of S100a8 and S100a9, produced by inflammatory macrophages during malaria 20, as well as of IL-10 and interferon gamma, produced by T cells 21 (Fig. 2B). In conventional mice, expression of S100a8, S100a9, Il10 and Ifng was elevated at d10 and d15 after P. yoelii infection. A similar pattern of induction was observed in germ-free mice evaluated at d15. There was a nonsignificant trend (P > 0.05) toward lower induction of S100a8 and S100a9 in the germ-free mice. These results suggest that while an intact intestinal microbiota may contribute to inflammatory changes in the gut mucosa during P. yoelii infection, it is not required for this effect; rather, the malaria parasite infection itself drives the inflammatory response.
The composition of the intestinal microbiota is altered during malaria parasite infection.
To determine whether malaria parasite infection impacts the resident microbiota, fecal pellets were collected from two groups of co-housed C57BL/6 mice before inoculation with P. yoelii and at days 10, 15 and 30 post infection. Illumina MiSeq analysis of amplicons from the 16S rRNA locus in fecal DNA extracts revealed significant alterations in the colonic microbiota (Fig. 3). At the phylum level, a decreased abundance of Firmicutes and a relative increase in the abundance of Bacteroidetes were observed at d10 (Fig. 3A). These changes were not simply the result of fluctuation in the resident microbiota over time or of husbandry-related effects, since a group of mock-infected mice from the same colony exhibited a stable microbiota composition over time (Fig. S2). At the genus level, acute malaria parasite infection at d10 was associated with an increase in the relative abundance of unclassified members of the Rikenellaceae (P = 0.009), Ruminococcaceae (P = 0.007), and Bacteroidales (P = 0.024), as well as of Turicibacter (P = 0.001). A decrease in Ruminococcus was also noted (P = 0.041; Fig. 3B and Table 1). Since the P. yoelii-infected mice were co-housed, we cannot formally exclude a contribution of coprophagy to the altered fecal microbiota. However, one co-housed mouse in this group (not shown) was not infected with P. yoelii and did not exhibit these alterations in the fecal microbiota, suggesting that coprophagy alone is not sufficient to alter the microbiota in a conventionally reared mouse. Overall, as P. yoelii infection progressed, the diversity of the fecal microbiota decreased by d10, with gradual recovery by d30 after infection (Fig. 3C). At day 30, after resolution of infection, the composition of the microbiota most closely resembled the composition prior to infection, as shown by principal component analysis (Fig. 3D), suggesting that the effect of malaria parasite infection on microbial communities in the large intestine is transient.
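For readers unfamiliar with these community summaries, the sketch below shows how phylum-level relative abundance and Shannon alpha diversity are computed from OTU counts; the taxon names and counts are illustrative toy data, not values from this study.

```python
# Phylum-level relative abundance and Shannon alpha diversity from
# OTU counts. Counts and taxonomy strings below are made up.

import math
from collections import defaultdict

def relative_abundance(counts: dict) -> dict:
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

def shannon(counts: dict) -> float:
    """Shannon index H' = -sum(p_i * ln p_i) over taxa with p_i > 0."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n)

sample = {"Firmicutes;Ruminococcus": 420, "Firmicutes;Turicibacter": 80,
          "Bacteroidetes;Rikenellaceae": 350, "Bacteroidetes;other": 150}

by_phylum = defaultdict(int)
for taxon, n in sample.items():
    by_phylum[taxon.split(";")[0]] += n       # collapse to phylum level

print(relative_abundance(by_phylum))          # Firmicutes vs Bacteroidetes
print(f"Shannon H' = {shannon(sample):.2f}")  # computed at OTU level
```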
[Figure 1 legend, continued] Significance for differences between experimental groups was determined using Student's t test on logarithmically transformed data. Mice were housed in groups of 4-5 per cage.

[Figure 2 legend, continued] Scoring criteria are detailed in Table S1. Each bar represents an individual mouse. Images were acquired with 10× (left panels) and 40× objectives (right panels). Arrow indicates mononuclear infiltration. (C) Expression analysis of inflammatory markers by qRT-PCR. Transcript levels of calprotectin (subunits S100a8 and S100a9), interferon gamma (Ifng) and interleukin-10 (Il10) were determined in cecal tissue from Conv or GF mice sacrificed at 10, 12 or 15 d after P. yoelii inoculation. Data shown as fold change over mock-treated Conv mice (indicated with dashed line at 1) with mean + SEM (Conv, n = 5-11; GF, n = 3). Asterisk (*) indicates significance (P < 0.05) when compared to mock-treated mice as determined by Student's t test on logarithmically transformed data; (ns) indicates no significance (P > 0.05). Groups of mice were co-housed.

Malaria parasite infection lowers the implantation dose for S. Typhimurium in mice. The finding that P. yoelii infection altered the intestinal microbiota raised the possibility that these changes could affect susceptibility of mice to infection with NTS. To address this question, we determined the dose at which 50% of mice would become infected with S. Typhimurium (implantation dose 50, or ID50) at 1 day after infection, according to the method of Reed and Muench 22. In control mice, the ID50 for S. Typhimurium IR715 was 1.1 × 10⁴ CFU, 34-fold higher than at the peak of P. yoelii infection, where this value was reduced to 3.2 × 10² CFU (Table 2 and Fig. S3A). Further, P. yoelii-infected mice inoculated with S. Typhimurium at varying doses were colonized at significantly higher levels, as assessed by determining CFU in the feces (Fig. 4A). By 4 days after S. Typhimurium infection, colonization levels were similar in both groups (data not shown), likely because S. Typhimurium infection elicits intestinal inflammation in the control mice, a factor that promotes its outgrowth in the intestinal lumen [23-26]. However, elevated intestinal colonization of S. Typhimurium at 1 day post inoculation in the P. yoelii-infected mice was independent of the ability of S. Typhimurium to elicit a mucosal inflammatory response, since an invA spiB mutant (SPN487), defective in the SPI-1 and SPI-2 encoded type III secretion systems required for mucosal invasion and inflammation 27,28, was also recovered in higher numbers from P. yoelii-infected
mice (Fig. 4B and Fig. S3B). Further, the human commensal strain Escherichia coli HS 29, which does not cause intestinal inflammation, also colonized the intestine of P. yoelii-infected mice at higher levels than in control mice (Fig. 4C), and this elevated colonization was maintained for several days after E. coli inoculation (data not shown). Of note, we did not observe an effect of P. yoelii infection on colonization by E. coli in our 16S microbiota analysis (Fig. 3), most likely because our mice (C57BL/6J) were not consistently colonized with detectable levels of E. coli. However, mice inoculated concurrently with E. coli and P. yoelii exhibited higher colonization of E. coli compared to control mice 14 days later (data not shown), implying that if E. coli is present at the outset of infection, its outgrowth is promoted during malaria. Together, these results suggest that changes to the intestinal milieu caused by P. yoelii infection promote colonization of the intestine by both S. Typhimurium and E. coli.
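The Reed and Muench endpoint interpolation used for the ID50 can be sketched as follows; the dose groups and outcome counts in the example are made-up illustrative data, not the study's.

```python
# Reed-Muench 50% endpoint: pool outcomes cumulatively (infected from
# low doses up, uninfected from high doses down), then interpolate the
# 50% crossing on a log-dose scale.

import math

def reed_muench_id50(doses, infected, totals):
    """doses in ascending order; infected/totals are per dose group."""
    uninfected = [t - i for i, t in zip(infected, totals)]
    cum_inf = [sum(infected[: i + 1]) for i in range(len(doses))]
    cum_uninf = [sum(uninfected[i:]) for i in range(len(doses))]
    pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(len(doses) - 1):
        if pct[i] < 50 <= pct[i + 1]:
            pd = (50 - pct[i]) / (pct[i + 1] - pct[i])  # proportionate distance
            log_id50 = (math.log10(doses[i])
                        + pd * (math.log10(doses[i + 1]) - math.log10(doses[i])))
            return 10 ** log_id50
    raise ValueError("50% endpoint not bracketed by the dose range")

# illustrative: 4 mice per group over a 10-fold dilution series
id50 = reed_muench_id50([1e2, 1e3, 1e4, 1e5], [0, 1, 3, 4], [4, 4, 4, 4])
print(f"ID50 ~ {id50:.2e} CFU")
```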
Alterations to the microbiota induced by malaria parasite infection promote colonization with S. Typhimurium. To determine the significance of the altered intestinal microbiota for increased S. Typhimurium colonization during malaria, we performed a microbiota transfer experiment. Cecal contents were isolated under anaerobic conditions from three control mice and three mice acutely infected with P. yoelii at d10 post infection (Fig. S3C), and the contents from each single mouse were transferred to an individual germ-free Swiss Webster recipient via oral gavage. After allowing 6 days for the microbiota to become established, mice were inoculated via gavage with S. Typhimurium. One day later, S. Typhimurium colonization was measured via fecal shedding. Figure 4D shows that recipients of the microbiota transplant from P. yoelii-infected mice were colonized with S. Typhimurium at a tenfold higher level than recipients of the microbiota from control mice. These results suggest that dysbiosis induced by malaria parasite infection lowers colonization resistance of mice against S. Typhimurium.
Discussion
Studies in murine models have shown that multiple responses to malaria parasite infection conspire to increase susceptibility to disseminated infection. Malaria-induced hemolysis impacts maturation of neutrophils, which play a critical role in containing the spread of extracellular bacteria 10. Further, malaria-induced IL-10, which is beneficial in the context of dampening parasite-induced inflammation, has a detrimental effect on control of intracellular S. Typhimurium replication within hepatic macrophages 11. As a consequence, once bacteria have disseminated from the gut, control of systemic infection is compromised. Further, disruption of intestinal barrier function and suppression of NTS-induced neutrophil recruitment to the mucosa may facilitate disseminated infection 12,14. This study identifies a new factor that suppresses resistance to initial colonization of the intestine by S. Typhimurium, namely alterations to the community structure of the intestinal microbiota, which outnumbers the body's own cells by an order of magnitude (ref. 30). As a result, malaria reduces colonization resistance against S. Typhimurium: in our model of concurrent infection, the effective dose of bacteria needed to establish intestinal infection was decreased by 97%. Loss of colonization resistance during malaria did not involve epithelial invasion or induction of inflammation by S. Typhimurium, as it was independent of the SPI-1 and SPI-2 type III secretion systems that are needed for invasion and induction of intestinal inflammation (Fig. 4) 31. Further, a non-invasive commensal E. coli strain also exhibited enhanced colonization in our model, suggesting that perturbation of the microbial community by malaria opens an ecologic niche that can be occupied by either S. Typhimurium or E. coli. This altered environment was associated with mononuclear infiltration of the intestinal mucosa (Fig. 1 and Fig. 2), suggesting the possibility that inflammatory changes may drive these changes to the microbiota. However, malaria-induced inflammation did not appear to be necessary for loss of colonization resistance, because reduced colonization resistance could be transferred to germ-free mice independently of malaria, by transfer of the cecal microbiota (Fig. 4D).

[Figure 4 legend, panel D] Susceptibility of germ-free mice reconstituted with colonic microbiota from P. yoelii-infected or control mice to colonization with S. Typhimurium. Each reconstituted mouse was housed individually for the duration of the experiment. Each symbol represents an individual mouse, with horizontal bars representing the geometric mean. Dashed lines indicate limit of detection. Significance of differences between experimental groups was determined using a Student's t test on logarithmically transformed data.

Of note, based on the different types of inflammatory responses observed in the intestinal mucosa, the mechanism by which malaria alters the endogenous microbiota is likely to be different from the mechanism by which S. Typhimurium promotes its own outgrowth via inflammation. In our study, we observed an infiltration of T cells and mononuclear phagocytes in P. yoelii-infected mice (Fig. 1). In addition, we observed an increase in FcεRI-positive cells, which is consistent with our previous report of an increase in mucosal mast cells in this model (Fig. S1 and ref. 14). In contrast, in the murine colitis model used to model the enteric pathology of S.
Typhimurium infection, a massive exudation of neutrophils into the mucosa and the intestinal lumen results in production of oxygen and nitrogen radicals that alter the environment and promote outgrowth of S. Typhimurium in the gut lumen 25.
P. yoelii infection resulted in a decrease in the complexity of the cecal microbiota, as well as a decrease in the abundance of Firmicutes. Interestingly, members of this phylum have been shown to be decreased after treatment with antibiotics, including cefoperazone, which reduces colonization resistance against Clostridium difficile 32, and streptomycin, which enhances colonization with S. Typhimurium 33. Further, a decrease in members of the Firmicutes has been observed in patients with inflammatory bowel disease, a condition that is associated with increased colonization by E. coli 26,34-38. However, whether shared mechanisms underlie the outgrowth of S. Typhimurium and E. coli in each of these conditions is unknown, since the mechanisms linking alterations in the microbiota with reduced colonization resistance are incompletely understood.
Taken together, the results of this study suggest that malaria, via alterations to the intestinal environment, shifts the community structure of the gut microbiota to provide a benefit to colonizing S. Typhimurium and E. coli.
Methods
Plasmodium yoelii nigeriensis (P. yoelii). Parasite stocks were obtained from the Malaria Research and Reference Reagent Resource, and the species and strain identities were confirmed by DNA sequencing of merozoite surface protein-1 (MSP-1) 14. Parasite stocks were prepared by passage in CD-1 mice. For experiments, mice were inoculated intraperitoneally (i.p.) on day 0 with blood containing approximately 4 × 10⁷ infected red blood cells (iRBCs). Mock-treated controls were injected with an equal volume of blood from uninfected CD-1 mice.
Animal experiments.
All experiments were performed in accordance with guidelines and regulations as outlined and approved by the UC Davis or Ohio State University Institutional Animal Care and Use Committees (IACUC). Specific pathogen-free (SPF) mice: 6-8-week-old female C57BL/6J mice were purchased from the Jackson Laboratory (Bar Harbor, Maine) and maintained under SPF conditions. Germ-free (GF) mice: GF C57BL/6 and Swiss Webster mice were bred inside germ-free isolators. Experimentation in GF mice was performed in an independent GF isolator. For fecal microbiota reconstitution, mice were transferred to a biosafety cabinet for inoculation and maintained in sterile cages for the duration of the experiment. After reconstitution, GF mice were caged individually.
Microbial readouts of colonization. Parasite infection was monitored by blood collection from tail snips. Parasitemia was determined by counting the percentage of Plasmodium yoelii iRBCs on thin blood smears stained with Giemsa (Acros Organics). For quantification of S. Typhimurium or E. coli, fecal pellets, collected 1 day after intragastric inoculation, were homogenized and serial dilutions spread on LB agar plates containing appropriate selective antibiotics.
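The CFU arithmetic behind this readout is simple dilution-plating bookkeeping; the sketch below makes it explicit, with all numbers illustrative.

```python
# CFU per gram of feces from dilution plating:
# CFU/mL of homogenate = colonies * dilution factor / plated volume,
# then normalize to the homogenized sample mass.

def cfu_per_gram(colonies, dilution_factor, plated_ml, sample_g, homog_ml):
    cfu_per_ml = colonies * dilution_factor / plated_ml  # in the homogenate
    return cfu_per_ml * homog_ml / sample_g

# e.g. 43 colonies on the 10^-4 plate, 0.1 mL plated,
# 0.05 g fecal pellet homogenized in 1 mL of buffer
print(f"{cfu_per_gram(43, 1e4, 0.1, 0.05, 1.0):.2e} CFU/g")
```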
Histopathology. Histological samples were collected at the time of necropsy. Sections of 5 μm were cut from formalin-fixed, paraffin-embedded tissues and stained with hematoxylin and eosin or Giemsa by the UC Davis Veterinary Pathology Laboratory. A veterinary pathologist (MXB) performed histopathology scoring in a blinded fashion, according to the scoring criteria detailed in Table S1.
RNA extraction, reverse transcription-PCR (RT-PCR), and real-time PCR. Animal tissues were frozen in liquid nitrogen at necropsy and stored at −80 °C. RNA was extracted from tissue as described previously 42 using Tri-Reagent (Molecular Research Center) according to the manufacturer's instructions. RNA was treated with DNase I (Ambion) to remove genomic DNA contamination. For quantitative analysis of mRNA levels, 1 μg of total RNA from each sample was reverse transcribed in a 50-μl volume (TaqMan reverse transcription [RT] reagent; Applied Biosystems), and 4 μl of cDNA was used for each real-time reaction. RT-PCR was performed using the primers listed in Table S2, SYBR green (Applied Biosystems) and a ViiA 7 Real-Time PCR System (Applied Biosystems). Data were analyzed using the comparative threshold cycle (CT) method (Applied Biosystems). Target gene transcription in each sample was normalized to the respective level of beta-actin mRNA and represented as fold change over gene expression in control animals (a minimal sketch of this calculation is given below).

Microbiota Sequencing. DNA was extracted from homogenized stool samples using the protocols and reagents specified in the PowerFecal™ DNA Isolation Kit (MoBio Laboratories, Inc.). To facilitate efficient assemblies and longer accurate reads, paired-end (PE) libraries were constructed. Bacterial DNA was amplified by PCR enrichment of 16S rRNA-encoding sequences from each sample using primers 515F and 806R, which flank the V3-V4 hypervariable region and were modified by adding a unique set of 8 oligonucleotide barcodes for purposes of multiplexing.
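The comparative CT calculation referenced in the RT-PCR paragraph above reduces to a ΔΔCT fold change; a minimal sketch, with illustrative CT values, follows.

```python
# Comparative threshold cycle (ddCT) fold change: normalize each target
# Ct to the beta-actin Ct, subtract the mean control dCt, and take
# 2^-(ddCt). Assumes ~100% PCR efficiency; Ct values are illustrative.

def fold_change(ct_target, ct_actin, control_dct_mean):
    dct = ct_target - ct_actin        # normalize to housekeeping gene
    ddct = dct - control_dct_mean     # compare to control animals
    return 2 ** (-ddct)

# dCt values for a target gene in mock-treated control mice
control_dct = [24.1 - 18.0, 24.4 - 18.2]
mean_ctrl = sum(control_dct) / len(control_dct)

# one infected-mouse sample: Ct(target) = 21.0, Ct(actin) = 18.1
print(f"fold change = {fold_change(21.0, 18.1, mean_ctrl):.1f}")
```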
The resulting PE 16S rRNA amplicons were purified and quantified on an Invitrogen Qubit system. Libraries were normalized and quality-assessed on an Agilent Bioanalyzer prior to sequencing on an Illumina MiSeq system. As quality control, sequences containing uncalled bases, incorrect primer sequences, or runs of ≥12 identical nucleotides were removed from the data.
Phylogenetic analysis of the 16S rRNA sequences was accomplished using customized Linux-based command scripts for trimming, demultiplexing, and quality-filtering the raw PE sequence data. Using the QIIME 43 open-source software package, the demultiplexed sequences were aligned and clustered, and operational taxonomic units (OTUs) were determined using the Greengenes reference collection (greengenes.lbl.gov). Principal component analysis was performed using METAGENassist 44. Alpha and beta diversity were evaluated using QIIME and the MEGAN 45 open-source software package. Student's t tests were used to identify taxa that displayed statistically significant differences between experimental groups and controls (a sketch of this comparison is given below). 16S rRNA sequences are deposited in the Sequence Read Archive (BioProject PRJNA287262) at the National Center for Biotechnology Information (NCBI).
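A minimal sketch of the per-taxon comparison is given below; it assumes scipy is available and uses illustrative relative abundances, with a small pseudocount added before log transformation (an implementation detail not specified in the text).

```python
# Per-taxon Student's t test on log-transformed relative abundances
# between infected and control groups. Data are illustrative.

import math
from scipy import stats

def log_t_test(group_a, group_b, pseudo=1e-6):
    """t test on log10(abundance + pseudocount)."""
    la = [math.log10(x + pseudo) for x in group_a]
    lb = [math.log10(x + pseudo) for x in group_b]
    return stats.ttest_ind(la, lb)

# relative abundance of a hypothetical Rikenellaceae OTU at day 10
infected = [0.081, 0.095, 0.102, 0.088]
control = [0.031, 0.040, 0.028, 0.035]
t, p = log_t_test(infected, control)
print(f"t = {t:.2f}, P = {p:.4f}")
```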
Microbiota reconstitution of germ-free mice. Control or parasite-infected C57BL/6J mice at 10 days post P. yoelii inoculation were euthanized, and the ceca were removed aseptically with cuts 2 cm above and below the cecum to minimize oxygen exposure. Ceca were then transferred to an anaerobic chamber (Bactron I Anaerobic Chamber; Sheldon Manufacturing, Cornelius) for processing. The cecal contents from each donor mouse were collected and suspended in 2 mL of pre-reduced PBS. Each recipient germ-free Swiss Webster mouse was orally inoculated with 0.2 mL of cecal suspension from one donor mouse and housed in an individual cage for 6 days to allow for microbiota reconstitution.
Statistical analysis. The statistical significance of differences between groups was determined by a Student's t test on data transformed to a logarithmic scale. A P value of 0.05 or less was considered to be significant. All data were analyzed using two-tailed tests. | 2016-05-12T22:15:10.714Z | 2015-10-05T00:00:00.000 | {
"year": 2015,
"sha1": "f2537944e0d2cece6eab6a2b7572540250f43f56",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep14603.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2537944e0d2cece6eab6a2b7572540250f43f56",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
204739293 | pes2o/s2orc | v3-fos-license | Galactomannan Pentasaccharide Produced from Copra Meal Enhances Tight Junction Integration of Epithelial Tissue through Activation of AMPK
Mannan oligosaccharide (MOS) is well known as an effective feed supplement for livestock, increasing nutrient absorption and health status. A pentasaccharide of mannan (MOS5) was reported as a molecule with the ability to increase tight junctions in epithelial tissue, but its structure and mechanism of action remained undetermined. In this study, the mechanism of action and structure of MOS5 were investigated. T84 cells were cultured and treated with MOS5 and compared with vehicle and compound C, a 5′-adenosine monophosphate-activated protein kinase (AMPK) inhibitor. The results demonstrated that the ability of MOS5 to increase tight junction integration was inhibited in the presence of dorsomorphin (compound C). The phosphorylation level of AMPK was elevated in the MOS5-treated group, as determined by Western blot analysis. Determination of the MOS5 structure was performed using enzymatic mapping together with ¹H, ¹³C NMR, and 2D-NMR analysis. The results demonstrated that the structure of MOS5 is a β-(1,4)-mannotetraose with α-(1,6)-galactose attached at the second mannose unit from the non-reducing end.
Introduction
The tight junction is one of the crucial components of the barrier function of epithelial tissue, which is part of the anatomical barriers of innate immunity in complex, higher organisms. Impaired tight junctions are a common pathologic feature of many inflammatory diseases such as inflammatory bowel disease (IBD) and Crohn's disease, of chronic diarrhea in HIV-infected patients, and of the side effects of some drugs such as gefitinib and other drugs in the EGFR-inhibitor family [1-3]. Impaired tight junctions may allow leakage of intra- and intercellular fluid into the gut, potentially leading to diarrhea, malnutrition, and colitis [4-8]. The loss of tight junction barrier function might also lead to a dysfunction of villi, which could result in abnormal nutrient absorption [1,9-11]. Interestingly, many recent studies have reported that β-glycans can increase tight junction formation through the 5′-adenosine monophosphate-activated protein kinase (AMPK) signaling pathway, which is part of the mTOR signaling pathway [12-14].
Mannan oligosaccharides (MOS) are well known as a feed supplement for livestock that can improve body composition and raise immunity and stress responses [15-23]. Although MOSs have proven biological activities in livestock quality enhancement, their mechanism of action in tissues or cells is still unknown. Previously, MOS5, a pentasaccharide obtained from the digestion of pretreated galactomannan from copra meal with recombinant β-mannanase, was reported to enhance tight junction integration in epithelial cells [24]. Previous reports have demonstrated that β-glycans enhance tight junction integration of epithelial cells through activation of the AMPK pathway [12,13,25,26]. This suggests that MOS5, which is also a β-glycan, might likewise enhance tight junction integration of epithelial cells through activation of the AMPK pathway.
In this study, T84 cells, a human colonic carcinoma cell line derived from a lung metastasis, were used as an epithelial cell model for studies of MOS5-induced tight junction integrity. A transepithelial electrical resistance (TEER) assay was employed. Dorsomorphin (compound C), an inhibitor of AMPK, was used to elucidate the involvement of AMPK in the activation of tight junction integration of epithelial cells by MOS5. Moreover, the MOS5 structure was successfully elucidated using specific enzymatic mapping and ¹H, ¹³C NMR, and 2D NMR analysis.
Cell Culture
For transepithelial electrical resistance (TEER) experiments, T84 cells (American Type Culture Collection, VA, USA) were grown in a mixture of DMEM (Invitrogen Co., Carlsbad, CA, USA) supplemented with 10% v/v fetal bovine serum (FBS), 100 U/mL penicillin, and 100 µg/mL streptomycin. The cells were cultured in 25 cm² cell culture flasks (Corning Life Science, Tewksbury, MA, USA) maintained at 37 °C in a humidified CO₂ incubator [12]. To form polarized monolayers, T84 cells were seeded in Transwell® inserts (Corning Life Science, Tewksbury, MA, USA) at a density of approximately 5 × 10⁵ cells/insert and cultured for 14 days or until the transepithelial electrical resistance (TEER) reached 1000 Ω·cm². The culture media were replaced daily [26].
Preparation of MOS5
MOS5 was produced by enzymatic hydrolysis using RMase24 with a pretreated galactomannan substrate obtained from copra meal in our previous study [24]. The separated MOS5 was further purified through Biogel P2 and Biogel P4 size exclusion column chromatography, respectively. The column was 27 cm in length and 3.2 cm in diameter, with a flow rate of 0.46 mL/min at room temperature.
Determination of MOS5 Effects on TEER and Calcium Switch Assay
Each purified MOS5 preparation was dissolved separately in ultrapure water to make a 50 µM to 100 µM stock, then filtered through a 0.2-µm membrane before mixing with DMEM/Ham's F-12 without FBS to the desired concentration (0.1, 1, 5, 10, and 20 µM). After the cells had grown and polarized monolayers had formed, each prepared MOS in DMEM was applied to the cells, and the change in TEER was monitored before and 24 h after treatment. For the calcium switch assay, T84 cells were cultured in DMEM in a Transwell® insert (Corning Life Science, Tewksbury, MA, USA) until the cells formed a monolayer and reached 80% confluence, or until the TEER of the cells was steady. The DMEM was then substituted with minimum essential medium Eagle, spinner modification (SMEM; Ca²⁺-free culture medium) to disrupt tight junctions. After 24 h, the SMEM was replaced with regular DMEM/Ham's F-12 (containing Ca²⁺) supplemented with vehicle, MOS5 (10 µM), MOS5 (10 µM) plus compound C (80 µM), or compound C (80 µM) alone. TEER was measured before and every 15 min after the Ca²⁺ switch for up to 12 h [25,26].
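The TEER bookkeeping behind these measurements can be sketched as follows; the 0.33 cm² insert area and all resistance values are illustrative assumptions, not values from the text.

```python
# TEER bookkeeping: blank-correct the raw resistance, normalize to the
# insert membrane area (ohm*cm^2), and express calcium-switch recovery
# relative to the pre-switch baseline. All numbers are illustrative;
# 0.33 cm^2 is a typical 24-well Transwell area (assumed).

INSERT_AREA_CM2 = 0.33

def teer_ohm_cm2(raw_ohm, blank_ohm, area=INSERT_AREA_CM2):
    """Blank-corrected TEER normalized to membrane area."""
    return (raw_ohm - blank_ohm) * area

def percent_recovery(teer_now, teer_baseline):
    return 100.0 * teer_now / teer_baseline

baseline = teer_ohm_cm2(3130, 100)       # pre-switch monolayer
after_switch = teer_ohm_cm2(1615, 100)   # during recovery
print(f"baseline {baseline:.0f} ohm*cm2; "
      f"recovery {percent_recovery(after_switch, baseline):.0f}%")
```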
Western Blot Analysis
T84 cells were treated with 10 µM of purified MOS5 and compared with a non-treated group. After treatment for the designated time points, cell lysates were harvested using RIPA buffer (20 mM Tris-HCl pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS; protease inhibitors 1 mM PMSF, 5 µg/mL aprotinin and 5 µg/mL leupeptin were added prior to use). A total of 30 µg of protein was separated using sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) before transfer to a nitrocellulose membrane. The membrane was incubated for 1 h with 5% non-fat dried milk (BioRad, Hercules, CA, USA), then incubated overnight with rabbit antibodies to phospho-AMPK (Thr-172) (p-AMPK), AMPK-α and β-actin (Cell Signaling Technology, Boston, MA, USA). The membrane was then washed four times with Tris-buffered saline Tween-20 (TBST) and incubated for 1 h at room temperature with horseradish peroxidase-conjugated goat antibody to rabbit immunoglobulin G (Cell Signaling Technology, Boston, MA, USA) [26]. The signals were detected using Luminata Crescendo Western HRP Substrate (Merck Millipore, Billerica, MA, USA). Band density was analyzed using ImageJ software (version 1.51s, National Institutes of Health, Bethesda, MD, USA).
Purification of α-Galactosidase from Achatina fulica
A crude α-galactosidase (A. fulica) was kindly provided by Amano Enzyme Inc., Nagoya, Aichi, Japan, and labelled CAfGLA. One hundred milligrams of crude enzyme powder was dissolved and mixed well in 20 mL of deionized water and precipitated with 30% ammonium sulfate solution overnight at 4 °C. The supernatant was collected by centrifugation at 12,000× g for 10 min before repeating the precipitation at ammonium sulfate concentrations of 70% and 90%, respectively. The protein was then dialyzed against 5 mM phosphate buffer pH 7.4 for 12 h, twice, to remove excess ammonium sulfate. The dialyzed enzyme, labelled pCAfGLA, was then separated through Sephadex G150 gel filtration column chromatography by gravity at 4 °C. The column was 3 cm in diameter and 48 cm in length. Activities of the separated fractions toward mannobiose and melibiose were assayed at 37 °C to determine the ability of the enzyme to digest β-1,4-mannosidic and α-1,6-galactosidic linkages. The results were analyzed by TLC.
Structural Analysis of MOS5 Products
For structural analysis of MOS5, 50 mg of purified MOS5 was dissolved in ultrapure water and digested directly with an excess amount of the selected pCAfGLA fractions that showed endo-mannosidase activity at 37 °C overnight, purified through Biogel P4 size exclusion chromatography, and then analyzed by thin layer chromatography in a butanol:acetic acid:water (3:3:2) system for 3 ascents. The molecular weights of m-2 and m-3, the digestion products of MOS5, were also confirmed with a MALDI imaging mass spectrometer (SolariX FT mass spectrometer, Bruker, Billerica, MA, USA) before their structures were analyzed on an FT-NMR spectrometer (AVANCE 300, Bruker, Billerica, MA, USA) at room temperature with 3-(trimethylsilyl)-1-propanesulfonic acid sodium salt (DSS) as an external standard.
Statistical Analysis
The ratio of phosphorylation (p-AMPK/AMPK-α) in the MOS5-treated (M) relative to the untreated (N) group at each time point was calculated by the following equation:

Relative phosphorylation = (p-AMPK/AMPK-α)M / (p-AMPK/AMPK-α)N

The significant difference between samples was determined by one-way ANOVA using GraphPad Prism 7 software (GraphPad Software Inc., La Jolla, CA, USA).
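A minimal sketch of this densitometry calculation is given below; the band intensities are illustrative, and normalizing both bands to β-actin mirrors the described workflow (the β-actin term cancels in the final ratio).

```python
# Densitometry ratio: normalize each band to the beta-actin loading
# control, form p-AMPK/AMPK-alpha per group, then divide treated by
# untreated at the same time point. Intensities are illustrative.

def pampk_ratio(p_ampk, ampk_alpha, actin):
    """p-AMPK/AMPK-alpha with both bands normalized to beta-actin
    (the actin term cancels algebraically but mirrors the workflow)."""
    return (p_ampk / actin) / (ampk_alpha / actin)

treated = pampk_ratio(p_ampk=1520.0, ampk_alpha=980.0, actin=1010.0)
untreated = pampk_ratio(p_ampk=640.0, ampk_alpha=1005.0, actin=995.0)
print(f"relative phosphorylation M/N = {treated / untreated:.2f}")
```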
Effects of MOS5 on Tight Junction Assembly of MOS5 via AMPK Pathway
In this study, titration of MOS5 showed that concentrations of 10 µM and 20 µM significantly increased TEER compared with the vehicle group (n = 3-4, one-way ANOVA, p = 0.002 and p < 0.0001, respectively). Concentrations below 5 µM showed no difference in TEER level (Figure 1a). TEER measurements in the presence of the AMPK inhibitor compound C showed a significant difference in TEER recovery between MOS5 treatment and vehicle; TEER values in the MOS5 + compound C group did not differ from those in the compound C group (n = 4-5, two-way ANOVA, p < 0.0001) (Figure 1b). To confirm this hypothesis, Western blot analysis of p-AMPK/AMPK-α expression was performed. Total protein of each sample was extracted, the blot analysis was performed, and the band intensities were measured with ImageJ software. The band intensities of p-AMPK and AMPK-α were normalized to the β-actin band intensities before calculation. The results showed that the phosphorylation of AMPK was significantly increased at 60 min after administration of 10 µM MOS5 to the cells (n = 3, one-way ANOVA, p = 0.0014) (Figure 1c,d).
Determination of MOS5 Structure via Enzymatic Hydrolysis Assay
Incomplete digestion of purified MOS5 with α-galactosidase from A. fulica (Amano Enzyme Inc., Nagoya, Aichi, Japan) (CAfGLA) yielded a monosaccharide band (g) with a retention distance (Rf) different from that of mannose, and a band with the same Rf as mannotetraose (m-4) (Figure 2a). Interestingly, when the α-galactosidase concentration was increased, the products revealed additional oligosaccharides: a disaccharide band with the same Rf as mannobiose (m-2) and a trisaccharide band (m-3) with a slightly different Rf from mannotriose (Figure 2a). Digestion of MOS5 with exo-β-D-mannosidase (A. fulica) (Seikagaku Corporation, Chiyoda-ku, Tokyo, Japan) revealed a monosaccharide band with the same Rf as mannose and a tetrasaccharide band with a different Rf from the mannotetraose standard. Interestingly, the recombinant β-mannanase RMase24 could not digest MOS5 further (Figure 2b).
Next, further purification of pCAfGLA was performed through Sephadex G150 gel filtration column chromatography, and 5 mL was collected in each fraction. Fraction number 35 (F35) of the separated pCAfGLA, labelled PAfGLAF35, had endo-β-mannosidase activity. Digestion of MOS5 with less than 2 µL of PAfGLAF35 per 1 µL of 1 µM MOS5 produced oligosaccharides larger than MOS5; the concentration of PAfGLAF35 used in digestions for NMR analysis was therefore in excess, to avoid transferase by-products (Figure 2c). From this information, we can conclude the structure of MOS5 as shown in Figure 3.

The structure of MOS5 was further confirmed by NMR and MS. The digestion products of MOS5 with PAfGLAF35, m-2 and m-3, were collected and purified through Biogel P2 size exclusion chromatography before submission to mass spectrometry and NMR analysis to confirm the structures. The masses of m-2 and m-3 showed peaks at 365 m/z and 527 m/z, indicating the molecular weights of a disaccharide and a trisaccharide as sodium adducts, respectively (Figure 4a,b).

Figure 5. The anomeric proton of mannose was identified by the following proton chemical shifts: C1 on β-1,4 at δ 5.169 ppm, C2 at approximately δ 3.98 to 4.06 ppm, and C4 at approximately δ 3.56 to 3.61 ppm. Moreover, long-range CH proton chemical shifts were found at δ 4.731 ppm and δ 3.96 ppm, representing the protons of C1 and C4 at the β-1,4-mannosidic linkage, respectively. This was confirmed by ¹³C NMR, which revealed the following carbon chemical shifts: anomeric C1 at δ 96.53 ppm, C2 at δ 72.90 and 73.23 ppm, and non-linkage C4 at δ 69.41 ppm (Figure 5a-c).
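The [M+Na]+ values quoted above can be checked with simple monoisotopic arithmetic for hexose oligomers; the sketch below uses standard monoisotopic masses.

```python
# Monoisotopic mass check for sodium adducts of hexose oligomers:
# an oligomer of n hexoses is n*hexose - (n-1)*H2O, and the [M+Na]+
# ion adds the mass of Na (electron mass neglected).

HEXOSE = 180.0634   # C6H12O6, monoisotopic
WATER = 18.0106
NA = 22.9898

def hexose_oligomer_na_adduct(n: int) -> float:
    return n * HEXOSE - (n - 1) * WATER + NA

for n, label in [(2, "m-2 (disaccharide)"), (3, "m-3 (trisaccharide)")]:
    print(f"{label}: [M+Na]+ = {hexose_oligomer_na_adduct(n):.1f} m/z")
```

This reproduces the observed 365 m/z and 527 m/z peaks.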
While the ¹H and ¹³C NMR spectra of m-3 showed chemical shift signals similar to those of m-2, additional signals were detected. The ¹H NMR spectrum of m-3 revealed a signal for C1 on β-1,4 at δ 5.217 ppm, C2 at approximately δ 4.04 to 4.13 ppm, and C4 at approximately δ 3.61 ppm, but there were additional proton chemical shifts at δ 3.88 ppm, δ 4.04 ppm and δ 5.055 ppm, identified as the proton chemical shifts of C2, C4, and the anomeric proton of C1 of the α-1,6-linked galactose, respectively. The ¹³C NMR results followed the same trend as the ¹H NMR, showing a similar pattern of mannose carbon chemical shifts: δ 96.57 ppm for the anomeric C1, δ 72.13 and 73.35 ppm for C2, and δ 69.47 ppm for the non-linkage C4. The ¹³C NMR of m-3 also showed additional signals at δ 101.48, 71.12, and 71.99 ppm, representing the chemical shifts of the C1 anomeric carbon, C2, and C3 of the α-1,6 galactose, respectively (Figure 6a-c). Further 2D NMR analyses supporting the m-2 and m-3 structures are provided in Figures S1-S6 and S7-S12, respectively.
Discussion
From our recent study, MOS5 is the main compound in the crude MOS preparation from enzymatic digestion that shows the ability to enhance tight junctions in epithelial cells [24]. In this study, we varied the concentration of MOS5 to determine the optimal concentration for promoting tight junction assembly. MOS5 at 10 µM was found to be the optimal concentration and was used in further experiments. Interestingly, a higher concentration of MOS5, 20 µM, showed a lower trend in promoting tight junctions. This phenomenon has been observed before for β-glycan activation of tight junctions [12,13,25-28] and might result from overactivation of the cellular signaling pathway; further studies are required.
The detailed mechanism of action of MOS5 remains unknown. However, MOS5 may increase tight junction assembly through activation of AMPK and its downstream pathway in epithelial cells. This hypothesis is supported by previous reports demonstrating the ability of oligosaccharides to increase tight junction assembly through activation of AMPK [12,26]. Substantiation of this hypothesis was sought by determining tight junction assembly with MOS5 in the presence and absence of compound C, an AMPK inhibitor. The TEER results show that under inhibition of AMPK with compound C, MOS5 could no longer enhance the tight junction assembly of T84 cells after the destruction of cellular tight junctions by Ca²⁺ removal. This result indicated that MOS5 might activate cellular tight junctions via the AMPK pathway. To confirm this hypothesis, Western blot analysis of AMPK phosphorylation was performed to observe changes in the phosphorylation level of AMPK after MOS5 treatment. The result revealed that treatment of T84 cells with MOS5 increased phosphorylation of AMPK at 60 min post-treatment. From these results, it can be concluded that MOS5 activates cellular tight junctions in epithelial cells through phosphorylation of AMPK.
Several studies have reported that MOS molecules obtained from copra meal consist of mannose and galactose, but the order of their repeating units and the structure of the functional MOS molecules remained unknown [29-33]. In our study, determination of the MOS5 structure was performed using enzymatic mapping together with NMR analysis. Digestion of MOS5 with crude α-galactosidase released galactose and a tetrasaccharide with a retention distance on TLC similar to the mannotetraose standard. Trace amounts of m-2 and m-3 resulted from contamination of the crude enzyme with endo-mannosidase. Furthermore, hydrolysis of MOS5 with exo-β-1,4-mannosidase yielded a mannose unit and a tetrasaccharide product with a retention distance different from mannotetraose, suggesting that the tetrasaccharide obtained from this reaction was a heteromeric tetramer composed of mannose and galactose. It has been reported that the presence of galactose in galactomannan polymers can limit the hydrolysis activity of exo-mannosidase [34].
To determine the order of the mannose and galactose units in the MOS5 structure, MOS5 was digested with endo-β-mannosidase. Ademark et al. reported that only mannobiose and mannotriose products were obtained from the digestion of mannopentaose [35], suggesting that endo-β-mannosidase cannot further digest disaccharides or trisaccharides, which is helpful for determining the structure of MOS5. The endo-β-mannosidase digestion yielded a disaccharide and a trisaccharide as products. The digestion products were purified, and their molecular weights were confirmed by mass spectrometry. The structures of the disaccharide and trisaccharide were determined by ¹³C NMR, ¹H NMR, and 2D NMR analysis. The results showed that the disaccharide product was β-(1,4)-mannobiose, while the trisaccharide product was β-(1,4)-mannobiose with α-(1,6)-galactose attached to the second mannose unit from the non-reducing end. From the results of the α-galactosidase and exo-β-1,4-mannosidase digestions, and the NMR results for the disaccharide and trisaccharide products from the endo-β-mannosidase digestion, we can conclude that MOS5 is a pentasaccharide consisting of mannotetraose with α-(1,6)-galactose attached to the second mannose unit from the non-reducing end, as shown in Figure 3.
Conclusions
The pentasaccharide MOS5, obtained from copra meal digestion, has the ability to increase the tight junction integrity of epithelial tissue. Enzymatic hydrolysis of MOS5 and 13C NMR, 1H NMR, and 2D NMR analysis of the MOS5 hydrolytic products demonstrated that the structure of bioactive MOS5 is a β-(1,4)-mannotetraose with α-(1,6)-galactose attached to the second mannose unit from the non-reducing end. MOS5 activates tight junction integration through the activation of the AMPK pathway. | 2019-10-17T09:06:05.893Z | 2019-10-14T00:00:00.000 | {
"year": 2019,
"sha1": "e110077b90d16c0f0a67e0961a7e5bba1162a519",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/7/4/81/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "123cf6e3b4fc5c9f8e11963ccf899d413f601cc1",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
123903481 | pes2o/s2orc | v3-fos-license | An Evaluation of Archaeological Sites in the Vicinity of Floodwater Retarding Structure No. 2 Dry Comal Creek, Comal County, Texas
The sites tested were in the area to be modified by the construction of Floodwater Retarding Structure No. 2, located on Dry Comal Creek in Comal County, south central Texas (Fig. 1). The testing was the second phase of investigation, following a survey in 1974, in the areas to be modified by the construction of Floodwater Retarding Structures 1 and 2 (Hester, Bass and Kelly 1975). In 1975, limited testing and additional survey were undertaken in the area affected by Structure No. 1 (Kelly and Hester 1975a; 1975b).
The field work done in 1977, the subject of this report, was supervised by Cristi Assad, aided by Waynne Cox and Thomas Miller. All field notes, maps and artifacts are on file at the Center for Archaeological Research.
PREVIOUS RESEARCH
As of November 1977, 105 archaeological sites in Comal County had been recorded with the Texas Archeological Research Laboratory, Austin, Texas. Hester, Bass and Kelly (1975) previously discussed the major site types and prehistory of the area. Further archaeological surveys in the immediate areas of Floodwater Retarding Structures 1 and 2 (including the current project) have increased the number of archaeological sites in the Comal River Watershed from 14 to 33 since the initial survey in late 1974 (Hester, Bass and Kelly 1975; Kelly and Hester 1975a).
GOALS OF THE FIELD RESEARCH
The intent of the current project was to fulfill the archaeological recommendations for the area of Floodwater Retarding Structure No. 2 (Hester, Bass and Kelly 1975) and to determine if further research was necessary at the archaeological sites to be directly affected by the planned construction of the dam and related facilities.
The overall project area includes 77 acres for the dam and spillway, 25 acres in the borrow pit area and 54 acres for construction of the sediment pool. An additional 557 acres will be subject to temporary inundation (USDA and SCS 1975).
Eight archaeological sites are recorded in the Floodwater Retarding Structure No. 2 area. Hester, Bass and Kelly (1975) recommended three occupation sites for intensive testing and surface survey (41 CM 62, 41 CM 63 and 41 CM 64), while further intensive survey and limited controlled surface collecting were recommended for the remaining five quarry/workshop sites (41 CM 65, 41 CM 66, 41 CM 67, 41 CM 68 and 41 CM 69). The aim of the current project, therefore, was to determine the extent of deposit (both horizontally and vertically) and the cultural context of the three occupation sites by excavation and intensive surface reconnaissance. For the quarry/workshop sites, surface reconnaissance was carried out in an attempt to determine the extent, intensity and nature of the cultural debris.
One new site (41 CM 105) was found; due to its location in the borrow pit area, testing was deemed necessary. Because site 41 CM 64 could not be located during the current survey, the time originally allocated to investigate this site was used to test 41 CM 105.
ENVIRONMENT OF THE AREA

This brief description of the environment and geology of the Comal River Watershed area applies to all of the sites tested and surveyed within the impact zone of Floodwater Retarding Structure No. 2.
The area is situated near the southeastern edge of the Edwards Plateau and is in the Balcones Fault Zone (Blair 1950). Dry Comal Creek is, geologically, part of the Edwards Limestone Formation of the Lower Cretaceous (Barnes 1974). The chert-bearing Edwards Formation was a valuable source of raw material for the aboriginal populations, as evidenced by the numerous and extensive quarry-workshops in the area. Much of the soil in the study area is of the Del Rio Clay Series (USDA and SCS 1975).
Dry Comal Creek is a natural stream with ephemeral flow. At present, some springs along the creeks flow during wet seasons but not during the dry seasons (USDA and SCS 1975).
41 CM 62
Although originally described as a burned rock midden in the preliminary survey (Hester, Bass and Kelly 1975), recent re-evaluations suggest 41 CM 62 can better be described as an extensive terrace campsite/quarry. The site is situated atop a steep terrace 10 m above Dry Comal Creek. Site dimensions are 200 m north/south by 150 m east/west, with a drainage at both the northern and southern boundaries (see Fig. 1). Although relic collectors have been active in the area for many years (H. Kreusler, personal communication*), many prehistoric lithic artifacts were still observed on the surface of the site prior to subsurface testing. Along the terrace edge, and especially in the southern part of the site, quarry activities are evident in the form of sampled chert (having only one or two flakes removed), chert nodular cores, some tabular chert cores and large quarry blanks (see Fig. 2). Limestone bedrock and chert, the majority of which is nodular, are found eroding out along all of the terrace edge. Subsurface examination of the site was conducted by excavation of five 1 m² test units. These units were arbitrarily distributed across the site to test for depth and nature of the cultural deposits. All soil was screened through 1/4-inch wire mesh. Vertical control for all units was established at 5-cm intervals. Artifact deposits at all units did not exceed 15 cm (see Table 1 for artifact proveniences from the excavation units). One feature was noted during the testing operation: a possible hearth in Unit 5. Although the site as a whole contains an extensive surface scattering of lithic debris, subsurface examination shows the vertical depths of cultural deposits are minimal and occur in a thin layer across the terrace.
In addition to limited testing, a detailed site map was drawn (see Fig. 2). The entire prehistoric site was also intensively surveyed and all observed artifacts were collected, flagged or mapped in place (see Fig. 3,a and Table 2).
The results of the testing operations at 41 CM 62 suggest the locality once served as a preferred occupational site at different periods throughout the long history of aboriginal occupations in the region. The Late Paleo-Indian, Pre-Archaic and all three phases of the Archaic (Early, Middle and Late) periods are represented at 41 CM 62 in the form of lithic artifacts. A probable Angostura point (Fig. 4,a) of the Late Paleo-Indian time period, a Pre-Archaic "Early Triangular" point (Hester 1971; Fig. 4,f) and a Guadalupe tool (Hester and Kohnitz 1975), and two Early Archaic (Travis and Nolan) dart points (Fig. 4,g,h) were recovered. Other diagnostic projectile points at the site include a Pedernales from the Middle Archaic period (Fig. 4,c) and an Ensor and a Montell from the Late Archaic (Fig. 4,d,e). The Montell point was found in the first level (0-5 cm) of Unit 1. Five unclassified dart points or fragments were also found on the surface of the site.
Other lithic artifacts from 41 CM 62 include two triangular plano-convex tools (Fig. 5,d,e). These triangular plano-convex tools are similar to Clear Fork tools (Howard 1973; Hester, Gilbow and Albee 1973), but they are shorter and have a wider base than a Clear Fork tool found in a bulldozer cut at 41 CM 63 (Fig. 5,a). A tool similar to the artifacts at 41 CM 62 was found at the La Jita Site, Uvalde County, and referred to as a "gouge-scraper" (Hester 1971).
Many other lithic artifacts from the surface of 41 CM 62 were either noted or collected. A general descriptive term was given to each of the surface artifacts noted; the majority were not collected (Table 2). The purpose of the intensive mapping of 41 CM 62 was to ascertain the surface distribution of lithic artifacts within a limited field work period. All of the surface artifacts are inventoried in Table 2 and plotted on the site map of 41 CM 62 (Fig. 2).
41 CM 63

Surface survey and limited excavations suggest the locality was a prehistoric occupation site on a lower terrace of Dry Comal Creek. No evidence of a burned rock accumulation was found during the recent survey operations.
The centerline for Floodwater Retarding Structure No. 2 runs through a segment of 41 CM 63. Prior to the current archaeological investigations, about 40% of the site had been destroyed, especially along the dam centerline (see Fig. 3,b and Fig. 6).
The surface of 41 CM 63 has an undulating appearance, probably due to ongoing erosion and recent activities involved with preparation for the construction of the dam. Subsurface examination of 41 CM 63 consisted of two 1 m² test units excavated by trowel and screened through 1/4-inch wire mesh. Vertical control was established by the use of 10-cm intervals.
Unit 1 was located at the edge of dense juniper growth. It was excavated to 50 cm in depth before reaching a culturally sterile level. Unit 2 was located in an open and possibly eroded area. A culturally sterile level was reached at a depth of 20 cm. The artifacts recovered from the excavation units are inventoried in Table 1.
Surface examination included a collection from four 1 m² units (Units 3-6) in an undisturbed area of the site. In addition to the controlled collection of artifacts, a general surface collection was made of potentially diagnostic artifacts which were dispersed across the site. These artifacts were arbitrarily collected and mapped in (no detailed provenience was recorded for artifacts found in bulldozer cuts). Little in the way of controlled collection was carried out due to the disturbed nature of the site. All artifacts from both surface collections are inventoried in Table 2. All units and collected artifacts are plotted on the site map (Fig. 6).
The lithic artifact analysis of 41 CM 63 is limited here to an identification of chronologically diagnostic artifacts. All of the lithic artifacts are listed in Tables 1 and 2. A Guadalupe tool (Fig. 5,b) of the Pre-Archaic, a Nolan projectile point (Fig. 4,i) of the Early Archaic and an Archaic Clear Fork tool (Fig. 5,a) were collected from the surface. A triangular plano-convex tool (Fig. 5,f) similar to a Clear Fork tool was also found. The chronological age of these triangular tools is not known, but they probably date to the Archaic period. Some other lithic materials observed on the surface of the site, but not collected, were trimmed/utilized flakes, cores, a variety of bifaces (whole and fragmented preforms and blanks) and flakes. All of the diagnostic artifacts were from the general surface collection.
41 CM 105
While surveying the borrow pit area of Floodwater Retarding Structure No. 2, a previously unrecorded site was found (see Fig. 1). 41 CM 105 is a small prehistoric occupation site located on an eastern upper terrace of Dry Comal Creek. The site is ca. 60 x 60 m and is located near the 830-foot contour line as represented on USGS topographic maps (see Fig. 7). A water tank lies northwest of 41 CM 105, and a small part of the site may have been destroyed by its construction. A ranch complex (house, barn, outbuildings, etc.) lies southward within 1 km of 41 CM 105, and another ranch complex lies to the southeast, also within 1 km of the site. In addition to alteration of the site by construction of the water tank, obvious damage to 41 CM 105 has been caused by a dirt road, intense grazing of cattle and some erosion.
The surface of 41 CM 105 is relatively level (see Fig. 8,a). There are a few shallow erosional channels across the site. Visible concentrations of cultural material were found in these cuts.
The vegetation at 41 CM 105 is currently being affected by intense cattle grazing. Sparse grasses and a few juniper trees are present.
Since the site is in the borrow pit area of Floodwater Retarding Structure No. 2, the survey crew decided that further investigation would be necessary to evaluate its archaeological potential prior to its destruction. A 1 m² test unit was excavated to a depth of 20 cm. Limestone bedrock was encountered at that depth. The unit was dug by trowel in arbitrary 10-cm intervals, and all soil was screened through 1/4-inch wire mesh. The cultural material recovered from the unit is inventoried in Table 1. The excavation unit was arbitrarily placed in order to obtain information on the depth of the cultural deposit and for possible recovery of stratified diagnostic artifacts which would aid in the evaluation of the archaeological significance of 41 CM 105.
The surface of the site was intensively surveyed and sketch-mapped, and lithic artifacts were collected (they did not include the many flakes or other miscellaneous debris). Finished artifacts include two Guadalupe tools (Fig. 5,c), an Angostura point (Fig. 4,b), "Early Corner Notched" projectile point fragments (Fig. 4,j), four additional unidentified dart point fragments and two preforms. The surface artifacts are inventoried in Table 2.

41 CM 65

The site lies along Dry Comal Creek (see Fig. 1). Vegetation on these sites varies from very dense to sparse grasses and brush throughout the area. In virtually every area where chert is eroding out along the edge of the creek and the upper terraces, sampled or broken nodules of chert can be found. Lithic debris on these sites varies from a light scatter to heavy concentrations. The east side of the creek has a more gradual rise in elevation than the west side at this locale. Many small drainages cut through parts of the site, and in times of heavy rainfall, this side is probably subject to flooding.
At points where chert nodules are exposed, there are indications of quarrying activities, and some nodules exhibit intensive use. Many large bifacial quarry blanks were observed; they were 10-15 cm in length, with all or most of the cortex removed. Secondary and interior flakes are present along with various bifaces (quarry blanks, fragmented and medium to small sized) and cores scattered throughout the concentrations; few primary flakes were noted.
The terrain of the site is composed of limestone bedrock in the south (especially in Area A) to light brown/orange soil with limestone rock and chert nodules eroding out in the northern parts. Juniper and oak, along with brushy vegetation, are extremely dense in spots.
41 CM 66
41 CM 66 is on the west side of Dry Comal Creek. The site is bounded by the dam and spillway area on the south. A drainage separates this site and 41 CM 62 at the north (see Fig. 1). The site is over 2 km in length and has been divided into four major areas of lithic concentration (A through D; see Fig. 1). Chert nodules are exposed from the creek bed up to the 810-foot contour line as well as throughout the site itself. This is approximately the elevation where the downward slope ends and the terrain levels out. The flood pool easement line is over 20 m below this contour.
Area A is over 300 m in length. Some unmodified chert has been broken by heavy machinery in this area but there was also extensive aboriginal activity. The lithic scatter in Area A is light, with many interior flakes and few primary or secondary flakes seen. The soil is a bright red/orange color.
Area B is about 550 m in length and is separated from Area A by 200 m with little to no lithic scatter. Area B is heavily littered with cores, large and small bifaces, secondary flakes and interior flakes. The soil matrix is a red/orange color with many small chunks of chert buried in it.
Two controlled surface collections were made in Area B (see Figs. 1 and 8,b). Each of the two collection zones consisted of five 1 m² units. These ran parallel to the creek at approximately the 850-foot contour. All artifacts were mapped in place and all culturally altered materials were collected.
Two unifaces were recovered in Units 1-5 and many trimmed/utilized flakes were found in both groups of collection units (see Table 3). The abundance of trimmed/utilized flakes would tend to support the argument that more than lithic reduction was being carried out, at least at 41 CM 66 (Area B), as suggested by Kelly and Hester (1975b).

Area C is about 325 m in length and is separated from Areas B and D by natural drainages. Area D is separated from 41 CM 62 by another drainage and is roughly 400 m in length. The juniper and oak vegetation is very dense throughout both of these areas. Worked chert is found washed down the sides of all the drainages which cut into 41 CM 66.
41 CM 67
41 CM 67 is on the east side of Dry Comal Creek (Fig. 1). This site, along with 41 CM 69 which is directly opposite on the west side, is very lightly scattered and difficult to define. The main difficulty in describing attributes of this site lies in the fact that it is in the flood plain of the creek and is being repeatedly cut into by erosional activities. Vegetation varies from medium grass cover to thick brush.
41 CM 68
41 CM 68 is on the west side of Dry Comal Creek. Four areas of lithic concentration were designated (A through D, running north to south) for this quarry workshop site (Fig. 1). As in the case of all the quarry/workshop sites, the boundaries of 41 CM 68 reflect the exposure of chert nodules.
Area A is a band 20 m wide consisting of exposed and sampled chert nodules at its northernmost extreme. This area is about 150 m long (heading south toward Area B) and is found above and below the 848.8-foot contour line. The soil is a red clay and the vegetation is open grassland with juniper and oak trees. The artifacts found on the surface of Area A include large cores, quarry blanks, preforms, fragmented bifaces (of various sizes), unifaces and trimmed/utilized flakes.
Areas A and B are separated by a 50-m strip which is void of lithic material. The dense vegetation and the reduction in occurrence of chert were used to distinguish between the two areas. The soil in Area B is the same as that found in Area A. The concentration is found above and below the 848.8-foot contour line and within the easement line as one moves south. This area is about 300 m in length. Some of the artifacts observed in the two areas included unifaces (some with concave edges), large and small bifaces, cores and primary, secondary and interior flakes. There is a 25-m strip of disturbed surface area at approximately the center of the site. This disturbance is the direct result of construction of an underground pipeline.
Area C is roughly 75 m south of Area B. The strip between the two areas has the appearance of having been cleared. Indeed, there are signs of trees being burned and uprooted throughout the length of 41 CM 68. The soil of Area C is an orange clay. The lithic scatter is lighter than in Areas A and B, exhibiting a few scrapers, many cores and various bifaces (i.e., large, crude and fragmented specimens and several preforms). More than half of Area C lies above the 848.8-foot contour.
Areas C and D are separated by 600 m. All of the 600-m separation, including Area D, lies within the easement line and is in a series of erosional drainage channels or a flood plain. The brush and grass are dense and the chert is lightly scattered. Area D has less of a lithic concentration than do Areas A and B, but this is probably due to the extent of erosion in this part of the site.
41 CM 69
41 CM 69, like 41 CM 67, is composed of two light lithic concentrations which are separated by a light and sporadic lithic scatter (Fig. 1). These two concentrations are at opposite ends of the site. The total length of the site is just over 900 m and is nearly all below the 848.8-foot contour. The southernmost area is directly north of campsite 41 CM 62 and is divided by a drainage. This area is above the creek bed, but as one moves north, the terrain drops to just slightly above the level of the creek. The vegetation varies from open grassland to dense juniper and brush.
SUMMARY AND RECOMMENDATIONS
The field work described in this report fulfills the recommendations for controlled surface collection and testing as suggested for the archaeological resources to be affected by Floodwater Retarding Structure No. 2 (Hester, Bass and Kelly 1975).
An extensive reconnaissance of the five quarry/workshop sites has been performed. Lithic concentrations within the sites have been isolated when possible. A controlled collection of surface artifacts has provided further information about quarry/workshop sites in the Comal River Watershed.
The three sites tested are of importance in providing further information on predominantly Pre-Archaic and Early Archaic terrace campsites. The data obtained from 41 CM 63 and 41 CM 105 are considered to be sufficient considering the shallow and sometimes disturbed deposits. The survey phase of archaeological assessment for Floodwater Retarding Structure No. 2 (Hester, Bass and Kelly 1975) was to provide an inventory of the archaeological resources in the area along with recommendations for actions concerning these resources. The fact that the 41 CM 63 site was partially destroyed before the recommended testing commenced is very disturbing. The Soil Conservation Service should make every effort to have contractors avoid causing damage to archaeological resources which are recommended for further study. The value of the information that was lost can never be assessed.
The final site that was tested, 41 CM 62, provided us with valuable archaeological information regarding its use as an occupation and quarry site. Artifacts from this site ranged from the Late Paleo-Indian period through the Archaic period.
41 CM 62 still has the potential for providing valuable archaeological data pertaining to the aboriginal utilization of a preferred prehistoric campsite over an extended period of time. This is possible since the site has been relatively protected from damage in the past. It is our recommendation that 41 CM 62 remain unaltered; however, if that is not possible, then intensive excavation is recommended for any portions of the site to be affected by actions other than possible temporary inundation by the flood pool.
During February of 1978, Mr. B. J. Gunter (letter dated February 24, 1978), Project Construction Engineer of the Soil Conservation Service (Seguin office), indicated that the elevation of 41 CM 62 is roughly 848.8 feet. This elevation is the same as that of the proposed extent of the flood pool of Floodwater Retarding Structure No. 2. Since less than half of 41 CM 62 is located below the flood pool line, the site should remain relatively unaltered in the event that impounded waters reach maximum level. However, the SCS should advise the contractors to keep heavy equipment off of the site area during the construction phase.
Many quarry/workshop sites are to be found in the Comal River Watershed, Comal County, Texas. Some of these sites have been examined in the past by Kelly and Hester (1975b) in the form of controlled surface collections. The purpose of this section is to provide comparative data on collected lithic artifacts from four quarry/workshop sites: 41 CM 66 (examined during recent field work), 41 CM 84, 41 CM 85 and 41 CM 86 (Kelly and Hester 1975b). The size of the 10 collection units at 41 CM 66 was 1 m² each. Similar artifacts were recovered at all of these sites (Kelly and Hester 1975b; see Table 5). The frequency of primary flakes at 41 CM 66 is 22% while the mean of the other three sites is 23%. A combination of primary and secondary flakes by Kelly and Hester (1975b) produced a frequency range of 55% to 75%. The combination of the primary and secondary flakes from 41 CM 66 is 80%.
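The percentages quoted above are straightforward proportions of counted debitage. A short Python sketch of the calculation is given below, with made-up counts standing in for the actual tallies of Table 5; only the arithmetic, not the data, reflects the report.

# Compute flake-type frequencies from raw counts (illustrative counts only).
counts = {"primary": 22, "secondary": 58, "interior": 20}  # hypothetical tallies
total = sum(counts.values())

for kind, n in counts.items():
    print(f"{kind}: {100 * n / total:.0f}%")

combined = counts["primary"] + counts["secondary"]
print(f"primary + secondary: {100 * combined / total:.0f}%")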
These high primary and secondary flake frequencies and a scarcity of finished artifacts are further evidence of the highly specialized nature of the quarry/workshop sites. It appears that the raw material was being reduced to portable form (quarry blanks/preforms) and removed to occupation sites (as suggested in Kelly and Hester 1975b). A general set of terms was established for the lithic artifacts discussed throughout this report. The terminology corresponds closely to the definitions in Kelly and Hester (1975b). Brief descriptions of the terms follow.
Cores:
Nodular or tabular chert specimens which have one or more flakes removed.
Flakes:
Fragments of chert detached from cores. The major kinds of flakes are: primary (with 90% or more surface cortex), secondary (less than 90% cortex) and interior (no cortex). Trimmed/utilized flakes are derived from the above categories and exhibit edge modification through use or retouch. All lithic debris was included in the flake categories whether or not a platform and bulb of percussion were present.
Unifaces:
Usually thick flakes which have been modified on one face. Most of these unifacially flaked tools have steeply trimmed edges and were probably used as scrapers.

Bifaces:

Chert specimens that have been bifacially flaked. Large, thick, crudely worked bifaces (usually with some surface cortex) appear to be quarry blanks. Preforms represent a subsequent phase of reduction. The size of bifaces can range from small (less than 10 cm long) to very large (10 cm or more in length). | 2018-11-26T00:46:32.264Z | 1978-01-01T00:00:00.000 | {
"year": 1978,
"sha1": "007086232598fc2ef80d6a9824390bee5de64a17",
"oa_license": "CCBYNC",
"oa_url": "https://scholarworks.sfasu.edu/cgi/viewcontent.cgi?article=1343&context=ita",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cf04a40a6f0df4092c0b7afb17690ac45d69538e",
"s2fieldsofstudy": [
"Art",
"Environmental Science",
"Geography",
"History"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
252844372 | pes2o/s2orc | v3-fos-license | 99mTechnetium‐pyrophosphate bone scan: A potential biomarker for the burden of transthyretin amyloidosis in skeletal muscle: A preliminary study
Abstract Introduction/Aims Transthyretin amyloidosis (ATTR) proteins can infiltrate skeletal muscle and infrequently cause a myopathy. 99mTechnetium‐pyrophosphate (99mTc‐PYP) is a validated biomarker for cardiac involvement in variant and wild‐type ATTR (ATTRv and ATTRwt, respectively). The aim of this study was to test the hypothesis that 99mTc‐PYP is a biomarker for muscle burden of ATTR. Methods Radioisotope uptake in the deltoid muscles of patients with ATTR was compared to uptake in control subjects without amyloidosis in a retrospective study. 99mTc‐PYP scans were evaluated in 11 patients with ATTR (7 ATTRv, 4 ATTRwt) and 14 control subjects. Mean count (MC) values were measured in circular regions of interest (ROIs) 2.5–3.8 cm² in area. Tracer uptake was quantified in the heart, contralateral chest (CC), and deltoid muscles. Results Tracer uptake was significantly higher over the deltoids and heart, but not the CC, in patients with ATTR than in control subjects. MC values were 120.1 ± 43.7 (mean ± SD) in ATTR patients and 78.9 ± 20.4 in control subjects over the heart (p = 0.005), 73.3 ± 21.0 and 63.5 ± 14.4 over CC (p = 0.09), and 37.0 ± 11.7 and 26.0 ± 7.1 averaged over both deltoid muscles (p = 0.014). Discussion 99mTc‐PYP is a potential biomarker for ATTR amyloid burden in skeletal muscle.
| INTRODUCTION
Transthyretin amyloidosis (ATTR) is a multi-system disease caused by the deposition of a transthyretin (TTR) variant (ATTRv) or wild-type TTR (ATTRwt), the latter being a common entity associated with aging. 1 ATTR may present with a variety of musculoskeletal manifestations, such as carpal tunnel syndrome (CTS), lumbar spinal stenosis, and myopathy, several years before the onset of cardiomyopathy or polyneuropathy, which are the cardinal manifestations of ATTRwt and ATTRv. [2][3][4][5] Disease-modifying treatment of ATTR is rapidly evolving, with two TTR gene silencers and a TTR stabilizer already approved by the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for ATTR neuropathy and cardiomyopathy. 6-8 ATTR myopathy is likely underdiagnosed, partly because weakness is attributed to the systemic disease. As disease-modifying treatment of ATTR neuropathy is more effective when started early in the disease course, 9,10 and musculoskeletal manifestations often precede the diagnosis of cardiomyopathy and neuropathy by years, 2,4,5 biomarkers to assess ATTR burden in musculoskeletal tissue are needed. The diagnosis of ATTR is based on demonstration of tissue deposition of amyloid and subsequent confirmation of ATTR through amyloid subtyping.
However, ATTR cardiomyopathy can be diagnosed based on nuclear scintigraphy without a tissue biopsy. 11 Nuclear scintigraphy with 99m Technetium-pyrophosphate ( 99m Tc-PYP scan) is a validated disease biomarker for cardiac involvement in ATTR, 12 with a high sensitivity and specificity in differentiating ATTR and non-ATTR cardiomyopathy. 11,13 A previous study demonstrated extensive uptake of technetium-99m-labeled 3,3-diphosphono-1,2-propanodicarboxylic acid ( 99m Tc-DPD) in the skeletal muscle of patients with ATTRwt and ATTRv (especially due to the V122I mutation). 14 Furthermore, increased musculoskeletal uptake on 99m Tc-DPD nuclear scintigraphy was reported in a patient with ATTR myoneuropathy. 15 99m Tc-DPD and 99m Tc-PYP are both bone-seeking radiotracers that have a high uptake in the myocardium of patients with ATTR. 16 99m Tc-DPD is not approved by the FDA and is not available in the United States.
We hypothesized that 99m Tc-PYP scanning may be a biomarker to assess muscle burden of ATTR. In this proof-of-concept study, we investigated whether there is an increased uptake of 99m Tc-PYP in the deltoid muscles of patients with confirmed ATTR compared to the patients who did not have ATTR.
| Subjects
The study was approved by the University of Chicago Biological Science Division Institutional Review Board before any data collection.
This was a retrospective study using a database of 176 patients who underwent 99m Tc-PYP cardiac imaging at the University of Chicago Hospitals between March 1, 2015, and March 1, 2020. Only patients whose arms were at their sides during the scanning were included; 151 patients were excluded because their scans were done with the arms stretched above the head, as the deltoid muscle tissue could not be clearly and reliably delineated. We identified 11 patients who carried a diagnosis of ATTR cardiac amyloidosis and 14 who had non-amyloid cardiac disease (control subjects). ATTR was diagnosed based on a cardiac visual score ≥ 2 on 99m Tc-PYP and exclusion of amyloid light chain (AL) amyloidosis with serum protein immunofixation and a light chain panel 11 ; ATTRwt and ATTRv were then differentiated with TTR gene sequencing. Only one of the 11 patients underwent a muscle biopsy to confirm the diagnosis of ATTR. Control patients were those with cardiomyopathy and heart failure with negative 99m Tc-PYP cardiac scans, defined by American Society of Nuclear Cardiology (ASNC) guidelines as grade 0. 17
| 99m Tc-PYP imaging

99m Tc-PYP planar cardiac imaging of the chest was done using two-headed gamma cameras with low-energy, high-resolution collimators.
The dose of 99m Tc-PYP ranged from 10 to 25 mCi, which was then allowed to incubate for 1 h, with the option of extending to 3 h if additional information was needed. 18 The cardiac retention was determined using a semiquantitative visual score ranging from 0 (no uptake) to 3 (uptake greater than rib) 17,19 and a quantitative heart to contralateral (H/CL) ratio of total counts in the region of interest (ROI) over the heart divided by background counts in an identical size ROI of the contralateral chest (CC). 17,18 Results were considered positive for ATTR cardiac amyloidosis if there was a visual score ≥ 2 or an H/CL ratio ≥ 1.5. 18 Mean counts (MCs) over the deltoid muscles, heart, and CC were measured using IntelliSpace PACS 4.4 Enterprise software (Intelerad, Montreal, Canada), in circular ROIs of 2.5-3.8 cm² in area.
Only anterior views of the thorax at 1-h incubations were used to assess the MCs. To avoid contamination with bony structure uptake, ROIs for the deltoids were located lateral and inferior to the shoulder joint.
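As an illustration of the quantitative readout described above, the following Python sketch computes an H/CL ratio from ROI counts and applies the ≥ 1.5 positivity threshold used in the study; the numeric values are placeholders, not patient data.

# Heart-to-contralateral (H/CL) ratio from ROI counts (values are hypothetical).
def h_cl_ratio(heart_counts, contralateral_counts):
    """Total heart ROI counts divided by contralateral chest ROI counts."""
    return heart_counts / contralateral_counts

heart, cc = 120.1, 73.3          # example counts, not actual patient data
ratio = h_cl_ratio(heart, cc)
positive = ratio >= 1.5          # positivity threshold used in the study
print(f"H/CL = {ratio:.2f}, ATTR-positive scan: {positive}")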
| Statistical analysis
Data analysis was performed using Stata 17 (College Station, TX).
Continuous variables are presented as mean (±SD) and categorical variables are summarized with counts (percentages). The two-tailed t-test was used with continuous variables to compare ATTR patients, their subgroups, and control subjects. The Fisher exact test was used to compare categorical variables. A p-value <0.05 was considered statistically significant.
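The analyses were run in Stata; purely for illustration, the same two tests can be sketched in Python with SciPy as follows. The arrays and the 2x2 table below are hypothetical placeholders, not the study data.

# Two-tailed t-test and Fisher exact test in Python/SciPy
# (illustrative re-implementation; all values are invented placeholders).
from scipy import stats

attr_deltoid = [30, 41, 52, 28, 45, 33, 39, 48, 25, 36, 30]        # hypothetical MCs
control_deltoid = [22, 27, 19, 31, 25, 24, 28, 21, 30, 26, 23, 29, 27, 25]

t, p = stats.ttest_ind(attr_deltoid, control_deltoid)               # two-tailed by default
print(f"t = {t:.2f}, p = {p:.3f}")

# Fisher exact test on a 2x2 table of a categorical variable
table = [[5, 2], [4, 10]]                                           # hypothetical counts
odds, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact p = {p_fisher:.3f}")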
| RESULTS
Seven of the 11 ATTR patients had ATTRv due to the V122I mutation and the rest had ATTRwt (Table 1). Neurological signs or symptoms were present in 8 of 11 ATTR patients, including distal sensory symptoms in 4, asymmetrical upper limb predominant neuropathy in 1, and proximal more than distal limb weakness in 3 (Table 2 and Supplemental Table S1). Autonomic symptoms were not documented, and auto- (Table 1). Compared to measurements over the heart, the MCs from the deltoids were equally useful in differentiating control subjects from ATTR patients: the area under the receiver operating characteristic (ROC) curve (AUC) was the same for measurements from the heart and deltoids (AUC = 0.786; Figure 1(B)).
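The AUC comparison reported above can, in principle, be reproduced from the raw MC values. A minimal Python sketch using scikit-learn is shown below; the labels and scores are invented placeholders, not the study data.

# ROC AUC for separating ATTR patients from controls by deltoid mean counts
# (labels and scores below are placeholders, not the study data).
from sklearn.metrics import roc_auc_score

labels = [1] * 11 + [0] * 14                   # 1 = ATTR, 0 = control
scores = [37, 41, 52, 28, 45, 33, 39, 48, 25, 36, 30,
          22, 27, 19, 31, 25, 24, 28, 21, 30, 26, 23, 29, 27]
print(f"AUC = {roc_auc_score(labels, scores):.3f}")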
| DISCUSSION
In this small proof-of-concept study, we demonstrated that the uptake of 99m Tc-PYP is increased in the deltoid muscles of patients with ATTR compared to control subjects.
There was no significant difference in the 99m Tc-PYP uptake in patients with or without deltoid weakness or EMG abnormality in that muscle. Possible explanations include: (1) small sample size of this study; and (2) asymptomatic nature of TTR deposition in some of the patients.
Two of the patients in our ATTR cohort had increased deltoid uptake with normal heart uptake (Table 1): patient 1, who had undergone heart transplantation before the scintigraphy, and patient 9, who had limb weakness without heart disease.
Patient 6 presented with rapidly progressive respiratory, bulbar, and limb weakness, and a distal axonal polyneuropathy. He was diagnosed with amyotrophic lateral sclerosis (ALS) superimposed on an underlying ATTRv related cardiomyopathy and polyneuropathy.
ATTRv cases with a presentation mimicking ALS have been previously reported, 21,22 although with a slower rate of disease progression and a purely lower motor neuron phenotype. The patient did not have a nerve and muscle biopsy as he went on comfort care, and a postmortem examination was not conducted.
The limitations of this study are its retrospective nature, the small sample size, lack of data on 99m Tc-PYP uptake in the lower limbs, demonstration of amyloid deposition in the muscle biopsy in only one patient, and lack of variants other than V122I, which was the sole mutation in our ATTRv cohort due to its high prevalence in the metropolitan United States. 23

TABLE 3 Characteristics of control and patient groups

There was significant overlap between the uptake of 99m Tc-PYP in the deltoids of patients with and without ATTR, raising the question of whether skeletal muscle 99m Tc-PYP uptake will ultimately prove useful as a clinical diagnostic test. The small sample size of the study prevents a definitive answer to this question, but even with the overlap the sensitivity and specificity of deltoid uptake of 99m Tc-PYP for ATTR diagnosis were comparable to those of the currently accepted measure of heart 99m Tc-PYP, with the same AUC for both measures. While these findings suggest that 99m Tc-PYP is a potentially useful biomarker for ATTR amyloid burden in the skeletal muscle, a larger, prospective study that includes the lower limbs will be needed to determine the applicability of this test in clinical practice.
| CONCLUSION
This preliminary, proof-of-concept study suggests that 99m Tc-PYP may be a viable biomarker to assess the muscle burden of ATTR.
ACKNOWLEDGEMENTS
This research received no specific grant from any funding agency.
CONFLICT OF INTEREST
Dr. Rezania has received honoraria from Alnylam and Akcea for serving in the advisory boards and as a speaker. The remaining authors have no conflicts of interest.
DATA AVAILABILITY STATEMENT
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
ETHICS STATEMENT
A preliminary version of this work was presented as a poster in American Association of Neuromuscular and Electrodiagnostic Medicine (AANEM) virtual meeting 2020.
We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines. | 2022-10-13T06:18:09.951Z | 2022-10-12T00:00:00.000 | {
"year": 2022,
"sha1": "78ba7536210e3ccb59fafe3b3dddd96205bc8a29",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "ded71e1aaafad3bd3d2c605e9cfa89e6f45c603e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
103709039 | pes2o/s2orc | v3-fos-license | Characterization of Volatile Compounds from the Concrete of Jasminum grandiflorum Flowers
Jasmine is one of the most popular and important traditional loose flowers grown in India. Three species of jasmine, viz., Jasminum sambac, Jasminum auriculatum and Jasminum grandiflorum, are cultivated on a commercial scale (Rimando, 2003; Green and Miller, 2009). It holds a vital place in all the religious, social and cultural activities of Indian society. Jasmine flowers have multifarious uses, including use as fresh flowers for garland making, for adorning the hair of women and in religious offerings, and also for extraction of the highly valued essential oil which is popularly used in the perfumery industry.
Introduction
Jasmine is one of the most popular and important traditional loose flowers grown in India. Three species of jasmine, viz., Jasminum sambac, Jasminum auriculatum and Jasminum grandiflorum, are cultivated on a commercial scale (Rimando, 2003; Green and Miller, 2009). It holds a vital place in all the religious, social and cultural activities of Indian society. Jasmine flowers have multifarious uses, including use as fresh flowers for garland making, for adorning the hair of women and in religious offerings, and also for extraction of the highly valued essential oil which is popularly used in the perfumery industry.
Among these, Jasminum grandiflorum is a semi-evergreen to deciduous shrub reaching a length of 8 meters, often with pendulous branches (Kulkarmi and Ansari, 2004; Sharma et al., 2005). Its flowers are white with a faint, delightful fragrance and are borne in lax, terminal inflorescences. Jasmine oil has great value for treating severe depression and respiratory tract ailments, for muscle pain and for toning the skin. This oil is expensive: it takes approximately 10,000 flowers to make 1 kilo of jasmine concrete. Egypt is the main producer of jasmine oil.

In this study, Jasminum grandiflorum concrete extraction was carried out by solvent extraction with hexane for three genotypes, viz., CO-1 pitchi, CO-2 pitchi and White pitchi, which are cultivated in South India. The chemical composition of the genotypes was analysed by gas chromatography-mass spectrometry (GC-MS). The results showed that the percentage yield of concrete was in the range of 0.29 to 0.34 per cent. The major chemical components detected were Pentane, 3-ethyl-2,2-dimethyl-; 1-Pentanol, 4-methyl-2-propyl-; Triacontane; Nonacosane; Octacosane; Tetratriacontane and Tetracosane. The results of this study showed that GC-MS analysis is selective, rapid and efficient for the identification of volatile components and compositional variations.
The fully blossomed flower is used to extract the oil and concrete. A non-polar solvent such as hexane is used to "wash" the aromatic compounds out of the flowers. After the hexane is evaporated, a waxy, semisolid substance known as a "concrete" is left. The concrete then undergoes a series of "washings" with a polar solvent such as ethanol. The polarity of the ethanol allows extraction of the volatile aromatics from the concrete while leaving behind the non-polar plant waxes, which do not dissolve in the ethanol. Finally, the ethanol is evaporated to leave behind the absolute, which will typically have 1-5% ethanol remaining in it and sometimes a trace of hexane. The volatile emission pattern varies widely in different climatic conditions and between different genotypes.
Only flowers in which all the volatile compounds are still present yield good-quality concrete. In nature, the volatile compounds are bound within the fibrous material of the flowers, and concrete is therefore extracted from freshly harvested flowers, or while fragrance emission is still slow, because the fragrance compounds are not easily released from the fibrous material. It is advisable to carry out concrete extraction when the major fragrance compounds start to be released vigorously, i.e., when a sudden increase in fragrance takes place in the harvested flowers.
Gas chromatography-mass spectrometry (GC-MS) is a technique that integrates the features of gas chromatography and mass spectrometry to improve the efficacy of qualitative and quantitative analysis of a test sample. The gas chromatograph separates the components of the sample on a column whose performance depends on its characteristics (type, material, length, diameter, film thickness) as well as the phase properties. The mass spectrometer then identifies the separated components by breaking each molecule into ionized fragments and detecting these fragments by their mass-to-charge ratio (Bramer, 1998). Applications of GC-MS include drug detection, plasma detection, fire investigation, environmental analysis, explosives investigation, and identification of unknown samples. Additionally, it can identify trace elements in materials that were previously thought to have disintegrated beyond identification. The purpose of this study was to identify the volatile compounds released from three genotypes of Jasminum grandiflorum, viz., CO-1 pitchi, CO-2 pitchi and White pitchi.
Flower preparations
Freshly opened blossoms were collected every day before 9.30 a.m., weighed and subjected to extraction.
Extraction method-Solvent extraction
For extraction of concrete, the flowers were harvested fully opened before 9.30 AM. Concrete content of flowers was analyzed by the solvent extraction method (ASTA, 1960) with food-grade hexane, averaged and expressed as per cent concrete recovery. A sample of fifty grams was placed in the glass column of a Soxhlet apparatus and the concrete content was estimated using food-grade hexane as solvent. The soluble extract was then drained off into a pre-weighed 100 ml beaker (W1).
The extract was then evaporated on a steam bath, heated for 30 minutes in an oven at 60 °C, cooled and weighed (W2).
The concrete content was calculated using the following formula and expressed in per cent.
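The formula itself is missing from the extracted text. Reconstructed from the weighings described (W1, the pre-weighed empty beaker; W2, the beaker plus dried extract; and the 50 g flower sample), it was presumably of the form

\text{Concrete content (\%)} = \frac{W_2 - W_1}{W_s} \times 100,

where W_s denotes the weight of the flower sample (here 50 g). This reconstruction is an assumption based on the described procedure, not a quotation of the original formula.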
Volatile compound analysis using GC/MS analysis
The volatile oil from jasmine flowers was dissolved in hexane and directly injected into the injection port of a gas chromatograph (Agilent Technologies 7890A GC system) coupled with a mass spectrometer (Agilent Technologies 5975C inert XL EI/CI MSD with Triple-Axis Detector). The GC was operated with an Agilent J&W HP-5 column (30 m x 0.32 mm i.d., with 0.52 μm film thickness) and helium was used as the carrier gas.
The temperature program started with an initial temperature of 150°C held for 4 min, then heating to 170°C at a rate of 0.8°C/min with a 1 min hold, to 220°C at 3.0°C/min with a 1 min hold, to 240°C at 1.0°C/min with a 1 min hold, and to 250°C at 5.0°C/min with a 5 min hold, at a flow rate of 0.7 mL/min. The obtained mass spectra were preliminarily interpreted by comparison with those of the Enhanced ChemStation Version D00.00.38 (Agilent Technologies) and the Mass Spectral Search Library of the National Institute of Standards and Technology (NIST, Gaithersburg, USA).
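As a consistency check on the oven program described above, the segments can be encoded and the total run time computed. The following Python sketch is illustrative; the segment values are taken from the text, while the code structure is our own.

# Encode the GC oven temperature program and compute its total run time.
segments = [
    # (start_C, end_C, rate_C_per_min, hold_min)
    (150, 150, None, 4.0),
    (150, 170, 0.8, 1.0),
    (170, 220, 3.0, 1.0),
    (220, 240, 1.0, 1.0),
    (240, 250, 5.0, 5.0),
]

total = 0.0
for start, end, rate, hold in segments:
    ramp = 0.0 if rate is None else (end - start) / rate
    total += ramp + hold
print(f"total program time: {total:.1f} min")   # about 75.7 min for these values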
Results and Discussion
The concrete of three genotypes of Jasminum grandiflorum, viz., CO-1 pitchi, CO-2 pitchi and White pitchi, was prepared by solvent extraction. The percentage of concrete in the jasmine genotypes ranged from 0.29 to 0.34 per cent (Table 1). The highest recovery, 0.34 per cent, was observed in White pitchi, while the genotype CO-1 recorded 0.29 per cent and CO-2 recorded 0.32 per cent. The chromatograms generated by gas chromatography show the composition of the volatile oils from the Jasminum grandiflorum genotypes viz., CO-1 pitchi, CO-2 pitchi and White pitchi (Fig. 1). | 2019-04-09T13:02:55.671Z | 2017-07-20T00:00:00.000 | {
"year": 2017,
"sha1": "d42a4acf7e5ca08aa4c7d8f9cc7746dd75aeb110",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/6-7-2017/P.%20Ranchana,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2194a3c980f151dd603e0812345f6df41aa3c804",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Biology"
]
} |
18880300 | pes2o/s2orc | v3-fos-license | Algorithms for laying points optimally on a plane and a circle
Two averaging algorithms are considered which are intended for choosing an optimal plane and an optimal circle approximating a group of points in three-dimensional Euclidean space.
Introduction.
Assume that in the three-dimensional Euclidean space E we have a group of points visually resembling a circle (see Fig. 1.1). The problem is to find the best plane and the best circle approximating this group of points. Any plane in E is given by the equation

(r, n) = D, (1.1)

where n is the normal vector of the plane and D is some constant. The vector r in (1.1) is the radius-vector of a point on that plane, while (r, n) is the scalar product of the vectors r and n. Once a plane (1.1) is fixed and r is the radius-vector of some point on it, a circle on this plane is given by the equation

|r - R| = ρ. (1.2)

Here ρ is the radius of the circle (1.2) and R is the radius-vector of its center. Having a group of points r[1], . . . , r[N] in E, our goal is to design an algorithm for calculating the parameters n, D, R, and ρ in (1.1) and (1.2), thus defining a plane and a circle being optimal approximations of our points in some definite sense.
Defining an optimal plane.
Assume that n is a unit vector, i.e., |n| = 1, and assume that we have some plane defined by the equation (1.1). Then the distance from the point r[i] to this plane is given by the following well-known formula:

d[i] = |(r[i], n) - D|. (2.1)

If we denote by d the root mean square of the quantities (2.1), then we have

d^2 = \frac{1}{N} \sum_{i=1}^{N} ((r[i], n) - D)^2. (2.2)

Definition 2.1. A plane given by the formula (1.1) with |n| = 1 is called an optimal root mean square plane if the quantity (2.2) takes its minimal value.

It is easy to see that d^2 in (2.2) is a function of two parameters: n and D. It is a quadratic function of the parameter D. Indeed, we have

d^2 = D^2 - \frac{2 D}{N} \sum_{i=1}^{N} (r[i], n) + \frac{1}{N} \sum_{i=1}^{N} (r[i], n)^2. (2.3)

The quadratic polynomial in the right hand side of (2.3) takes its minimal value if

D = \frac{1}{N} \sum_{i=1}^{N} (r[i], n). (2.4)

Substituting (2.4) back into the formula (2.3), we obtain

d^2 = \frac{1}{N} \sum_{i=1}^{N} (r[i], n)^2 - \Bigl(\frac{1}{N} \sum_{i=1}^{N} (r[i], n)\Bigr)^2. (2.5)

In the next steps we use some mechanical analogies. If we place unit masses m[i] = 1 at the points r[1], . . . , r[N], then the vector

r_cm = \frac{1}{N} \sum_{i=1}^{N} r[i] (2.6)

is the radius-vector of the center of mass. In terms of this radius-vector the formula (2.4) for D is written as follows:

D = (r_cm, n). (2.7)

Now remember that the inertia tensor for a system of point masses m[i] = 1 is defined as a quadratic form given by the formula

I(n, n) = \sum_{i=1}^{N} (|r[i]|^2 |n|^2 - (r[i], n)^2) (2.8)

(see [1] for more details). We shall take the inertia tensor relative to the center of mass. Therefore, we substitute r[i] - r_cm for r[i] into the formula (2.8). As a result we get the following expression for I(n, n):

I(n, n) = \sum_{i=1}^{N} (|r[i] - r_cm|^2 |n|^2 - (r[i] - r_cm, n)^2). (2.9)

Each quadratic form in a three-dimensional Euclidean space has 3 scalar invariants. One of them is the trace invariant. In the case of the quadratic form (2.9), the trace invariant is given by the following formula:

tr I = 2 \sum_{i=1}^{N} |r[i] - r_cm|^2. (2.10)

Combining (2.9) and (2.10), we write

\sum_{i=1}^{N} (r[i] - r_cm, n)^2 = \frac{tr I}{2} |n|^2 - I(n, n). (2.11)

Taking into account the formula (2.6), we transform (2.11) as follows:

\sum_{i=1}^{N} (r[i], n)^2 - N (r_cm, n)^2 = \frac{tr I}{2} |n|^2 - I(n, n). (2.12)

Comparing (2.12) with (2.5) and again taking into account (2.6), we get

d^2 = \frac{1}{N} \Bigl(\frac{tr I}{2} |n|^2 - I(n, n)\Bigr). (2.13)

The formula (2.13) means that d^2 is a quadratic form similar to the inertia tensor. We call it the non-flatness form and denote it Q(n, n):

Q(n, n) = \frac{1}{N} \sum_{i=1}^{N} (r[i] - r_cm, n)^2. (2.14)

Like the inertia form (2.9), the non-flatness form (2.14) is positive, i.e., Q(n, n) ≥ 0 for any vector n.
If the inertia tensor is brought to its primary axes, i. e. if it is diagonalized in some orthonormal basis, then the form (2.14) diagonalizes in the same basis.
Theorem 2.1. A plane is an optimal root mean square plane for a group of points if and only if it passes through the center of mass of these points and if its normal vector n is directed along a primary axis of the non-flatness form Q of these points corresponding to its minimal eigenvalue.
The proof is derived immediately from Definition 2.1 due to the formula (2.7) and the formula d^2 = Q(n, n).

Theorem 2.2. An optimal root mean square plane for a group of points is unique if and only if the minimal eigenvalue λ_min of their non-flatness form Q is distinct from the two other eigenvalues, i.e., λ_min = λ_1 < λ_2 and λ_min = λ_1 < λ_3.
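For readers implementing the result, Theorem 2.1 translates directly into a few lines of numerical code. The following NumPy sketch is ours, not the paper's; it builds the matrix of the form Q from the centered points and takes the eigenvector of the smallest eigenvalue.

# Minimal NumPy sketch of Theorem 2.1: the optimal root mean square plane
# passes through the center of mass, with normal given by the eigenvector
# of the non-flatness form Q for its smallest eigenvalue.
import numpy as np

def optimal_plane(points):
    """points: (N, 3) array. Returns unit normal n and constant D of (1.1)."""
    r_cm = points.mean(axis=0)              # center of mass, formula (2.6)
    X = points - r_cm
    Q = X.T @ X / len(points)               # matrix of the form (2.14)
    eigvals, eigvecs = np.linalg.eigh(Q)    # eigenvalues in ascending order
    n = eigvecs[:, 0]                       # eigenvector of the minimal eigenvalue
    D = n @ r_cm                            # formula (2.7)
    return n, D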
Defining an optimal circle.
Having found an optimal root mean square plane for the points r[1], . . . , r[N], we can replace them by their projections onto this plane:

r[i] → r[i] - ((r[i], n) - D) n. (3.1)

Our next goal is to find an optimal circle approximating a group of points lying on some plane (1.1).
Like in the case of (2.1), we denote by d the root mean square of the quantities

d[i] = | |r[i] - R|^2 - ρ^2 |. (3.2)

Then we get the following formula:

d^2 = \frac{1}{N} \sum_{i=1}^{N} (|r[i] - R|^2 - ρ^2)^2. (3.3)

The quantity d^2 in (3.3) is a function of two parameters: R and ρ^2. With respect to ρ^2 it is a quadratic polynomial. Indeed, we have

d^2 = ρ^4 - \frac{2 ρ^2}{N} \sum_{i=1}^{N} |r[i] - R|^2 + \frac{1}{N} \sum_{i=1}^{N} |r[i] - R|^4. (3.4)

Being a quadratic polynomial of ρ^2, the quantity d^2 takes its minimal value for

ρ^2 = \frac{1}{N} \sum_{i=1}^{N} |r[i] - R|^2. (3.5)

Substituting (3.5) back into the formula (3.4), we derive

d^2 = \frac{1}{N} \sum_{i=1}^{N} |r[i] - R|^4 - \Bigl(\frac{1}{N} \sum_{i=1}^{N} |r[i] - R|^2\Bigr)^2. (3.6)

Upon expanding the expression in the right hand side of the formula (3.6) we need to perform some simple, but rather huge calculations. As a result we see that the above expression is not higher than quadratic with respect to R: the fourth order terms and the cubic terms cancel. Note also that the quadratic part of the above expression is determined by the form Q considered in the previous section. For this reason we write d^2 as

d^2 = 4 Q(R, R) - 4 (L, R) + M. (3.7)

The vector L and the scalar M in (3.7) are given by the following formulas:

L = \frac{1}{N} \sum_{i=1}^{N} |r[i]|^2 r[i] - \Bigl(\frac{1}{N} \sum_{i=1}^{N} |r[i]|^2\Bigr) r_cm, (3.8)

M = \frac{1}{N} \sum_{i=1}^{N} |r[i]|^4 - \Bigl(\frac{1}{N} \sum_{i=1}^{N} |r[i]|^2\Bigr)^2. (3.9)

The quantity d^2 takes its minimal value if and only if R satisfies the equation

2 Q R = L, (3.10)

where Q is the symmetric linear operator associated with the form Q through the standard Euclidean scalar product. The equality

(Q X, Y) = Q(X, Y),

which should be fulfilled for arbitrary two vectors X and Y, is a formal definition of the operator Q (see [2] for more details).
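Assuming the sign conventions of the formulas (3.8) and (3.10) as reconstructed above, the circle fit can be sketched numerically for points already written as two-dimensional vectors in the optimal plane (the case where the 2D form Q is non-degenerate). This NumPy sketch is illustrative, not part of the paper.

# Sketch of the circle fit of this section: the center R solves the linear
# system (3.10), and the radius comes from (3.5). Requires non-collinear points.
import numpy as np

def optimal_circle_2d(points):
    """points: (N, 2) array of planar points. Returns center R and radius rho."""
    r_cm = points.mean(axis=0)
    X = points - r_cm
    Q = X.T @ X / len(points)                                   # 2D non-flatness form
    a = (points ** 2).sum(axis=1)                               # |r[i]|^2
    L = (a[:, None] * points).mean(axis=0) - a.mean() * r_cm    # vector L of (3.8)
    R = np.linalg.solve(Q, L / 2.0)                             # equation (3.10): 2 Q R = L
    rho = np.sqrt(((points - R) ** 2).sum(axis=1).mean())       # formula (3.5)
    return R, rho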
In the general case the operator Q is non-degenerate. Hence, R exists and is uniquely fixed by the equation (3.10). However, if the points r[1], . . . , r[N] are laid onto the plane (1.1) by means of the projection procedure (3.1), then the operator Q is degenerate. Moreover, one can prove the following theorem.

Theorem 3.1. If the points r[1], . . . , r[N] lie on the plane (1.1) with unit normal vector n, then Q n = 0, i.e., the normal vector n belongs to the kernel of the operator Q.

In this flat case provided by the theorem 3.1 one should move the origin to the plane where the points r[1], . . . , r[N] lie and treat their radius-vectors as two-dimensional vectors. Then, using (2.14), (3.8), and (3.9), one should rebuild the two-dimensional versions of the non-flatness form Q, its associated operator Q and the parameters L and M. If again the two-dimensional non-flatness form is degenerate, this case is described by the following theorem.

Theorem 3.2. The two-dimensional non-flatness form Q of the points r[1], . . . , r[N] is degenerate if and only if these points lie on one straight line.

In this very special case we say that the straight line approximation for the points r[1], . . . , r[N] is preferable to the circular approximation. Note that the same decision can be made in some cases even if the points r[1], . . . , r[N] do not lie on one straight line exactly. If two eigenvalues of the three-dimensional non-flatness form Q are sufficiently small, i.e., if they both are much smaller than the third eigenvalue of this form, then we can say that λ_min ≈ λ_1 and λ_min ≈ λ_2.
Taking two eigenvectors n_1 and n_2 of the form Q corresponding to the eigenvalues λ_1 and λ_2, we define two planes

(r, n_1) = D_1, (r, n_2) = D_2. (3.11)

The constants D_1 and D_2 in (3.11) are given by the formula (2.7). The intersection of the two planes (3.11) yields a straight line being the optimal straight line approximation for the points r[1], . . . , r[N] in this case. | 2007-05-02T12:41:44.000Z | 2007-05-02T00:00:00.000 | {
"year": 2007,
"sha1": "0f71f49226c90f0c1d6e6b9387590a99434d2810",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "94f84307fd949a62202c7d3d939142d111a66f02",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
201020201 | pes2o/s2orc | v3-fos-license | Occupational swine exposure and Hepatitis E virus, Leptospira, Ascaris suum seropositivity and MRSA colonization in Austrian veterinarians, 2017–2018—A cross‐sectional study
Abstract We investigated the prevalence of Hepatitis E Virus (HEV), Leptospira and Ascaris suum (A. suum) seropositivity, and of nasal methicillin‐resistant Staphylococcus aureus (MRSA) colonization among Austrian practising veterinarians, and assessed the association with occupational swine livestock exposure. The 261 participants completed a questionnaire on demographics, intensity of occupational swine livestock contact and glove use during handling animals and their secretions. Participants' blood samples were tested for HEV, Leptospira and A. suum seropositivity and nasal swabs cultured for MRSA. We compared swine veterinarians (defined as >3 swine livestock visits/week) to non‐swine veterinarians (≤3 swine livestock visits/week) with regard to the outcomes through calculating prevalence ratio (PR) and 95% confidence interval (CI). Furthermore, the relationship between occupational swine livestock contact and the study outcomes was examined by age (</≥55 years) and glove usage. The prevalence of nasal MRSA colonization was 13.4% (95% CI: 9.3–17.6), of HEV seropositivity 20.8% (95% CI: 15.8–25.7) and A. suum seropositivity 44% (95% CI: 37.7–50.2). The highest anti‐leptospiral antibodies titres were 1:200 (L. hebdomadis) and 1:100 (L. autumnalis, L. caicola) found in three non‐swine veterinarians. Compared to non‐swine veterinarians, swine veterinarians were 1.9 (95% CI: 1.0–3.4) and 1.5 (95%CI: 1.0–2.3) times more likely HEV seropositive and A. suum seropositive, respectively, and 4.8 (95%CI: 2.5; 9.3) times more likely nasally colonized with MRSA. Among glove‐using veterinarians, occupational swine contact was no longer a determinant for HEV seropositivity (PR 1.6; 95% CI: 0.8–2.9). Similar was found for A. suum seropositivity, which was no longer associated with occupational swine livestock contact in the subgroup of glove using, ≥55‐year‐old veterinarians (PR: 1.07; 95% CI: 0.4–3.3). Our findings indicate that >3 occupational swine livestock visits per week is associated with HEV and A. suum seropositivity and nasal MRSA colonization and that glove use may play a putative preventive role in acquiring HEV and A. suum. Further analytical epidemiological studies have to prove the causality of these associations.
The HEV is classified into four human pathogenic genotypes (gt1-4), with gt1 and gt2 exclusively infecting humans (Mushahwar, 2008). Acute HEV infection is usually self-limiting, and probably fewer than 5% of those infected develop symptoms of acute hepatitis. Domestic swine and wild boars are the main animal reservoir for HEV gt3 and gt4 (Lewis, Wichmann, & Duizer, 2010). Berto et al. (2012) found, in commercial swine farms in six European countries other than Austria, a faecal HEV prevalence in growers of 20%-44% and in fatteners of 8%-73%. An increasing number of locally acquired HEV infections in humans, primarily due to gt3, have been reported in Europe (European Association for the Study of the Liver, 2018; Kamar, Dalton, Abravanel, & Izopet, 2014; Lewis et al., 2010), mainly by zoonotic transmission, in particular from domestic swine and wild boars or deer (Purcell & Emerson, 2010). This occurs through direct contact with HEV-positive swine faeces (Lewis et al., 2010) or consumption of raw meat products, such as liver, from HEV-infected swine and wild boars (Di Bartolo et al., 2012; Berto et al., 2012). In Austria, the number of yearly reported cases rose from 17 cases in 2014 to 87 in 2017 (Bundesministerium für Arbeit Soziales Gesundheit und Konsumentenschutz, 2018).
Leptospirosis is one of the most common zoonoses worldwide.
The manifestation of human infection with Leptospira ranges from subclinical infection to severe clinical disease with multi-organ failure (Weil's disease) and high case fatality rates (Heymann, 2015). Rodents, cattle, horses, sheep, goats and pigs, and unvaccinated dogs as companion animals are considered common reservoirs for Leptospira (Bharti et al., 2003). Transmission to humans occurs through contact of non-intact skin and intact mucous membranes of the eyes, nose and mouth with urine, blood or tissue from infected animals, or with contaminated water (Heymann, 2015). Occupational risk groups are mineworkers, farmers, agriculture workers, sewer workers, slaughterhouse workers, animal caretakers, fish workers, dairy farmers, military personnel and veterinarians. Exposure to Leptospira can also occur during recreational activities such as water sports (Haake & Levett, 2015). In Austria, cases of leptospirosis are rare (Bundesministerium für Arbeit Soziales Gesundheit und Konsumentenschutz, 2018).
Ascaris suum is a parasitic nematode that causes ascariasis in swine following faecal-oral transmission of its eggs (Nejsum et al., 2005). A. suum is transmitted to humans through direct contact with eggs in swine faeces and swine manure, or in water and soil following fertilization with swine manure. Food-borne transmission can occur through consumption of raw, unwashed food contaminated with infective A. suum eggs or through consumption of raw pork meat (liver) containing A. suum larvae (Deutz, 2017). Most human cases of A. suum infection tend to be asymptomatic; the typical symptomatic presentation is the visceral larva migrans (VLM) syndrome (Yoshida, Hombu, Wang, & Maruyama, 2016). Serum samples from patients with VLM syndrome in the Netherlands and Austria showed an A. suum seroprevalence of 33% and 13%, respectively (Pinelli, Herremans, Harms, Hoek, & Kortbeek, 2011).
Livestock-associated (LA-) MRSA causing human disease was first reported in 2003, when a MRSA strain typically related to swine was isolated from a cohort of 6/23 patients in the Netherlands (de Neeling et al., 2007). This strain belonged to the multilocus sequence type (MLST) 398. Since then, colonization of swine and calf livestock with LA-MRSA has been reported in Europe and Northern America (Mroczkowska et al., 2017; Sharma et al., 2016). Swine farmers and swine veterinarians are at increased risk of exposure to LA-MRSA (Walter et al., 2016). Transmission occurs through physical contact with colonized animals or through inhalation of LA-MRSA contaminated dust (Schulz et al., 2012).
Impacts
• Swine veterinarians are more likely to be HEV and A. suum seropositive, and nasally colonized with MRSA, compared to non-swine veterinarians.
• Glove use while handling swine and their secretions may play a preventive role in acquiring HEV and A. suum.
| Study design
We conducted a descriptive and analytical cross-sectional study among practising veterinarians in Austria. The aim of the descriptive study was to estimate the prevalence of HEV, Leptospira and A. suum seropositivity and of nasal colonization with MRSA among Austrian practising veterinarians. The aims of the analytical study were to investigate the association of occupational swine livestock exposure with HEV, Leptospira and A. suum seropositivity and nasal MRSA positivity, and to assess the potential effect of glove use on these associations.
| Study population
In 2017, we recruited a convenience sample of Austrian veterinarians at the three largest Austrian veterinary scientific conferences, which are usually attended by the majority of practising veterinarians (registered at the Austrian veterinarian chamber), including also most of the Austrian swine veterinarians. At each of these three conferences, the study was announced at the beginning of the main lectures. A booth was available during the session breaks to explain the study and recruit study participants. Inclusion criteria were residing and practising in Austria since at least 2016, consenting to participate in the study and to provide a serum sample and a nasal swab, and having no signs of acute infection with HEV, Leptospira, A. suum or Staphylococcus aureus. The study power was retrospectively calculated using OpenEpi (Dean, Sullivan, & Soe, 2013).
| Definition of the outcomes of interest and laboratory testing
The outcomes of interest were past history of infection with HEV, Leptospira and A. suum, as indicated by seropositivity, and nasal MRSA colonization. A. suum seropositivity was defined as the detection of anti-A. suum IgG antibodies using an in-house A. suum immunoblot (As-IB) based on larval secreted products as antigen, as previously reported (Schneider, Obwaller, & Auer, 2015). Additionally, serum samples were tested for the presence of anti-leptospiral antibodies using the microscopic agglutination test (MAT). A panel of 16 live cultures of Leptospira reference serovars served as antigens, and a cut-off MAT titre of ≥1:100 defined seropositivity. This test was performed in two steps: (a) two doubling dilutions of each serum, 1:25 and 1:50, were used in an initial screening test; and (b) sera that tested positive in the first step were titrated up to dilutions of 1:1,600. A positive and a negative control were included for each serovar in each test. The end-point was the highest serum dilution at which 50% agglutination occurred. Nasal MRSA colonization was indicated by a positive MRSA nasal swab culture. Additionally, we spa-typed the recovered MRSA isolates as described elsewhere (Cuny, Layer, Strommenger, & Witte, 2011).
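The MAT end-point rule above, the highest dilution still showing at least 50% agglutination, can be expressed as a small helper. The sketch below is illustrative only; the serum readings are hypothetical and the function names are our own.

```python
def mat_endpoint(readings, cutoff=0.5):
    """Return the MAT end-point titre: the highest dilution whose
    agglutination fraction is still >= cutoff (50% by default).

    readings: dict mapping reciprocal dilution (e.g. 100 for 1:100)
              to the observed agglutination fraction (0.0-1.0).
    Returns the reciprocal end-point titre, or None if negative."""
    positive = [d for d, frac in readings.items() if frac >= cutoff]
    return max(positive) if positive else None

# Hypothetical serum titrated from 1:25 up to 1:1,600:
serum = {25: 1.0, 50: 1.0, 100: 0.9, 200: 0.6, 400: 0.3, 800: 0.1, 1600: 0.0}
titre = mat_endpoint(serum)
print(f"end-point 1:{titre}, seropositive: {titre is not None and titre >= 100}")
```

Seropositivity then corresponds to an end-point titre of at least 1:100, matching the study's cut-off.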
| Data collection and definition of exposure factors
Using a self-administered questionnaire, we obtained information on the occupational swine livestock exposure status according to five categories of contact intensity, defined by the weekly average number of swine livestock visits. This categorization was based on former studies (Wright, Jung, Holman, Marano, & McQuiston, 2008) and adapted to the personal experience of Austrian large animal veterinarians. The five categories of contact intensity were as follows: no contact (0 visits), extreme low contact intensity (>0-1 visit/week), low (>1-3 visits/week), moderate (>3-5 visits/week) and high contact intensity (>5 visits/week). In addition, we collected through the self-administered questionnaire information on demographics (i.e. age, sex, duration of practice) and factors described to be associated with the risk of HEV, Leptospira and A. suum infection and MRSA colonization (i.e. putative risk factors of the study outcomes) as potential confounding factors or effect modifiers. These factors were: usage of personal protective equipment (defined as use of gloves and facemask while handling animals and their secretions), occupational slaughterhouse meat inspection, occupational swine contact abroad, farming or hunting activity, and dietary behaviour (consumption of raw innards, vegetarian or vegan diet). We also asked about factors specifically associated with MRSA colonization: chronic skin disease, skin and soft tissue infections within the previous 6 months, hospital stay of more than 3 days within the past 12 months, recent intake of antibiotics, and presence of a healthcare worker among household members. HEV-relevant information collected included travel history to HEV-endemic countries and alcohol consumption; Leptospira-relevant information included camping, freshwater sport activity, and past or present military service.
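The contact-intensity categorization amounts to a simple binning of the weekly visit count. A minimal sketch follows, assuming visits are recorded as a number per week; the binary swine-veterinarian rule anticipates the >3 visits/week cut-off derived later in the analysis.

```python
def contact_category(visits_per_week: float) -> str:
    """Map the average weekly number of swine livestock visits to the
    five contact-intensity categories used in the questionnaire."""
    if visits_per_week <= 0:
        return "no contact"
    if visits_per_week <= 1:
        return "extreme low"   # >0-1 visit/week
    if visits_per_week <= 3:
        return "low"           # >1-3 visits/week
    if visits_per_week <= 5:
        return "moderate"      # >3-5 visits/week
    return "high"              # >5 visits/week

def is_swine_veterinarian(visits_per_week: float) -> bool:
    """Binary exposure as defined later in the paper: >3 visits/week."""
    return visits_per_week > 3

print(contact_category(2.5), is_swine_veterinarian(2.5))  # low False
print(contact_category(6.0), is_swine_veterinarian(6.0))  # high True
```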
| Data analysis
We described the study population by the five categories of swine livestock contact intensity (as defined above: no contact, extreme low, low, moderate, and high contact intensity). We calculated the prevalence of HEV, Leptospira and A. suum seropositivity and of nasal MRSA colonization as the proportion of positives among the study population, with 95% confidence intervals (95% CI) computed by the Wald method (Rosner, 2000). We then defined a binary exposure variable for occupational swine livestock contact. For this purpose, we compared participants of each exposure subgroup (i.e., high, moderate, low and extreme low swine livestock contact intensity) to those without occupational swine livestock contact, as the reference group, with respect to the study outcomes (HEV, Leptospira and A. suum seropositivity and nasal MRSA colonization), calculating the exposure subgroup-specific prevalence ratios (PR) and 95% CIs using univariable Poisson regression models (Martinez et al., 2017).
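For orientation, the two basic quantities above, a Wald interval for a prevalence and a crude prevalence ratio with a log-scale confidence interval, can be computed directly from the counts. The sketch below is a minimal pure-Python rendition rather than the Poisson regression the authors used (which yields equivalent crude PRs); the 2x2 counts in the PR example are hypothetical, while the 35/261 MRSA count is back-calculated from the reported 13.4% prevalence.

```python
import math

def wald_ci(positives: int, n: int, z: float = 1.96):
    """Prevalence p = positives/n with a Wald 95% CI."""
    p = positives / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def prevalence_ratio(a, n1, b, n0, z=1.96):
    """Crude PR (exposed vs unexposed) with a log-transform 95% CI.
    a/n1: positives/total among exposed; b/n0: among unexposed."""
    pr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    return pr, pr * math.exp(-z * se), pr * math.exp(z * se)

print(wald_ci(35, 261))                   # ~0.134 (0.093, 0.176)
print(prevalence_ratio(15, 47, 20, 214))  # hypothetical counts
```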
A PR with a 95% confidence interval not including 1 was considered a significant measure of association. The exposure subgroups significantly associated with the study outcomes were merged into the exposure group. The veterinarians occupationally exposed to swine livestock, as defined in this way, are also referred to as swine veterinarians, and the unexposed as non-swine veterinarians.
First, we calculated the frequency of the putative risk factors for HEV, Leptospira, A. suum infection and nasal MRSA colonization among the swine veterinarians, compared to the non-swine veterinarians, through calculating proportion differences and their 95% CIs using the STATA -cs- command. Second, we tested the association of the occupational swine livestock exposure with the study outcomes (HEV, Leptospira, A. suum seropositivity and nasal MRSA colonization) and, in addition, the relationship of the putative risk factors with the study outcomes, through calculating the prevalence ratios (PR) with their 95% CIs, using univariable Poisson regression models. Third, we analysed the association of occupational swine livestock exposure with the study outcomes by the putative risk factors found to be associated with the study outcomes, in order to identify confounders and effect modifiers. We calculated strata-specific PRs (95% CI) of the outcomes and tested for homogeneity of the strata-specific PRs to determine whether these measures of association differ significantly, using the STATA -csinter- command. In case of a significant difference, strata-specific PRs along with their 95% CIs were presented; otherwise, we calculated the Mantel-Haenszel (M-H) PR as the adjusted measure of association and compared it with the crude PR. A change of at least 20% in the measure after adjusting for the stratifying variable was considered indicative of confounding. Fourth, we calculated the prevalence of the study outcomes across the subgroups of occupational swine livestock contact intensity and tested the significance of a potential dose effect by using a chi-square test for trend. All data analyses were performed using Stata/SE 13.1.

With the sample size of 261 participants, including 47 exposed, a prevalence of MRSA colonization of 8% among the unexposed and a significance level of 5% (alpha = 0.05), we were able to identify at least a prevalence ratio of 2.8 with a power of 80%. With a sample size of 256 participants, an unexposed-to-exposed ratio of 4.4 and a prevalence of HEV seropositivity of 18% among the unexposed, we were able to identify at least a prevalence ratio of 2.1 with a power of 80%. Among the 248 participants, including 45 with occupational swine exposure, with a prevalence of A. suum seropositivity of 40% among the unexposed, we were able to detect at least a prevalence ratio of 1.6 with a power of 80%.

Out of the 261 study participants, 173 (66.3%) had no swine livestock contact at all, 21 (8.0%) were allocated to the subgroup of extreme low contact intensity, 20 (7.7%) to the subgroup of low contact intensity, and 8 (3.1%) and 39 (14.9%) to the subgroups of moderate and high swine livestock contact intensity, respectively.
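The Mantel-Haenszel pooling used in the third analysis step can be sketched as follows. This is an illustrative pure-Python implementation with assumed, hypothetical stratum counts (glove users versus bare-handed veterinarians); it does not reproduce the Stata -csinter- output.

```python
def mantel_haenszel_pr(strata):
    """Mantel-Haenszel pooled prevalence ratio over 2x2 strata.

    strata: list of (a, n1, b, n0) tuples, where a/n1 are
    positives/total among exposed and b/n0 among unexposed.
    PR_MH = sum(a_i*n0_i/t_i) / sum(b_i*n1_i/t_i), with t_i = n1_i + n0_i.
    """
    num = sum(a * n0 / (n1 + n0) for a, n1, b, n0 in strata)
    den = sum(b * n1 / (n1 + n0) for a, n1, b, n0 in strata)
    return num / den

# Hypothetical HEV counts stratified by glove use:
strata = [(6, 20, 25, 120),   # glove users: exposed vs unexposed
          (9, 27, 13, 94)]    # bare-handed
crude = mantel_haenszel_pr([(15, 47, 38, 214)])  # single collapsed stratum
adjusted = mantel_haenszel_pr(strata)
print(f"crude PR = {crude:.2f}, M-H adjusted PR = {adjusted:.2f}")
# A change of >=20% between crude and adjusted PR would flag confounding.
```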
| Prevalence of HEV, Leptospira and A. suum seropositivity and nasal MRSA colonization
Estimates of HEV, Leptospira and A. suum seroprevalence and of nasal MRSA colonization prevalence are given in Table 1. Three participants, all non-swine veterinarians, were Leptospira seropositive: one participant with a single MAT titre of 1:200 against L. hebdomadis and two with a single titre of 1:100 against L. autumnalis and L. canicola, respectively.
| Defining the binary exposure variable for occupational swine livestock contact
Compared to no occupational livestock contact, the two contact intensity categories >0-1 and >1-3 visits/week showed no association with the study outcomes (HEV and A. suum seropositivity, MRSA colonization). Based on these findings (data not shown), we defined the study participants with >3 swine livestock visits/week as occupationally exposed (i.e., swine veterinarians), and participants with ≤3 swine livestock visits/week as unexposed (i.e., non-swine veterinarians). Accordingly, 47 participants fulfilled the criteria of the exposure group. Table 2 shows the distribution of putative risk factors of the study outcomes, of female sex, and of older age (≥55 years) between the swine veterinarians and the non-swine veterinarians.
Table S1 shows the prevalence ratios (95% CI) of nasal MRSA colonization and HEV seropositivity by their putative risk factors, which were not presented in Table 3. None of these putative risk factors showed an association with MRSA colonization or HEV seropositivity in our study population.
| Association of occupational swine livestock exposure with the study outcomes stratified by older age and glove usage
When testing the association of occupational swine contact with HEV and A. suum seropositivity, respectively, by age as the stratifying variable at the levels <55 and ≥55 years, we found no evidence of effect modification (test of homogeneity, p = .3 for HEV; p = .1 for A. suum). Among the age group <55 years, the PR of HEV seropositivity among the swine veterinarians, compared with the non-swine veterinarians, was 1.8 (95% CI: 1.0-3.4) and among the
| Dose effect of the occupational swine livestock exposure
The prevalence of nasal MRSA colonization increased with swine contact intensity (>0-1, >1-3 and >3 visits per week), compared to the reference group, no occupational swine exposure. The same holds for the A. suum seroprevalence, when using the contact intensities >0-1, >1-5 and >5 swine livestock visits per week, compared to the reference group (Table 4).
| DISCUSSION
This is the first study in Austria estimating the seroprevalence of HEV, Leptospira and A. suum and the prevalence of nasal colonization with MRSA among veterinarians. We aimed to assess the association of selected zoonotic diseases with occupational swine livestock contact, as we were interested in their potential as occupational diseases among Austrian swine veterinarians.
Our study detected a HEV seropositivity of 21% among all participating veterinarians and of 18% among the non-swine veterinarians. Previous seroprevalence studies from Austria found a HEV IgG seropositivity of 13.6% among blood donors and of 14% among soldiers (Fischer et al., 2015; Lagler et al., 2014), and a recent study in the general population of southern Germany found a HEV IgG seroprevalence of almost 18% (Mahrt et al., 2018). We detected an almost two times higher HEV IgG seroprevalence among the swine veterinarians, compared to the non-swine veterinarians. This is in accordance with findings from the United States and Germany, in which veterinarians and farmers with close and frequent occupational swine contact were 1.5 and 2 times more likely to be HEV seropositive, compared to blood donors (Krumbholz et al., 2012). In our study, veterinarians ≥55 years old were more likely to be HEV seropositive than those <55 years old. This is in accordance with findings from an HEV prevalence study among Finnish veterinarians (Kantala et al., 2016) and may be explained by a higher cumulative occupational risk associated with an increasing number of working years. However, older age was found to be neither a confounder nor an effect modifier for the association between occupational swine livestock contact and HEV seropositivity. We found that occupational swine exposure was no longer associated with HEV seropositivity among the veterinarian subgroups which usually use gloves. But among the veterinarian subgroups working with bare hands, the swine veterinarians were two and a half times more likely to be HEV seropositive relative to the non-swine veterinarians.
TABLE 2 Frequency distribution of female sex, age ≥55 years, and presence of putative risk factors for the study outcomes between exposed (swine veterinarians) and unexposed (non-swine veterinarians), N = 261, and proportion difference with 95% CI
Leptospira seropositivity, detected only in three non-swine veterinarians without clinical manifestations, was at low single microagglutination titres (1:200 and 1:100) against L. hebdomadis, L. autumnalis and L. canicola. A cross-sectional study from the US among 511 veterinarians found a seroprevalence of 2.5% (Whitney, Ailes, Myers, Saliki, & Berkelman, 2009). In an Austrian study from 1997, 2.9% of 137 veterinarians tested were Leptospira seropositive (Nowotny et al., 1997). Two of those had a single high titre of 1:800 to Leptospira bataviae and of 1:1,600 to Leptospira saxkoebing. Higher Leptospira seroprevalence estimates, namely of 10% and 23%, were found in Austrian hunters and in Austrian professional soldiers including military short-service volunteers. This might be due to more frequent unprotected contact with animal secretions and tissue or contaminated water (Deutz, 2007; Poeppl et al., 2013). Swine farmers and swine veterinarians are at increased risk of exposure to LA-MRSA (Walter et al., 2016). A Dutch prospective cohort study in livestock veterinarians detected that veterinarians with a cumulative duration of at least 3 months of livestock swine contact were at five times higher risk of persistent MRSA carriage, relative to the other livestock veterinarians (Verkade et al., 2013). Wearing a face mask can considerably lower LA-MRSA carriage in swine farmers (van Cleef et al., 2015). However, we did not find a protective effect of nose-mouth mask use with regard to LA-MRSA positivity among the swine veterinarians.
| Study limitations
The study participants were recruited at the three largest Austrian veterinary conferences in 2017, which are usually attended by the majority of the practising veterinarians in Austria, according to the Austrian Veterinarian chamber (K. Fürwith, personal communication, September 16, 2017). However, a selection bias cannot be excluded, as recruitment among these attendees was on a voluntary basis.
Second, as the study was underpowered, it may have failed to detect associations, in particular when analysing our primary association by further variables. Third, using seropositivity as an outcome of interest in an analytical cross-sectional study is prone to antecedent-consequent bias. Therefore, our findings should be interpreted with caution, but may be used to generate hypotheses on the veterinary occupational risk of infection with HEV, A. suum and Leptospira, and of colonization with MRSA.
We detected a low prevalence of Leptospira seropositivity among the Austrian veterinarians, compared to previous findings among this occupational group. Our findings indicate that Austrian veterinarians with frequent occupational swine livestock contact are more likely to be HEV and A. suum seropositive and nasally colonized with MRSA. Glove use while handling swine and their secretions may play a preventive role in acquiring HEV and A. suum. Analytical epidemiological studies have to prove the causality of these findings in order to make evidence-based recommendations on glove usage for swine veterinarians.
ACKNOWLEDGEMENTS
The authors want to thank the Austrian veterinarians who participated in the study and thereby made the collection of the presented data possible.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
ETHICAL APPROVAL
The institutional review board of the city of Vienna reviewed the protocol and decided on 17.01.2017 under EK 17-003-VK-NZ that the study did not require formal ethical review. | 2019-08-17T13:04:17.907Z | 2019-08-16T00:00:00.000 | {
"year": 2019,
"sha1": "6546015341ab6a1e6d6863c23341e125bf9bed82",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/zph.12633",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f26653ddda301e95bc7fefbcf897b8c345a56e81",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263630285 | pes2o/s2orc | v3-fos-license | The Effect of Virtual Education in Parenting Skills on the Parenting Sense of Competence in First-time Mothers with a 0-2-year-old Baby: A Quasi-experimental Study
ABSTRACT Background: Parental competence is a key aspect of parenting. Since they have no previous experience of having a baby, first-time mothers should acquire certain skills to be competent enough in caring for their baby. The present study aimed to investigate the effect of virtual education in parenting skills on the parenting competence of first-time mothers with a 0-2-year-old baby. Methods: This quasi-experimental study was conducted through convenience sampling; 72 first-time mothers were selected from 12 healthcare centers, 62 of whom met the criteria for entering the study and were divided into an experimental (n=31) and a control (n=31) group. The mothers in the experimental group received virtual education in parenting skills in six sessions, each lasting 10 minutes, over two weeks. The data were collected using a demographics questionnaire and Gibaud-Wallston's parenting sense of competence scale. Sense of competence was assessed in three stages: before, immediately after, and one month after the completion of the intervention. The collected data were analyzed using SPSS v. 22 at a significance level of less than 0.05. Results: The results showed a statistically significant increase in the experimental group's parenting competence mean score immediately and one month after the intervention (P<0.001). There was a statistically significant difference between the mean scores of the study groups as measured immediately after (P=0.043) and one month after the intervention (P<0.001). Conclusion: Virtual education in parenting skills could have a positive impact on mothers' parenting competence. It is suggested that first-time mothers should be educated in parenting skills on a face-to-face basis in maternity wards and online after discharge.
Introduction
The quality of infant care and mother-infant interactions is influenced by a variety of factors, one of the most important of which is parental competence. Women with a strong sense of competence and high satisfaction with their role as a mother have a secure attachment style and display responsible and sensitive parenting behaviors which facilitate the growth and development of their infants.1 Moreover, women with a stronger sense of competence are more determined to perform their duties as a mother, avoid self-reproach, and achieve higher levels of achievement and satisfaction in their mothering.2 In Iran, approximately 70,000 babies are born to primiparous mothers every year.3 A variety of factors impact the development of a sense of mothering competence, and different studies have reported different results regarding the influence of those factors.4,5 Such characteristics in mothers as age, marital status, education, depression, number of pregnancies, perceived social support, pleasure from child labor, and perception of the nature of their infants are among the influential factors in their development of a sense of competence.6 Parenting competence includes parents' knowledge, skill, and experience in raising children and enables them to successfully fulfill their duties as parents, thereby preventing crises or coping with them if they come up.7 One of the factors which determine the parents' influence on their children is parenting style, which is classified into four types: authoritarian, authoritative, permissive, and uninvolved. Parenting styles reflect the nature of parents' communication with their children.8,9 As a very important obligation which is not comparable to humans' other responsibilities, parenting is a skill which can be improved through education. Parenting sense of competence is defined as parents' self-efficacy and perceived satisfaction with their parenting role, which reflects their conviction that they are capable of effectively performing their parenting role.10 It is essential that parents, especially mothers, be aware of the impact of different parenting styles on their children's mental and behavioral states and personality development. In a wide spectrum of clinical interventions, parents are regarded as the key factor in changing their children's anti-social behaviors.11 Studies show that most first-time mothers have a poor sense of competence in performing their duties as a mother, which is largely because of their lack of experience. Educating mothers in parenting skills results in better mother-child outcomes, including elevated self-efficacy and reduced anxiety and stress.12,13 Research shows that parenting competence is a crucial matter which deserves more attention. The spread of COVID-19 resulted in the development of online learning and the employment of e-learning systems.14 During the pandemic, face-to-face education was limited, and learners had to be educated by virtual means. Thus, virtual education, which allows for distance learning at any time and place and helps manage prevention of the infection, became very popular.15-18 Accordingly, face-to-face education has been restricted and replaced by virtual education.19 Since the role of parenting skills and its impact on the parenting competence of new mothers in Iranian society has received little attention, the current study aimed to investigate the effect of virtual education in parenting skills on the parenting competence of first-time mothers with a baby under 2 years of age.
Materials and Methods
This is a quasi-experimental study conducted from June to September 2021 at 12 healthcare centers affiliated with Shiraz University of Medical Sciences. These centers were selected via cluster random sampling. Given a 95% confidence level and 80% power, based on the results of Azmoude et al.'s study,12 and considering a loss-to-follow-up rate of 20%, the sample size was estimated at 72 individuals using the standard formula for comparing two means, n = (z_(1-α/2) + z_(1-β))^2 (s1^2 + s2^2) / (μ1 - μ2)^2. Seventy-two mothers were selected using convenience sampling (six mothers from each center), and 62 of them were included in the study based on the inclusion criteria. The inclusion criteria were being literate, living with one's spouse, being a first-time mother with an only child aged 0-2 years, not having a medical or psychological disorder, not having a history of hospitalization before or after childbirth (except for delivery), not having a baby with congenital abnormalities, not attending parenting workshops before the study, being available by phone, having access to the Internet and social media apps, and being willing to participate in the study. The mothers who had one absence during the education period or were not willing to continue the education during the study were excluded. After explaining the study goals, we asked the subjects to complete a written informed consent form at the site of their healthcare center. Subsequently, they completed electronic versions of a demographics questionnaire and Gibaud-Wallston's parenting sense of competence scale; online software in the Persian language (Porsline) was used to design an electronic web-based questionnaire for collecting the data. Mothers were enrolled in the study using convenience sampling and then divided into an experimental (n=31) and a control group (n=31).
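A minimal sketch of such a sample-size calculation follows, assuming the standard two-mean formula quoted above; the standard deviations and detectable difference are placeholders, since the effect-size values taken from Azmoude et al.'s study are not reproduced in the text.

```python
import math
from statistics import NormalDist

def two_mean_sample_size(s1, s2, diff, alpha=0.05, power=0.80):
    """Per-group n = (z_(1-a/2) + z_(1-b))^2 * (s1^2 + s2^2) / diff^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    return math.ceil((z_a + z_b) ** 2 * (s1 ** 2 + s2 ** 2) / diff ** 2)

# Placeholder inputs, then inflation by the stated 20% loss to follow-up:
n = two_mean_sample_size(s1=8.0, s2=8.0, diff=6.0)
print(n, math.ceil(n / (1 - 0.20)))  # per-group n, recruited n
```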
The mothers in the experimental group received routine care for their children and were also added to a WhatsApp group administered by one of the researchers; they attended educational training on parenting in six 10-minute sessions over two weeks (three sessions per week), delivered as created content (audio files and podcasts) on the main topics of parenting styles, characteristics of different parents, principles of child raising, parents' treatment of children, and children's mental health. The educational content was derived from valid sources on parenting skills and verified by a panel of experts.20 At the end of each week, the researchers conducted group discussions to evaluate the participants' comprehension of the educational content, answer their questions, and ask for feedback. The mothers in the control group did not receive any education in parenting and were only introduced to some routine exercises for the development and learning of infants aged 0-2 years, covering communication, gross motor skills, fine motor skills, problem solving, and personal-social issues.
Immediately and one month after the end of the intervention, the participants in both groups completed Gibaud-Wallston's parenting sense of competence scale online again. Since the participants were selected from different healthcare centers and education was provided virtually, there was no contact between the members of the experimental and control groups and, therefore, little chance of information transfer between the groups.
Personal characteristics such as age, level of education, marital status, number of children, employment status, place of residence, and the family's average monthly income were collected using a demographic questionnaire.
Mothers' sense of competence was evaluated using Gibaud-Wallston's Parenting Sense of Competence Scale (PSOC), a 17-item scale developed by Gibaud-Wallston in 1978. In 1989, Mash and Johnston revised the questionnaire and reduced the number of items to 16. In the present study, the 16-item version was used. Each item is scored on a 6-point Likert scale from strongly disagree (6) to strongly agree (1). Scoring for seven items of this questionnaire (questions 1, 6, 7, 10, 11, 13, and 15) is reversed, so that, for all questions, higher scores show a greater positive parenting experience. Mash and Johnston (1989) reported a Cronbach's alpha of 0.79 for the internal consistency of the entire scale.21 In Iran, Sarabi et al. (2011) translated the scale into Persian and had its content validity verified by a panel of experts.22 In a study by Azmoudeh et al. (2014), the content validity of the scale was measured qualitatively and quantitatively. In the qualitative stage, the scale was translated and given to a panel of experts, along with the original English version, and their suggestions were used to revise the instrument. In addition, the content validity index and content validity ratio of the instrument were calculated and verified. The reliability of the scale was calculated in terms of its internal consistency, and Cronbach's alpha was reported to be 0.71.12 The collected data were analyzed using SPSS version 22.0 for Windows (IBM SPSS Inc., Chicago, IL, USA). The significance level was set at P<0.05. Descriptive statistics including mean, standard deviation, and frequency were used. Friedman and Mann-Whitney tests were used for comparison of parenting sense of competence mean scores within and between the groups.
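The PSOC scoring rule (reverse-coding items 1, 6, 7, 10, 11, 13 and 15 on the 1-6 scale) and the nonparametric comparisons can be illustrated as follows; this is a sketch with hypothetical scores, using SciPy's Friedman and Mann-Whitney implementations as stand-ins for the SPSS procedures.

```python
from scipy.stats import friedmanchisquare, mannwhitneyu

REVERSED = {1, 6, 7, 10, 11, 13, 15}  # PSOC items scored in reverse

def psoc_total(answers):
    """Total PSOC score from 16 Likert answers (1-6), item 1 first.
    Reversed items are recoded so higher totals mean higher competence."""
    return sum((7 - v) if i in REVERSED else v
               for i, v in enumerate(answers, start=1))

print(psoc_total([4] * 16))  # 9*4 + 7*3 = 57

# Hypothetical totals for one group at the three measurement stages:
pre = [52, 48, 55, 50, 47]
post = [60, 57, 63, 58, 55]
month = [62, 59, 64, 60, 57]
print(friedmanchisquare(pre, post, month))       # within-group comparison
print(mannwhitneyu(post, [50, 49, 53, 51, 48]))  # between-group comparison
```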
The present article was extracted from a master's thesis in nursing. Ethical approval was obtained from the Ethics Committee of Shiraz University of Medical Sciences (IR.SUMS.REC.1400.136). Before the study, all the participants in both groups gave their written informed consent. The participants were assured that their information would remain confidential and would only be used for research purposes. Also, the participants were free to withdraw from the study at any point without any effect on their children's care.
Results
In this study, 62 mothers were enrolled, most of whom were housewives with a fair monthly income level. Before the intervention, there were no statistically significant differences between the experimental and control groups in terms of their demographic features (P>0.05) (Table 1). The mean age of the participants in the experimental and control groups was 29.20±5.91 and 28.10±4.63 years, respectively.
Table 2 shows the impact of education in parenting skills on the experimental and control groups. The results showed a significant increase in the experimental group's parenting competence mean scores as measured immediately after and one month after the intervention (P<0.001). On the other hand, the pretest and posttest mean scores of the control group were not significantly different (P=0.191) (Table 2). The findings showed that there was not a significant difference between the two groups' mean scores in the pretest stage (P=0.977). However, the posttest mean scores of the two groups were significantly different immediately after (P=0.043) and one month after the intervention (P<0.001). In the experimental group, the mean score of parenting sense of competence was significantly higher than in the control group after the intervention (Table 2).
Discussion
The findings of the present study showed that the first-time mothers who were trained in parenting skills had significantly higher parenting sense of competence mean scores immediately and one month after the intervention than at pretest, which indicates the positive impact of the intervention.
Competence in parenting and communication with the child is one of the most important qualities in a mother. A mother's competence depends on her knowledge of the aspects of a mother's role and her ability to play that role.6 According to Gordo's study, parents who have been trained in and have better knowledge of how to communicate with their children and of the vulnerability of their children are more competent in parenting.13 Accordingly, it is essential that new parents be introduced to and educated in parenting skills. A study on the impact of a positive parenting educational program on parental stress in mothers who had a child with autism found that education in parenting skills reduced parental stress in the mothers and improved the quality of care they provided to their children.22 Another study investigated the impact of self-efficacy-based training on the maternal sense of competence of first-time mothers in caring for their infants and concluded that the experimental group who had been trained in self-efficacy skills obtained a higher maternal competence mean score than the control group after the intervention.12 Since self-efficacy is regarded as an important component of parenting skills, the results of this study are consistent with the findings of the present study. Educating mothers in these skills makes a significant contribution to mothers' competence in caring for their children.12 The results of another study showed that virtual education had a positive impact on nurses' parental competence,23 which is consistent with the findings of the present study.
In the present study, after the intervention, the experimental group's parenting competence mean score was higher than that of the control group. However, the difference between the two groups' pretest parenting competence mean scores was not significant, which indicates that education in parenting skills had a positive impact on the first-time mothers' parenting competence. Another study, which aimed to investigate the impact of training in parenting skills on violence against children in Spain, showed that development of these skills enhanced the parents' knowledge of parent-child communication, reduced violence against children and, consequently, improved the parents' relationship with their children.24 Similarly, another study reported that educating parents in parenting skills improves parent-child relationships.25 Furthermore, training mothers in parenting skills improves self-confidence in mothers and their children and decreases the symptoms of depression in children.26 Various studies have highlighted the significance of education in parenting skills and considered it to be integral to improving parent-child communication.25,26 Another study found that this kind of education is effective in reducing the parents' stress and can, therefore, improve communication between mothers and their children.27 Zandipour's study showed that educating first-time mothers in parenting skills improved their perception of parenting.28 The findings of all the mentioned studies are in line with the results of the present research. Educating parents in parenting skills and methods can empower parents in many areas. Improved parenting skills improve the parent-child relationship and promote the parents' competence in caring for their child and fulfilling their parenting role. Educating mothers in parenting skills raises their knowledge of their child and the way they should communicate with him/her. Also, educating mothers results in a positive change in their attitude to parenting and better performance in interacting with their child, which in turn contributes to their parenting sense of competence.
One of the strengths of the present study is that the mothers were provided with extensive education. Also, the mothers were educated over two weeks, a relatively short period, which can result in better educational outcomes.
One of the limitations of the study is that, due to the COVID-19 pandemic, the participants received only virtual education. The researchers tried to keep the quality of education high and present the educational content effectively by asking and answering questions. Another limitation of this study is that we did not educate the fathers and only assessed the mothers' parenting competence.
Conclusion
The findings of the study showed that virtual education in parenting skills for first-time mothers could have a positive impact on mothers' parenting competence. In view of the important role of parent-child communication and the need for improving parents' parenting competence, it is essential that parents, especially mothers, be educated in parenting skills. Therefore, it is suggested that first-time mothers should be educated in parenting skills on a face-to-face basis in maternity wards and online after discharge.
Table 1: Frequency distribution of the demographic variables
Table 2: Parenting sense of competence mean scores in the experimental and control groups before and after the intervention | 2023-10-04T20:57:07.758Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "fead78af6efa9f07bbd022832f0ddbc333942e76",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b633f9b1d4417aa2846de361644c9910fc891e84",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3652848 | pes2o/s2orc | v3-fos-license | Novel gramicidin formulations in cationic lipid as broad-spectrum microbicidal agents
Dioctadecyldimethylammonium bromide (DODAB) is an antimicrobial lipid that can be dispersed as large closed bilayers (LV) or bilayer disks (BF). Gramicidin (Gr) is an antimicrobial peptide assembling as channels in membranes and increasing their permeability towards cations. DODAB and Gr have the drawbacks of resistance among Gram-positive bacteria and of high toxicity towards mammalian cells, respectively. In this study, DODAB bilayers incorporating Gr showed good antimicrobial activity and low toxicity. Techniques employed were fluorescence and circular dichroism spectroscopy, photon correlation spectroscopy for sizing and evaluation of the surface potential at the shear plane, turbidimetric detection of dissipation of osmotic gradients in LV/Gr, determination of bacterial cell lysis, and counting of colony-forming units. There was quantitative incorporation of Gr and development of functional channels in LV. Gr increased the bilayer charge density in LV but did not affect the BF charge density, consistent with localization of Gr at the BF borders. DODAB/Gr formulations substantially reduce Gr toxicity against eukaryotic cells and advantageously broaden the antimicrobial activity spectrum, effectively killing Escherichia coli and Staphylococcus aureus bacteria with occurrence of cell lysis.
Introduction
Peptides used in therapy should be resistant to the action of proteases.1 Many peptides have been used in a number of biotechnological applications, ranging from neutralizing toxins to antitumor or antimicrobial agents.2 Antimicrobial peptides (AMPs) are considered an important class of molecules requiring development of suitable antibacterial formulations. AMPs have been known for many years, yet very few have been used extensively in the clinic due to their considerable toxicity and high manufacturing costs.2 Further, the development of resistance by pathogenic bacteria requires multiple lines of urgent action, such as the design and synthesis of model peptides with potent antimicrobial activity and low toxicity3 and the invention of novel supramolecular assemblies for the available peptides, allowing combination of different mechanisms of action.4-6 For example, small cationic peptides were recently shown to delocalize peripheral membrane proteins, thereby impacting important cellular processes such as respiration and cell wall biosynthesis.7 Lipid bilayer disks8 have been used as carriers for AMPs, leading to novel, nontoxic, and efficacious formulations which fully protect the peptide against degradation but preserve its activity against bacteria.9 Some polysaccharides have also been evaluated as suitable vehicles for AMPs, with confirmation of the advantages of prolonged retention of peptides at the site of application due to bioadhesion and slowing of proteolytic degradation.10,11 Gramicidin A (Gr) is an antimicrobial peptide with a unique mechanism of hampering the function of bacterial plasma membranes, thereby preventing development of bacterial resistance. Gr is a 15-residue linear peptide extracted from Bacillus brevis that acquires a β-helix secondary structure with an internal pore and forms transmembrane channels consisting of two Gr molecules associated head-to-head.12,13 This channel allows permeation of cations across the membrane, changing the ionic balance14 and ultimately generating the observed antibiotic activity of Gr.15,16 However, Gr is insoluble in water and highly toxic to mammalian cells17 over a range of bactericidal concentrations.18 Further, Gr was reported to be ineffective against Gram-negative bacteria.19,20 On the other hand, some cationic lipids assembling as artificial membranes in aqueous solution have been described as potent microbicidal agents against Gram-negative bacteria, but are much less efficient against Gram-positive bacteria21-23 and fungi.24-26 In the present work, a broad-spectrum combination of Gr and dioctadecyldimethylammonium bromide (DODAB) as closed bilayers (LV) or bilayer disks (BF)27 was characterized with regard to its physical properties, antibacterial activity, and toxicity in a eukaryotic cell model, ie, Saccharomyces cerevisae.28 Such novel combinations have the advantages of low toxicity and broad-spectrum activity via a mechanism that does involve bacterial cell lysis.
Preparation of lipid dispersions
LV were obtained from hydration and vortexing of DODAB films with 1 mM sodium chloride aqueous solution (60°C) until the dispersions became homogeneous, at a final DODAB concentration equal to 0.002 M.29,30 The ultrasonic dispersion of LV with a macrotip (85 W nominal output for 20 minutes at 70°C), followed by centrifugation (10,000 g for 60 minutes at 15°C) to eliminate titanium particles, has been previously described as a procedure for yielding a BF dispersion.29 Disruption of LV in this manner generates the BF. DODAB quantitative analysis was performed by bromide microtitration, as described elsewhere.31

Preparation and characterization of DODAB/Gr assemblies

Aliquots of a stock solution of Gr (6.4 mM) in TFE were added to the DODAB bilayers before incubation (one hour at 70°C) at a DODAB to Gr molar ratio of 10:1. The size distribution, hydrodynamic diameter, polydispersity index, and zeta potential of the dispersions were obtained by photon correlation spectroscopy or dynamic light scattering at 90° using the Brookhaven Zeta Plus-Zeta Potential apparatus (Brookhaven Instruments Corporation, Holtsville, NY, USA). The log-normal fitting function of the apparatus software was used to calculate the mean diameters of the size distributions.32 The zeta potential was given as ζ = μη/ε, where μ, η, and ε are the electrophoretic mobility in 1 mM NaCl solution (25°C), the medium viscosity, and the dielectric constant, respectively.

Evaluation of Gr incorporation in DODAB LV or BF from fluorescence spectra

LV/Gr and BF/Gr (0.2 mM DODAB) were filtered using polycarbonate membranes (with a cutoff of 0.2 μm). The total area under the Gr fluorescence spectra before (A_before) and after (A_after) filtering gave %Gr = 100 × A_after/A_before. The fluorescence emission spectra for Gr were determined at 25°C using a F4500 fluorescence spectrofluorometer (Hitachi, Tokyo, Japan) at λexc = 280 nm. Excitation and emission slits were fixed at 2.5 nm. The molar extinction coefficient for Gr at 280 nm is 20,700 M⁻¹ cm⁻¹.33

Circular dichroism spectra for Gr in different types of medium

Spectra were acquired at 25°C using a 720 spectropolarimeter (Jasco Inc, Tokyo, Japan) in a 0.1 cm quartz cell with 0.5 nm wavelength increments and a 4-second response in the 200-280 nm range (100 nm per minute). Each spectrum is the average of five scans, with a full-scale sensitivity of 10 mdeg. All spectra were corrected for background by subtraction of appropriate blanks in the absence of Gr (DODAB dispersion or Gr solvent). Spectral smoothing kept the overall spectral shape. The ellipticities θ (in deg dmol⁻¹ cm²) were plotted as a function of wavelength.
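As a numerical illustration of the ζ = μη/ε relation above (the Smoluchowski limit), the conversion from a measured electrophoretic mobility to a zeta potential can be sketched as follows; the mobility value is hypothetical, and the constants are those of water at 25°C.

```python
# Zeta potential from electrophoretic mobility: zeta = mu * eta / epsilon
# (Smoluchowski approximation, SI units).
ETA = 0.890e-3               # viscosity of water at 25 C, Pa*s
EPSILON = 78.5 * 8.854e-12   # relative permittivity * vacuum permittivity, F/m

def zeta_mV(mobility: float) -> float:
    """Convert electrophoretic mobility (m^2 V^-1 s^-1) to zeta (mV)."""
    return mobility * ETA / EPSILON * 1e3

# A hypothetical mobility of +4.0e-8 m^2/(V*s), typical of a cationic bilayer:
print(f"{zeta_mV(4.0e-8):.0f} mV")  # about +51 mV, of the order of Table 1
```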
Turbidity kinetics under osmotic gradients

Hyperosmotic solutions were added to the LV or LV/Gr dispersions at room temperature. No hyperosmotic solution was added to the reference cuvette; instead, water was added. Absorbances were normalized to the initial value. The turbidity (400 nm) kinetics was followed after adding 20 mM KCl or 40 mM D-glucose to LV in water.
Microbial cultures
Escherichia coli (ATCC 25922; American Type Culture Collection, Manassas, VA, USA) and Staphylococcus aureus (ATCC 25923) were incubated for 3 hours at 37°C under shaking in tryptic soy broth (Merck, Darmstadt, Germany), spread onto Mueller-Hinton agar plates (Hi-Media Laboratories Pvt, Mumbai, India), and incubated for 24 hours at 37°C. Some colonies from the plates were shaken in tryptic soy broth (160 rpm at 37°C for 2 hours) to reach the exponential phase of growth before pelleting (8,000 rpm for 15 minutes) and washing with a 0.264 M D-glucose solution. Washing was performed twice, and the bacteria were then mixed with the DODAB BF or LV, with or without Gr, in the same D-glucose solution. Tube 0.5 of the McFarland scale at 625 nm (about 1.5×10^8 colony-forming units [CFU]/mL) was used as the reference for preparation of the bacterial suspension. Thereafter, bacteria and the DODAB, Gr, or DODAB/Gr dispersions were allowed to interact for counting of CFU, as described in the next section.
S. cerevisae was cultured in yeast extract peptone dextrose medium for both broth and solid agar cultures as previously described. 28 Some isolated colonies from a fresh S. cerevisae culture on solid medium were transferred to yeast extract peptone dextrose broth, and the yeast suspension was then incubated at 32°C under shaking (120 rpm for 5 hours) before centrifuging (10,000 rpm for 10 minutes), washing the pellet three times with 0.264 M D-glucose solution, and adjusting the cell concentration to 3-4×10 5 cells/mL as determined by CFU counting. 28
Determination of effects of DODAB or DODAB/Gr on cell viability
Microbes and dispersions were mixed and allowed to interact for one hour before being diluted up to 20,000-fold for plating of 0.1 mL of each in triplicate and incubation (24 hours at 37°C) for CFU counting. Cell survival (%), taken as the mean ± standard deviation, was plotted against DODAB and/or Gr concentration. As a control for cell viability in the absence of DODAB or DODAB/Gr dispersions, a standard bacterial suspension was added to 0.264 M D-glucose solution, diluted, and spread on the agar plate.
S. cerevisae and the dispersions (Gr, BF, LV, BF/Gr or LV/Gr) interacted for one hour. The control sample in the absence of the antimicrobial agent was a mixture of the standard S. cerevisae suspension and 0.264 M D-glucose solution. Aliquots of each diluted mixture (dilution up to 1,000-fold) were plated onto yeast extract peptone dextrose agar. CFU counting was performed after incubation for 48 hours at 32°C. Cell viability (%) was taken as the mean ± standard deviation and plotted against DODAB and/or Gr concentration. The minimum bactericidal concentration (MBC) is the concentration resulting in 99% cell death.
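The viability read-out reduces to two computations: percent survival relative to the drug-free control, and the MBC as the lowest concentration achieving at least 99% kill. A minimal sketch with a hypothetical viability curve follows.

```python
def percent_survival(cfu_treated: float, cfu_control: float) -> float:
    """Cell survival (%) relative to the drug-free control."""
    return 100.0 * cfu_treated / cfu_control

def mbc(curve):
    """Minimum bactericidal concentration: the lowest concentration in a
    (concentration, % survival) series achieving >= 99% cell death."""
    killed = [c for c, s in sorted(curve) if s <= 1.0]
    return killed[0] if killed else None

print(percent_survival(3_000, 1_500_000))  # 0.2 (%)
# Hypothetical DODAB/Gr viability curve:
curve = [(0.001, 95.0), (0.005, 40.0), (0.01, 3.0), (0.05, 0.5), (0.1, 0.0)]
print(mbc(curve))  # 0.05 (mM), the first point with <=1% survival
```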
Determination of leakage of phosphorylated compounds from bacteria
Bacteria were grown on plates of Mueller-Hinton agar and incubated for 18-24 hours at 37°C to prepare a cell suspension in 1 mM NaCl solution with turbidity at 625 nm adjusted to 0.400. Aliquots were then transferred to Eppendorf tubes and pelleted (6,000 rpm for 15 minutes) before resuspension in the DODAB, Gr, or DODAB/Gr dispersions in 1 mM NaCl. Bacteria and the assemblies were allowed to interact for one hour and centrifuged (6,000 rpm for 15 minutes), and the supernatant was used to determine the inorganic phosphorus concentration (ie, Pi_supernatant), as previously described.34 As a positive control, inorganic phosphorus (Pi_control) was determined for bacteria in 1 mM NaCl. As a negative control, inorganic phosphorus was determined in the supernatant of bacterial suspensions after centrifugation. Final concentrations in the interaction mixtures ranged from 3.7×10^8 to 4.8×10^9 CFU/mL (E. coli) and from 5.2×10^9 to 5.1×10^10 CFU/mL (S. aureus). Bacterial lysis is related to the concentration of phosphorylated compounds in the supernatant as percent leakage = 100 × Pi_supernatant/Pi_control.21
Evaluation of cell morphology by scanning electron microscopy
Bacteria interacted with DODAB/Gr dispersions above the MBC (37°C for 2 hours) before being pelleted (12,000 rpm for 10 minutes), progressively dehydrated with ethanol as previously described,35 and then placed on a silicon wafer for evaporation of ethanol at room temperature, gold sputtering, and imaging with a JSM-6460LV scanning electron microscope (JEOL Ltd, Tokyo, Japan) at 20 kV.
Results
Sizing, zeta potential, polydispersity index, and gramicidin conformation and functionality in DODAB/Gr assemblies

Figure 1 shows the steps involved in preparing the DODAB/Gr dispersions. Preparation of BF/Gr and LV/Gr, with insertion of Gr in LV and adsorption of Gr at the borders of BF, is schematically illustrated. The open nature of the BF in the absence of Gr has often been reported in the literature, both by electron microscopy36 and by cryotransmission electron microscopy.37 The sonication procedure not only disperses the cationic lipid to promote its self-assembly as bilayer vesicles but also disrupts the closed vesicles, producing the bilayer fragments or disks.8 Incorporation of the peptide in BF caused an increase in hydrodynamic diameter, whilst for LV, incorporation of Gr decreased the hydrodynamic diameter (Table 1). Gr possibly induced a slight aggregation of BF due to a small degree of intertwining between adsorbed Gr molecules at different BF. On the other hand, insertion of Gr in the LV improved the colloidal stability of the LV, possibly due to the presence of Gr tryptophan residues at the bilayer/water interface. Incorporation of Gr in DODAB BF slightly affected the zeta potential, which changed from 55±3 to 43±4 mV (Table 1). For DODAB LV, incorporation of Gr substantially increased the zeta potential from 46±2 mV to 61±3 mV. When Gr becomes incorporated in the LV bilayer, the charge density at the level of the polar heads increases for the LV/Gr dispersion, but this does not occur for the BF upon incorporation of Gr: in the latter case, the charge density and zeta potential remain practically unchanged. For BF/Gr, Gr induced some aggregation of BF, as suggested by the increase in hydrodynamic diameter from 61±0 nm to 104±1 nm. This would be consistent with interactions between Gr molecules at the borders of different BF causing a certain extent of aggregation. The polydispersity of the assemblies was reduced for both BF/Gr and LV/Gr as compared with the nonloaded bilayers (Table 1), suggesting more homogeneous dispersions and reduced aggregation for LV assemblies with Gr. The intrinsic fluorescence of Gr due to tryptophans was used to determine incorporation of the peptide in the DODAB BF or LV dispersions. The filtration procedure for the BF/Gr dispersion yielded in the filtrate a mean molar percentage of 38%±11% for DODAB and 28%±1% for Gr. Within the limits of the experimental error, these two values are practically equal. The mean percentage of DODAB and Gr in the filtrate of the LV/Gr dispersion was 0%, indicating retention of LV or LV/Gr by the filter (Table 2). The 200 nm cutoff filter retained the majority of the LV/Gr but permitted flow of the smaller BF/Gr assemblies. The results shown in Table 2 also demonstrate quantitative incorporation of Gr in the DODAB bilayers. When DODAB from BF is present in the filtrate, so is Gr. When DODAB from LV is retained by the filter, so is Gr, and in the same proportion (Table 2).
The fluorescent tryptophans in the Gr molecular structure may act as intrinsic labels for the peptide. Figure 2A shows the fluorescence spectra for Gr in different types of medium: TFE, ethanol, and the DODAB dispersions. Although the four spectra share similar shapes and peaks, the intensity of fluorescence for Gr in BF is more similar to that for Gr in ethanol than to that for Gr in TFE (Figure 2A). For DODAB LV/Gr, the tryptophan residues sense a microenvironment similar to the one in TFE. Thus, Gr perceives a more polar medium in BF than in LV. Figure 2B shows the circular dichroism (CD) spectra of Gr in ethanol, TFE, BF, and LV. The Gr spectrum in LV has a general shape that resembles the spectrum of Gr in TFE. In BF, Gr displays a CD spectrum with a negative peak at about 230 nm, which is similar to that for Gr in ethanol (Figure 2B). These data support the conclusion that the Gr β-helix in BF senses a microenvironment more like that in ethanol than that in TFE, whereas in LV, the Gr β-helix perceives a medium more like that in TFE than that in ethanol. In the literature, the negative peak around 230 nm and the positive peak around 197 nm in ethanol typically indicate intertwining of Gr molecules.38 The CD spectrum for Gr in BF is more similar in shape to that of Gr in ethanol than to that of Gr in TFE. However, in the case of Gr in BF, one should not expect the same extensive intertwining of Gr molecules as in ethanol, since the intertwining event may take place at a lower frequency at the BF borders than it does in ethanol. For example, intertwining between Gr molecules adsorbed on different BF may be the reason for the aggregation of BF with the observed increase in size (Table 1). However, this increase in size is not so large as to be consistent with extensive aggregation of BF.
The effect of adding hypertonic solutions to LV is shown in Figure 3. Turbidity at 400 nm increased with time when the water from the inner vesicle compartment flowed in accordance with the solute gradient, resulting in shrinkage of the vesicles (Figure 3). It has been previously reported for vesicles of similar size that turbidimetry allows monitoring of vesicle shrinkage or swelling, which correspond respectively to an increase or decrease in turbidity kinetics at 400 nm.29,39 In fact, the osmotic behavior of LV depended on the osmolarity of the outer medium, but in the presence of Gr, this behavior changed significantly, ie, instead of shrinking, the LV swelled on addition of the hypertonic external medium (Figure 3). The Gr channel conformation in LV modifies the usually low permeation of cations through the DODAB bilayer. Permeation of D-glucose through the DODAB LV/Gr bilayer also increases substantially in comparison with permeation through the DODAB LV bilayer. The entry of D-glucose or KCl solutes via the Gr channel from the outer to the inner vesicle compartment is accompanied by entry of water, which causes the vesicle to swell (Figure 3).
Antimicrobial activity, mechanism of action, and differential cytotoxicity of DODAB/Gr assemblies
The antimicrobial activity of DODAB, Gr, and DODAB/Gr assemblies was evaluated against E. coli, S. aureus, and S. cerevisiae in order to gain further insights into the differential cytotoxicity of the combined formulations (Figure 4). There was poor Gr activity against E. coli and S. cerevisiae, as shown by the survival of microbes (%) over a range of Gr concentrations, in contrast with the action of Gr on S. aureus, consistent with previous reports in the literature. [18][19][20] The activity of DODAB as shown in Figure 4 was also consistent with previous data in the literature reporting its high activity against E. coli 21,22 and poor activity against the yeast. [24][25][26] The novelty of these combinations was the broadening of antimicrobial activity to encompass E. coli and S. aureus as representatives of Gram-negative and Gram-positive bacteria. Of note, the microbicidal activity occurred over a range of low Gr and DODAB concentrations which are not toxic to S. cerevisiae (Figure 4).
The MBCs taken from the viability curves for all dispersions are shown in Table 3. The MBCs are lower for DODAB and Gr in combination in comparison with those obtained for the separate DODAB and Gr dispersions (Table 3). The lowest MBCs were obtained for DODAB BF/Gr, which revealed superior performance for delivering Gr to bacteria in comparison with DODAB LV/Gr. The high DODAB concentrations needed to kill S. aureus reconfirm the emergence of resistance of Gram-positive bacteria to cationic antimicrobial agents. 40 In this respect, delivering Gr becomes very important since Gr does not bear cationic moieties and acts by a different mechanism. The excellent activity of DODAB against Gram-negative bacteria complements the poor activity of the peptide against these bacteria. The excellent activity of Gr against Gram-positive bacteria complements the poor activity of DODAB against these bacteria. The combined formulations barely affected the S. cerevisiae cells, suggesting limited toxicity if used for further therapeutic application. In order to shed some light on the mechanism of antimicrobial action for the assemblies, leakage of phosphorylated compounds from bacteria was determined over a range of DODAB concentrations (Figure 5), and no cell rupture was seen over a range of low DODAB concentrations. This remained so up to the MBC, as also seen in Table 3. Above the MBC, the leakage increased, suggesting some important lytic events taking place in the cells. The bacterial morphology examined by scanning electron microscopy before and after interaction with DODAB/Gr at DODAB and Gr concentrations above the MBCs also suggested some cellular distortions departing from the morphology of untreated cells (Figure 6).
Discussion
The physical behavior of Gr in the DODAB dispersions was very similar to that previously reported for Gr in composite bilayers of phospholipid and DODAB at a molar proportion of 1:1. 39 Fluorescence and CD data reported on the Gr microenvironment. For DODAB BF, the Gr helical moiety perceives the hydrophobic borders of the bilayer disks and the aqueous medium surrounding the disks. The intertwined Gr conformation in BF was depicted in the CD spectra. The Gr β-helix in LV or TFE was demonstrated by the positive CD peak around 225 nm and associated with the functional channel present in LV (Figure 2B). Tryptophans at the membrane/water surface help to ensure appropriate conformation and activity in the Gr channel. 12,38,39 Gr fluorescence is similar in LV and TFE, showing localization of the tryptophans in a hydrophobic medium (Figure 2A). Similarly, Gr fluorescence in BF resembles that in ethanol, showing localization of the tryptophans in a more polar medium (Figure 2A). Gr possibly attaches to the edges of the disks at low concentrations, but with increasing Gr concentration, other Gr molecules from the outer solution can intertwine with those already attached at the disk border.
For LV, shrinking or swelling causes changes in the turbidity of the dispersions. 29 In accordance with the Joebst equation, the turbidity of the spheres in the dispersion varies with 1/R², where R is the mean particle radius; this relationship is useful for inferring changes in vesicle size taking place upon establishment of osmotic gradients across the LV bilayers. DODAB LV shrinkage and swelling would correspond to the kinetics of increasing and decreasing turbidity, respectively (Figure 3). Gr inserted in LV forms channels that change the LV permeability to solutes such as D-glucose and KCl, consistent with previous reports of similar vesicular dispersions in the presence of Gr. 39 Hydrophobic drugs solubilize at the hydrophobic borders of BF, in contrast with the absence of drug solubilization in LV. 4 Given the hydrophobic nature of the Gr β-helix, the behavior of Gr in the DODAB bilayers is similar to that of hydrophobic drugs in good organic solvents and in DODAB bilayers. 4 However, delivery of Gr to the bacterial cells seems to be favored by its combination with BF, possibly due to peripheral localization of Gr in the BF (Table 3). The antimicrobial activity of the DODAB cationic lipid observed at a very low DODAB concentration is related to the quaternary nitrogen in its molecular structure. The differential cytotoxicity of DODAB has been systematically studied, eg, at 0.5 mM, DODAB kills about 50% of fibroblasts cultured as a subconfluent monolayer. 1 The antimicrobial activity of Gr over the micromolar range of concentrations against Gram-positive bacteria has also been well established, 18 as well as its differential cytotoxicity. 17,18 The minimum inhibitory concentration of Gr against S. aureus is 2.5 μM. 18 The survival of mammalian HeLa cells at this concentration is 15%, indicating that the peptide is highly toxic. 18 In addition, there is about 50% survival of cultured renal cells at 2.5 μg/mL Gr, 17 reconfirming its high toxicity at concentrations that kill the bacteria. Here the MBC for Gr against S. aureus is about 4 μM (Figure 4, Table 3) and is consistent with the previously reported minimum inhibitory concentration of 2.5 μM. 10 Gr is indeed very toxic to mammalian cells, 17,18 and also to other eukaryotic cells, as exemplified by its activity against S. cerevisiae (Figure 4). However, in formulations containing DODAB, this toxicity is substantially reduced, as indicated by comparison of the viability curves for S. cerevisiae using Gr alone and using DODAB/Gr (Figure 4). Cell viability remained very high for both DODAB BF/Gr and DODAB LV/Gr, clearly showing improvement in delivery of Gr by the DODAB/Gr combinations. Further, DODAB is not only a vehicle but also displays good activity against E. coli, thus broadening the spectrum of antimicrobial activity for Gr.
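Stated compactly (our restatement of the relation just cited, with the proportionality constant left unspecified):

$$\tau_{400\,\mathrm{nm}} \propto \frac{1}{R^{2}} \quad\Rightarrow\quad \frac{\Delta\tau}{\tau} \approx -2\,\frac{\Delta R}{R},$$

so osmotic shrinkage (ΔR < 0) raises the turbidity while swelling lowers it, matching the turbidity kinetics in Figure 3.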
The mechanism of action of these novel formulations involves bacterial cell lysis, since leakage of phosphorylated compounds took place from the MBC values upward (Figure 5, Table 3). Some lysis was detected at doses above the MBC, and visualization of cells by scanning electron microscopy indeed showed distortions in cell morphology (Figure 6). The detergent-like mechanism often ascribed to the interaction between antimicrobial peptides and surfactants seems to be important for determining antimicrobial activity. For the DODAB/Gr combinations, the mechanism involved affects membrane function, selectivity in the transport of ions and nutrients, and ion distribution in the cell. Death of bacterial cells possibly takes place with some substantial membrane rupture and leakage of intracellular bacterial compounds. Eventually, damage to certain transmembrane proteins may also be involved in the death mechanism. | 2017-06-19T19:55:10.829Z | 2014-06-30T00:00:00.000 | {
"year": 2014,
"sha1": "69b1b2beab56ff2c8a063039a9b8294f4ed6a4bb",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=20649",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc0a72f3aee85366a102f903fbf64aea713527f0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
118356677 | pes2o/s2orc | v3-fos-license | Measurement of beam energy dependent nuclear modification factors at STAR
The nuclear modification factors RAA and RCP have been used to measure medium-induced suppression in heavy-ion collisions at √sNN = 200 GeV, which was among the earliest evidence for the existence of a strongly interacting medium called a quark-gluon plasma (QGP). Nuclear modification factors for asymmetric collisions (RdA) have measured the Cronin Effect, an enhancement of high transverse momentum particle yields in deuteron-gold collisions relative to proton-proton collisions. A similar enhancement is observed in data presented in these proceedings and competes with the quenching caused by partonic energy loss in the QGP. In these proceedings we will present charged-hadron RCP at mid-rapidity for √sNN = 7.7-62.4 GeV as well as identified π+, K+, and p RCP. Comparisons to HIJING motivate possible methods for disentangling competing modifications to nuclear transverse momentum spectra.
Introduction
The RHIC beam energy scan (BES) is a program to collide Au+Au ions at various collision energies in order to explore the QCD phase diagram, searching for a possible critical point and for a phase boundary marked by the disappearance of key signatures for the formation of a QGP [1]. The nuclear modification factor provides one of these signatures. The ratio of transverse momentum (pT) differentiated spectra from central over peripheral collisions, scaled by the mean number of binary p+p-like collisions in each event, is called the nuclear modification factor and is denoted by RCP. If p+p collisions are used for the reference instead, then the nuclear modification factor is denoted by RAA. In the presence of a QGP, high-pT particles are quenched, transferring energy to lower momentum particles, causing the nuclear modification factor to be less than unity at high pT, or suppressed [2][3][4][5][6]. Quenching competes with any effects that would cause enhancement, such as radial boosts or the Cronin Effect [7]. The Cronin Effect has also been observed by STAR as the enhancement of the nuclear modification factor in asymmetric d+Au collisions at √sNN = 200 GeV [8]. These spectra can be influenced by the spectators, non-interacting nucleons in the collision system, so that a peripheral collision does not equate directly to a p+p collision due to cold nuclear matter (CNM) effects. The goal of this analysis is to determine at what beam energy suppression turns off, and to begin disentangling the causes and relative effects of quenching and enhancement.

Trigger efficiency, tracking efficiency, and acceptance corrected charged-hadron spectra from mid-rapidity Au+Au collisions at √sNN = 7.7, 11.5, 19.6, 27, 39, and 62.4 GeV were produced for 0-5% central and 60-80% peripheral collisions in the STAR detector. The spectra were scaled by binary collisions with the scale factors obtained from a Monte Carlo Glauber model [9]. Taking the ratio of these scaled spectra for each energy gives the RCP(√sNN, pT) shown in Fig. 1 (left) along with STAR's published 200 GeV result [2]. These spectra were not feed-down corrected and were taken from -0.5 < η < 0.5. The global systematic uncertainty is dominated by the uncertainty in the centrality selection for the peripheral bins, which is used in the Glauber calculation and presents as an uncertainty on the binary collision scale factor. The same methods that produced the lower beam energy results were used to produce a 200 GeV result from 2010 data. This measurement disagreed with the feed-down corrected result from 2003 (Fig. 1, left) by 20%, and so this was folded into the overall systematic uncertainty of the other results (Fig. 1, gray box). The cause of this discrepancy is under investigation. The efficiency correction is based on single-particle embedding in the 39 GeV data set for π±, K±, and p± separately, which were then combined and weighted by their relative yields for a charged-hadron efficiency. The efficiency correction was extrapolated to the other data sets by making the assumption that the efficiency is the same for each particle at the same pT and from the same multiplicity bin. This assumption was tested by producing the efficiencies from two data sets, 39 GeV from 2010 and 27 GeV from 2011, and ensuring that the predicted efficiency from the 39 GeV data set matched the 27 GeV efficiency.
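For reference, the ratio described above can be written in its conventional form (a standard definition, not an equation quoted from these proceedings):

$$R_{CP}(p_T)=\frac{\left[\mathrm{d}^{2}N/\mathrm{d}p_T\,\mathrm{d}\eta\right]_{\mathrm{central}}/\langle N_{\mathrm{coll}}\rangle_{\mathrm{central}}}{\left[\mathrm{d}^{2}N/\mathrm{d}p_T\,\mathrm{d}\eta\right]_{\mathrm{peripheral}}/\langle N_{\mathrm{coll}}\rangle_{\mathrm{peripheral}}},$$

so that RCP < 1 at high pT signals suppression while RCP > 1 signals enhancement.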
Then differences in acceptance due to detector performance between beam energies were accounted for by using stable portions of the detector as a reference. HIJING 1.35 [10] with jet quenching turned off was used to produce more than 10M collisions for each beam energy. The motivation for running the simulator with jet quenching turned off was the expectation that at sufficiently low beam energies, where medium-induced jet quenching has minimal effect, there would be a quantitative agreement between the charged-hadron RCP from simulation and data. If the simulation and the data agreed at low beam energies but deviated at higher beam energies, then the beam energies where the deviation occurred could be considered candidates for the beam energies where a QGP is formed. Centrality selection was done using the same method as for the data; namely, counting the number of final-state charged hadrons in -0.5 < η < 0.5 and then determining the 0-5% most central data as the 5% of events with the highest multiplicity, and so forth for the other centralities. The result is shown in Fig. 1 on the right. Again, RCP at lower beam energies is enhanced, although we do not see a quantitative agreement with the charged-hadron RCP from data. We do not see suppression at higher collision energies, as expected since quenching was turned off. By running HIJING with jet quenching on and off and comparing with AMPT and other models, we hope to disentangle the relative contributions of jet quenching, CNM effects, and possible contributions from radial flow or final state scattering. The results in Fig. 1 (left) are consistent with suppression for √sNN ≥ 39 GeV. This does not preclude medium-induced energy losses at lower energies, since other effects could be overwhelming this signature. The advantage of using charged-hadron RCP is that spectra can be measured to higher transverse momenta than can be reached with particle identification. It was also considered that plotting RCP vs. xT rather than pT might reveal trends in the data that were independent of collision energy.
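To illustrate the percentile procedure just described, here is a minimal sketch in Python; the function name, the NumPy-based implementation, and the toy multiplicities are our own illustrative assumptions, not STAR analysis code.

```python
import numpy as np

def centrality_labels(mult):
    """Label events by centrality from charged-hadron multiplicity,
    following the text: the 5% of events with the highest multiplicity
    form the 0-5% central class, and the 60-80% band (counted from the
    top of the multiplicity distribution) forms the peripheral class."""
    mult = np.asarray(mult)
    rank = np.argsort(np.argsort(-mult))      # 0 = highest-multiplicity event
    frac = (rank + 1) / mult.size             # cumulative event fraction
    labels = np.full(mult.size, "other", dtype=object)
    labels[frac <= 0.05] = "0-5%"
    labels[(frac > 0.60) & (frac <= 0.80)] = "60-80%"
    return labels

# Toy usage with made-up multiplicities:
rng = np.random.default_rng(0)
labels = centrality_labels(rng.negative_binomial(5, 0.01, size=10_000))
print((labels == "0-5%").mean(), (labels == "60-80%").mean())  # ~0.05, ~0.20
```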
xT is defined as xT = 2pT/√sNN. This sort of scaling was applied to spectra previously [8], where it revealed √sNN-independent trends at high pT. Such a scaling is shown in Fig. 2, using the data from Fig. 1, and does not reveal any such trends.
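As a quick numerical illustration of this scaling (a sketch with hypothetical values, not analysis code):

```python
def x_t(pt_gev, sqrt_s_nn_gev):
    """Fractional transverse momentum, x_T = 2 * pT / sqrt(s_NN)."""
    return 2.0 * pt_gev / sqrt_s_nn_gev

# The same pT probes very different x_T across the BES energies:
for s in (7.7, 11.5, 19.6, 27.0, 39.0, 62.4):
    print(f"sqrt(s_NN) = {s:5.1f} GeV -> x_T(pT = 3 GeV/c) = {x_t(3.0, s):.3f}")
```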
Identified hadron RCP
Particle yields were extracted from a simultaneous fit to dE/dx distributions measured in the STAR Time Projection Chamber and time-of-flight distributions measured in the STAR Time of Flight detector for each centrality and pT bin at each beam energy. The functions used to extrapolate fit parameters for particle identification were varied in order to obtain the systematic errors for the high-pT bins. Efficiency corrections were obtained through track embedding and have a 5% systematic error associated with them. The differences between the published 62.4 GeV results [11] and those presented here were taken as a point-by-point systematic error and applied to all beam energies. The cause of this discrepancy is under investigation. The result (Fig. 3) is qualitatively consistent with published results [11] in that pions are less enhanced than protons, suggesting that pions may serve as a better gauge of jet quenching within the pT range available through particle identification. Considering hadrons with 2.5 GeV/c < pT < 4 GeV/c, the results in Fig. 3 show that protons are not suppressed at any beam energy, while pions go from being suppressed at higher beam energies to being enhanced at lower beam energies, with a transition near √sNN = 27 GeV.
Summary
Charged-hadron RCP has been measured at mid-rapidity for a range of beam energies in order to determine at what beam energy the suppression of high-pT charged hadrons, a QGP signature, turns off. Suppression is seen to turn off near 39 GeV for unidentified charged hadrons, but due to unquantified sources of enhancement it is currently unclear where medium-induced jet quenching turns off. Identified RCP is qualitatively similar and promotes pions as a probe that is less affected by sources of enhancement. For pions, suppression at high pT is seen to turn off near 27 GeV. A comparison to the HIJING event generator with jet quenching turned off did not reveal an energy at which the data and the model agree quantitatively, which precludes measuring the energy at which data and simulation deviate. This motivates the exploration of additional tunes of HIJING and other models in order to disentangle the competing effects which lead to suppression or enhancement.
Figure 3. (Color online) RCP for identified p, K+, and π+ for RHIC BES energies. The boxes are pT-dependent systematic uncertainties due to particle identification, while the pT-independent uncertainty is from Ncoll scaling. | 2013-03-28T21:36:13.000Z | 2013-03-29T00:00:00.000 | {
"year": 2013,
"sha1": "07985af718d242b2fa253b87c0fe5bdd677c1781",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/446/1/012017",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9846a5d4ccbc850c707d4ad8ea0ae0d5019363ff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
143562310 | pes2o/s2orc | v3-fos-license | Assessment of information literacy skills among first year students
The development of research and information literacy skills in first year students is essential, but challenging. Approaches to developing these skills that are embedded within subject design, and use a blended approach between online and face-to-face delivery, are considered best practice in this area. However, research has yet to identify the most appropriate form of assessment of these skills. We used constructive alignment to embed research skills in a first year subject. Students were assessed on their research skills using a diagnostic online quiz in week one, and then in week six the application of these skills in their assignment was assessed using a rubric. We created a matched sample of the results on these two forms of assessment that included 227 students. Our main aim was to determine whether there was a relationship between quiz and rubric scores, and to assess the practical relevance of the quiz in terms of identifying students who might be in need of additional support. We found a small, but significant, positive correlation between quiz and rubric results and conclude that both the quiz and the rubric are useful forms of assessment, and that there are benefits to using both within an embedded curriculum.
Introduction and background literature
Graduate skills and capabilities incorporate the skills and knowledge that undergraduates should develop beyond discipline-specific content traditionally associated with a university education (Barrie, 2007). Research and information literacy form a crucial part of these capabilities, as they contribute to students' writing and critical thinking (Andrews & Patil, 2007; Grafstein, 2002). It is particularly important that students are able to use these skills in their first year at university, but there is little continuity in the expectations, teaching or assessment of these skills from high school to tertiary settings (Willison & O'Regan, 2005). Universities are under increasing pressure to measure and report levels of graduate attribute type skills during the first year at university (Barrie, 2007), and then to demonstrate improvement of those skills over the course of a degree. In order to do this, library and teaching staff must utilise best practice in the direct teaching and assessment of information literacy and research skills.
In the tertiary education environment, an embedded approach to the development of research skills has been acknowledged as superior to providing stand-alone workshops (Price, Becker, Clark, & Collins, 2011). This approach provides greater opportunities for students to learn, practice and receive feedback on their skills (Treleaven & Voola, 2011). A study previously published in this journal described an embedded approach to the development of generic academic skills using online tutorials to teach information literacy skills (Cassar, Funk, Hutchings, Henderson, & Pancini, 2012). Instruction for the development of information literacy skills is now commonly provided online in order to provide services to an increasing number of students (Anderson & May, 2010; Zhang, Watson, & Banfield, 2007). Blended instruction formats (with a combination of online and face-to-face delivery) emerge as favourable in a review of studies comparing these approaches (Zhang et al.).
The two dominant forms of assessment of information literacy skills are short online quizzes and the assessment of skills as demonstrated in students' assignments. Online quizzes are often favoured as universities come under pressure to provide diagnostic and summative assessment of graduate skills (Barrie, 2007).
Academics are often sceptical of the capacity of a brief, online, multiple-choice quiz to accurately assess these higher order skills. However, they do acknowledge that such methods are the easiest option and might facilitate cross-institutional comparisons (Scharf, Elliott, Huey, Briller, & Joshi, 2007). Many Australian universities have therefore developed short online multiple-choice tests to assess information literacy (Price et al., 2011), and large, standardised forms of online testing to examine these skills have been used in the USA (Educational Testing Service, 2004; Kent State University Libraries and Media Services, 2007). Assessment of applied information literacy skills in students' written work, using rubrics that scaffold assessment criteria and indicate where students could improve, is often considered to be a more authentic method of assessment (Knight, 2006). However, this option is resource intensive and not particularly viable with large class sizes. Nevertheless, many authors have described their use of portfolio-based assessment to determine information literacy skills using either rubrics or checklists as a grading framework (e.g. Knight; Scharf et al., 2007).
In this paper, we aim to contribute to the debate regarding the assessment of research and information literacy skills. We compare the scores of the same students on a multiple-choice assessment and a rubric-based assessment of information literacy, and use statistics to determine the relationship between the two. In addition, we provide information regarding the potential usefulness of each assessment approach in identifying students who are performing well, or those who are in need of additional support. We (the teaching team) trialled an approach to embedding the graduate capability of Inquiry/Research in Concepts of Wellbeing [EDU1CW], a large (N ~ 340) first year subject. This subject is delivered in the first semester of the first year of study for all primary and secondary Bachelor of Education students (approximately 340 each year). One of the major aims of EDU1CW is to facilitate first year students' transition to university through a content focus on their personal wellbeing, and a skills focus on their academic capabilities through formative assessment (using a model described in Taylor, 2008). Full details of the subject are published elsewhere (Yager, 2011).
In order to embed the teaching and assessment of Inquiry/Research skills, we used constructive alignment. This involves subject design where the Intended Learning Outcomes [ILOs], teaching and learning activities, and assessment are all related to each other in order to encourage deep learning (Biggs & Tang, 2007). In Concepts of Wellbeing, we included the development of Inquiry/Research as an ILO of the subject and this was communicated to students in written and verbal forms. A variety of online and face-to-face teaching and learning activities were provided for direct instruction about Inquiry/Research skills. Online activities included the Inquiry/Research Quiz [IRQ], a multiple-choice assessment with automated feedback, and LibSkills online modules, which provided further information following the quiz. In class, lectures about database searching, using library resources and referencing were provided.
Students then had the opportunity to practice searching library databases to find journal articles relevant to their assessment topic in tutorials held in the computer labs.None of these activities were technically compulsory, but all students were strongly encouraged to complete all activities.
Students were initially assessed on their Inquiry/Research skills using the IRQ, and then on a rubric-based evaluation of their skills as demonstrated in their assessment. Students were encouraged to complete the IRQ in the first or second week of classes, so this formed the first learning activity that taught students about Inquiry/Research skills, but it was also an assessment of their baseline skill level. Students then practiced and demonstrated what they had learned in the first low-stakes written assessment for EDU1CW (Stage 1, described below).
Finally, students were formally assessed on whether they met the cornerstone standards for Inquiry/Research in their Stage 2 assignment (Theoretical and Background Plan, described below) due in week six, and were given formal feedback on their Inquiry/Research skills on a rubric in week eight. The rubric that we used was based on the La Trobe University Information Literacy Framework (La Trobe University, 2011b). The Framework has six standards, which articulate learning outcomes at cornerstone, midpoint and capstone levels, and is based on a standardised Australian framework (Bundy, 2004). The cornerstone outcomes from the Framework were transferred to the rubric and used to assess students' assignments in terms of meeting, not meeting, or exceeding the standard.
The major assessment in EDU1CW, the Personal Wellbeing Plan (PWP), was designed to facilitate Inquiry/Research skill development through a series of written assessments in four stages, described below.
• Stage 1: the Proposal (10%, due week four) required students to present an evidence-based plan for personal behaviour change and give APA-style references for two peer-reviewed journal articles that they might use to support this plan. Feedback to students focussed on academic writing and referencing skills as well as the suitability and credibility of the articles chosen.
Referencing was required, but did not attract a grade, giving students a "free trial."
• Stage 2: Theoretical and Background Information (30%, due in week six) required students to summarise their peer-reviewed journal articles and indicate how the research related to their plan for improving their wellbeing. Inquiry/Research skills were assessed using the rubric described above. An overview of the criteria and mechanisms for assessment of each criterion used in the rubric is provided in Table 1 (below).
• Stage 3: the Reflection (20%, due week 11) required students to respond to a series of structured reflective questions about their experiences of behaviour change and to demonstrate continuing improvement in their writing and referencing skills. This allowed students the opportunity to further practice and demonstrate skills after they had received formal feedback on how well they had met the cornerstone standards.
• Stage 4: the Artefact (10%, due week 13) required students to provide a visual representation of their attempts at behaviour change and allowed a final attempt at referencing.
For the PWP assessment, students were also required to submit all previous stages of their work when they submitted their current piece of assessment. This allowed academic staff to refer back to students' past attempts and to check whether they had responded to the feedback that was provided.
Grading was such that students were penalised for failing to respond to and incorporate this feedback.
Research questions
The main aim of this research was to use statistics to determine the correlation between students' Inquiry/Research skills as assessed in an online quiz, and as demonstrated through their written assessment. The research questions were as follows: 1) Did either demographic factors (age, gender, course enrolled in) or quiz factors (amount of time taken, week quiz was done, making more than one attempt at the quiz) impact on students' quiz results? 2) Did demographic factors (age, gender, course enrolled in) impact on students' rubric results? 3) Was there a relationship between the quiz and rubric scores? And 4) Is the quiz a useful tool for identifying students who might be performing well, or in need of additional support for this graduate capability?
Participants
Participants were first year undergraduates enrolled in the first year, first semester subject Concepts of Wellbeing.
The Faculty of Education Human Ethics Committee approved a universal ethics application that covered many projects relating to the first year in the faculty.This meant that students gave informed consent to the collection of data, test scores, artefacts of assessment and a first year survey in the first week of class.
No students refused participation. A total of 320 students were enrolled in the class, but matched data for both the IRQ and rubric was only available for 227 students, which comprised the sample for this study.
Measurement
Students' research skills were assessed using the IRQ, and the rubric-based assessment in Stage 2 of their major assignment, the PWP. In the first week of semester, students were directed to the IRQ through their learning management system (Moodle). Completion of the quiz and modules was voluntary, but strongly encouraged, and students were allowed as many attempts at the quiz as they liked. Students' total score on their first attempt at the quiz and total scores of any subsequent attempts were recorded using program software. This information was exported to Microsoft Excel by library staff, and provided to teaching staff. In week 6, students submitted Stage 2 of their PWP and their Inquiry/Research skills were assessed using a rubric (described above). Marks for each of the six areas of the rubric were recorded as 1 = standard not met, 2 = standard met and 3 = standard exceeded, in accordance with the university guidelines for measuring graduate capabilities, providing a total score out of 18. In addition, tutors recorded whether or not the student was considered to have met the standard (or not met, or exceeded) overall. This information was then entered into an Excel database, along with details of each student's birth date, gender, course, and student number.
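A minimal sketch of the scoring scheme just described; the criterion names are placeholders, as the actual rubric criteria come from the La Trobe Information Literacy Framework.

```python
from enum import IntEnum

class Standard(IntEnum):
    NOT_MET = 1
    MET = 2
    EXCEEDED = 3

# Six rubric areas scored 1-3, giving a total out of 18.
# Criterion names here are illustrative placeholders only.
example_marks = {
    "criterion_1": Standard.MET,
    "criterion_2": Standard.NOT_MET,
    "criterion_3": Standard.MET,
    "criterion_4": Standard.EXCEEDED,
    "criterion_5": Standard.MET,
    "criterion_6": Standard.MET,
}

total = sum(example_marks.values())  # e.g. 12 out of a possible 18
print(f"Total rubric score: {total}/18")
```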
Raw data in Excel spreadsheets were obtained from teaching and library staff and sorted by surname. Data were copied into SPSS, and matched manually, by student name. A total of N = 319 first year students had results on the online quiz, and a total of N = 320 students were enrolled in EDU1CW. However, the lists of students in each database were not identical. From a total of N = 338 entries into the SPSS database, n = 90 were removed as they did not have rubric data, and n = 21 were removed as they did not have quiz data. This resulted in a final sample of n = 227 students for whom matched data for both the quiz and rubric were available.
Data analysis
Data screening and initial exploration revealed that the total scores on the first attempt of the quiz, and scores on the rubric, were not normally distributed; therefore non-parametric tests were used in all analyses. Descriptive statistics were used to obtain means and frequencies in relation to demographic data and performance on the IRQ and rubric. Where data were categorical and allowed for the comparison of two groups, Mann-Whitney U tests (the non-parametric alternative to an independent samples t-test) were used to determine the differences between these groups on quiz and rubric scores. Where data were categorical and allowed for the comparison of three groups, Kruskal-Wallis tests were used (the non-parametric version of a one-way ANOVA) to test for differences on quiz and rubric outcomes by course, and quiz factors.
Where data were continuous, Spearman's rho was used as the non-parametric version of the Pearson's test to determine correlations between scores. This same test was used to determine whether there was a correlation between the IRQ score and the total score on the rubric. Where there were significant correlations, the relationship was explored further using Mann-Whitney U tests.
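As a sketch of how this analysis pipeline might be reproduced, the SciPy function names below are real, but the variable names and the toy data are our own assumptions, not the study's dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 227                                              # matched sample size
quiz = rng.integers(2, 11, size=n)                   # first-attempt quiz totals (2-10)
rubric = np.clip(quiz + rng.normal(5, 2, n), 6, 18)  # toy rubric totals out of 18
gender = rng.choice(["F", "M"], size=n, p=[0.714, 0.286])
course = rng.choice(["BEd", "BPHE", "BEC"], size=n, p=[0.75, 0.11, 0.14])

# Two-group comparison (non-parametric alternative to the t-test).
u, p_u = stats.mannwhitneyu(quiz[gender == "F"], quiz[gender == "M"])

# Three-group comparison (non-parametric one-way ANOVA).
h, p_h = stats.kruskal(*(quiz[course == c] for c in np.unique(course)))

# Rank correlation between quiz and rubric totals.
rho, p_rho = stats.spearmanr(quiz, rubric)
print(f"Mann-Whitney p={p_u:.3f}; Kruskal-Wallis p={p_h:.3f}; "
      f"Spearman rho={rho:.2f} (p={p_rho:.3f})")
```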
Description of the sample
Data for both the quiz and the rubric were available for 227 students. Mann-Whitney U tests demonstrated the representativeness of this sample, as there were no significant differences between the total score on the online quiz of the students in the final sample and those who were excluded due to missing rubric data (z = -0.51, p = .61). There was also no difference on the total rubric scores between those included in the final sample and those who were excluded due to missing quiz data (z = -0.99, p = .32).
The sample was predominantly female (females: 71.4%, n = 162; males: 28.6%, n = 65). Most students were enrolled in a Bachelor of Education (70%, n = 159), and a smaller proportion were enrolled in a Bachelor of Physical and Health Education (10.6%, n = 24) or a Bachelor of Early Childhood (15%, n = 34). A small number of students (4.4%, n = 10) were enrolled in degrees in other faculties. Students ranged in age from 18 to 58 years. The median was 19 years and the mean age was 21.05 years [5.62].
Results of quiz-based assessment
On their first attempt at the quiz, students' scores ranged from 2 to 10 and the mean [SD] was 7.33 [1.53]. The proportion of students who got each of the quiz items correct is provided in Table 1. The majority of students responded correctly to the majority of quiz items on the first attempt, with the exception of questions one and three. Just under half (47.14%, n = 107) of students made a second attempt at the quiz. The mean score on second attempts at the quiz was 8.57 [1.68]. A further 18.06% (n = 41) made a third attempt [mean score 9.07, SD = 1.32], five (2.20%) students made a fourth [mean score 9.20, SD = 0.84], and three (1.32%) made a fifth attempt [mean score 10.00, SD = 0]. The majority of students (76%, n = 174) completed their first attempt at the quiz in the first week of the semester, while 20.7% (n = 47) completed the quiz in the second week and 2.6% (n = 6) completed the quiz after week four.
Table 1: Proportion of students who chose the correct option on their first attempt at the IRQ
We were interested in determining whether there were any significant correlations between demographic factors and students' results on their first attempt at the quiz. Spearman's rho found that there was no significant correlation between students' age and their total score on the first quiz attempt (rs = .04, p = .54). Mann-Whitney U tests found that there was no significant difference in the total score on the first quiz attempt by gender (z = -1.52, p = .13). Finally, Kruskal-Wallis tests found that there was no significant difference in the total score on the first quiz attempt according to the course that students were enrolled in [χ²(2, n = 217) = 1.66, p = .44].
We were also interested in determining whether any of the factors related to the quiz were correlated with students' total scores on their first attempt. We found that students who made more than one attempt at the quiz (n = 106) were significantly more likely to have had a lower mean score on their initial quiz attempt (mean = 6.74, SD = 1.57) than those who only made one attempt at the quiz (mean = 7.85, SD = 1.29), according to a Mann-Whitney U test (z = -5.29, p = .00). Kruskal-Wallis tests found that there was no significant difference in the total score on the first quiz attempt according to the week that students completed the quiz [χ²(2, 226) = 0.08, p = .95]. There was also no significant correlation between the amount of time taken to complete the quiz and the total score on the first attempt (rs = 0.02, p = .72), according to Spearman's rho.
Results of rubric-based assessment
The majority of students (59.5%) were considered to have met the cornerstone standards for Inquiry/Research according to the rubric-based assessment of the second stage of their major assignment. Table 2 indicates the proportion of students who met each of the standards as provided in the Information Literacy Framework, and whether students met the standards overall.
Again, we were interested in determining whether there were any relationships between demographic factors and total rubric scores. Total rubric scores were generated by adding together the values of not meeting the standard (1), meeting the standard (2) or exceeding the standard (3) for each of the six areas of the framework. There was a significant difference between the mean total rubric scores by gender, as males scored significantly lower on the rubric (mean = 11.41, SD = 3.01) than females (mean = 12.61, SD = 1.52), according to the Mann-Whitney U test (z = -2.67, p = .00). However, there was no correlation between age and total rubric scores (Spearman's rho, rs = 0.11, p = .11). There was also no significant difference in total rubric scores according to the course that students were enrolled in [χ²(2, 216) = 0.42, p = .81], according to a Kruskal-Wallis test.
Relationship between quiz and rubric scores
As the IRQ and the rubric were based on the same Information Literacy Framework, and attempted to measure the same construct in very different ways, we were interested in seeing whether there was a relationship between the scores on these assessments. It is important to note that we did not consider this to be a repeated measures analysis of the change in student scores from the quiz (in week 1) to the rubric (in week 6), as this would require using the exact same measure at each time point to make the analysis valid. Instead, we were interested in seeing whether students' scores on the two tasks were related, and whether the quiz could be a valid instrument for determining whether students would meet the standard in their written assessment. Spearman's rho indicated that there was a significant positive correlation between scores on the initial quiz attempt and the total grade given on the rubric (rs = 0.21, p = .001). Cohen (1988) classifies a correlation of 0.2 as within the small range (0.10-0.29). Although statistically significant, quiz scores only explained 4.49% of the variance in rubric scores. The dataset was then split according to other interesting groups. There was a stronger correlation between quiz and rubric scores for those who were recent school leavers [aged 18 or 19; rs = 0.25, p < .01] than for others [aged 20 years or over; rs = 0.17, p < .05]. In addition, there was a stronger correlation between quiz and rubric scores for males (rs = 0.24, p < .05) than for females (rs = 0.19, p < .05). Finally, there was a stronger correlation for those students enrolled in a Bachelor of Early Childhood (rs = 0.50, p < .01) than for those in the Bachelor of Education (rs = 0.16, p = .05) or Bachelor of Physical and Health Education (rs = 0.23, p < .05).
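As a quick arithmetic check (ours, not the authors'): the proportion of variance shared by two variables is the square of their correlation,

$$r_s^2 = (0.21)^2 \approx 0.044,$$

which is consistent with the reported 4.49% if the unrounded coefficient was close to 0.212 before being quoted to two decimal places.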
A Kruskal-Wallis test was used to compare the initial quiz results of students who were later classified as either having met, not met or exceeded the standards according to the rubric-based assessment of their written work. It was found that there was a significant difference overall [χ²(2, 226) = 14.68, p = .00], and that scores were as expected: those who were considered to have not met the standard (n = 44) had a mean initial quiz total score of 6.61 [1.69]; those who met the standard (n = 135) had a mean quiz score of 7.38 [1.45]; and those who exceeded the standard (n = 47) had a mean quiz score of 7.85 [1.38]. Follow-up Mann-Whitney U tests found that those who did not meet the standard on the rubric had a significantly lower mean score on the quiz than those who met (z = -2.9, p = .00) and those who exceeded the standard (z = -3.65, p = .00). However, those who were classified as exceeding the standard on the rubric did not have a significantly higher score on the initial quiz attempt than those who met the standard (z = -1.77, p = .08).
In order to evaluate the accuracy of the quiz in determining Inquiry/Research skills in practical terms, we conducted some further analyses. Using the mean scores given above, we determined a cut-off score of seven as representing the midpoint between the mean quiz scores of those who met and did not meet the standard according to their rubric assessment. When a quiz score of 7 is used as a cut-off point, only 27.7% (n = 20) of the n = 112 students who had an initial quiz score of 7 or less were identified as not meeting the standard according to the rubric later on. A further 57.1% (n = 64) of these students who received a quiz score of less than seven were classified as having met the standard, and 15.2% (n = 17) were classified as having exceeded the standard, based on the work that was assessed with the rubric.
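A minimal sketch of this kind of cut-off evaluation; the function, variable names and the toy data are our assumptions, as the paper's own counts were tabulated from student records.

```python
import numpy as np

def cutoff_breakdown(quiz_scores, rubric_outcomes, cutoff=7):
    """Among students at or below the quiz cut-off, report the share
    later classified on the rubric as not met / met / exceeded."""
    quiz_scores = np.asarray(quiz_scores)
    rubric_outcomes = np.asarray(rubric_outcomes)
    flagged = rubric_outcomes[quiz_scores <= cutoff]
    return {outcome: float((flagged == outcome).mean())
            for outcome in ("not met", "met", "exceeded")}

# Toy usage with made-up data:
quiz = [5, 6, 7, 8, 9, 6, 7, 10]
rubric = ["not met", "met", "met", "exceeded",
          "exceeded", "not met", "met", "met"]
print(cutoff_breakdown(quiz, rubric))
```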
Discussion
In this paper, we provided details of an embedded approach to the development of Inquiry/Research skills in a first year, first semester subject using constructive alignment. We compared the scores of 227 students on two different approaches to assessment of Inquiry/Research skills. We found that there was a positive, significant correlation between students' scores on a ten-question online quiz (the IRQ) and a rubric-based assessment of their Inquiry/Research skills. However, the relative strength of this relationship was low. Correlations were stronger for students who were male, recent school leavers (aged 18 or 19) and enrolled in the Bachelor of Early Childhood course.
The IRQ identified 27.7% of students who were later classified as not meeting cornerstone standards on the rubric-based assessment of their written work. This indicates that an online quiz might be useful in terms of identifying some, but not all, students who could be offered additional workshops and resources. It was interesting that the likely cut-off score for not meeting the standard (7) was quite high, and this might reflect the difficulty of the quiz questions. An important practical finding was that the quiz was not particularly useful in determining those students who would later go on to demonstrate that they exceeded the cornerstone-level standards in Inquiry/Research. Both forms of assessment were based on the La Trobe University Information Literacy Framework but were very different in terms of the investment of staff time, and the feedback that was provided to students.
Using students' written assessment to evaluate their research skills was useful in this subject, and we found that this is the only mechanism by which students with high information literacy levels can be identified. However, it was also extremely time consuming. Although rubric-based assessment of information literacy skills is considered to be beneficial by many others (e.g., Knight, 2006), most who use this approach do so to assess information literacy and research skills at the capstone level, where class sizes may be smaller.
We suggest that, rather than choosing one form of graduate capability assessment over the other, using the quiz and rubric in tandem offers more opportunities for learning and assessment. Other authors have indicated that students' self-perceptions of their information literacy skills are particularly inaccurate, which might make them less likely to seek unprompted assistance (Dean & Cowley, 2009). Price and colleagues (2011) found that first year students initially demonstrated higher levels of confidence in their own information literacy skills than those in later year levels at university, but they revised their confidence upon the receipt of feedback in relation to their performance. Using online quizzes at the very beginning of first year may assist students in more accurately determining their capabilities in this area, and provide additional motivation for attending classes with face-to-face delivery of skills instruction, as well as for the use of online materials. Rubric-based assessment that is embedded within a formative assessment process can then support students in their development of these skills, and ultimately reward them for exceeding standards and doing well.
In our attempt to evaluate two methods of assessment of research skills, we were limited by a major practical issue. Frameworks and standards generally identify information literacy processes, whereas assessment of these skills is generally limited to the outputs or outcomes of these processes (Willison & O'Regan, 2005). Some of the criteria from the framework used for the rubric referred to processes that students would use, whereas teaching staff could only provide grades and feedback on the outcomes of those processes, as demonstrated in their written assessment. This issue will persist unless researchers and university staff commit to identifying the areas of frameworks that might be practically determined using student assessments. There were some other limitations to this assessment and research. Both assessments were relatively brief considerations of students' ability in this area. Students might have had assistance from others when completing their IRQ, which may have influenced the results. There may also have been some variability in the grading of students' Inquiry/Research skills on the rubric, as inter-rater reliability was not able to be calculated.
Conclusion
We found that both an online quiz and a more complex rubric-based assessment of students' research skills were useful in the assessment of student graduate capabilities such as research and information literacy. As there was very little discipline focus, these findings have implications for all involved in teaching first year students. This includes library and other support staff as well as academics in a range of disciplines that aim to develop and assess student graduate capabilities and skills such as information literacy and research. | 2018-10-27T09:33:46.304Z | 2013-04-19T00:00:00.000 | {
"year": 2013,
"sha1": "67167d0e48158df82e298f7aa4d429135751d799",
"oa_license": "CCBY",
"oa_url": "https://fyhejournal.com/article/download/140/158/140-1-1109-1-10-20130419.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "91dabb9b6e979513ec6eadb3095b5238fb8c33ba",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
9315594 | pes2o/s2orc | v3-fos-license | Reproduction and beyond, kisspeptin in ruminants
Kisspeptin (Kp) is synthesized in the arcuate nucleus and preoptic area of the hypothalamus and is a regulator of gonadotropin releasing hormone. In addition, Kp may regulate additional functions such as increased neuropeptide Y gene expression and reduced proopiomelanocortin (POMC) gene expression in sheep. Other studies have found a role for Kp in the release of growth hormone (GH), prolactin and luteinizing hormone (LH) from cattle, rat and monkey pituitary cells. Intravenous injection of Kp stimulated release of LH, GH, prolactin and follicle stimulating hormone in some experiments in cattle and sheep, but other studies have failed to find an effect of peripheral injection of Kp on GH release. Recent studies indicate that Kp can stimulate GH release after intracerebroventricular injection in sheep at doses that do not release GH after intravenous injection. These studies suggest that Kp may have a role in regulation of both reproduction and metabolism in sheep. Since GH plays a role in luteal development, it is tempting to speculate that the ability of Kp to release GH and LH is related to normal control of reproduction.
Introduction
Kisspeptin (Kp), also known as metastin, was first discovered and noted for its role in the inhibition of cancer cell metastasis. However, it was subsequently discovered to also stimulate gonadotropin releasing hormone (GnRH) release and subsequent secretion of luteinizing hormone (LH). Neuroendocrine control of reproduction in ruminants culminates in the secretion of LH from the anterior pituitary. There has been much interest in Kp's role in regulation of reproduction in a number of species, including ruminants. More recently, Kp has been implicated in the integration of metabolic control of reproduction. This review will briefly summarize Kp action on reproduction in ruminants and focus on recent discoveries of Kp action beyond reproductive control in ruminants.
Kisspeptin action on luteinizing hormone and gonadotropin
Kisspeptin clearly stimulates a release of GnRH and subsequent secretion of LH. Intravenous administration of Kp-10 to ovariectomized (OVX) ewes stimulated increased circulating concentrations of LH and increased GnRH concentrations in the cerebrospinal fluid [1]. Central administration of Kp-10 increased GnRH concentrations in the cerebrospinal fluid and increased LH concentrations in the plasma of sheep [2]. Additionally, Kp-10 increased circulating concentrations of LH in prepubertal male and female Japanese Black calves [3]. Kisspeptin-10 also stimulated increased circulating concentrations of LH in Holstein cows and ovariectomized Jersey cows, and interestingly the sensitivity of LH to exogenous Kp-10 stimulation seems to be enhanced with lactation [4,5]. Central administration of the Kp antagonist, peptide 234, to ewes reduced LH pulse amplitude to the point of precluding determination of pulse frequency and reduced mean LH concentrations [6]. Central administration of p-271, another Kp receptor (Kiss1R) antagonist, also inhibited pulsatile LH concentrations in ovariectomized ewes [7]. Kisspeptin expression is regulated by steroids, as the number of Kp positive cells in the arcuate nucleus (ARC) is increased following OVX compared to intact ewes, the opposite being found for the preoptic area (POA) Kp neurons [8]. Furthermore, the number of Kp positive cells in the ARC is reduced in ovariectomized ewes by treatment with estrogen or progesterone [8]. Additionally, the majority of Kp positive cells in the ARC also coexpressed the progesterone receptor [8]. Interestingly, single nucleotide polymorphisms in the Kiss1 gene were associated with increased litter size in goats [9].
Kisspeptin and the luteinizing hormone surge
Kisspeptin appears to have a role in generation of the LH surge to stimulate ovulation. Constant IV infusion of Kp-10 for 8 h beginning 30 h after progesterone withdrawal stimulated an earlier LH surge and an earlier increase in circulating concentrations of progesterone than in ewes treated with vehicle [1]. Additionally, IV infusion of Kp-10 also stimulates a surge of LH in anestrous ewes [10]. Blockage of Kp action with the Kiss1R antagonist, p-271, attenuated an estradiol-induced LH surge [8].
The action of Kp on LH in sheep appears to be via an effect on GnRH release from the hypothalamus and not direct action of Kp on pituitary gonadotropes. Cultured ovine pituitary cells can respond to Kp treatment with increased release of LH, but hypothalamo-pituitary-disconnected ewes do not respond to Kp-10 treatment with increased circulating concentrations of LH, nor do hypophysial portal concentrations of Kp correspond with LH pulsatility and the LH surge [11]. Furthermore, expression of Kp (both number of Kp positive cells and level of expression per cell) was increased in the caudal ARC during the late follicular phase of ewes [12]. Smith et al. [13] also observed an increase in the number of Kiss1 mRNA positive cells in the middle and caudal ARC as well as the POA during the late follicular phase. Additionally, the percentage of Kp positive cells expressing Fos was increased by positive estradiol feedback in the middle and caudal ARC [13]. Kisspeptin neurons in the POA showed high levels of Fos activation at the time of the LH surge [14]. The proportion of Kp neurons showing Fos activation was positively correlated with the percentage of GnRH neurons expressing Fos activation [14]. However, very few ARC Kp neurons showed Fos activation around the time of the LH surge [15]. Thus, Kp is clearly involved in generation of the GnRH, and consequently the LH, surge, although it is not clear whether Kp neurons in the ARC or the POA are more important in generating the LH surge.
Kisspeptin action in seasonally anestrous animals
The Kp response to steroid and nonsteroid cues is altered during the nonbreeding season, resulting in seasonal anestrus in sheep. In ewes, IV infusion of Kp-10 stimulated ovulation during the nonbreeding season [1]. Kisspeptin expression is higher in the breeding season than the nonbreeding season in sheep, and there is an increase in Kp contacts with GnRH neurons during the breeding season [12]. The number of Kp positive cells in the ARC is also higher during the breeding season than the nonbreeding season of ovariectomized ewes [8]. Additionally, the number of Kp positive neurons in the ARC and the percentage of neurons positive for Kp in both the ARC and preoptic area increased in OVX, estradiol-treated ewes following the transition to short day exposure [16]. The GnRH and LH response to Kp-10 is greater in seasonally anestrous ewes than in luteal ewes during the breeding season [17]. Additionally, expression of the Kp receptor, Kiss1r, mRNA in GnRH neurons is higher during the nonbreeding season than during the breeding season in ewes, and Kiss1r mRNA expression in GnRH neurons is reduced by Kp-10 treatment of ewes during the nonbreeding season but not by steroid treatment of OVX ewes [17]. These results suggest that alterations in Kp production or release are involved in the seasonal regulation of reproduction in sheep.
Introduction of a ram to seasonally anestrous ewes isolated from rams for at least one month will induce pulsatile LH secretion and can cause ovulation outside of the breeding season [18]. Kisspeptin has a role in this response. Indeed, De Bond et al. [19] utilized the Kp antagonist (p-271) to demonstrate that Kp action is necessary for seasonally anestrous ewes to respond to ram introduction with increased LH. Furthermore, ram introduction also increased the number of Kp positive neurons and Kiss1 mRNA expression in cells in the rostral ARC [19]. Interestingly, Tac2 mRNA, encoding neurokinin B, was readily detectable in cells with Kiss1 mRNA, but was decreased in rostral ARC cells following ram introduction [19].
Kisspeptin action in prepubertal animals
Puberty in the ruminant is initiated by a decrease in negative feedback inhibition of LH by estradiol. Kisspeptin and neurokinin B, which is co-expressed with Kp in a number of hypothalamic cells, may play a role in initiation of puberty in ruminants. Injection of senktide, a neurokinin B agonist, in prepubertal ewes was immediately followed by an LH pulse [20]. However, the number of neurokinin B cells in the ARC was not different in prepubertal and postpubertal ewes, although there was an increase following ovariectomy [20]. The number of Kp positive cells in the ARC increased following puberty in intact ewes and was also increased following ovariectomy in prepubertal ewes [20]. Additionally, Redmond et al. [21] observed that the number of Kiss1 positive cells in the POA increased from 25 to 35 weeks of age in ewe lambs, although the number of Kiss1 positive cells did not appear to be related to increases in LH pulse frequency. However, Redmond et al. [21] did report an increase in Kiss1 positive cells in the middle ARC that was associated with increased frequency of LH pulses. Furthermore, the percentage of GnRH neurons in the POA with Kp positive close contacts was higher after puberty in ewes and increased following ovariectomy in prepubertal ewes [20]. Hourly intravenous treatment with Kp is capable of inducing an LH surge followed by elevated concentrations of progesterone, suggesting ovulation, in prepubertal ewe lambs [22]. Thus, Kp has a definite role in initiation of puberty in the sheep, but only in a defined window of receptivity.
Non-Reproductive roles for Kisspeptin
Pituitary actions of kisspeptin
There is circumstantial evidence of possible actions of Kp at the level of the pituitary in sheep, possibly actions not oriented towards regulation of LH. For example, in sheep the median eminence contains neuron terminals with specific staining for Kp [23]. Kisspeptin is also released into the portal vein circulation of sheep [11]; however, the timing of Kp pulses was concurrent with or followed the LH pulses. This may suggest that Kp is regulating some pituitary function other than stimulating LH via GnRH action. Kotani et al. [24] described the presence of Kiss1R in the human pituitary, and subsequent studies in sheep found Kp receptor mRNA in gonadotropes, lactotropes and somatotropes [8]. The same year, Kadokawa et al. [25] cultured pituitary cells isolated from bovine pituitaries in the presence of variable doses of Kp-10. The pituitary cells responded to the direct application of Kp-10 within two hours with a dose-dependent increase in secretion of growth hormone (GH) and prolactin. In addition, Gutiérrez-Pascual et al. [26] found similar results using pituitary cells isolated from rats following treatment with Kp-10 for 30 min or 4 h. The rat pituitary cells also demonstrated an increase in intracellular Ca²⁺ that occurred in 10% of cultured cells. Moreover, in nonhuman primate pituitary cell culture, Kp-10 stimulated both GH and LH release through extracellular Ca²⁺ entry, phospholipase C, protein kinase C, MAPK, and additional intracellular Ca²⁺ mobilization [27]. These data collectively suggest an intrapituitary signaling system for modification of GH release and perhaps prolactin.
Peripheral administration of Kp-10 to prepubertal heifers increased circulating concentrations of GH [28], suggesting a physiological role in regulating GH release from the pituitary. Moreover, Sébert et al. [10] found that intravenous infusion of Kp-10 to seasonally acyclic ewes produced a surge of LH that was accompanied by a smaller but significant rise in plasma concentrations of both FSH and GH. However, others have reported that Kp administration did not alter circulating concentrations of GH in prepubertal gilts [29], that Kp-10 administration did not alter circulating concentrations of GH in goats [30], lactating dairy cows [5], or prepubertal male or female cattle [3], and that Kp-10 administration did not alter plasma GH, prolactin, TSH or cortisol in rhesus monkeys [31]. In OVX cows, intravenous administration of Kp-10 stimulated increased circulating concentrations of LH regardless of supplemental steroid treatment, but only increased circulating concentrations of GH in cows treated with pharmacologic doses of estradiol cypionate and/or progesterone [32]. Even in the presence of the steroids, the GH pulse was only 2-3 fold higher than baseline plasma concentrations, with a duration of only 10 min. Thus, actions of peripheral Kp to regulate GH release are inconsistent and species dependent. In view of the clear actions of Kp to release GH in vitro, as well as to release GH after intravenous injection under some conditions, it can be concluded that Kp has the potential to modify GH through actions at the pituitary, but it is probably not a major direct regulator of GH release.
Hypothalamic actions of Kisspeptin
Infusion of Kp-10 via the lateral ventricle of sheep resulted in an increase in ARC neuropeptide Y (NPY) gene expression and a decrease in proopiomelanocortin gene expression [33]. Since leptin receptors were found in Kp-positive neurons in sheep, this finding suggested a mechanism to coordinate reproduction and metabolic control in sheep. Interestingly, a study in NPY-GFP transgenic mice found that Kiss1r mRNA was produced in NPY neurons [34]. In a hypothalamic cell line, Kim et al. [34] also found that Kp-10 directly regulates NPY neuron synthesis and release, providing more credence to a possible link between Kp and metabolic control. Since NPY is known to control GH release in ruminants [35], the results of these experiments with NPY suggest that Kp could regulate GH release through the hypothalamus. In an experiment to test this hypothesis, Kp-10 at doses of 100, 200 or 1,000 pmol/kg BW was administered intravenously to ovariectomized sheep. There was the expected increase in plasma LH concentrations, but even at the highest dose of Kp-10 there were no effects on circulating concentrations of GH. Central administration of Kp-10 at 100 or 200 pmol/kg BW increased GH release as well as producing the expected release of LH [4]. An example of this effect of Kp to release GH is shown in Fig. 1 (unpublished data). Thus Kp, working via hypothalamic mechanisms, provides a strong stimulus to increase GH in addition to its well-known effects on LH release in sheep.
At present, there have been no published findings regarding the mechanism or physiological significance of Kp in regulating GH. The link between Kp and NPY [33] and the evidence that NPY releases GH in ruminants [35] suggest a possible mechanism for GH regulation by Kp, though to date there have been no direct studies of this proposed pathway. In terms of physiological relevance, the finding that both LH and GH are needed for normal luteal growth in sheep suggests that GH release may be linked to reproductive success [36]. Therefore, it is tempting to speculate that in addition to direct mechanisms regulating GnRH, and hence LH, Kp may also regulate GH through hypothalamic mechanisms, and that this GH regulation may be a critical component of normal reproduction in sheep. However, this hypothesis has not yet been directly examined.
Conclusions
These studies confirm a link between Kp and metabolic regulatory systems. Since adequate nutrition and GH are both needed for reproductive success, the connection of Kp to metabolic and GH systems may be a critical component of normal reproduction and should be examined in more depth.
"year": 2015,
"sha1": "17aab09ea7b19568c54ddf8047e741c46e888fbd",
"oa_license": "CCBY",
"oa_url": "https://jasbsci.biomedcentral.com/track/pdf/10.1186/s40104-015-0021-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2bc110a33833f584f38448b4531df2907a1c368",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Pilot Scale Elimination of Phenolic Cellulase Inhibitors From Alkali Pretreated Wheat Straw for Improved Cellulolytic Digestibility to Fermentable Saccharides
Depleting supplies of fossil fuel, regular price hikes of gasoline and environmental deterioration have necessitated the search for economical and eco-benign alternatives to gasoline, such as lignocellulosic biomass. However, pre-treatment of such biomass results in the formation of phenolic compounds which later hinder the depolymerization of biomass by cellulases and seriously affect the cost effectiveness of the process. Dephenolification of biomass hydrolysate is well documented in the literature; however, elimination of phenolic compounds from pretreated solid biomass is not well studied. The present study aimed to optimize the dephenolification of wheat straw at pilot scale, using various alkalis, i.e., Ca(OH)2 and NH3; acids, i.e., H2O2, H2SO4, and H3PO4; and combinations of NH3+H3PO4 and H3PO4+H2O2, to increase enzymatic saccharification yield. Among all the pretreatment strategies used, the maximum reduction in phenolic content was observed as 66 mg Gallic Acid Equivalent/gram Dry Weight (GAE/g DW), compared to the control at 210 mg GAE/g DW, using a 5% (v/v) combination of NH3+H3PO4. Upon subsequent saccharification of the dephenolified substrate, the hydrolysis yield was recorded as 46.88%. Optimized conditions, i.e., a 5%+1% concentration of NH3+H3PO4 for 30 min at 110°C, reduced total phenolic content (TPC) to 48 mg GAE/g DW. This reduction in phenolic content helped cellulases act more efficiently on the substrate, and a saccharification yield of 55.06% was obtained. The findings will allow less cellulase to be used to obtain an increased yield of saccharides from hydrolyzed wheat straw, thus making the process economical. Furthermore, the pilot scale investigations of the current study will help in upgrading this novel process to industrial scale.
INTRODUCTION
In the present era, the world is facing an inevitable energy crisis due to the depletion of fossil fuel deposits. With an ever-increasing population, the search for alternative energy resources has become a priority for scientists around the world (Ramos et al., 2019). Recent studies indicate that the most promising alternative to non-renewable energy is the use of biofuels. For this purpose, lignocellulosic biomass, which is mainly an agricultural waste, is mostly preferred (Novakovic et al., 2020). Lignin, cellulose, and hemicellulose are the main components of lignocellulosic biomass (Hasegawa et al., 2013). Among several agricultural wastes, wheat straw is considered one of the most promising and abundant agricultural residues in the world (Ondrejovič et al., 2020). According to Celignis Analytical, located in Ireland, the average yield of wheat straw is 1.3-1.4 kg/kg of wheat grain. Being a low-cost agricultural by-product with a high cellulosic proportion (30-50%), wheat straw is a most suitable substrate for the production of bioethanol (Qiu et al., 2018).
The use of wheat straw for bioethanol production involves four basic steps, i.e., pretreatment, enzymatic saccharification, fermentation, and downstream processing of the product. Pretreatment of the feedstock is the most difficult step in the production of bioethanol from lignocellulosic biomass (Lynd et al., 2008). The complex arrangement of cellulose and hemicellulose, together with the presence of lignin, hinders the access of enzymes to act on them (Tareen et al., 2020). A variety of methods for pretreatment have been reported, including biological, chemical, mechanical, and thermochemical processes (Xiong et al., 2019).
Despite its primary importance in the process of biofuel formation, the pre-treatment step has certain disadvantages, as it may result in the formation of inhibitory compounds (Ahmed et al., 2019). The main inhibitors produced during pretreatment are aliphatic acids such as formic acid, acetic acid, and levulinic acid; furan derivatives, including 5-hydroxymethylfurfural (HMF) and furfural; and various phenolic compounds, e.g., phenol, p-hydroxybenzoic acid, and vanillin (Qi et al., 2014). These components are toxic or inhibitory to cellulases and fermenting organisms. Therefore, they must be removed or neutralized before the process of saccharification (Jönsson et al., 2013).
Various biological, physical, and chemical methods have been used for detoxification of lignocellulosic hydrolysate (Horváth et al., 2005). The most commonly employed chemical detoxification methods include acidic and alkaline treatments (Chandel et al., 2011). Among alkalis, sodium hydroxide, aqueous ammonia, and calcium hydroxide are commonly used. Dilute acids, including phosphoric acid and sulfuric acid, are the most widely used acids for detoxification of the phenolic content produced during lysis of lignocellulosic biomass (Mansour et al., 2016). However, removal of phenolic compounds from pretreated lignocellulosic biomass before enzymatic hydrolysis is rarely reported (Nawaz et al., 2017).
Recently, we demonstrated that the removal of these phenolic compounds can significantly increase the saccharification rate (Haq et al., 2018). However, application of this process at commercial level requires studies at pilot scale, which provide a better understanding of the process. Therefore, we have evaluated the challenges encountered during the upscaling of dephenolification and subsequent saccharification of pre-treated wheat straw. In the present study, we have selected a pilot scale detoxification process for maximum saccharification under optimized conditions.
Chemicals
All chemicals used in the present study were of analytical grade and were purchased from authentic suppliers (Sigma and Merck Ltd.).
Biomass
Pre-treatment of wheat straw and estimation of lignocellulosic content were carried out according to our previous report (Haq et al., 2018). For pretreatment, 2.5% sodium hydroxide (NaOH) was used for 10 min at a steaming temperature of 200°C in a specialized boiler. The mesh size of the biomass used was 2 mm. Cellulose and hemicellulose content of the biomass was estimated after Huang et al. (2010), and lignin content was calculated according to TAPPI standards (TAPPI Standard T236cm-85, 1993). The lignocellulosic content and total phenolic content (TPC) of raw, pretreated and detoxified biomass are presented in Table 1.
Removal of Phenolic Compounds
Pre-treated substrate (1 kg) was treated with 5 L of 5% alkalis, i.e., Ca(OH)2 and NH3; acids, i.e., H2O2, H2SO4, and H3PO4; or combinations (sequential addition) of NH3+H3PO4 and H3PO4+H2O2, at different temperatures, i.e., 80, 90, 100, 110, 120, 130, 140, and 150°C, for incubation periods of 5, 10, 15, 20, 25, 30, 35, and 40 min. A locally manufactured double-jacketed, stainless steel vessel with automated temperature, pH, and agitation controls, having a working volume capacity of 20 L, was used for this purpose. The traditional one-factor-at-a-time approach for optimization was followed. Afterward, the substrates were rinsed with distilled water thrice. For this purpose, 5 L of water was added to the biomass and stirred at 50 rpm for 10 min, after which the water was removed. The same process was repeated twice for thorough washing. The substrates were then allowed to dry at room temperature before estimating TPC.
Detection of Phenolic Content in Biomass
The Folin-Ciocalteu assay (Folin and Ciocalteu, 1927) was used for the detection and estimation of TPC in wheat straw samples before and after the detoxification of biomass. The assay was performed by adding a 10 mg sample to a capped test tube with 9 ml distilled water and 1 ml Folin-Ciocalteu reagent. The contents were mixed vigorously and incubated for 5 min before the addition of 10 ml of 7% Na2CO3 solution. Subsequently, the test tubes were kept at room temperature for 90 min. The optical density of the sample was measured at 550 nm (John et al., 2014). The TPC present in the biomass was estimated using the standard curve of gallic acid (Nawaz et al., 2017).
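To make the conversion from absorbance to TPC concrete, the sketch below fits a linear gallic acid standard curve and converts a sample reading to mg GAE/g DW. It is a minimal Python illustration, not the study's code: the standard-curve values and the sample absorbance are invented, while the 10 mg sample mass and the 20 ml assay volume (9 ml water + 1 ml reagent + 10 ml Na2CO3) follow the protocol above.

```python
import numpy as np

# Invented gallic acid standard curve: A550 readings for known concentrations
std_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])   # mg GAE/ml
std_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.90])  # absorbance at 550 nm

# Fit a linear standard curve: A = slope * C + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def tpc_mg_gae_per_g_dw(sample_abs, sample_mg=10.0, assay_volume_ml=20.0):
    """Convert a sample's A550 reading to mg GAE per g dry weight."""
    conc = (sample_abs - intercept) / slope        # mg GAE/ml in the tube
    total_gae_mg = conc * assay_volume_ml          # mg GAE in the whole assay
    return total_gae_mg / (sample_mg / 1000.0)     # normalize to 1 g DW

# An invented absorbance of ~0.24 yields roughly 210 mg GAE/g DW,
# i.e., the order of magnitude reported for the control sample
print(round(tpc_mg_gae_per_g_dw(0.24)))
```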
Enzymatic Saccharification
Enzymatic saccharification of detoxified biomass samples was initially carried out at laboratory scale using shake flasks. Different parameters for the process were optimized by changing one factor at a time and keeping all the other factors constant. These parameters included temperature (50, 60, 70, 80, and 90°C), pH (5, 6, 7, 8, and 9), reaction time (0.5, 1, 2, 3, 4, and 5 h), and inoculum size (0.5, 1, 1.5, 2, 2.5, and 3%). After optimizing the conditions at laboratory scale, detoxified wheat straw samples were analyzed for saccharification potential in a locally fabricated double-jacketed stainless steel vessel (pilot scale). This vessel was equipped with a heater, compressor, agitator, and digital controls for continuous monitoring of temperature and agitation. Substrate (2.5% w/v) was added to 20 L of phosphate buffer (pH 7) along with 500 U of each of the three cellulase enzymes. The reaction was carried out at 80°C for a period of 3 h. Samples were drawn at regular intervals of 30 min to estimate total reducing sugar and TPC. Saccharification (%) was calculated using the following formula:

Saccharification (%) = (R.S. × V × F1 × 100) / (substrate weight, mg × F2)

Where: R.S. = sugar concentration in hydrolysate estimated as total reducing sugar (mg/ml); V = total volume of the reaction mixture (ml); F1 = factor used for the conversion of monosaccharide to polysaccharide due to water uptake during hydrolysis (0.9 for hexoses); F2 = factor for carbohydrate content of substrate (total carbohydrate, mg/total substrate, mg).
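A direct translation of this formula into code can be useful when processing time-course samples. The sketch below is a hypothetical Python helper, not the study's analysis code: the example loading (2.5% w/v in 20 L, i.e., 500 g substrate) follows the protocol above, but the reducing-sugar value and the carbohydrate fraction F2 = 0.6 are assumed purely for illustration.

```python
def saccharification_pct(rs_mg_per_ml, volume_ml, substrate_mg, f2, f1=0.9):
    """Saccharification (%) = (R.S. x V x F1 x 100) / (substrate x F2).

    rs_mg_per_ml: reducing sugars in hydrolysate (mg/ml)
    volume_ml:    total reaction volume (ml)
    substrate_mg: dry weight of substrate loaded (mg)
    f1: monosaccharide-to-polysaccharide conversion (0.9 for hexoses)
    f2: carbohydrate fraction of substrate (carbohydrate mg / substrate mg)
    """
    return (rs_mg_per_ml * volume_ml * f1 * 100.0) / (substrate_mg * f2)

# 2.5% w/v loading in 20 L = 500 g substrate; assumed R.S. and F2 values
print(saccharification_pct(rs_mg_per_ml=8.0, volume_ml=20000,
                           substrate_mg=500000, f2=0.6))  # -> 48.0%
```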
Scanning Electron Microscopy of Biomasses
Samples of wheat straw displaying the best saccharification results after detoxification were sent to the Centre for Advanced Studies in Physics (CASP), Government College University Lahore, Pakistan, for Scanning Electron Microscopy (SEM). Sample preparation was not required as the samples were polymeric in nature.
Statistical Analysis
The statistical software SPSS 16.0 was used for the statistical analysis of the results. Significant differences among the replicates are presented as Duncan's multiple range test probability (p) values (Duncan, 1995).
Choice of Detoxification Method
Removal of phenolics from pretreated wheat straw using different alkalis [Ca(OH)2 and NH3], acids (H2O2, H2SO4, and H3PO4), a combination of acids (H2O2+H3PO4), and a combination of acid and alkali (H3PO4+NH3) was assessed. The optimization technique used was traditional one-factor-at-a-time. Subsequently, the effect of phenolics removal from pretreated wheat straw on its enzymatic hydrolysis was also chronicled. TPC in the control sample was found to be 210 ± 0.02 mg Gallic Acid Equivalent/gram of Dry Weight (GAE/g DW). Among the three acids used, phosphoric acid-treated biomass showed the greatest reduction in phenolic content, i.e., to 102 ± 0.12 mg GAE/g DW, as shown in Figure 1A. On the other hand, among the alkalis, the best results were obtained with aqueous ammonia, which reduced TPC to 105 ± 0.04 mg GAE/g DW (Figure 1B). The combination of acids resulted in the removal of phenolic compounds to 78 mg GAE/g DW, as evident from Figure 1C. Among all treatment strategies, the combination of acid and alkali showed the most efficient removal of phenolic content, with a reduced phenolic content of 66 ± 0.02 mg GAE/g DW. All the pretreated samples of wheat straw processed for removal of phenolic content showed better saccharification (Figure 1). Hence, phosphoric acid used in combination with aqueous ammonia was considered the best method for detoxification of biomass. The removal of phenolic compounds is based on the fact that treatment of biomass with phosphoric acid removes the lignin present in the biomass, thus essentially removing phenolic content. Moreover, the cellulosic portion of the biomass remains unaffected by treatment with acid; rather, the treatment provides more surface area for the catalytic action of cellulases (Kim et al., 2011).
Treatment of biomass with ammonia may result in an increase in the internal surface area of cellulose, which in turn decreases the degree of polymerization. The decrease in crystallinity or polymerization mediates the disruption of the lignin present in the biomass, which leads to the removal of the phenolic inhibitors contained in the lignin portion along with the lignin itself (Chen et al., 2020). Moreover, the high cost of other alkalis hinders their application for detoxification, while ammonia is volatile in nature and therefore recyclable: it can be regenerated and reused in the process of dephenolification (Kim and Holtzapple, 2006). By employing phosphoric acid together with aqueous ammonia, the best result in terms of phenolic compound removal was obtained, because efficient fractionation of lignin and hemicellulose to remove phenolic derivatives is hard to achieve using dilute acid or alkali alone. Khobragade et al. (2004) found that removal of phenolic and other inhibitors increases under acidic conditions, but that detoxification increases further when alkaline conditions are provided along with the acidic treatment. Wang et al. (2014) and Qiu et al. (2017) also used ammonia in combination with phosphoric acid for detoxification of wheat straw and rice straw, respectively, and obtained results comparable to our findings.
Effect of Incubation Time on Dephenolification
Detoxification of pretreated wheat straw using a combination of H3PO4 and NH3 was analyzed for variable time periods, i.e., 5, 10, 15, 20, 25, 30, 35, and 40 min, to determine the optimum time period for maximum removal of TPC. In parallel, saccharification potential was analyzed for substrate samples treated for the various time periods to reduce phenolic content. An increase in incubation time resulted in a gradual decrease of phenolic content. Maximum reduction in TPC was observed after 30 min of incubation (66 ± 0.06 mg GAE/g DW). Increasing the incubation time beyond 30 min did not further decrease the TPC, as shown in Figure 2. A similar pattern was noticed in the saccharification studies. The maximum saccharification value of 46.88% (p < 0.05) was recorded using the substrate incubated for 30 min for dephenolification (Figure 2). Optimization of incubation time for the process is very important, as a shorter duration may not remove a sufficient amount of the phenolic compounds, while a longer time period could possibly lead to the formation of new phenolic compounds due to the fragmentation of soluble aromatic oligomers (Nilvebrant et al., 2003).
Significance of Temperature on Dephenolification
The significance of temperature was assessed using a temperature range of 80, 90, 100, 110, 120, 130, 140, and 150°C for the reduction in phenolic content of pretreated wheat straw obtained with the combination of aqueous ammonia and phosphoric acid. In addition, a saccharification study of the substrates dephenolified at different temperatures was carried out. TPC started to decrease with the increase of temperature from 80°C (130 mg GAE/g DW), and the maximum reduction in TPC (50 mg GAE/g DW) was observed at 110°C. However, further increase in temperature resulted in a gradual increase in TPC, which was maximal (103 mg GAE/g DW) at 150°C.
FIGURE 2 | Effect of incubation time on the removal of phenolics from pretreated wheat straw using a combination of aqueous ammonia and phosphoric acid. Y-error bars represent the standard deviation (SD ≤ ± 0.05) between three replicates.
An analogous tendency was observed for enzymatic hydrolysis of the dephenolified substrates: the maximum saccharification, i.e., 48.02%, was determined for the substrate with the least phenolic content. Substrate samples with increased phenolic content showed decreased saccharification (Figure 3). This may be due to the fact that, with the increase in temperature, the hemicellulosic content is converted into furans, which can interfere with saccharification. The increase in TPC with time at higher temperature could be due to the breakdown of ester bonds in lignin-carbohydrate complexes, producing more phenolics (Canilha et al., 2008). Nawaz et al. (2017) reported maximum removal of phenolic compounds from pretreated sugarcane bagasse at 75°C after 120 min of incubation using Ca(OH)2, with a 2.21-fold increase in saccharification. The conditions they optimized are milder than those used in the current study, possibly due to the difference in substrate used. However, no ample data are available on the removal of phenolic compounds from solid biomass for comparison.
Optimal Concentration of Alkali and Acid Combination
Different concentrations of ammonia [5, 10, 15, 20, 25, and 30% (v/v)] and phosphoric acid [0.5, 1, 1.5, 2, 2.5, 3, and 3.5% (v/v)] were investigated, keeping one constant while varying the other, to find the best concentrations for the combination in order to achieve maximum removal of phenolics from pretreated wheat straw. All samples treated for phenolics removal were also subjected to saccharification. Among all concentrations used, the maximum reduction in phenolic content was observed at 5% (v/v) ammonia, i.e., 53.12 mg GAE/g DW, and 1% (v/v) phosphoric acid, i.e., 48 mg GAE/g DW, as evident from Figures 4A,B, respectively. Biomass treated at these concentrations for TPC reduction also showed maximum saccharification, with values of 48.37 and 55.06%, respectively. Low doses of acid and base maximally removed the phenolic content. No previous reports are available on the utilization of ammonia and phosphoric acid in combination for removal of phenolic compounds from wheat straw.

FIGURE 3 | Studies of temperature influence on reduction in phenolic content from pretreated wheat straw. Y-error bars represent the standard deviation (SD ≤ ± 0.05) between three replicates.
Scanning Electron Microscopy of Detoxified Wheat Straw
Scanning electron micrographs of pretreated wheat straw samples, before and after detoxification under optimum conditions, are shown in Figure 5. Before detoxification, the sample is in compact form, showing a crystalline structure. On the other hand, the detoxified sample shows loosely bound fibers and less crystallinity. The higher saccharification yield could be attributed to these structural changes in the biomass making cellulose more accessible. However, as evident from the previous study of Rajput et al. (2018), heat treatment above 180°C increases the digestibility of wheat straw due to degradation of hemicellulose. Hemicellulose degradation due to heating biomass at higher temperature (170°C) has also been reported by Santucci et al. (2015). Since the detoxification in the present study was carried out at 110°C, well below 180°C, it can be inferred that the higher saccharification reported here is mainly the result of decreased TPC rather than of simultaneous removal of hemicellulose under the detoxification conditions. Although changes in cellulose crystallinity and porosity cannot be ruled out, lignin and phenolic content can be considered the main discriminating parameters here. However, the exact nature of the structural changes needs to be studied in detail.
CONCLUSION
It is concluded from the current study that pilot scale removal of TPC from solid biomass has a significant effect on the improved action of cellulases on pretreated wheat straw. Furthermore, treatment strategies and optimization parameters were found to have an appreciable impact on the removal of phenolic compounds. As pilot scale studies related to the current research were not previously available, there is a strong need to explore different strategies and different biomasses for the removal of phenolics, and to assess them as potential substrates for proficient enzymatic conversion to fermentable saccharides at industrial scale.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
"year": 2021,
"sha1": "a78a6b8eb412b8c806a146e4659ee5f88c23e903",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2021.658159/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a78a6b8eb412b8c806a146e4659ee5f88c23e903",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Impact of the New Families Home Visiting Program on Depressive Symptoms Among Norwegian Fathers Postpartum: A Nonrandomized Controlled Study
Becoming a parent is a vulnerable life transition and may affect parents' mental health. Depressive symptoms may occur in fathers, as well as mothers, during pregnancy and the postpartum period. The health service is expected to have a family perspective, aiming to support both parents. Despite this goal, mothers traditionally receive more support than fathers. Home visiting programs may provide enhanced guidance for new fathers and increased mental health support. The aim of this study was therefore to assess possible differences in levels of depressive symptoms between fathers receiving the New Families home visiting program and those receiving standard care from the Norwegian Child Health Service. A prospective nonrandomized controlled study with a parallel group design was performed. The Edinburgh Postnatal Depression Scale (EPDS) was used to measure depressive symptoms in fathers (N = 197) at 28 weeks of their partners' pregnancy (T1), at 6 weeks (T2), and at 3 months postpartum (T3), in the intervention and the control group. The results indicate a prevalence of depressive symptoms (EPDS score ≥ 10) in Norwegian fathers of 3.1% at T1, 3.9% at T2, and 2.2% at T3 for the full sample. No significant EPDS score differences were found between the intervention and the control group at six weeks and three months postpartum. This suggests that the intervention had no clear impact on depressive symptoms during this time period.
Introduction
The transition to parenthood involves major life changes in roles, demands and expectations (Baldwin et al., 2019; Shorey & Chan, 2020). During this period, fathers may experience challenges and vulnerability related to their mental health (Darwin et al., 2017; Philpott et al., 2020). Depressive symptoms in fathers that occur during pregnancy and the first year after birth are often referred to as paternal postpartum depression (PPD) (Cameron et al., 2016; Paulson & Bazemore, 2010). Meta-analyses of international studies indicate a prevalence of PPD of 8.4% to 10.4%, with the highest rates identified 3 to 6 months after birth (Cameron et al., 2016; Paulson & Bazemore, 2010). PPD not only affects fathers' health, but has been identified as having a negative impact on parenting behavior, family relationships, and the health of mother and child, including the child's risk of distress (O'Brien et al., 2017; Ramchandani et al., 2008; Ramchandani et al., 2011). Studies have reported that fathers often require support in the transition to parenthood (Hrybanova et al., 2019). While the Child Health Service (CHS) has traditionally been aimed at supporting the mother and child, today's service providers are expected to have a family perspective that also includes the father (Norwegian Directorate of Health, 2017).
Background
The Norwegian CHS is a part of the Norwegian health service at a municipal level. It is a voluntary, universal, free-of-charge service, used by 98% of all families (Norwegian Directorate of Health, 2017; Statistics Norway, 2021). The service is focused on health promotion and primary prevention, aimed at pregnancy, families with new-borns and children up to 5 years of age. The CHS offers a standard Child Health Program (CHP), which includes one home visit after birth and 13 subsequent clinical consultations at specific time points. They cover monitoring of the child's growth and development, vaccinations, and parental guidance and support. The Public Health Nurse (PHN) plays a key role in the service (Norwegian Directorate of Health, 2017).
A supplement to the standard CHP offered by the CHS is the New Families home visiting program (NF), a universal intervention initiated and developed by the City of Oslo between 2013 and 2016 (Leirbakk et al., 2018, 2019). In addition to the standard program, it offers parents home visits from a PHN from late pregnancy until the child is 2 years old.
The PHNs in the Norwegian CHS routinely meet almost all expectant and new parents (Statistics Norway, 2021), making the service an arena for universal health care for couples in a vulnerable life transition. Relative to selective strategies, universal strategies are perceived as less stigmatizing and are more likely to be used (Fisher et al., 2018). Home visits are considered a good method to develop a relationship between the PHN and the parents in a safe environment, being more tailored to the parents' need for support (Bäckström et al., 2021; Solberg et al., 2022). The mandate of both the Norwegian CHS and NF is to provide parental support for both ongoing and new changes and challenges, including support for parents' mental health (Norwegian Directorate of Health, 2017; Oslo Municipality, 2018). This is recommended by the WHO standard of new-born care, which specifies that parents should receive emotional support that is sensitive to their needs and aims to strengthen their capability (World Health Organization, 2022). Traditionally, new mothers receive more support from health care professionals than fathers (Goldstein et al., 2020; Hrybanova et al., 2019). Men often hesitate to seek psychological help (Goldstein et al., 2020). If fathers are invited to home visits and visits at the CHS, this may increase their opportunity to receive professional support (Solberg et al., 2022; Wells & Aronson, 2021). Research has identified that fathers appreciate home visits as a contribution to more tailored services and as an arena for mental health support (Solberg et al., 2022). In addition, although fathers have reported fewer depressive symptoms when they receive professional support, both pre- and postnatally (Wells & Aronson, 2021), there is limited knowledge about the effects of parental home-based support programs in the perinatal period (Minckas et al., 2023; O'Brien et al., 2017).
Controlled studies have reported that home visits can be an effective way to prevent, detect, and support postpartum depression in women (Milani et al., 2017), but there is a lack of knowledge about the impact of home visits and increased professional support on PPD. The aim of this study was to assess possible differences in the level of depressive symptoms between fathers receiving NF and those receiving standard care at the CHS. The main outcome was depressive symptoms assessed with the Edinburgh Postnatal Depression Scale (EPDS) and measured during their partners' pregnancy, at 6 weeks, and at 3 months postpartum.
Design
This is a prospective nonrandomized controlled study with a parallel group design. The study is part of, and used data from, the New Families research project, which evaluates the experiences and impact of the NF home visiting program. The NF research project is registered on ClinicalTrials.gov (ClinicalTrials.gov identifier: NCT04162626).
We report in accordance with the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais et al., 2004).
Ethical Considerations
The study was conducted in accordance with the Helsinki Declaration (World Medical Association, 2013) and approved by the Regional Committees for Medical and Health Research Ethics in Norway (reference no: 2018/1378), and the Norwegian Agency for Shared Services in Education and Research (SIKT) (project number: 735207).
The participants received written and oral information about the study and its purpose. They were informed that all participation was voluntary and that they could withdraw at any time without consequences. The data were anonymized, treated confidentially, and stored in accordance with the Norwegian Personal Data and Health Research Acts using the Service for Sensitive Data platform (University of Oslo, 2016). Due to General Data Protection Regulations, we were not allowed to collect any information about the study's nonparticipants (The Personal Data Act, 2018). The authors have no known conflict of interest to disclose.
Participants and Recruitment
The NF study participants were recruited from five of the 15 city districts in Oslo, the largest city in Norway. The districts were selected by the municipality of Oslo with the aim of ensuring the demographic and socio-economic representativeness of the population. Three districts were defined as intervention districts and two as control districts. Randomization of districts into intervention and control areas was not possible because the NF program had already been implemented in several districts and services when the research project started (described below).
In the intervention districts, the NF program was fully implemented, and the program had been running for at least two years. Each intervention district was carefully matched with a control district with the aim of similarity in terms of population composition, sociocultural factors, birth statistics, immigrant proportion and work participation. The three intervention districts received the NF home visiting program in addition to the standard CHS program, while the control districts received the standard CHS program only (standard care). The participants' allocation was determined based on their place of residence.
Pregnant first-time mothers, and the fathers of their expected child, residing in the municipality of Oslo were invited to participate in the NF study. Expectant mothers were recruited by midwives or clinical secretaries when they attended antenatal consultations at the CHS. The women who expressed verbal interest in participating in the project were sent written information and a consent form by mail to return. The fathers were invited to participate through the mothers' involvement in the study and were sent a similar, but separate, written invitation and consent form. The inclusion criteria were being the father of the child of a first-time pregnant woman and living in one of the five districts in Oslo. Recruitment took place from October 2018 to December 2019.
Based on a power analysis for depressive symptoms as measured with the EPDS, with a power of 0.80, an alpha level of .05 and an effect size of .5, we calculated a need for 64 participants in each of the two groups. To reduce the risk of losing power due to withdrawals, we set a goal of recruiting as many fathers as possible within the recruitment timeframe.
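For readers who want to reproduce this calculation, the sketch below performs the same two-sample power analysis in Python with statsmodels; the inputs are exactly those stated above, and rounding gives the 64 participants per group. This is an illustrative reconstruction, not the authors' own computation.

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample comparison: effect size d = 0.5, alpha = .05 (two-sided),
# power = 0.80 -> required sample size per group
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(round(n_per_group))  # ~64 participants in each group
```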
Study Procedures
Description of the Intervention: The New Families Home Visiting Program
The NF home visiting program is a universal intervention initiated and developed by the City of Oslo between 2013 and 2016 (Leirbakk et al., 2018, 2019). It is offered as a supplement to the standard CHP and aims to strengthen the CHS's health promotion and prevention work. It targets couples expecting their first child together, couples having their first child together in Norway, and vulnerable multiparous parents. The program is universal and offers parents repeated home visits by a PHN from the 28th week of pregnancy until the child is two years old, while being individually tailored in terms of content and scope, with the number of visits determined by each family's needs. Both parents are encouraged to be present during the home visits, which last approximately 1 to 1.5 hours. The parents' mental health is one of the recommended topics for the visits. Notably, NF is based on a salutogenic approach (Oslo Municipality, 2018) and aims to strengthen parenting skills and mobilize resources, focusing on change, motivation, and coping. It aims to establish a supportive relationship between the PHN and the family early, for the best possible guidance. Through a primary nurse model, both the home visits and the standard CHP are provided by the same PHN. The parents are provided with the direct mobile number of their PHN, giving them an opportunity for regular and direct contact (Oslo Municipality, 2018).
The PHNs conducting home visits received training organized by the CHS. The training workshops included descriptions of the NF program, theory such as salutogenic theory and the concept of self-efficacy, and guidance in conversational techniques such as ''Motivational interviewing'' and ''Empathic communication.'' The PHNs' training included mentored home visits and self-reflection. The theoretical foundation and implementation of the NF program are presented in a separate program manual (Oslo Municipality, 2018).
Description of the Control Group: Standard Child Health Program
The standard CHP is offered as a universal program, including one home visit after birth and 13 clinical consultations at specific time points from the child's birth up to 5 years of age, provided by a PHN. The content is regulated by national guidelines and covers monitoring of the child's growth and development, vaccinations, and parental guidance and support (Norwegian Directorate of Health, 2017). Up to 3 months postpartum, the CHP provides one home visit 7 to 10 days postpartum and three clinical consultations. The timeline of the NF home visiting program in the context of the standard CHP is presented in Figure 1.
Data Collection
Data were collected using self-reported questionnaires sent to the participating families by mail at the 28th week of pregnancy (T1), at 6 weeks (T2) and at 3 months (T3) postpartum. All questionnaires were sent to the families to be returned in pre-paid envelopes, with advance notification of distribution via Short Message Service (SMS). We sent up to two reminders by SMS if questionnaires were not returned.
All study information and questionnaires were available in nine languages in addition to Norwegian (English, Arabic, Lithuanian, Pashto, Polish, Somali, Tamil, Turkish, and Urdu). The available languages were chosen based on the ethnic composition of the five city districts. Data were collected between October 2018 and June 2020.
Outcome Measure
The main outcome of the present evaluation was depressive symptoms in fathers as measured with the EPDS (Cox et al., 1987). The EPDS is a questionnaire originally developed to screen for depressive symptoms among postpartum women, but it has been validated for assessing fathers (Berg et al., 2022). The questionnaire consists of 10 self-report items addressing feelings experienced over the previous 7 days, using a 4-point Likert-type scale (0-3) with an overall score between 0 and 30. A higher score indicates more severe symptoms of depression (Cox et al., 1987).
Validation studies recommend different cut-off scores, depending on sample size, time of completion, cultural differences and gender (Berg et al., 2022). The originally suggested cut-off for women is 10 (Cox et al., 1987; Eberhard-Gran et al., 2001). Recommended cut-off scores for men differ, ranging from 5 to 13, with 10 as the most frequent (Berg et al., 2022).
The EPDS has good internal consistency, with reported Cronbach's alpha coefficients ranging from .73 to .88 in pregnancy, and from .60 to .88 at 0 to 6 months postpartum (Berg et al., 2022). In this study, the Cronbach's alpha coefficient was .74 at T1, .80 at T2, and .75 at T3. Thus, we consider the instrument to have good internal consistency in our sample.
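As a concrete illustration of how such scores and reliability coefficients are computed, the sketch below sums the 10 EPDS items and calculates Cronbach's alpha. This is a generic Python sketch, not the study's analysis code, and the toy data are invented.

```python
import numpy as np

def epds_total(item_scores):
    """Sum the 10 EPDS items (each scored 0-3) into a 0-30 total score."""
    assert len(item_scores) == 10 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Invented toy data: 5 respondents x 10 items, each item scored 0-3
rng = np.random.default_rng(42)
data = rng.integers(0, 4, size=(5, 10))
print([epds_total(row) for row in data])
print(round(cronbach_alpha(data), 2))
```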
Statistical Analysis
Continuous variables are presented as median (minimum-maximum) and categorical variables as counts and percentages. To compare groups, we performed the Mann-Whitney U test for continuous variables and the chi-square test for categorical variables. As the participants were not randomized, we compared the groups at baseline (T1) with regard to age, nationality, number of children, marital status, education, family income, employment, and previous and present mental illness.
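These group comparisons map directly onto standard SciPy routines. The sketch below is illustrative Python, not the study's SPSS/Stata code, and the baseline data are invented stand-ins for the real variables.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency, fisher_exact

rng = np.random.default_rng(0)
age_intervention = rng.normal(32, 4, 120)  # invented ages, intervention group
age_control = rng.normal(32, 4, 77)        # invented ages, control group

# Continuous baseline variable: Mann-Whitney U test
u_stat, p_mw = mannwhitneyu(age_intervention, age_control,
                            alternative='two-sided')

# Categorical baseline variable as a 2x2 table (rows: group, cols: category)
table = np.array([[100, 20],
                  [60, 17]])               # invented counts
chi2, p_chi, dof, expected = chi2_contingency(table, correction=True)
# Fisher's exact test when any expected cell count is small
odds_ratio, p_fisher = fisher_exact(table)
print(p_mw, p_chi, p_fisher)
```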
For the analysis of between-group differences in the outcome measure, a general linear model (GLM) for repeated measures was fitted. The model consisted of two covariates: measurement time and group (intervention/control). In addition, we constructed the interaction term Time × Group (intervention/control). This was entered in the model as a covariate to assess whether changes in EPDS mean score over time were different in the intervention and control group. There were no statistically significant differences between the intervention and the control group in demographic-related variables at baseline; therefore, no other covariates were included in the GLM.
Between-group differences were computed as the difference in change between the intervention group and the control group, assessed from baseline to 6 weeks and from baseline to 3 months. All estimates are presented with 95% confidence intervals (95% CI).
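The repeated-measures GLM was fitted in SPSS/Stata; a rough open-source equivalent is a mixed model with a random intercept per father and a Time × Group interaction. The sketch below is a hypothetical Python approximation (the file name and column names are invented), not the study's actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per father per time point, with columns
# 'epds' (score), 'time' (T1/T2/T3), 'group' (intervention/control), 'id'
df = pd.read_csv("epds_long.csv")  # hypothetical file

# Random-intercept mixed model approximating a repeated-measures GLM:
# fixed effects for time, group, and the Time x Group interaction
model = smf.mixedlm("epds ~ C(time) * C(group)", data=df, groups=df["id"])
result = model.fit()
print(result.summary())  # interaction terms estimate between-group change
```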
All analyses were conducted according to the intention-to-treat (ITT) principle; thus, all participants in both groups were included irrespective of the number of home visits they had received through the NF intervention.
To assess possible selection bias, we tested for baseline differences between responders and nonresponders at T2 and T3 regarding age, education, and family income, performing the Mann-Whitney U test or chi-square test, as appropriate.
Descriptive analyses were performed to determine the prevalence of EPDS scores ≥ 10, reported as counts and percentages. The results are presented as point estimates and raw numbers. Internal consistency and reliability were assessed by calculating Cronbach's alpha for the total EPDS scale. All statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS), release 28, and Stata ver. 17. A statistician (MCS) was consulted in planning the study; she provided supervision concerning the choice of statistical methods and participated in data analysis. All tests were two-sided. A p-value < .05 was considered statistically significant.
The EPDS consists of 10 self-reported items. The proportions of responders with missing values at T1 were 0.9% for both the intervention group and the control group, at T2 0.2% for both groups, and at T3 0.1% for the intervention group and 0.2% for the control group. Based on the small number of missing values, these were handled as missing data and not imputed.
A sensitivity analysis was conducted for the outcome measure, based on validation studies (Berg et al., 2022), to assess whether different cut-off scores (11 and 12) would affect the estimate of the prevalence of depressive symptoms in fathers postpartum.
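Operationally, this sensitivity analysis amounts to recomputing a simple prevalence at each cut-off. A minimal Python sketch with invented scores:

```python
import numpy as np

def prevalence(scores, cutoff):
    """Share of respondents scoring at or above an EPDS cut-off."""
    return float(np.mean(np.asarray(scores) >= cutoff))

scores_t2 = [2, 5, 11, 3, 0, 12, 4, 1, 7, 9]  # invented EPDS totals
for cutoff in (10, 11, 12):
    print(cutoff, f"{prevalence(scores_t2, cutoff):.1%}")
```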
Sample Description
Of the 405 fathers invited to participate in the study, 197 were included (T1). The number of participants at each time point, the distribution between the intervention and the control group, and the known reasons for drop-out are presented in the flow diagram in Figure 2. The proportions of participants who dropped out from T1 to T3 did not differ between the groups, with 30.6% drop-out in the intervention group and 32.9% in the control group.
The participants completed the questionnaires in three of the available languages: 187 (94.9%) fathers answered in Norwegian, 8 (4.1%) in English, and 2 (1.0%) in Arabic.
The intervention and control group did not differ statistically with respect to demographic data reported at baseline (age, nationality, number of children, marital status, education, family income, and employment), as described in Table 1. When comparing responders and nonresponders at T2 and T3 with baseline, our data did not reveal any statistically significant differences between responders and dropouts regarding demographic variables.
Depressive Symptoms
Very few fathers in our study sample reported depressive symptoms. In the intervention group, depressive symptoms (≥ 10 on the EPDS) were reported by 3 (2.5%) of the fathers at T1, 4 (4.1%) at T2, and 2 (2.4%) at T3. These rates were similar in the control group, with 3 (3.9%) at T1, 2 (3.6%) at T2, and 1 (2.0%) at T3. In total, the numbers indicate a low prevalence at all time points in both groups, with the lowest rate 3 months postpartum, as described in Table 2.
In the sensitivity analyses (see Table 3), cut-off values were set at 11 and 12 (Berg et al., 2022). This resulted in rates of depressive symptoms of 1.5% and 1.0%, respectively, for the total sample at T1; 3.9% and 3.0% at T2; and 1.3% and 1.5% at T3. Thus, our sensitivity analyses showed that, irrespective of the cut-offs applied, there were few fathers with depressive symptoms, a slight increase from T1 to T2 and a decrease toward T3.
In total, 10 fathers (5.1%) reported having a history of previous mental illness, and 6 (3.0%) stated that they presently suffered from mental illness at T1. Among the fathers with a previous mental illness, one scored ≥ 10 on the EPDS at all time points, while of the six fathers who reported a present mental illness, five scored ≥ 10 at T2, and all at T3. This indicates an association between present mental illness and depressive symptoms.
Between-Group Differences
There were no statistically significant between-group differences in the change of estimated marginal means of depressive symptoms (EPDS score ≥ 10) between the intervention group receiving NF and the control group receiving standard care, from baseline to T2 (six weeks) or T3 (three months postpartum). The difference between the groups was 0.31 (-0.55, 1.17) at T1, 0.22 (-0.73, 1.16) at T2, and 0.08 (0.08, 1.15) at T3. The differences are described in Table 4.
Program Use
Almost one-third of the included fathers in the intervention group reported not receiving any additional home visits. In the families who received home visits, 66 fathers (77.6%) reported being present at the visit.
Impact of the Program
The aim of this study was to assess the impact of the NF home visiting program on self-reported depressive symptoms among fathers postpartum. The data indicate no statistically significant differences in EPDS score between the intervention and the control group at 6 weeks and 3 months after birth. The NF program is universal, with a comprehensive offer of parental support. However, the program does not specifically include depression support, but rather focuses on general support, including mental health. This may have influenced the program's lack of impact on PPD. Universal approaches supporting fathers have a clear value in general (Bäckström et al., 2021), but with regard to paternal perinatal mental health challenges, there seems to be a need for more targeted interventions (Rominov et al., 2016).
It has been suggested that when evaluating interventions targeting fathers' mental health, a wider range of mental health outcomes should be considered (Rominov et al., 2016). In a systematic review, only one of five interventions aiming to reduce and prevent PPD showed a significant reduction in PPD (Rominov et al., 2016). Even if studies indicate an association between home visit interventions and better mental health in fathers (Burcher et al., 2021), research highlights the need for more comprehensive and validated instruments when evaluating pre- and postnatal care, focusing more on measuring parents' experiences and satisfaction (Minckas et al., 2023). Further research with a qualitative design, such as in-depth interviews, may supplement intervention studies by giving more insight into fathers' experiences with, in this case, the NF program and its impact on depressive symptoms.
The measurement times in this study were 6 weeks and 3 months after birth, a period when the prevalence of PPD in fathers is generally low (Cameron et al., 2016). Most fathers develop PPD between 3 and 6 months and up to 1 year after birth (Cameron et al., 2016), which means that the NF intervention might have shown an impact if measurements had been taken later in the first year postpartum.
The NF intervention is not standardized but tailored to the parents' needs. Only 70.2% of the fathers reported that the family received NF home visits during pregnancy, and 77.6% of them were present at this visit. These findings indicate that the intervention did not reach all the parents or fathers. Studies suggest that fathers are not always informed about or invited to participate in home visits (Høgmo et al., 2021) and, therefore, might need a specific invitation to attend home visits and the CHS (Wells et al., 2023). Early service use might influence their further engagement with the service and thus increase their opportunity to receive support (Finlayson et al., 2023). It is known that PHNs are more aware of and supportive about mental health issues in new mothers compared with fathers (Wells et al., 2017), and fathers often feel side-lined and unimportant (Leahy-Warren et al., 2023). Our results raise the question of the degree to which the fathers were invited to the NF home visits, and whether the program was adequately supportive of fathers' mental health. The results should be interpreted with regard to possible biases in the sample. The majority of participants had a high level of education, were in stable relationships, had good financial status and were actively employed. Many support programs target ''high-risk parents.'' The NF program has a universal approach, which includes apparently well-functioning parents (Leirbakk et al., 2018). That we did not find a statistically significant difference in EPDS score between the intervention and the control group might therefore indicate that the standard CHS program is satisfactory for our study population as regards mental health support for depressive symptoms in the early postpartum period. However, NF enables PHNs to identify support needs among all parents, reduce stigma around visits, and deliver services at a level proportionate to the parents' actual needs (Solberg et al., 2022).
Depressive Symptoms
The primary outcome in this study was depressive symptoms as measured with the EPDS among fathers during pregnancy, and at six weeks and three months postpartum. Relative to many other studies, we found a lower rate of PPD in our sample. In meta-analyses, the rate of depression in fathers from pregnancy until one year postpartum has been reported to be 8.4% to 10.4% (Cameron et al., 2016; Paulson & Bazemore, 2010). Notably, the meta-estimates include studies from countries on five continents, with the largest number of studies conducted in the United States of America and Asia. North American studies report higher levels of depression in general, while European studies report the lowest. The studies used different measurement tools (Cameron et al., 2016). Rao et al.'s (2020) meta-analysis lends support to the low prevalence of PPD in European fathers, 5.52%, based on EPDS measures.
Compared with studies on PPD conducted in a European context, not targeting ''high-risk parents'' and measuring PPD within the same timeframes as we did, using self-reported EPDS with cut-offs of 10 to 12, the rates of PPD are similar to our findings. In the total study sample, we found depressive symptoms during pregnancy in 3.1% of the fathers, while a comparable study with a similar sample from the United Kingdom identified a PPD prevalence of 3.9% (Ramchandani et al., 2008). In our study, PPD increased to 3.9% six weeks postpartum and decreased to 2.2% three months after birth. Similar studies have reported prevalence rates of 3.6% to 5.0% six to eight weeks after birth (Madsen & Juhl, 2007; Ramchandani et al., 2008), and 5.1% to 6.3% three months postpartum (Escribá-Agüir & Artazcoz, 2011; Massoudi et al., 2013).
The prevalence of depressive symptoms in the general Norwegian male population between the ages of 20 and 49 is 10.2% to 11.6% (Krokstad et al., 2022). In light of the findings in our study, this may suggest that pregnancy and the first months postpartum are, overall, a favorable time period for men with respect to depressive symptoms.
As previously mentioned, this study covers the period from pregnancy until 3 months postpartum. Studies report an increased rate of depressive symptoms in fathers 3 to 6 months after birth (Cameron et al., 2016; Paulson & Bazemore, 2010), and symptoms may even develop after the first year postpartum (Goodman, 2004; Kiviruusu et al., 2020). Lower rates have been observed during the second trimester and 0 to 3 months postpartum (Cameron et al., 2016). A systematic review and meta-analysis of studies validating the EPDS in fathers identified the lowest range of EPDS scores in fathers six to seven weeks postpartum (Shafian et al., 2022), while Rao et al. (2020) found the lowest prevalence one to three months after birth, which supports our findings.
It is important to consider the results in light of the screening tool used in the study. The EPDS was developed for screening postnatal depressive symptoms in women (Cox et al., 1987). It has been validated for men postpartum (Edmondson et al., 2010; Loscalzo et al., 2015; Massoudi et al., 2013), but not in a Norwegian sample. Furthermore, no study has validated the EPDS for men in the antenatal period (Berg et al., 2022). The EPDS may be more sensitive in assessing female symptoms of depression. Men may be less expressive about their feelings and therefore score lower on a tool like the EPDS (Shafian et al., 2022). Hence, further research should examine the content validity of the EPDS used in fathers postpartum, to demonstrate the degree to which the EPDS provides an adequate reflection of depressive symptoms in fathers.
Although the EPDS is the most frequently used instrument for detecting depressive symptoms in men postpartum (Berg et al., 2022), there is a lack of agreement regarding the cut-off for the EPDS used in men, which ranges from 5 to 11 across studies (Berg et al., 2022). In this study, we chose a cut-off of 10. Our sensitivity analyses showed that the results were robust with higher cut-off scores.
Limitations
This study has limitations. We conducted a nonrandomized trial, and selection bias is possible and difficult to assess with limited information about nonparticipants due to General Data Protection Regulations (The Personal Data Act, 2018). The participants were a highly educated, homogeneous group, and thus the representativeness of our sample of the general Norwegian father population may be limited. Furthermore, depressive symptoms were self-reported, and while the EPDS is a valid instrument for measuring PPD, it has not been validated for men in a Norwegian population of pre- and postpartum fathers. The study is limited to pregnancy and the first 3 months postpartum. The outcome measures may have been different if measurements had been taken later in the postpartum period. Studies have reported on several factors associated with PPD. Maternal perinatal depression has been found to be the strongest predictor of PPD (Goodman, 2004). This study does not include depression scores for maternal depression. It might have strengthened the study if the prevalence of depression in mothers had been compared with the depression scores of fathers. Finally, the study may have been under-powered, with too few participants in the control group at T2 and T3.
Conclusion
This study evaluated the impact of the NF home visiting program on self-reported depressive symptoms among fathers postpartum by examining differences in EPDS scores between fathers receiving the NF home visiting program and fathers receiving the standard program from the CHS. We found no statistically significant differences in PPD between the intervention and the control group at 6 weeks and 3 months postpartum, indicating that the intervention had no clear impact on depressive symptoms during this time period. Relative to the general male population, the prevalence of depressive symptoms in Norwegian fathers during pregnancy and the first 3 months postpartum appears low. This indicates that their partners' pregnancy and the first months postpartum may be a favorable period for men with regard to depressive symptoms.
In general, health promotion interventions are often complex and difficult to evaluate. We recommend further research regarding mental health in fathers, with a wider range of outcome measures and supplementing quantitative studies with qualitative ones.
Relevance for Clinical Practice
Paternal postpartum depression not only affects fathers' health but also risks adversely affecting their parenting behavior, familial relationships, and the overall health of mother and child. Mental health in mothers and fathers is strongly correlated, and this calls for mental health support being given to both parents during pregnancy and the postpartum period. To meet this call, a family perspective that includes fathers is needed in the services provided by midwives and public health nurses. Research on the impact of home visits and increased professional support targeting fathers' mental health is scarce. Therefore, this study seeks to contribute new knowledge on the prevalence of depressive symptoms in fathers pre- and postnatally and on the impact of increased support through a home visiting program in the primary health care service.
Figure 1. Timeline of the New Families Home Visiting Program in the Context of the Standard Child Health Program
Figure 2. Flow diagram of fathers at T1, T2 and T3, with reasons for dropouts
Table 1. Sample characteristics, self-reported measures at baseline. We report p-values as continuity correction for 2×2 tables and as Fisher's exact test if counts in any cells were less than 5. *Comparison:
Table 2. Prevalence of depressive symptoms measured by EPDS ≥ 10 | 2024-08-01T06:16:32.590Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "e3c799f16efdcbb8e8b396a02a090f66cb0d9d3d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/15579883241255188",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "54bb809eb68d9451067306bb918d221bf7511c6b",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238811644 | pes2o/s2orc | v3-fos-license | Familial secondhand smoke: Tobacco use and adoption of smoke-free home and car rules among US parents
INTRODUCTION Secondhand smoke (SHS) causes disease and death. We assessed US parents' tobacco use and their attitudes towards smoking within private environments where children might be present. METHODS A national sample of 44626 parents collectively reporting 83782 children aged 0–17 years was analyzed from the 2014–2015 Tobacco Use Supplement to the Current Population Survey. Unit of analyses was both parents and children. Among parents, we assessed tobacco use prevalence, smoke-free home rule adoption, and opposition to smoking in cars. Logistic regression was used to measure associations between smoke-free home rule adoption and parents' cigarette smoking initiation (never smokers); quit attempts (current smokers); and sustained cessation (former smokers). Population counts of children living with a smoking parent were extrapolated from sampling weights. RESULTS Of parents, 14.3% currently smoked combustible tobacco; approximately 9.7 million children lived with a smoking parent. While most parents opposed smoking in cars with children (95.0%), significantly fewer were opposed when a child was not specified as being present in the car (75.4%). Overall, 91.3% of parents had smoke-free home rules; this percentage was highest among parents of infants/toddlers (92.3%) and lowest among parents of teens aged 14–17 years (89.0%; p<0.05). Presence of smoke-free home rules was associated negatively with smoking initiation among never smokers (AOR=0.21) and positively with quit attempts among current smokers (AOR=1.59) and sustained quitting among former smokers (AOR=1.67) (all p<0.05). CONCLUSIONS Parental smoking can expose children to SHS. Pediatricians can educate parents on the dangers of smoking around children, and the benefits of quitting.
INTRODUCTION
Private settings such as residential areas and vehicles are major sources of secondhand smoke (SHS) exposure, being areas where children live, sleep, play, and transit1. Overall, 29.0% of US parents in single-parent families, and 14.8% of those in two-parent families, reported being current cigarette smokers in 20131. Aggregated estimates from 27 states with available data in the 2010 Pregnancy Risk Assessment Monitoring System showed that 23.2% of US pregnant women smoked in the three months before pregnancy, 10.7% smoked during pregnancy, and 15.9% smoked after delivery2. The basis for concern about SHS exposure among children lies in the fact that SHS causes sudden infant death syndrome, lower respiratory illness, respiratory symptoms, impaired lung function, and middle ear disease3. SHS exposure has also been shown to be associated with poor academic performance among children4,5. Addressing SHS exposure in the home and car, therefore, aligns closely with public health priorities on early brain development and overall adolescent development, with further implications for public health programs and policies.
Besides these health-related considerations, tobacco use by parents can have a negative effect on social norms by renormalizing tobacco use. Youths whose parents smoke are more likely to smoke themselves and to start smoking at an earlier age6. Furthermore, children who are exposed to secondhand aerosol from electronic cigarettes have increased curiosity and susceptibility to using both e-cigarettes and traditional cigarettes compared to those unexposed7. Proximal social contacts such as family and friends are frequently cited among youth for various tobacco-related behaviors, such as reason for starting certain tobacco product use, and usual source of accessing tobacco products8. Presence of tobacco products around the house can provide visual and olfactory cues that could potentially lead to relapse among those attempting to quit9. The impact of smoking in the home is not limited to those within the immediate confines of that household, given that SHS can infiltrate into neighboring living units and other shared areas; an estimated one-third of US multi-unit housing residents experience SHS infiltration in their units each year10.
Progress has been made in recent years in protecting youth from SHS exposure in different public and private settings where children are typically present. Nine US states, Guam, the Northern Mariana Islands and Puerto Rico have prohibited smoking in cars with a child passenger11. In 2016, the U.S. Department of Housing and Urban Development finalized a rule that prohibits smoking in public housing, protecting the nation's 2 million public housing residents, including 0.76 million children, from SHS in their homes12. As of November 2017, 27 states, the District of Columbia, and over 900 local municipalities had implemented comprehensive smoke-free laws13; nearly 60% of US residents are currently covered by comprehensive smoke-free laws at the state or local level. However, millions of Americans are still exposed to SHS, and disparities in exposure exist across subpopulations8.
There is paucity of recent, nationally representative data on tobacco use among US parents, as well as parental adoption of smoke-free rules in the home and car, and the benefits of such voluntary policies on smoking-related outcomes. In addition, while previous research has estimated the number of US school-going children in grades 6-12 exposed to SHS within the home14,15, there is paucity of data on potential exposures among children of all ages between 0-17 years regardless of their school-enrollment status (i.e. both school-going and non-school-going children). To fill these gaps in knowledge, we analyzed nationally representative data from the 2014-2015 Tobacco Use Supplement to the Current Population Survey (TUS-CPS), using both parents and their children as units of analyses. The objectives of the present study are to: 1) describe the tobacco product use patterns among US parents, and their attitudes towards smoking within the home or a car where children might be present; 2) investigate the potential benefits of having smoke-free rules within private environments on parental smoking behaviors; and 3) estimate the number of US children who are exposed to SHS within private environments.
Data source
TUS-CPS is a survey of the civilian, non-institutionalized US adult population aged ≥18 years conducted as part of the U.S. Census Bureau's Current Population Survey. The sampling frame for the Current Population Survey is 119 million US households from the civilian non-institutionalized population. Households are randomly selected by the Bureau of the Census on the basis of mailing addresses to represent the nation as a whole, individual states, and other specified areas. One to three individuals were randomly selected for self-interview from each household according to the size of the household. The 2014-2015 TUS-CPS had a total of 163920 self-respondents, yielding an overall response rate of 54.2%. Interviews were conducted in July 2014, January 2015, and May 2015, four to six months apart. TUS-CPS provides estimates critical to understanding the burden of tobacco use in the US and the extent to which it is concentrated in particular subgroups. The large sample size of TUS-CPS particularly equips it to provide smaller subsample estimates (e.g. parents) with greater precision.
In this study, our target population was parents of children under 18 years of age related by birth, marriage, or adoption who reported the presence of their own children in their household. A total of 44626 parents reported a combined number of 83782 own children aged 0-17 years. For each type of family unit identified in the CPS, the count of children aged 0-17 years was limited to single (never married) children.
Measures
Home and car smoke-free rules and opinions among parents
Participants were asked: 'Which statement best describes the rules about smoking INSIDE YOUR HOME? (Note: "Home" is where you live. "Rules" include any unwritten rules and pertain to all people whether or not they reside in the home or are visitors, workmen, etc. Smoking includes cigars regular and hookah as well as cigarettes.)'. A response of 'No one is allowed to smoke anywhere INSIDE YOUR HOME' (vs 'Smoking is allowed in some places or at sometimes INSIDE YOUR HOME'; or 'Smoking is permitted anywhere INSIDE YOUR HOME') was classified as having a complete smoke-free home rule.
Attitudes towards smoking inside a car were assessed under two scenarios: 1) Without specifying the presence of a child passenger, 'Inside a car, when there are other people present, do you THINK that smoking SHOULD ... ?'; and 2) Specifically indicating that a child was present, 'IF children are present inside the car, do you think that smoking SHOULD ...?'. Response options to both questions were: 'Always be allowed', 'Be allowed under some conditions', and 'Never be allowed'. The last response was classified as completely opposing smoking in cars for the specific scenario assessed.
Tobacco use and sociodemographic characteristics among parents
Six tobacco product types were assessed in TUS-CPS: cigarettes; cigars/cigarillos/filtered little cigars; smokeless tobacco products; regular pipes; water pipes; and e-cigarettes. Current users were persons who reported ever use (≥100 cigarettes in lifetime, or ≥1 time in lifetime for all other products) and reported the use of the respective products 'every day' or 'some days' at the time of survey. Any tobacco product use was defined as use of any of the six assessed tobacco product types, and any combustible tobacco product use was defined as using any of cigarettes, cigars/cigarillos/filtered little cigars, regular pipes, or water pipes. We further classified respondents based on exclusive use patterns, as non-users of any tobacco product; users of only combustible tobacco products; users of only smokeless tobacco products; users of only e-cigarettes; and users of a combination of tobacco products.
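The exclusive-use categories above amount to a set of mutually exclusive rules over per-product current-use flags. A minimal sketch of such a recode is below; the product labels and input format are our assumptions, not the actual TUS-CPS variables.

    # Hedged sketch of the exclusive-use classification; product labels are
    # illustrative stand-ins for the coded TUS-CPS variables.
    COMBUSTIBLE = {"cigarettes", "cigars", "regular_pipes", "water_pipes"}

    def classify(current: set) -> str:
        """Map the set of currently used products to an exclusive-use category."""
        if not current:
            return "non-user"
        combustible = bool(current & COMBUSTIBLE)
        smokeless = "smokeless" in current
        ecig = "e-cigarettes" in current
        if combustible and not smokeless and not ecig:
            return "combustible only"
        if smokeless and not combustible and not ecig:
            return "smokeless only"
        if ecig and not combustible and not smokeless:
            return "e-cigarettes only"
        return "combination"

    print(classify({"cigarettes"}))                  # combustible only
    print(classify({"cigarettes", "e-cigarettes"}))  # combination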
Household characteristics reported by parents included number of own children; age of youngest/only child (0-2; 3-5; 6-13; or 14-17 years); and family structure (married, only one parent present; married, both parents present; widowed/divorced/separated; never married). Other sociodemographic characteristics included parents' race/ethnicity, educational attainment, U.S. Census region, metropolitan status, annual household income, veteran status (whether having served in the US military or not), and status of combustible tobacco use (never/former/current [some days or every day]).
Smoking-related outcomes
Smoking-related outcomes were assessed among current cigarette smokers (smoked ≥100 cigarettes in lifetime and smoke now), former cigarette smokers (smoked ≥100 cigarettes in lifetime but no longer smoke), and never cigarette smokers (smoked <100 cigarettes, or never, in lifetime). Among current cigarette smokers, we assessed past-year quit attempt, as well as intentions to quit smoking in the next 30 days and 6 months, respectively. Among former smokers, we assessed sustained quitting, defined as having stopped smoking for 6 months or longer. Among never smokers, we assessed past 5-year cigarette smoking initiation; numerator was persons who started smoking within the past 5 years; denominator was those who started smoking within the past 5 years as well as those who had never smoked cigarettes in their lifetime.
Analyses
The unit of analyses included both parents (primary unit) and children aged 0-17 years (secondary unit). Among parents, we computed tobacco use prevalence, and the percentage who reported adopting voluntary home smoke-free rules and opposing smoking in a car with or without children specified as being present. Within-group differences were assessed using Pearson's chi-squared tests. Logistic regression models were fitted to measure the relationship between adoption of smoke-free home rules and various smoking-related outcomes, adjusting for number of children, age of youngest/only child, family structure, and parental age, sex, race/ethnicity, non-cigarette tobacco product use, annual household income, and education level. To check potential correlations between covariates, we computed variance inflation factors (VIFs) for each of the independent variables in the models, and confirmed all VIFs were <10.
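The authors worked in R; as an illustrative reconstruction of this modeling step (Python/statsmodels on simulated data, with hypothetical variable names), one could fit the adjusted logistic model and screen covariates with variance inflation factors like this:

    # Hedged sketch: logistic regression of a quit attempt on smoke-free home
    # rules plus a few covariates, and a VIF screen. All data are simulated;
    # variable names are hypothetical, not the TUS-CPS codebook.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "quit_attempt": rng.integers(0, 2, n),     # 1 = past-year quit attempt
        "smokefree_home": rng.integers(0, 2, n),   # 1 = complete smoke-free rule
        "age": rng.normal(38, 8, n),
        "n_children": rng.integers(1, 4, n),
    })

    model = smf.logit("quit_attempt ~ smokefree_home + age + n_children",
                      data=df).fit(disp=0)
    print(np.exp(model.params["smokefree_home"]))  # adjusted odds ratio

    X = df[["smokefree_home", "age", "n_children"]].assign(const=1.0).to_numpy()
    print([variance_inflation_factor(X, i) for i in range(3)])  # expect all < 10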
With children aged 0-17 years as the unit of analysis, we estimated the total count of those who were potentially exposed to secondhand smoke by virtue of living with a parent that used tobacco products. Probability weights were used to extrapolate population counts of children in different types of households based on tobacco usage. To ensure that estimates generated were nationally representative, TUS-CPS self-response weights derived from the Census Bureau were applied. All analyses were performed with R v3.5.1, using the 'survey' statistical package. Statistical significance was set at p<0.05.
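The weighted extrapolation reduces to summing survey weights within categories; a toy sketch with invented records and weights (the real computation uses the TUS-CPS self-response weights):

    # Hedged sketch: national child counts as sums of survey weights within
    # parental-tobacco-use categories. Records and weights are invented.
    import pandas as pd

    children = pd.DataFrame({
        "parent_use": ["none", "combustible", "none", "e-cig", "combustible"],
        "weight": [120_000.0, 95_000.0, 150_000.0, 40_000.0, 80_000.0],
    })

    print(children.groupby("parent_use")["weight"].sum())  # estimated US counts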
RESULTS
Of all US adults who completed the 2014-2015 TUS-CPS, 28.3% were parents; total number of own children reported ranged from 1 to 12, nationwide. Overall, 41.6% of parents had one child, 37.8% had two children, and 20.6% had ≥3 children. The age of the only/youngest child was 0-2 years for 16.3% of parents, 3-5 years for 21.2% of parents, 6-13 years for 43.8% of parents, and 14-17 years for 18.7% of parents.
Parental tobacco use behavior
Prevalence of current any tobacco use among parents was 16.3%; prevalence of current any combustible tobacco smoking was 14.3% (Table 1). By specific tobacco product type, prevalence was as follows: cigarettes (13.1%); cigars (1.7%); hookahs/water pipes (0.3%); regular pipes (0.1%); smokeless tobacco products (1.7%); and e-cigarettes (2.4%).
Any current tobacco use prevalence among parents in two-parent families (12.9%) was about half that in all other family structures, including those married, but only one spouse present (23.8%); those never married (25.7%); and those widowed, divorced, or separated (25.1%). Parents with only one child had the highest prevalence of current any tobacco product use (17.5%) compared to those with two (15.4%) or ≥3 children (15.6%). By US census region, parental tobacco use prevalence was highest in the Midwest (21.5%) and lowest in the West (11.1%). Prevalence was almost two-fold higher among parents residing in non-metropolitan areas (27.7%) than those in metropolitan areas (14.6%), as well as among veterans (26.7%) compared to non-veterans (14.6%). By education level, prevalence of current any tobacco use was highest among those with a high school diploma (24.6%), and lowest among those with ≥ college (6.5%). Similarly, prevalence was highest among those with annual household income (US$) of <20K (25.9%) and lowest among those earning ≥100K (8.8%). By race/ethnicity, prevalence was highest among non-Hispanic Whites (20.3%) and lowest among Hispanics (8.7%). By exclusive use patterns, 83.6% (56.5 million) of parents used no form of tobacco at all; 12.3% (8.3 million) used combustible tobacco products exclusively; 1.3% (857,000) used smokeless tobacco products exclusively; 0.7% (0.5 million) used e-cigarettes exclusively; and 2.0% (1.36 million) used a combination of products.
Table 1. National estimates of current tobacco use by parents, by selected sociodemographic characteristics, Tobacco Use Supplement to the Current Population Survey, 2014-2015. Results in bold type indicate responses vary significantly (p<0.05) by the assessed sociodemographic characteristics. Estimates with Relative Standard Error >30% were suppressed (-). Current users were persons who reported ever use (≥100 cigarettes in lifetime, or ≥1 time in lifetime for all other products) and reported the use of the respective products 'every day' or 'some days' at the time of survey. Any tobacco product use was defined as use of any of the six assessed tobacco product types, and any combustible tobacco product use was defined as using any of cigarettes, cigars/cigarillos/filtered little cigars, regular pipes, or water pipes.
An estimated 56.50 million US children aged 0-17 years lived with a parent that did not use any tobacco product; 11.04 million lived with a parent that used any form of tobacco; while 9.7 million children lived with a parent that smoked a combustible tobacco product. The number of children by exclusive tobacco product usage by parents is shown in Table 2.
Parental rules and opinions on smoking in the home and in a car
Overall, 91.3% of parents reported adopting complete smoke-free home rules (Table 3). Prevalence was: highest among two-parent families living together (93.5%) and lowest among those never married (85.5%); highest among those with two children (92.6%) and lowest among those with one child (90.3%); and highest in households with an infant/toddler (92.3%) and lowest among households where the youngest/only child was 14-17 years old (89.0%). Relatively lower prevalence of adoption of home smoke-free rules was seen among the following parent groups: military veterans (89.8%); those earning <$20K per annum (82.2%); daily smokers (63.8%); and residents of non-metropolitan areas (85.9%). By exclusive use patterns, adoption of complete smoke-free home rules was as follows: non-tobacco users (95.1%); combustible-only users (69.7%); smokeless tobacco-only users (92.7%); e-cigarette-only users (89.3%); and users of a combination of products (69.1%).
Almost all parents (95.1%), including those who smoked daily (83.3%) or some days (93.5%), were opposed to smoking in a car with a child present. However, significantly fewer parents were opposed to smoking in a car when a child was not specified present (75.4%). Only 43.2% of parents who smoked daily, and 59.2% of those who smoked on some days, opposed smoking in a car when a child was not specified present. By exclusive use patterns, percentage
Association between smoke-free home policies and smoking-related outcomes
Smoke-free home rules had beneficial effects on smoking-related behaviors and attitudes among US parents. Among never cigarette smokers, odds of initiating cigarette smoking were lower among those with than without complete smoke-free home rules (AOR=0.21; 95% CI: 0.13-0.33) (Figure 1). Among those who were current cigarette smokers, odds were higher among those with than without complete smoke-free home rules of making a past-year quit attempt (AOR=1.59; 95% CI: 1.37-1.84), intending to quit in the next 30 days (AOR=1.72; 95% CI: 1.43-2.08), or intending to quit in the next 6 months (AOR=1.55; 95% CI: 1.34-1.80). Among former smokers, the odds of reporting sustained quitting were also higher among those with than without complete smoke-free home rules (AOR=1.67; 95% CI: 1.08-2.58).
Figure 1. Adjusted odds ratios of smoking-related outcomes among parents with voluntary smoke-free home rules compared to those without voluntary home rules, Tobacco Use Supplement to the Current Population Survey, 2014-2015. Outcomes shown: smoking initiation (never smokers); intention to quit smoking in the next 6 months, intention to quit smoking in the next 30 days, and quit attempt in the past 12 months (current smokers); and sustained cessation for ≥6 months (former smokers). Logistic regression models were adjusted for number of children, age of youngest/only child, family structure, and parental age, sex, race/ethnicity, non-cigarette tobacco product use, and education level.
DISCUSSION
One in four US parents used a tobacco product, and 11.04 million children lived with a parent that used any form of tobacco. The overwhelming majority of US parents reported having complete smoke-free home rules (91.3%) and were also opposed to smoking in a car with children present (95.1%). However, significantly fewer parents (75.4%) were opposed to smoking in a car when a child was not specified present. SHS can adhere to surfaces and possibly expose youth to tobacco smoke even after active smoking has ceased (i.e. thirdhand smoke)16,17. Despite the high support for smoke-free home/car rules by parents, a recent study indicates that 29.0% (7.5 million) of US middle and high school students were exposed to SHS within their homes or cars during 20161, which might be due to incomplete implementation of voluntary smoke-free rules. To protect the health of both parents and children, coordinated efforts are needed to target lifestyle-changing interventions in parents who smoke, in concert with comprehensive tobacco prevention and control efforts aimed at reducing the availability, accessibility, and affordability of tobacco products. The American Pediatric Association's Clinical Practice Policy recommends that pediatricians address parent/caregiver tobacco dependence as part of pediatric healthcare18. Child wellness visits could be used as opportunities to screen for SHS exposure in the home, and to provide parents with education on the dangers of SHS exposure and with assistance on quitting tobacco smoking. Parents of infants/toddlers were more likely to implement home and car rules than parents of teens aged 14-17 years. The adverse health effects of SHS cut across the entire span of childhood and adolescence; hence, protection of children of all ages is critical. Adoption of smoke-free home and car rules benefits not only children in the household, but also parents themselves19. Our results show that parents who adopted complete smoke-free home rules were less likely to initiate smoking if they had never started, as well as more likely to attempt to quit or to quit successfully. Adoption of comprehensive smoke-free policies at the state and local level might encourage voluntary adoption of car and home smoke-free rules20.
While prevalence of any combustible tobacco use observed among US parents in this study was slightly lower than that of US adults overall (14.3% vs 15.1% among the general population in 2015)21, patterns of tobacco use disparities among parents were consistent with those observed among all US adults21. Tobacco use prevalence in our study was disproportionately higher among parents with lower education levels or annual household incomes below $20K, veterans, those living in non-metropolitan areas, and those in single-parent families. Disparities were also seen in adoption of smoke-free home and car rules. For example, non-Hispanic Black parents had the lowest prevalence of adoption of home smoke-free rules. This agrees with previous research documenting higher prevalence of SHS among non-Hispanic Black youth22. Intensified implementation of population-based interventions, including increasing tobacco product prices, implementing and enforcing comprehensive smoke-free laws, and warning about the dangers of tobacco use, can help reduce tobacco use23. Several public health campaigns are targeted at parents to help them quit. For example, since 2012, CDC's Tips From Former Smokers campaign has educated the public about the dangers of SHS, including asthma attacks among children of smokers triggered by SHS exposure24.
Strengths and limitations
The strength of this study is the use of a nationally representative dataset of US parents to measure use of a diverse range of tobacco products. Furthermore, the ability to generate sub-national estimates with TUS-CPS provides data that can inform policy and practice tailored to specific populations. Nonetheless, this study has some limitations. First, parental smoking behavior might be under-reported due to social desirability bias. Furthermore, adoption of smoke-free home rules might be over-reported in TUS-CPS because of social desirability bias and potential differences in perceptions about 'smoke-free' rules. A previous qualitative study indicates that some people who smoke on an indoor balcony or out the window do not consider themselves as smoking inside25. Second, the cross-sectional nature of the survey does not allow us to establish causal relationships between the presence of smoke-free rules and smoking-related outcomes. Third, we could not assess the number of smokers in the household since parents reported only their own tobacco use status, and not that of their spouses, partners, grown-up children, or other household members. The number of children living with a smoker might be underestimated if the surveyed parent is a non-smoker but the other parent or household member(s) is. Similarly, the totals include never-married children living away from home in college dormitories, in which case the number of children living with the surveyed parent might be overestimated. Finally, these estimates do not include parents or their children who are in the military, civilians stationed overseas, or those in other institutionalized settings.
CONCLUSIONS
One in four US parents used a tobacco product, and 11.04 million children lived with a parent that used any form of tobacco. Overall, 91.3% of parents reported completely prohibiting smoking in their home; smoke-free home rules had a beneficial effect on parents in terms of preventing smoking initiation among never smokers, encouraging quit attempts among current smokers, and promoting sustained quitting among former smokers. Adoption of voluntary home and car rules, in concert with implementation of comprehensive tobacco-free laws and counseling to educate parents on the dangers of tobacco use around children, can reduce SHS exposure among US youth. | 2021-09-27T20:35:38.975Z | 2021-08-04T00:00:00.000 | {
"year": 2021,
"sha1": "f0aa7497417171f8aec5feb1737f6b5bca88531f",
"oa_license": "CCBYNC",
"oa_url": "http://www.populationmedicine.eu/pdf-140059-67551?filename=Familial%20secondhand.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8ac131d61a7557db5ad7a4f9de40a0c7cb7e5347",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5824206 | pes2o/s2orc | v3-fos-license | AN NLTOOLSET-BASED SYSTEM FOR MUC-6
For a little over two years, Sterling Software ITD has been developing the Automatic Templating System (ATS) [1] for automatically extracting entity and event data in the counter-narcotics domain from military messages. This system, part of the Counter Drug Intelligence System (CDIS), was built around the NLToolset [2], which was originally developed by GE and is now being developed and supported by Lockheed-Martin. Early results showed that the system was performing better than the human analysts in all aspects.
INTRODUCTION
For a little over two years, Sterling Software ITD has been developing the Automatic Templating System (ATS) [1] for automatically extracting entity and event data in the counter-narcotics domain from military messages. This system, part of the Counter Drug Intelligence System (CDIS), was built around the NLToolset [2], which was originally developed by GE and is now being developed and supported by Lockheed-Martin. Early results showed that the system was performing better than the human analysts in all aspects.
ATS was in its final delivery phase at the same time as our MUC-6 development. We elected to participate despite this conflict, but it did limit us to 4 person-weeks on MUC-6, forcing us to scale back from our original plans and only participate in the NE and TE tasks. The results were more than gratifying.
Our MUC-6 system (Figure 1) consists of 5 major components, applied in sequence: Lexical Analysis, Reduction, Extraction, Merging, and Postprocessing. It was designed to share as much of the processing sequence between tasks as possible. The processing for NE followed the identical sequence of steps (Lexical Analysis and Reduction) as was followed for the TE and ST tasks, then diverged to its own Postprocessing component to write the NE file. The Reduction steps taken to identify portions of text for marking in NE also filled the slots with the appropriate text for the TE task. The processing specific to ST diverged after all the phrase-level Reductions for NE and TE had been performed.
Figure 1. System architecture: Lexical Analysis, Reduction, Extraction, Merging, and Postprocessing, with separate Extraction paths producing TE and ST expectations
The heart of the system is a sophisticated pattern-matcher, which is used repeatedly in the course of processing to identify text for Reduction or Extraction. While the NLToolset also provides a parser, after some initial development we abandoned it on ATS, and did not use it on MUC-6.
Lexical Analysis
The Lexical Analysis component has several subcomponents. First, a tokenizer converts the input string for the entire article into a sequence of tokens. We modified the NLToolset-supplied tokenizer to try to prevent it from reordering or dropping text in ways that made it difficult to map back to the original text when writing the NE output file; we also modified it to preserve upper- vs lower-case information.
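The NLToolset itself is Lisp-based; as an illustrative reconstruction in Python (the regex and Token fields are our assumptions, not the system's code), an offset-preserving tokenizer might look like the following, where the stored offsets are what later allow markup to be written back over the exact source text:

    # Hedged sketch of an offset-preserving, case-preserving tokenizer.
    import re
    from dataclasses import dataclass

    @dataclass
    class Token:
        text: str
        start: int          # character offset into the original article
        end: int
        capitalized: bool   # preserved for name recognition

    def tokenize(text: str) -> list:
        return [Token(m.group(), m.start(), m.end(), m.group()[:1].isupper())
                for m in re.finditer(r"[A-Za-z0-9$%&.'-]+|\S", text)]

    for tok in tokenize("Coca-Cola said Mr. James met in Atlanta."):
        print(tok)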
The second step in Lexical Analysis is the actual lexicon lookup, which attaches information from the lexicon to the tokens. This includes morphological analysis, which was useful primarily for determining the root form of nationalities, such as "Canadian" -> CANADA. It also includes finding multi-token lexicon entries, such as "New York" and "Coca-Cola". Since we weren't using the parser, the part-of-speech obtained by a lexical lookup was of interest mainly if it was something like city-name or org-name; we did also try to prevent the inappropriate inclusion of verbs, prepositions, etc. in names, with mixed results.
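Multi-token entries like "New York" can be found with a greedy longest-match scan over the token sequence; a toy sketch follows (the lexicon contents and category labels are invented, not the NLToolset's lexicon):

    # Hedged sketch: greedy longest-match lookup of multi-token lexicon entries.
    LEXICON = {
        ("new", "york"): "city-name",
        ("coca", "-", "cola"): "org-name",
        ("atlanta",): "city-name",
    }
    MAX_LEN = max(len(k) for k in LEXICON)

    def lookup(tokens):
        out, i = [], 0
        while i < len(tokens):
            for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):  # longest first
                key = tuple(t.lower() for t in tokens[i:i + n])
                if key in LEXICON:
                    out.append((tokens[i:i + n], LEXICON[key]))
                    i += n
                    break
            else:                          # no entry at any length: plain token
                out.append(([tokens[i]], None))
                i += 1
        return out

    print(lookup(["He", "flew", "to", "New", "York"]))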
The third step in Lexical Analysis is the insertion of special marker tokens to indicate capitalized words. This was needed to be able to use that information in name recognition, since there did not appear to be any good way to get the pattern matcher to use the capitalization information contained in the original tokens.
Finally, Lexical Analysis splits the token sequence into sentences, including one each for headline, dateline, and date.
Reduction
The Reduction components each consist of one or more stages of applying the NLToolset's pattern matcher to phrases. Any phrase matched is "reduced", usually but not always to a single multi-token, or "mtoken". In each stage, all the patterns appropriate to that stage are tried on each sentence in turn.
The very first reduction stage is a "junk" reduction to delete tables so they are not seen by subsequent reduction stages.
Each subsequent reduction has two useful side-effects: 1) identifying which tokens form the heart of the reduction and therefore should be marked for the NE task, and 2) filling the slots of the mtokens with appropriate pieces of the text that was reduced, for the TE task. Note that these two purposes often conflict --for example, city, state references and date ranges were supposed to have pieces marked separately, but were reduced to single mtokens with one set of slot fillers. This called for some careful engineering.
The applications of reduction patterns are done in sequence rather than all at once for a number of reasons: First, some references to a person, organization, or location may not be recognizable by themselves, but other references to the same thing may be easier to spot. Therefore, every new thing reduced is added to a temporary lexicon, and another reduction step is applied to look for other references (with certain allowed variations) to those same things; for example, relatively easy-to-recognize references to "Mr. Jones" or "Robert L. James" would enable later recognition of the more problematic "Barnaby Jones" and "James". And when adding to this lexicon, appropriate variations in an (organization) name are included so that they would be recognized if they occurred; for example:

Name                               Possible variations
"Paramount Pictures Corp."         "Paramount", "Paramount Pictures"
"New York Post"                    "Post"
"Kidder, Peabody & Co."            "Kidder", "Kidder Peabody"
"National Labor Relations Board"   "NLRB"

When such a "secondary" organization reference is reduced, the text is put in the org_alias slot; the full form is pulled from the lexicon and put in the org_name slot to ensure proper merging (see below) of the two referents.
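A toy reconstruction of this variation-generation step is shown below; the corporate-suffix list and the acronym rule are our guesses at heuristics consistent with the examples in the table, not the system's actual rules.

    # Hedged sketch: generate alias variations for a newly reduced org name.
    CORPORATE_SUFFIXES = {"Corp.", "Co.", "Inc.", "Ltd.", "&"}

    def variations(full_name: str) -> set:
        words = full_name.replace(",", "").split()
        core = [w for w in words if w not in CORPORATE_SUFFIXES]
        variants = {full_name, " ".join(core), core[0]}  # "Kidder Peabody", "Kidder"
        if len(core) >= 3:                               # acronym, e.g. "NLRB"
            variants.add("".join(w[0] for w in core))
        return variants

    print(variations("Kidder, Peabody & Co."))
    print(variations("National Labor Relations Board"))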
Second, the results of reductions can be used to provide additional context for later reductions; for example, person reduction is done after organization, so a reduced organization can help the pattern matcher recognize a person, as in the token sequence [ARTIE MCDONALD , *ORG* 'S PRESIDENT], where *ORG* is the mtoken produced by the earlier reduction. A reduction can also involve multiple previously-reduced mtokens, filling the slots of one with information from another; for example, the reduction of the token sequence [*ORG* , A *LOC* -BASED MANUFACTURER] includes filling the org_descriptor, org_locale, and org_country slots of *ORG* with the descriptive phrase and the information from *LOC*.
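A toy slot-filling reduction over such a token sequence might look like the following; the dict-based mtokens and the hand-written match are stand-ins for the NLToolset's pattern matcher, not its actual interface.

    # Hedged sketch: reduce [*ORG* , a *LOC* -based <noun>] and fill ORG slots.
    org = {"type": "ORG", "org_name": "Acme Corp."}      # invented example mtoken
    loc = {"type": "LOC", "locale": "Chicago", "country": "US"}
    tokens = [org, ",", "a", loc, "-based", "manufacturer"]

    def reduce_org_descriptor(tokens):
        for i in range(len(tokens) - 5):
            t = tokens[i:i + 6]
            if (isinstance(t[0], dict) and t[0]["type"] == "ORG"
                    and t[1] == "," and t[2] in ("a", "an")
                    and isinstance(t[3], dict) and t[3]["type"] == "LOC"
                    and t[4] == "-based"):
                t[0]["org_descriptor"] = f"{t[2]} {t[3]['locale']}-based {t[5]}"
                t[0]["org_locale"] = t[3]["locale"]      # copied from *LOC*
                t[0]["org_country"] = t[3]["country"]
                return tokens[:i + 1] + tokens[i + 6:]   # reduce to the ORG mtoken
        return tokens

    print(reduce_org_descriptor(tokens)[0])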
Extraction
An Extraction component uses the results of a pattern match to generate an "expectation" and fill its slots with pieces of the text matched. For ST, a typical expectation represents an event, with the person, organization, date, etc. mtokens in the clause that was matched being used to fill its slots. For TE, each expectation is a trivial one containing one person or organization.
Merging
The NLToolset provides a merging tool, which merges expectations of the same type (person, organization, etc.) as long as the fillers of their corresponding slots do not conflict; a conflict occurs if both have a filler, the fillers are different, and the slot is not allowed to have multiple fillers. Obviously, the org_alias and org_descriptor slots were allowed to have multiple fillers and org_name was not.
During reduction, our system actually splits a person's name across slots called given_name, family_name, and suffix_name, so that the expectations for, say, "Harry L. James, Jr." and "Mr. James" would be merged. It also carefully fills slots such as org_type and a few others added just for this purpose so as to prevent improper merges; for example, it reduces the token sequence [THE *ORG* UNIT] to two *ORG* mtokens, one old and one new, with slots filled so that they could not merge with each other.
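A sketch of the merge rule as described is given below; the slot names follow the text, while the conflict test itself is our reconstruction of the behavior, not the NLToolset's code.

    # Hedged sketch: merge two expectations unless a single-valued slot conflicts.
    MULTI_VALUED = {"org_alias", "org_descriptor"}   # may hold several fillers

    def merge(a: dict, b: dict):
        """Return the merged expectation, or None on a slot conflict."""
        merged = dict(a)
        for slot, val in b.items():
            if slot in MULTI_VALUED:                 # multi-valued fillers accumulate
                merged[slot] = sorted(set(merged.get(slot, [])) | set(val))
            elif slot in merged and merged[slot] != val:
                return None                          # conflicting single-valued slot
            else:
                merged[slot] = val
        return merged

    p1 = {"given_name": "Harry L.", "family_name": "James", "suffix_name": "Jr."}
    p2 = {"family_name": "James"}                    # "Mr. James"
    print(merge(p1, p2))                             # merges: no conflict
    print(merge(p1, {"family_name": "Dooner"}))      # None: family_name conflicts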
Initially, we relied on this merging tool to bring together separated org names and descriptors, such as "NEC Corp. ... the giant Japanese computer manufacturer". We soon found, however, that even with careful use of slot fillers to prevent descriptors for commercial organizations from merging with, say, the name of a government organization or a library, too many merges were incorrect. We therefore devised a separate stack mechanism which keeps track of the org mtokens for each sentence; when an org descriptor is reduced in the final TE reduction stage, the stack is searched starting at the current sentence, to find the closest suitable referent that precedes the descriptor, and to add the descriptor text to the mtoken for that referent. This approach worked quite well.
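A sketch of that stack search follows; the suitability test is simplified to a type check, where the real system also consulted other slot fillers.

    # Hedged sketch: attach a detached org descriptor to the closest preceding
    # suitable organization mention, via a sentence-indexed stack.
    def attach_descriptor(org_stack, descriptor, sentence_idx):
        """org_stack: list of (sentence_idx, org_mtoken) in document order."""
        for sent, org in reversed(org_stack):        # closest referent first
            if sent <= sentence_idx and org.get("org_type") == "COMPANY":
                org.setdefault("org_descriptor", []).append(descriptor)
                return org
        return None                                  # no suitable antecedent

    nec = {"org_name": "NEC Corp.", "org_type": "COMPANY"}
    stack = [(0, nec)]
    attach_descriptor(stack, "the giant Japanese computer manufacturer", 1)
    print(nec)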
Postprocessing
For the NE task, the postprocessing step consists of traversing the token sequences in parallel with the original text, writing the original text and inserting markers as the reduction results attached to each token indicated. We had to go back to the original text to include those portions of the article header which were not processed, and to recover from cases where the tokenizer had dropped characters despite our modifications.
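Using the character offsets preserved by the tokenizer, this parallel traversal can be sketched as below (our illustration; the tag layout follows the MUC-6 examples quoted later in the walkthrough):

    # Hedged sketch: copy original text verbatim, inserting markers at the
    # boundaries of final reductions. Spans are (start, end, tag, type).
    def write_ne(text, spans):
        out, pos = [], 0
        for start, end, tag, etype in spans:         # non-overlapping, sorted
            out.append(text[pos:start])              # untouched original text
            out.append(f'<{tag} TYPE="{etype}">{text[start:end]}</{tag}>')
            pos = end
        out.append(text[pos:])
        return "".join(out)

    print(write_ne("Mr. James met Martin Puris in Atlanta.",
                   [(4, 9, "ENAMEX", "PERSON"),
                    (14, 26, "ENAMEX", "PERSON"),
                    (30, 37, "ENAMEX", "LOCATION")]))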
For the TE task, the postprocessing step consists of traversing the list of expectations and writing a template for each, performing final clean-ups like removing duplicate aliases, combining the person_name pieces, skipping slots used only to control merging, etc.
KNOWLEDGE ENGINEERING
The bulk of the time spent in knowledge engineering was spent developing the patterns for all the Reduction and Extraction stages. These patterns were devised to take advantage of all the local contextual clues we could come up with, including upper- vs lower-case information and descriptive appositives. Our results show that this approach works well; and the modularity of the patterns makes it easy to add coverage as we discover additional clues (such as those we discuss in the walkthrough with respect to organizations).
The reliance on case information meant that headlines were a bit of a problem; despite giving them somewhat special treatment, our error rate was higher there than elsewhere:

      POS   ACT   COR  PAR  INC  SPU  MIS  NON   REC  PRE  UND  OVG  ERR  SUB
HL    136   142   119    0    7   16   10    0    88   84    7   11   22    6
DD     60    60    60    0    0    0    0    0   100  100    0    0    0    0
DL     52    52    52    0    0    0    0    0   100  100    0    0    0    0
TXT  2046  2024  1889    0   47   88  110    0    92   93    5    4   11    2

There was some lexicon work, as well. This included entries for all the countries, with alternate phrases (such as "West Germany" for "Federal Republic of Germany") and irregular derivations (such as "Dutch" for "Netherlands"), and entries for major cities and geographical regions, with their country information included. For organizations, we limited it to a few dozen major ones that have no reliable internal clues and often occur without any contextual clues (such as "White House", "Fannie Mae", "Big Board", "Coca-Cola" and "Coke", "Macy's", "Exxon", etc).

The results on the walkthrough article (see Table 1) compared to our overall results show that this was indeed a relatively difficult article. They show three issues worth discussing.
First, we had low precision on timex. Two out of the three "spurious" dates are due to our apparently mistaken belief that "yesterday" and "tomorrow" were supposed to be marked. This knowledge engineering error led to the worst recall or precision number on our overall NE results, a precision on timex of 84; avoiding that error would have raised it to 94.
Second, recall and precision on organizations was a bit low. The system missed both "Fallon McElligott" and "McCann-Erickson". On the former, a phrase like "ad agency Fallon McElligott" would have caused it to be found, but the actual phrase "other ad agencies, such as Fallon McElligott" did not. On the latter, not having a pattern to cover things like "chief executive officer of McCann-Erickson" was an omission on our part.
Other organization errors were: getting "New York Times", which in this article is incorrect; and missing the two descriptors for "Ammirati & Puris" and the locale for "Coca-Cola". The locale error points out another major cause of poor results --a next-to-last-minute change in the final TE pattern for picking up a combination of organization name plus location and/or descriptor, inadequately tested, led to inadvertently dropping coverage of the most basic of combinations: [*ORG* $lprep *LOC*], where $lprep is a macro for one of "," "in" "of". This unfortunate error had the following effect on total locale slot score on TE:

            POS  ACT  COR  REC  PRE
actual      110   59   42   38   71
corrected   110   76   59   54   71

Third, problems with persons. The system decided "McCann" was a person, based on "the McCann family"; since it did not recognize "McCann-Erickson" as a company, every reference to "McCann" was therefore marked as a person. Due to inadequate restrictions on our use of capitalization, the system also decided "While McCann" and "One McCann" were distinct persons. It decided that "John J. Dooner, Jr." and "John Dooner" were distinct persons; the "Jr." would not have caused it to make that decision, but the "J." did.

After the Lexical Analysis, the input string has been converted into a list of 52 sentences, each sentence containing a list of tokens; this list includes *CAP* tokens inserted in front of every capitalized token. Attached to each token is the result of the lexical lookup.
Note that at this point lexical lookup has replaced the surface representation of "Coke" and "CEO" with their "canonical" forms. Every token contains its original string, so we can still recover it for use in filling slots.
The lookup on "Atlanta" has provided the information that it is a city and that its country is the US. The initial Reduction stages take care of money, percent, date, time, and location, then "secondary" references to location. The only things worth noting here are the "yesterday" errors already discussed, that the system decided "60 pounds" was a reference to money, and that the information in the lexical entry for "Atlanta" was used to fill the slots of the *LOC* mtoken.
The next Reduction stages take care of "primary" then "secondary" references to organizations.
The primary stage picks up "Interpublic Group", "PaineWebber", "Coca-Cola", "Coke", "Creative Artists Agency", "WPP Group", "Ammirati & Puris", "New York Yacht Club" and "New York Times". It misses "Fallon McElligott" and "McCann-Erickson" for reasons already noted. The only reason it gets "PaineWebber", "Coca-Cola", and "Coke" is because they are in the lexicon; the others are all picked up by matching various patterns.
In this article, the only secondary reference is "CAA" as a reference to "Creative Artists Agency". While the system does manufacture acronyms as potential secondary references when certain patterns match, the pattern which enabled it to determine that "Creative Artists Agency" was a commercial organization was unfortunately not one of them.

The next Reduction stages take care of "primary" then "secondary" references to persons.
The secondary stage picks up all remaining references to "McCann". Since "McCann-Erickson" was not recognized as an organization, all those occurrences are picked up, too. And since we failed to make adverbs off-limits as new first names in this stage, it decides that "While McCann" and "One McCann" (note the capitalization) are distinct persons. The resulting NE output includes:

One of the many differences between <ENAMEX TYPE="PERSON">Robert L. James</ENAMEX>, chairman and chief executive officer of <ENAMEX TYPE="PERSON">McCann</ENAMEX>-Erickson, and <ENAMEX TYPE="PERSON">John J. Dooner Jr.</ENAMEX>, the agency's president and chief operating officer, is quite telling: Mr. <ENAMEX TYPE="PERSON">James</ENAMEX> enjoys sailboating, while Mr. <ENAMEX TYPE="PERSON">Dooner</ENAMEX> owns a powerboat.
Mr. <ENAMEX TYPE="PERSON">Dooner</ENAMEX> met with <ENAMEX TYPE="PERSON">Martin Puris</ENAMEX>, president and chief executive officer of <ENAMEX TYPE="ORGANIZATION">Ammirati & Puris</ENAMEX>, about <ENAMEX TYPE="PERSON">McCann</ENAMEX>'s acquiring the agency with billings of <NUMEX TYPE="MONEY">$400 million</NUMEX>, but nothing has materialized.

Now, NE and TE processing diverge. For NE, the system uses the original text of the article to write a copy. It traverses the token sequences in parallel with the original text, using the fact that each token contains information on all the reductions it was involved in to determine where to insert begin and end brackets. It only pays attention to the final reduction except in the case of locations inside money, where brackets are inserted for both.

For TE, there is one final Reduction stage to take care of organization descriptors and locations. Here, the system finds descriptors "the big Hollywood talent agency" and "a hot agency", but not "a quality operation" and "the agency with billings of $400 million". The former omission was deliberate, due to too many spurious matches when it was included; the latter was a construct we did not think to include. In cases where the descriptor is an appositive, the referenced organization is included in the pattern match; otherwise, if the appositive is a definite reference, the stack of organization references is searched for the putative antecedent. In either case, the descriptor and locale information (if any) is inserted into slots of the organization mtoken. In retrospect, including indefinite references that are not appositives appears to have been the wrong thing to do. A filled organization template element looks like:

    Organization:
        Org_Name:  "Coca-Cola"
        Org_Alias: "Coca-Cola" "Coke"
        Known:     "Yes"
RESULTS AND CONCLUSIONS
[NE total slot scores table: only fragments survive in this copy (SLOT, POS, <enamex> 942, type).]

What we needed most was time -- time to test more thoroughly and isolate the causes of the biggest problems. Slowness of the system was a problem but not a major one, as it took only a minute or two per article.
After those two improvements, we turn to the problem of org descriptors -- although we had the highest f-measure, it was only 43.6, which shows that there is still room for improvement. Here, the solutions are less obvious. One step to take is to add to the patterns to allow modifier phrases after the head noun in a descriptor noun phrase, such as "the agency with billings of $400 million". More exploration is needed on this, especially in light of the fact that both the recall and precision rates were low.
Another area where we would like to make changes is in the order of reduction stages. For example, the system currently does all person reductions after organization reductions. This meant we had to prevent the secondary organization reduction from matching what are clearly person names (e.g., primary "Schecter Group" -/-> secondary "Mr. Schecter"). The solution, clearly, is to apply some of the person patterns before the organization patterns, as sketched below.
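A schematic illustration of that reordering follows; the stage names are hypothetical, since the actual reduction machinery is pattern-based and not specified in this level of detail.

    # Hypothetical illustration of reordering reduction stages so that
    # some person patterns run before organization patterns.
    current_order = ["org_primary", "org_secondary",
                     "person_primary", "person_secondary"]
    proposed_order = ["person_primary", "org_primary",
                      "org_secondary", "person_secondary"]

    def run_pipeline(sentences, stages, reducers):
        for stage in stages:
            sentences = reducers[stage](sentences)  # each stage rewrites mtokens
        return sentences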
Since all the processing occurs without any regard to the types of events discussed in the articles, the system we have developed here is easily portable across domains. If a domain required a different set of template slots than used for MUC-6, the patterns would be unchanged, but the reduction code that fills the slots, and the postprocessing code that reports them, would have to be modified slightly.
We have demonstrated, on MUC-6 and on CDIS, that we have an excellent approach to both entity and event extraction on a range of document types. We hope to have the opportunity to continue this work, as funding permits.
The Dynamics and Light Curves of Beamed Gamma Ray Burst Afterglows
The energy requirements of gamma ray bursts have in the past been poorly constrained because of three major uncertainties: the distances to bursts, the degree of burst beaming, and the efficiency of gamma ray production. The first of these has been resolved, with both indirect evidence (the distribution of bursts in flux and position) and direct evidence (redshifted absorption features in the afterglow spectrum of GRB 970508) pointing to cosmological distances. We now wish to address the second uncertainty. Afterglows allow a statistical test of beaming, described in an earlier paper. In this paper, we modify a standard fireball afterglow model to explore the effects of beaming on burst remnant dynamics and afterglow emission. If the burst ejecta are beamed into angle zeta, the burst remnant's evolution changes qualitatively once its bulk Lorentz factor Gamma < 1/zeta: before this, Gamma declines as a power law of radius, while afterwards, it declines exponentially. This change results in a broken power law light curve whose late-time decay is faster than expected for a purely spherical geometry. These predictions disagree with afterglow observations of GRB 970508. We explored several variations on our model, but none seems able to change this result. We therefore suggest that this burst is unlikely to have been highly beamed, and that its energy requirements were near those of isotropic models. More recent afterglows may offer the first practical applications for our beamed models.
INTRODUCTION
Understanding the energy requirements and event rates of gamma ray bursts is necessary for any quantitative evaluation of a candidate burst progenitor. We need to know both how many progenitors we expect, and how much energy they need to produce in a single event. Until recently, both quantities were uncertain to ∼ 10 orders of magnitude because of the unknown distance to the bursts. The afterglow of GRB 970508 effectively ended that debate, because it showed absorption lines at a cosmological redshift (z = 0.835; Metzger et al 1997). This builds on earlier results from the Burst and Transient Source Experiment (BATSE), which showed that the burst distribution on the sky is exquisitely isotropic while the distribution in flux is inhomogeneous (Meegan et al 1996). These observations are best explained if the bursts are at cosmological distances. A very extended Galactic halo distribution might also work, but it would have to be unlike any other known population of Galactic objects. The isotropy is perhaps most important now for showing that multiple-population scenarios for gamma ray bursts cannot put any substantial fraction of the bursters at Galactic distances. It thus connects the GRB 970508 redshift bound to the vast majority of the burst population.
The dominant remaining uncertainty in the bursters' energy requirements is now whether the bursts radiate isotropically or are beamed into a very small solid angle. Such beaming is allowed (though not required) by the gamma ray observations, because the ejecta from gamma ray bursts must be highly relativistic to explain the spectral properties of the emergent radiation (Paczyński 1986; Goodman 1986), with inferred minimum Lorentz factors Γ ≳ 100 (Woods & Loeb 1995). The gamma rays we observe are therefore only those from material moving within angle 1/Γ of the line of sight, and offer no straightforward way of determining whether there are ejecta outside this narrow cone.
These large Lorentz factors lead naturally to predictions of afterglow emission at longer wavelengths as the burst ejecta decelerate and interact with the surrounding material (Paczyński & Rhoads 1993; Katz 1994; Mészáros & Rees 1997a). The characteristic frequency for this afterglow emission depends on the Lorentz factor of the burst remnant, and both decrease as the remnant evolves. Such models scored a recent triumph with the detection of X-ray, optical, and radio afterglows from gamma ray bursts (GRBs) early in 1997 (e.g., Costa et al 1997; van Paradijs et al 1997; Bond 1997; Frail et al 1997). The observed properties of the transients are in good overall agreement with the predictions of afterglow models (Waxman 1997a,b), although some worries remain (Dar 1997).
Because beaming depends on the relativistic nature of the flow, afterglows can be used to test the burst beaming hypothesis. At least two such tests are possible. First, because Γ is lower at the time of afterglow emission than during the GRB itself, the afterglow cannot be as collimated as the GRB can. This implies that the afterglow event rate should exceed the GRB event rate substantially if bursts are strongly beamed. Allowing for finite detection thresholds, the measured rates constrain the ratio of beaming angles, where N_1, N_2 are the measured event rates above our detection thresholds at our two frequencies; N_12 is the rate of events above threshold at both frequencies; and Ω_1, Ω_2 are the solid angles into which emission is beamed at the two frequencies. A full derivation of this result and discussion of its application is given in Rhoads (1997a). The second test is based on differences between the dynamical evolution of beamed and isotropic bursts. Burst ejecta decelerate through their interaction with the ambient medium. If the ejecta are initially beamed into a cone of opening angle ζ_m, the deceleration changes qualitatively when the bulk Lorentz factor Γ drops to 1/ζ_m. Prior to this, the working surface (i.e. the area over which the expanding blast wave interacts with the surrounding medium) scales as r². At later times, the ejecta cloud has undergone significant lateral expansion in its frame, and the working surface increases more rapidly with r, eventually approaching an exponential growth. Spherical symmetry prevents this transition from occurring in unbeamed bursts. A brief analysis of this effect was presented in Rhoads (1997b).
We have two major aims in this paper. First, we will present a full derivation of the late time burst remnant dynamics for a beamed gamma ray burst. We support this by calculating the emergent synchrotron radiation for two electron energy distribution models, but we do not attempt to do so for all possible fireball emission scenarios. Second, we observe that our model is not consistent with any small-angle beaming of GRB 970508. This implies a substantial minimum energy for this burst. If radiative efficiencies are lower than ∼ 10%, this limit approaches the maximum energy available in compact object merger events. We explore possible ways to evade this minimum energy requirement through other forms of beaming models, but find none. We therefore conjecture that such models cannot be constructed for GRB 970508 unless the usual fireball model assumptions about relativistic blast wave physics are substantially modified, and challenge the community to prove this assertion right or wrong.
We explore the dynamical evolution of a model beamed burst in section 2. In section 3 we incorporate a model for the electron energy spectrum and magnetic field strength and so predict the emergent synchrotron radiation. In section 4, we compare the model with observed afterglows. The early (1997) data appeared inconsistent with the beaming model, suggesting that bursts are fairly isotropic and therefore very energetic events. Finally, in section 4.1, we explore variations on our model to try to reduce the inferred energy needs of GRB 970508. We comment briefly on more recent data and summarize our conclusions in section 5.
DYNAMICAL CONSEQUENCES OF BEAMING
We explore the effects of beaming on burst evolution using the notation of Paczyński & Rhoads (1993). Let Γ_0 and M_0 be the initial Lorentz factor and ejecta mass, and ζ_m the opening angle into which the ejecta move. The burst energy is E_0 = Γ_0 M_0 c². Let r be the radial coordinate in the burster frame; t, t_co, and t_⊕ the time from the event measured in the burster frame, comoving ejecta frame, and terrestrial observer's frame; and f the ratio of swept-up mass to M_0.
The key assumptions in our beamed burst model are that (1) the initial energy and mass per unit solid angle are constant at angles θ < ζ_m from the jet axis and zero for θ > ζ_m; (2) the total energy in the ejecta + swept-up material is approximately conserved; (3) the ambient medium has uniform density ρ; and (4) the cloud of ejecta + swept-up material expands in its comoving frame at the sound speed c_s = c/√3 appropriate for relativistic matter. The last of these assumptions implies that the working surface of the expanding remnant has a transverse size ∼ ζ_m r + c_s t_co. The evolution of the burst changes when the second term dominates over the first.

Each of these assumptions may be varied, but we believe the qualitative change in burst remnant evolution will remain over a wide range of possible beaming models. Removing assumption (4) is the only obvious way to turn off the dynamical effects of beaming, and even then observable breaks in the light curve are expected when Γ ∼ 1/ζ_m.
There are several models in the literature that use radiative rather than adiabatic models, dropping our second assumption. The case for radiative bursts depends on the efficiency with which relativistic shocks transfer bulk kinetic energy to magnetic fields and electrons, and I regard the validity of assumption (2) as an open question. For a closer examination of this issue, I refer the reader to papers by Vietri (1997a,b) and by Katz & Piran (1997a), who advocate radiative models; and to Waxman, Kulkarni, & Frail (1998), who defend the adiabatic model. It has also been pointed out that the dynamical consequences (Γ ∝ r^−3) of radiative models depend on equipartition between protons, electrons, and magnetic fields being maintained at all times. Thus, a short electron cooling time will affect the afterglow radiation, but will not necessarily result in Γ ∝ r^−3. Sari (1997) considers corrections to the adiabatic burst evolution for modest energy losses.
Models that do not use assumption (1) have been discussed by Panaitescu, Mészáros, & Rees (1998), among others. Finally, assumption (3) has been dropped by several authors (Vietri 1997b; Panaitescu, Mészáros, & Rees 1998) in favor of a more general power law density ρ ∝ r^−g. Such models complicate the beamed burst analysis and will change the form of the Γ(r) relation, but will leave intact the basic conclusion that Γ(r) changes qualitatively when Γ ≲ 1/ζ_m.
Dynamical Calculations: Numerical Integrations
Given these assumptions, the full equations describing the burst remnant's evolution are

    dt/dr = 1/(βc) ,                              (1)
    dt_co/dr = 1/(βΓc) ,                          (2)
    df/dr = πρ(ζ_m r + c_s t_co)² / M_0 ,         (3)
    Γ ≈ Γ_0 (1 + 2Γ_0 f)^(−1/2) ,                 (4)
    dt_⊕/dr = (1 + z)(1 − β)/(βc) ,               (5)

where β = √(1 − Γ^−2). Equation 4 is derived in Paczyński & Rhoads (1993) from conservation of energy and momentum, along with algebraic simplifications of equations 5 for the spherical case. The definition of t_⊕ here includes the cosmological time dilation factor (1 + z) for a source at redshift z. Equation 3 is not strictly valid when ζ_m ≳ 1, but we will accept this deficiency since the error thereby introduced is not a dominant uncertainty in our results. These equations can be solved by numerical integration to yield f(r), Γ(r), and t_⊕(r). Figure 1 shows Γ(r) from such integrations for an illustrative pair of models (one beamed, one isotropic).
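These coupled equations are straightforward to integrate. The following is a minimal sketch of such an integration in Python, under the assumptions above; the parameter values (E_0, Γ_0, ρ, ζ_m, z) are illustrative, not fits to any burst.

    import numpy as np

    # Minimal sketch of the numerical integration described above:
    # adiabatic evolution, uniform density, lateral expansion at
    # c_s = c/sqrt(3). All quantities in CGS units.
    c = 2.998e10                   # cm/s
    cs = c / np.sqrt(3.0)          # comoving sound speed
    E0, Gamma0 = 1e51, 300.0       # erg, initial Lorentz factor
    M0 = E0 / (Gamma0 * c**2)      # ejecta rest mass, g
    rho = 1.67e-24                 # ambient density, ~1 proton/cm^3
    zeta_m, z = 0.1, 1.0           # opening angle (rad), redshift

    def gamma_of_f(f):
        return Gamma0 / np.sqrt(1.0 + 2.0 * Gamma0 * f)

    r = 1e14                       # starting radius, cm
    f, t_co, t_obs = 0.0, 0.0, 0.0
    rs, gammas = [], []
    dlnr = 1e-3
    while gamma_of_f(f) > 2.0:
        dr = r * dlnr
        G = gamma_of_f(f)
        beta = np.sqrt(1.0 - 1.0 / G**2)
        area = np.pi * (zeta_m * r + cs * t_co)**2   # working surface
        f += rho * area * dr / M0
        t_co += dr / (beta * G * c)
        t_obs += (1.0 + z) * (1.0 - beta) * dr / (beta * c)
        r += dr
        rs.append(r); gammas.append(G)
    # Plotting gammas vs rs shows the power-law -> exponential
    # transition near G ~ 1/zeta_m.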
Dynamical Calculations: Analytic Integrations
The most interesting dynamical change introduced by beaming is a transition from a power law Γ ∝ r^−3/2 to an exponentially decaying regime Γ ∝ exp(−r/r_Γ). We will first give a derivation of the power law behavior.
Power Law Regime
Consider the approximate evolution equations for the regime where (a) 1/Γ_0 ≲ f ≲ Γ_0, so that Γ ≈ (Γ_0/2f)^(1/2); and (b) ζ_m r ≫ c_s t_co, so that the working surface is ≈ πζ_m²r². The initial conditions are f = 0 and t = t_co = t_⊕ = 0 at r = 0. So we can easily integrate and obtain f ≈ πζ_m²ρr³/(3M_0) and t_co ≈ 2r/(5Γc), whence Γ ≈ [3E_0/(2πζ_m²ρc²r³)]^(1/2) ∝ r^(−3/2) and t_⊕ ≈ (1 + z) r/(8Γ²c). By making the substitutions πζ_m² → 4π and (1 + z) → 1 in these results, we recover the evolution of a spherically symmetric burst remnant derived by Paczyński & Rhoads (1993).
Exponential Regime
To demonstrate the exponential behavior, consider the approximate evolution equations for the regime where (a) 1/Γ_0 ≲ f ≲ Γ_0, so that Γ ≈ (Γ_0/2f)^(1/2); and (b) c_s t_co > ζ_m r (corresponding to f ≳ 9Γ_0 ζ_m²), so that the working surface is ≈ πc_s²t_co². By forming the ratio (df/dr)/(dt_co/dr) and isolating terms with f and with t_co, it follows that f^(1/2) df ∝ t_co² dt_co. This is easily integrated to obtain f^(3/2) ∝ (t_co³ + c_1), where c_1 is a constant of integration. Using the initial conditions for the exponential regime derived below (eqns. 16-19), one can show that the constant of integration is c_1 = −25E_0ζ_m³/(4πρc_s⁵), which becomes negligible once c_s t_co ≫ ζ_m r. Equation 14 then becomes f ∝ t_co², and we see from equations 12 and 4 that f, Γ, t_co, and t_⊕ will all behave exponentially with r in this regime. Retaining the constants of proportionality, we find the e-folding length r_Γ. Further algebra yields Γ ∝ exp(−r/r_Γ), t_co ∝ √f ∝ exp(r/r_Γ), and t_⊕ ∝ f ∝ exp(2r/r_Γ), so that Γ ∝ t_⊕^(−1/2). Thus, while the evolution of Γ(r) changes from a power law to an exponential at Γ ∼ 1/ζ_m, the evolution of t_⊕(r) changes similarly. The net result is that Γ(t_⊕) has a power law form in both regimes, but with a break in the slope from Γ ∝ t_⊕^(−3/8) to Γ ∝ t_⊕^(−1/2).

The initial conditions for the exponential regime are approximately set by inserting the transition condition c_s t_co = ζ_m ct into the evolution equations for the power law regime. Denoting the values at this break with the subscript b, we have c_s t_co,b = ζ_m ct_b = ζ_m r_b, which we combine with equation 9 to obtain r_b. The corresponding values for Γ_b, t_⊕,b, and t_co,b follow, and the evolution in the exponential regime is then approximated by exponentials in (r − r_b)/r_Γ.

A thought experiment that will help understand the onset of the exponential decay of Γ with radius is to consider the shape of a GRB remnant in a pressureless, uniform ambient medium at late times (after all motions have become nonrelativistic). In the spherical case, the blast wave will leave behind a spherical cavity. In the beamed geometry, the cavity will be conical near the burster, but will change shape at the radius where the lateral expansion of the remnant becomes important. At this point, the cone flares, and the mass swept up per unit distance begins to grow faster than r². This corresponds to the onset of the exponential Γ(r) regime. The cone continues to become rapidly wider until it reaches the radius where the remnant becomes nonrelativistic. The final cavity resembles the bell of a trumpet. It is unclear whether such remnants would survive long enough to be observed in a realistic interstellar medium.
EMERGENT RADIATION
The Lorentz factor Γ is not directly observable, and we ultimately want to predict observables like the frequency of peak emission ν_⊕,m, the flux density F_ν,⊕,m at ν_⊕,m, and the angular size θ of the afterglow. To do so, we need to introduce a model for the emission mechanism. We will restrict our attention to synchrotron emission, which is the leading candidate for GRB afterglow emission. We first consider the case of optically thin emission with a steep electron energy spectrum. This emission model is used in many recent afterglow models (e.g. Waxman 1997a,b). We then repeat the calculation for the emission model of Paczyński & Rhoads (1993).
General equations: Optically thin case
Our dynamical model for burst remnant evolution gives the volume V and internal energy density u_i of the ejecta as a function of expansion radius r. Detailed predictions of synchrotron emission require the magnetic field strength and the electron energy spectrum. We assume that the energy density in magnetic fields and in relativistic electrons are fixed fractions ξ_B and ξ_e of the total internal energy density. The magnetic field strength B follows immediately: B = (8πξ_B u_i)^(1/2). (N.b., we use the notation of Paczyński & Rhoads 1993. Some other authors have instead defined ξ_B in terms of the magnetic field strength, such that B ∝ ξ_B in their models; care must therefore be taken in comparing scaling laws under these alternative notations.) The electron energy spectrum requires additional assumptions. We first follow Waxman's (1997a,b) assumptions, to facilitate comparison of our results for beamed bursts with his for unbeamed bursts. In the frame of the expanding blast wave, the swept-up ambient medium appears as a relativistic wind having Lorentz factor Γ. We assume that the electrons from the ambient medium have their direction of motion randomized in the blast wave frame. Moreover, they may achieve some degree of equipartition with the protons. The typical random motion Lorentz factor γ_e for the swept-up electrons in the blast wave frame is then in the range Γ ≲ γ_e ≲ 0.5(m_p/m_e)Γ. In terms of the energy density fraction in electrons, γ_e ≈ ξ_e(m_p/m_e)Γ. We further assume that the electrons in the original ejecta mass are not heated appreciably, so that the number of relativistic electrons is N_e = f M_0/(μ_e m_p) (where μ_e is the mean molecular weight per electron) rather than (1 + f)M_0/(μ_e m_p). We take the electron energy E to be distributed as a power law, N(E) ∝ E^(−p) for E_min ≤ E ≤ E_max. Finally, we assume that p > 2, so that the total electron energy ∫ E N(E) dE from E_min to E_max is dominated by electrons with E ≈ E_min, and γ_e,peak ≈ γ_e ≈ ξ_e(m_p/m_e)Γ ≈ E_min/(m_e c²). The optical depth to synchrotron self-absorption is assumed to be small at the characteristic synchrotron frequency corresponding to E_min = γ_e,peak m_e c². In the comoving frame, this frequency is ν_co,m = 0.29 × 3/(4π) ⟨sin α⟩ γ_e,peak² eB/(m_e c) = 0.29 × (3/16) γ_e,peak² eB/(m_e c) (Pacholczyk 1970; Rybicki & Lightman 1979), where the calculation of the mean pitch angle ⟨sin α⟩ = π/4 assumes an isotropic distribution of electron velocities and a tangled magnetic field. Wijers & Galama (1998) have integrated over the power law distribution of electron energies to show that the peak comoving frame frequency for a power law energy distribution becomes ν_co,mE = 3x_p/(4π) × γ_e,peak² eB/(m_e c), where x_p is a function of the power law index p, and where 0.64 ≳ x_p ≳ 0.45 for 2 < p < 3. Below this peak frequency, the flux density rises as ν^(1/3), while at higher frequencies it falls as ν^(−α), where α = (p − 1)/2 (e.g., Rybicki & Lightman 1979; Pacholczyk 1970). Three additional breaks may occur, corresponding to the highest electron energy attained in the shock, the electron energy above which cooling is important, and the frequency where synchrotron self-absorption becomes important. We will comment on the cooling break below, and will ignore the other two breaks for the present.
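For readers who want to evaluate these expressions numerically, a small sketch in CGS units follows. The function names are ours; B and Γ are treated as inputs here, since their values follow from the dynamics as discussed below.

    # Sketch of the comoving peak-frequency estimate quoted above (CGS).
    e_charge = 4.803e-10               # esu
    m_e, m_p = 9.109e-28, 1.673e-24    # g
    c = 2.998e10                       # cm/s

    def gamma_e_peak(xi_e, Gamma):
        # typical random-motion Lorentz factor of swept-up electrons
        return xi_e * (m_p / m_e) * Gamma

    def nu_co_m(B, gamma_peak):
        # 0.29 * (3/16) * gamma^2 e B / (m_e c), using <sin alpha> = pi/4
        return 0.29 * (3.0 / 16.0) * gamma_peak**2 * e_charge * B / (m_e * c)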
We first estimate the observer-frame frequency ν ⊕,m at which the spectrum peaks. This is where the factor 1 + β cos(θ) Γ is the Lorentz transformation for frequency, and where β = √ 1 − Γ −2 is the expansion velocity as a fraction of lightspeed c. θ is the angle between the velocity vector of radiating material and the photon emitted, as measured in the frame of the emitting matter. 1 + β cos(θ) denotes an average weighted by the received intensity. We shall use the highly relativistic limit β → 1 throughout this work, which leads to the result 1 + β cos(θ) = 4/3 applied in the final lines of equation 22 (Wijers & Galama 1998).
We estimate the peak flux density following equations 19 and 25 of Wijers & Galama (1998). The basic equation is

    F_ν,m,⊕ = (1 + z) Γ N_e φ_p √3 e³ B / (m_e c² Ω_γ d²) .    (23)

Here φ_p √3 e³ B/(m_e c²) is the average comoving frame peak luminosity per unit frequency emitted by a single electron. The details of the average over pitch angle and electron energy are hidden in the factor φ_p, which is a function of the electron energy distribution index p with range 0.59 ≲ φ_p ≲ 0.66 for 2 < p < 3 (Wijers & Galama 1998). The factor Γ accounts for the Lorentz transformation of flux density (Wijers & Galama 1998). Distance and beaming effects enter through the factor 1/(Ω_γ d²), where d is the luminosity distance to the burst, and Ω_γ is the solid angle into which radiation is beamed. Finally, redshift affects the flux density by the factor (1 + z) (e.g. Weedman 1986). Our goal is to express F_ν,m,⊕ purely in terms of the dynamical variables we calculated in section 2.
To obtain the magnetic field strength B, we need the volume of the ejecta cloud, which will have transverse radius ∼ rζ_m + c_s t_co and thickness ∼ c_s t_co, giving comoving volume V = π(ctζ_m + c_s t_co)²(c_s t_co). Under the approximation of negligible radiative losses, the internal energy is given by E_i,co = E_0/Γ. The comoving frame magnetic field strength is thus

    B = [8πξ_B E_0/(ΓV)]^(1/2) .    (25)

The remaining pieces are trivial: Ω_γ ≈ π(ζ_m + 1/Γ)², and d and 1 + z are simply scale factors.
We now have all the pieces of equations 22 and 23 expressed in terms of dynamical variables from section 2. This means that we can insert these formulae into our numerical integration code and calculate F_ν,m,⊕ and ν_⊕,m as functions of t_⊕ (or of f, Γ, or r). In order to determine a light curve at fixed observed frequency, we combine the broken power law spectral shape described above with the calculated frequency and flux density of the spectral peak to determine the approximate flux density at the observed frequency and time.
The cooling break and self-absorption break (cf. Sari, Piran, & Narayan 1998) are additional observed features in afterglow data. We do not treat either in detail here, but we do present a derivation of the cooling break behavior for beamed gamma ray bursts elsewhere (Rhoads 1999b). These results are summarized below. We have not yet treated the self-absorption break. Self-absorption is important primarily at low frequencies, where scintillation can hamper light curve slope measurements. For a treatment of this regime in beamed bursts, see Sari, Piran, & Halpern (1999).
Finally, we consider the evolution of the apparent angular size θ. In the spherical case or the power-law regime for a beamed burst, θ = r/(Γ d_θ) ∝ t_⊕^(5/8) (where d_θ is the angular diameter distance to the burst). In the exponential regime, θ is determined by the physical transverse size of the ejecta cloud rather than the beaming angle, but the difference is not dramatic because the physical size increases only slowly with t_⊕ in this regime. If the exponential regime did not happen at all, this behavior would continue for all Γ < 1/ζ_m.
Analytic Results: Optically thin case
In the limiting cases where one of the terms in the transverse size ζ_m ct + c_s t_co is dominant and the other negligible, we can derive analytic expressions for F_ν,m,⊕ and ν_⊕,m as functions of observed time t_⊕ and the physical parameters of the fireball. We begin with the early time case, and show that its light curve is observationally indistinguishable from that of an isotropic burst.
Power Law Regime
We first determine the comoving magnetic field in this regime by inserting ζ_m ct ≫ c_s t_co into equation 25 to obtain B (equation 26). Inserting this result into equation 22, we find ν_⊕,m ∝ t_⊕^(−3/2) (equation 28), where we have used equation 11 to eliminate Γ in the last line.
Turning our attention to F_ν,m,⊕, we first need the number N_e of radiating electrons in terms of t_⊕. For the power law regime, this becomes N_e = f M_0/(μ_e m_p) with f ∝ r³ (equation 29). Combining this with equations 23, 26, and the appropriate limiting form of Ω_γ, we find F_ν,m,⊕ (equation 30). Note that this result is independent of t_⊕. Apart from small differences in the numerical coefficients, our results for the power law regime are essentially the same as the results that Waxman (1997a,b) and Wijers and Galama (1998) obtained for isotropic bursts. Differences between our results and Waxman's are primarily because we have adopted the more precise treatment of the synchrotron peak frequency presented by Wijers and Galama (1998), while differences between our results and those of Wijers and Galama stem from a slightly different way of calculating the comoving frame magnetic field.
Exponential Regime
When c_s t_co ≫ ζ_m ct, we are in the regime where Γ, t_⊕, etc. all behave exponentially with radius (section 2.2.2). We first rewrite the scalings from equation 20 in terms of t_⊕. We next determine the comoving frame magnetic field in the appropriate limit. Combining this result with equation 22 allows us to determine the peak frequency; substituting for the exponential regime initial conditions from equations 16-19 then yields ν_⊕,m ∝ t_⊕^(−2) (equation 34). The observed cooling break frequency ceases to evolve in this regime: ν_⊕,cool ∝ t_⊕^0 (Rhoads 1999b; Sari et al 1999). Turning now to the amplitude of the spectral peak, we combine N_e and B with equations 23 and 32 to obtain F_ν,m,⊕; substituting the initial conditions for the exponential regime, this becomes F_ν,m,⊕ ∝ t_⊕^(−1) (equation 40). At t_⊕ = t_⊕,b, equations 34 and 40 differ from equations 28 and 30 by factors of order unity. This difference is not worrying since our analytic approximations are not expected to be particularly accurate in the transition between the two limiting cases. Numerical correction factors to the coefficients of equations 34 and 40 can be derived from numerical integrations. Such factors are presented in section 3.2.5 below.
TV Dinner Equations
We now pause a moment to consolidate our results so far and express the key equations in terms of fiducial parameter values. We begin with equations 28 and 30, evaluated for those fiducial values. The observed time corresponding to the transition between the power law and exponential regimes is t_⊕,b (equation 18). Thereafter, the frequency and flux density at the spectral peak are characterized by equations 28 and 40. Numerical integrations show that modest correction factors ǫ_ν ≈ 0.74 and ǫ_F ≈ 0.7 should be applied to these two equations at late times to compensate for approximations in the initial conditions (see section 3.2.4 below). These have been incorporated in what follows: for our fiducial parameters, the observed frequency of the spectral peak at the time of the break is ν_⊕,m,b ≈ 1.7 × 10^11 (ǫ_ν/0.74) Hz, with its redshift dependence entering through the factor (1 + z).
The subsequent evolution is given by equations 34 and 40 with these corrections applied. Finally, for completeness, we include our result for ν_⊕,cool from Rhoads 1999b. Note that that equation already interpolates over the break time t_⊕,b; the interpolation was derived in the fashion suggested in section 3.2.5 below.
Putting the Pieces Together
An accurate description of the behavior in the transition between the power law and exponential regimes can be obtained numerically. We first note that there is a single characteristic observed time t_⊕,b, given by equation 18, and flux level F_ν,m,⊕,b ≡ F_ν,m,⊕(t_⊕ ≪ t_⊕,b), given by equation 30. If we use these as our basic time and flux units, and denote the observed time and peak flux scaled to these units as t̃_⊕ and F̃_ν,m,⊕, there is a unique F̃_ν,m,⊕(t̃_⊕) relation. This is plotted in figure 2.
Similarly, we can define the characteristic frequency ν_⊕,m,b in the problem to be given by equation 28 evaluated at t_⊕,b, and ν̃_⊕,m to be the frequency scaled by this value. Then we can again obtain a unique relation ν̃_⊕,m(t̃_⊕), which is shown in figure 3.
At late times, the numerical integrations yield a flux density that is a factor ǫ_F ≈ 0.7 smaller than in equation 40, and a frequency of peak emission that is a factor ǫ_ν ≈ 0.74 smaller than in equation 34. This is presumably due to the approximate initial conditions used for the exponential regime evolution. These initial conditions are obtained by applying an asymptotic approximation outside its range of validity, and it should not be surprising if this procedure introduces some error. We suggest below that this error may be corrected empirically.
To obtain predictions for a given set of model parameters from these dimensionless curves, we need only (1) determine numerically the values of t_⊕,b, F_ν,m,⊕,b, and ν_⊕,m,b; and (2) determine the time interval over which our assumption 1/Γ_0 ≲ f ≲ Γ_0 remains valid. The early behavior, before the ejecta accrete a dynamically important amount of ambient medium (i.e., f < 1/Γ_0), is unlikely to be observed at long wavelengths, since it is over within a fraction of a second for reasonable burst parameters. We therefore consider only the end condition, f ≈ Γ_0. At later times, our assumptions that Γ ≳ 2 and β ≈ 1 break down, and the behavior of the fireball changes again. Such changes may be relevant to the radio behavior of gamma ray burst afterglows, but we will not consider them here.
Empirical Interpolations
To obtain a readily calculated burst behavior around time t_⊕,b, we can interpolate between the asymptotic behaviors for earlier and later times. We do this first for F_ν,m,⊕ and then for ν_⊕,m. We use interpolants of the form g = [g_1^(−κ) + (ǫ g_2)^(−κ)]^(−1/κ), where g_1 and ǫ g_2 represent limiting behaviors of an arbitrary function g for early and late times. The exponent κ determines the smoothness of the transition between the limiting behaviors. The scalar ǫ is introduced so that the numerically derived correction factors to the late-time asymptotic results can be applied.
For F_ν,m,⊕, the asymptotic behaviors are F_ν,m,⊕ constant and F_ν,m,⊕ ∝ t_⊕^(−1). We work with the scaled quantities defined in section 3.2.4, so that the break between the two asymptotic behaviors is expected for log(t̃_⊕) ∼ 0. We set g_1 = F_ν,m,⊕,b. We use equation 40 for g_2, and set the correction factor ǫ = 0.7. Finally, we choose κ = 0.4. The resulting interpolation is plotted atop the numerical integration results in figure 2. The asymptotic behaviors of ν_⊕,m are ∝ t_⊕^(−3/2) and ∝ t_⊕^(−2). In this case, we have taken g_1 from equation 28. For g_2, we take equation 34, and set ǫ = 0.74. Here we find κ = 5/6 works well. This interpolation is shown in figure 3.
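A sketch of this interpolation scheme, using the ǫ and κ values just quoted, follows; the asymptotic forms are written in the scaled (dimensionless) units, so g_1 and g_2 are simple power laws.

    import numpy as np

    # Sketch of the smooth interpolant described above. g1 and g2 are the
    # early- and late-time asymptotes (functions of scaled time), eps the
    # empirical late-time correction, kappa the transition sharpness.
    def interpolate(t, g1, g2, eps, kappa):
        return (g1(t)**(-kappa) + (eps * g2(t))**(-kappa))**(-1.0 / kappa)

    t = np.logspace(-2, 2, 200)                    # scaled time t/t_b
    F = interpolate(t, lambda t: np.ones_like(t),  # F_m constant before break
                    lambda t: 1.0 / t,             # F_m ~ t^-1 after break
                    0.7, 0.4)                      # eps_F, kappa from the text
    nu = interpolate(t, lambda t: t**(-1.5),       # nu_m ~ t^-3/2 before
                     lambda t: t**(-2.0),          # nu_m ~ t^-2 after
                     0.74, 5.0 / 6.0)              # eps_nu, kappa from the text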
Light Curves: Optically thin case
The afterglow light curve at fixed observed frequency is obtained by combining the predicted behavior of ν_⊕,m and F_ν,m,⊕ with the spectrum for a truncated power law electron energy distribution. We use the analytic results of section 3.2. Then we find four generic behaviors, depending on whether the frequency is above or below ν_⊕,m and whether the time is earlier or later than t_⊕,b. These are

    F_ν,⊕ ∝ t_⊕^(1/2)           (ν_⊕ < ν_⊕,m, t_⊕ < t_⊕,b)
    F_ν,⊕ ∝ t_⊕^(−3(p−1)/4)     (ν_⊕ > ν_⊕,m, t_⊕ < t_⊕,b)
    F_ν,⊕ ∝ t_⊕^(−1/3)          (ν_⊕ < ν_⊕,m, t_⊕ > t_⊕,b)
    F_ν,⊕ ∝ t_⊕^(−p)            (ν_⊕ > ν_⊕,m, t_⊕ > t_⊕,b)     (50)

Here p is the electron energy spectrum slope and α = (p − 1)/2 is the high frequency spectral slope, as usual. Note particularly how steep the light curve becomes for t_⊕ > t_⊕,b and ν_⊕ > ν_⊕,m(t_⊕).
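The four asymptotic decay slopes can be computed directly from p; the following sketch tabulates them. The functional forms follow the scalings above (the late-time t^−p slope is the one stated in the text), but the packaging into code is ours.

    # Sketch: asymptotic light curve slopes d log F / d log t implied by
    # the scalings above, for frequencies below/above the spectral peak
    # and times before/after the beaming break.
    def light_curve_slopes(p):
        alpha = (p - 1.0) / 2.0            # high-frequency spectral slope
        return {
            ("below_peak", "early"): 0.5,             # F ~ nu^{1/3}, nu_m ~ t^{-3/2}
            ("above_peak", "early"): -1.5 * alpha,    # = -3(p-1)/4
            ("below_peak", "late"): -1.0 / 3.0,       # F_m ~ t^{-1}, nu_m ~ t^{-2}
            ("above_peak", "late"): -(1.0 + 2.0 * alpha),  # = -p
        }

    print(light_curve_slopes(2.85))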
Three representative light curves are shown in figure 4. These have been derived by combining the empirically interpolated ν_⊕,m and F_ν,m,⊕ curves with the broken power law spectrum. Note that the rollover at the beaming transition (log(t̃_⊕) ∼ 0) is rather slow, so that observed behavior will be intermediate between the asymptotic power laws of equations 50 for a considerable time. This slow rollover is in part due to the compound nature of the break. The light curve decay begins to accelerate as soon as we can "see" the edge of the jet, when Γ < 1/ζ_m. The additional steepening when dynamical effects of beaming become important occurs slightly later, when Γ < Γ_b ∼ 0.23/ζ_m (cf. equation 17; cf. Panaitescu & Mészáros 1999 for additional discussion of this point).
Equation 50 assumes ν_⊕,abs < ν_⊕ < ν_⊕,cool throughout. (Here ν_⊕,abs is the self-absorption frequency, measured in the observer's frame.) If we now include the cooling break, we obtain additional light curve behaviors (equation 51, derived in Rhoads 1999b), where we have also assumed that ν_⊕,m < ν_⊕,cool. These behaviors are not shown in figure 4, but were used to fit the light curve of GRB 970508 with beamed afterglow models (Rhoads 1999b).
Light Curves: Optically thin case without sideways expansion
It remains possible to constrain gamma ray burst beaming by looking for light curve breaks even in the case where lateral expansion of the evolving burst remnant is unimportant. This corresponds to dropping our fourth assumption from section 2.
While these slope changes are less dramatic than those in section 3.2.6, they would be strong enough to detect in afterglow light curves with reasonably large time coverage and good photometric accuracy.
Optically Thick Case
We now consider briefly the electron energy distribution model of Paczyński & Rhoads (1993). This model differs from that of the preceding sections in a few ways. First, the electron power law index was fixed at p = 2 to avoid strong divergences in the total energy density in electrons. Second, the minimum electron energy E_min was taken to be sufficiently small that emission from electrons with E = E_min was always in the optically thick regime. Under these circumstances, there is a single break in the spectrum at the frequency corresponding to optical depth τ = 0.35 (cf. Pacholczyk 1970), with spectral slope ν^(5/2) below the break and ν^(−1/2) above. The magnetic field behavior is the same in this model and more recent ones.
Combining this electron behavior with the power law regime dynamical model reproduces the scalings of ν_⊕,m and F_ν,m,⊕ from Paczyński & Rhoads (1993). If we use instead the exponential regime dynamical model, we find modified scalings (equations 54). Readers interested in the precise numerical coefficients for these relations are referred to Paczyński & Rhoads (1993) for the spherical case. For the beamed case, numerical results may be found by applying the Paczyński & Rhoads (1993) results at the transition between the power law and exponential regimes, and continuing the evolution using equations 54.
The light curve for this electron model then follows by the same construction as before (equation 55). From these results, we see that the substantial changes in the observable behavior of a beamed burst are not dependent on the precise nature of the electron energy distribution.
DISCUSSION
We now put mathematics aside to recapitulate our results and to discuss their implications for the interpretation of afterglow observations.
We have shown that the dynamics of a gamma ray burst remnant change qualitatively when the remnant's Lorentz factor Γ drops below the reciprocal opening angle 1/ζ_m of the ejecta. Before this time, the Lorentz factor behaves as a power law in radius. Afterwards, the Lorentz factor decays exponentially with radius. The change occurs because lateral expansion of the ejecta cloud increases the rate at which additional material is accreted. Such lateral expansion is prohibited by symmetry in the spherical case.
When the remnant enters this "exponential regime," the relation between the observed spectrum and the observed light curve changes. Inferences about the electron energy spectrum in afterglows come from the light curve decay rate and spectral slope. The general agreement between the two methods has been taken as a confirmation of the (spherically symmetric) fireball model (Waxman 1997a).
The light curve decline at frequencies above the spectral peak becomes very steep (t_⊕^(−p), where p is the index of the electron energy spectrum) once the burst dynamics enter the exponential regime. Reconciling this relation with the observed decays (−1 ≳ d log(F_ν,⊕)/d log(t_⊕) ≳ −1.5) would require an extremely flat electron energy spectrum, and consequently a very blue spectral energy distribution. This was not seen in early observed spectral energy distributions (see Wijers et al 1997 for GRB 970228; Reichart 1997 and Sokolov et al 1997 for GRB 970508; and Reichart 1998 for GRB 971214). We infer that GRBs 970228, 970508, and 971214 were probably not in the exponential regime during their observed optical afterglows. GRB 971227 provides a possible, though dubious, counterexample. There is one image suggesting a counterpart (magnitude R ≈ 19.5) on December 27.91 (Castro-Tirado et al 1997). Later images show no corresponding source, requiring a decay at least as fast as t_⊕^(−2.5) (Djorgovski et al 1998). This is consistent with a t_⊕^(−p) decay for typical values of p. However, this explanation remains speculative, since there is no second image confirming the proposed counterpart.
Subsequent afterglows have provided a more hopeful picture for practical application of beaming models. In particular, GRB 990123 shows a break that is quite possibly due to beaming (e.g., Castro-Tirado et al 1999; Kulkarni et al 1999), and comparison of the spectral slope and decay slope for GRB 980519 gives better agreement for beamed than for spherical regime models (Sari et al 1999).
In the case of GRB 970508, we can place a stringent limit on the beaming angle in the context of our model. The optical light curve extends to ∼ 100 days after the burst and does not depart drastically from a single power law after day 2; thus, no transition to the exponential regime occurred during this time. As already noted, the spectral slope and light curve decay rate are in fair agreement for the spherical case, and poor agreement for the beamed case. The radio light curve furnishes the last critical ingredient. Goodman (1997) pointed out that diffractive scintillation by the Galactic interstellar medium is expected in early time radio data, and that this scintillation will stop when the afterglow passes a critical angular size. By comparing this characteristic size with the time required for the scintillations to die out, one can measure the burst's expansion rate. This test has been applied (Waxman, Kulkarni, & Frail 1998) and shows that Γ ≲ 2 at t_⊕ ∼ 14 days. Thus, no power-law break to faster decline is observed at Γ ≳ 2, and we infer that GRB 970508 was effectively unbeamed (ζ_m ≳ 1/2). This rough derivation is borne out by detailed fitting of beamed afterglow models to the GRB 970508 light curve, which yields the same beaming limit ζ_m ≳ 1/2 radian (Rhoads 1999b).
This conclusion, combined with the GRB 970508 redshift limit z ≥ 0.835, immediately implies a minimum energy for the burst. This burst was detected as BATSE trigger 6225, and the total BATSE fluence was (3.1 ± 0.2) × 10^−6 erg cm^−2 over the range 20-1000 keV (Kouveliotou et al 1997). The gamma ray emission alone therefore implies E_0 ≳ 4.7 × 10^51 (Ω/4π) erg ≳ 3 × 10^50 erg. Here we have based the luminosity distance on an Ω = 0.2, Λ = 0, H_0 = 70 km/s/Mpc cosmology, and applied the beaming angle limit ζ_m ≳ 0.5 radian in the second inequality. This conclusion is of course model-dependent and might change if our assumptions about the blast wave physics or beaming geometry are badly wrong. We will discuss possible ways to reduce the energy requirements of GRB 970508 while retaining consistency with the afterglow data in section 4.1 below.
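As a rough check of this arithmetic, one can recompute the numbers directly. The sketch below assumes a one-sided small-angle cone (Ω ≈ πζ_m², which reproduces the quoted limit) and takes the luminosity distance of 4.82 Gpc used later in the text.

    import numpy as np

    # Back-of-envelope check of the minimum-energy estimate above.
    # d_L = 4.82 Gpc is taken from the text; the one-sided cone solid
    # angle is our assumption.
    Gpc = 3.086e27                 # cm
    S = 3.1e-6                     # gamma-ray fluence, erg/cm^2
    z, d_L = 0.835, 4.82 * Gpc
    E_iso = 4.0 * np.pi * d_L**2 * S / (1.0 + z)         # ~4.7e51 erg
    zeta_min = 0.5                                        # beaming limit, rad
    omega_frac = (np.pi * zeta_min**2) / (4.0 * np.pi)    # Omega/4pi ~ 0.0625
    print(E_iso, E_iso * omega_frac)                      # ~4.7e51, ~3e50 erg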
If the beaming angle ζ_m is substantially variable from burst to burst, it is possible that some bursts enter the rapid decay phase before the spectral peak passes through optical wavelengths. Present data suggest that this is indeed the case; GRB 980519 is best fit by assuming exponential regime behavior (Sari et al 1999), while GRB 990123 appears to be a transition case with a break observed in the optical light curve (e.g. Castro-Tirado et al 1999; Kulkarni et al 1999; Sari et al 1999). The resulting rapid decay could then explain some of the optical nondetections of well studied GRBs such as 970828 (Groot et al 1997). Alternatively, for characteristic beaming angles 1 ≫ ζ_m ≳ 0.1, we would expect beaming to become dynamically important between the time of peak optical and radio afterglow. This would then help explain the paucity of radio afterglows, which unlike optical afterglows cannot be hidden by dust in the burster's environment. There is some evidence that the radio emission involves a different process, or at least a different electron population, from the optical and X-ray afterglows: the peak flux density in GRB 970508 did not follow a single power law with wavelength as it ought to under the simplest fireball models (Katz & Piran 1997b).
The transition in light curve behavior at Γ ∼ 1/ζ_m is also important for "blind" afterglow searches. Such searches would look for afterglows not associated with detected gamma ray emission. A much higher event rate for afterglows than for bursts is a natural consequence of beamed fireball models, since the afterglow emission peaks at lower bulk Lorentz factors than the gamma ray emission does. Comparison of event rates at different wavelengths can therefore constrain the ratio of beaming angles at those wavelengths (Rhoads 1997a). However, we will only see the afterglow if either (a) we are within angle ζ_m of the burst's symmetry axis, and therefore could also see the gamma ray burst, or (b) the Lorentz factor has decayed to Γ < 1/ζ_m and the afterglow light curve has entered its steep decay phase. We have already argued that GRB 970228, GRB 970508, and GRB 971214 were not in this steep decay phase based on the comparison of light curves and spectral slopes. It follows that if blind afterglow searches find a population of afterglows not associated with observed gamma rays, those afterglows will exhibit a steeper light curve decay than did the 1997 afterglows. The efficiency for detecting such rapidly fading "orphan" afterglows will be substantially lower than the efficiency estimated from direct comparison with spherical-regime afterglows.
Other models of beamed gamma ray bursts are possible. In particular, we have assumed a "hard-edged" jet, where the mass and energy emitted per unit solid angle are constant at small angles and drop to zero as a step function at large angles. Profiles in which these quantities decrease smoothly to zero may be more realistic. Whether these differ importantly from the model presented here depends on whether most of the energy is emitted into a central core whose properties vary slowly across the core. Layered jet models in which most of the kinetic energy from the fireball is carried by material with a low Lorentz factor can have substantially different afterglow light curves from either the spherically symmetric case or the hard-edged jet case. This is because the afterglow emission can be dominated by outer layers where the initial Lorentz factor is high enough to yield optical emission during ejecta deceleration, but insufficient to yield gamma rays. The afterglow is thereby effectively decoupled from the gamma ray emission, and it becomes harder to predict one from the other. Such models have been explored by several groups (e.g., Mészáros & Rees 1997b; Paczyński 1997). A similar decoupling of the gamma-ray and afterglow properties can be produced in the spherical case by allowing inner shells of lower Lorentz factor and larger total mass and energy to follow the initial high-Γ ejecta.
It is possible to approximate the afterglow from a layered jet by a superposition of hard-edged jets. For this to be reasonably accurate, the outer layers should have Lorentz factors substantially below those of the inner layers, and opening angles and energies substantially above those of the inner layers.
Energy Requirements for GRB 970508
We now consider how our model will change if we vary some of the basic assumptions. Our primary concern is to determine whether the minimum energy required to power the GRB 970508 afterglow can be reduced substantially below the requirements derived from a spherical adiabatic fireball model expanding into a homogeneous medium. We will therefore sometimes err on the side of extreme model assumptions chosen to minimize the energy needs. In order to declare a model consistent with the data, we require that either (1) there be no break in the light curve or spectrum around Γ ∼ 1/ζ m , or (2) the break occurs early (before t ⊕ ∼ 2 days) and the late time light curve shows a slow decline even for spectral slopes as red as those observed.
The first requirement is physically implausible. Even in the absence of the dynamical effects reported above, so long as the afterglow is from relativistically moving and decelerating material, its flux will scale with an extra factor Γ² once Γ ≲ 1/ζ_m. Since Γ decreases with time, a break is generally expected, though perhaps it could be avoided with sufficient fine-tuning of the model.
The second possibility is more interesting. It requires us to construct a model where factors besides beaming contribute relatively little to the decay of F_ν,m,⊕ with t_⊕, or where the observed spectrum does not directly tell us about the electron energy distribution. A burst expanding into a cavity (such that ρ increases with r) might give a slow decay, while a sufficiently large dust column density would give a red spectrum despite a flat electron energy distribution (cf. Reichart 1997, 1998). However, both would require some degree of fine tuning. The dust-reddened spectra would deviate measurably from pure power laws given good enough data, but the present data are probably equally consistent with both pure and reddened power law spectra. Certainly such reddened beaming models would imply little correlation between observed spectral slopes and light curve decays, since the dust column density could vary wildly from burst to burst. This hypothesis is somewhat ad hoc, but is consistent with present data and is supported by other circumstantial evidence linking GRBs to dust and star forming regions (e.g. Groot et al 1997; Paczyński 1998). At present, then, it appears the most viable way of reconciling beamed fireball scenarios with the 1997 afterglow data.
We now discuss a few variations of fireball models in greater detail.
Radiative case
We first consider the behavior of a radiative regime fireball. In this regime, the internal energy of the fireball is low, since it is converted to photons and radiated away. The largest implications for beaming are when the internal energy density is so low that c s ≪ c. In this case, the lateral expansion that leads to the exponential regime of burst remnant evolution in the adiabatic case is unimportantly small. We assume this low sound speed through much of the following discussion.
We assume that energy in magnetic fields and protons is transferred to electrons in the burst remnant on a remnant crossing time (∼ t_co). The electrons are assumed to maintain a power law energy spectrum, with a large E_max whose precise value is determined by the requirement that the burst radiate its internal energy efficiently. Under these circumstances, the Lorentz factor scales as Γ ∝ r^−3, and the comoving frame internal energy E_int of the remnant follows an evolution equation that admits a solution of the form E_int ∼ [πζ_m²ρc²(Γr³) − c_2 r^−4]/4, where c_2 is a constant of integration. At late times, we throw away the c_2 r^−4 term, which becomes negligible. The result, E_int ∼ πζ_m²ρc²Γr³/4, is constant since Γ ∝ r^−3 in this regime. If the sound speed becomes negligibly small at some point in the burst remnant evolution, then the volume of the shell scales as V ∝ r² thereafter. The magnetic field then scales as B ∝ r^−1, based on constant E_int. The observed peak frequency scales as ν_⊕,m ∝ Γ³B ∝ r^−10 ∝ t_⊕^(−10/7). The power radiated is simply ∼ Γ × (Γ − 1)c² × πζ_m²r²cρ in the observer's frame, but this is dominated by emission from electrons at E_max, which we have not calculated. The peak in F_ν,⊕ must be estimated as before. We have total comoving frame energy ∼ ξ_e E_int in electrons at E ∼ E_min, which is radiated over the comoving cooling time t_s ∼ 1/(ΓB²) ∼ V/(ΓE_int) ∼ r⁵. In the observer frame, this gives total power output ∼ Γ²E_int/t_s, accounting for factors of Γ from the Lorentz boost to photon energies and for the transformation between t_co and t_⊕. The frequency range containing this power scales as Δν_⊕ ∝ ν_⊕,m. So, in the spherical case, F_ν,m,⊕ ∝ Γ²E_int/(t_s Δν_⊕) ∝ r^−1. If we now allow for beaming, we introduce another factor of Ω_γ^−1 = Γ²/π and obtain F_ν,m,⊕ ∝ r^−7 ∝ t_⊕^(−1). Finally, putting in the spectral shape for fixed ν_⊕ > ν_⊕,m, we find that F_ν,⊕ ∝ t_⊕^(−(5p+2)/7). Thus, this radiative regime model yields scalings fairly similar to our canonical adiabatic model. In particular, the late time light curve again shows a steep decline. While the assumptions made here may not be fully self-consistent, allowing c_s ∼ c would likely further steepen this decline.
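The exponent bookkeeping in this paragraph can be checked mechanically; the following sketch does so (the exponent variable names are ours).

    from fractions import Fraction as Fr

    # Sketch: exponent bookkeeping for the radiative-regime scalings
    # above. Exponents are powers of r; observed time t ~ r/Gamma^2 ~ r^7.
    g_Gamma, g_B = -3, -1
    g_t = 1 - 2 * g_Gamma                 # t_obs ~ r^7
    g_nu = 3 * g_Gamma + g_B              # nu_m ~ Gamma^3 B ~ r^-10
    print(Fr(g_nu, g_t))                  # nu_m ~ t^(-10/7)
    g_F_beamed = -7                       # F_nu,m ~ r^-7 (with beaming factor)
    print(Fr(g_F_beamed, g_t))            # F_nu,m ~ t^-1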
This result suggests that the GRB 970508 data cannot easily be reconciled with a beamed radiative afterglow model.
Beamed burst, isotropic afterglow
Gamma ray bursters may give rise to both fast and slow ejecta, where "slow" here means too slow to cause gamma ray emission. In this case, the optical and γ-ray properties of the event may be effectively decoupled if the slow wind contains most of the energy.
Suppose the gamma ray burst is caused by a small amount of extremely relativistic ejecta, while the afterglow is caused primarily by a greater mass of material with low Γ_0. The afterglow light curve places almost no direct constraint on the isotropy of the first (high-Γ_0) material. However, the low-Γ material must be reasonably isotropic to avoid a visible break in the light curve at late time. To explain a peak optical flux of ∼ 30 µJy, we need a total energy given by equation 58. We compare this to the optical fluence of the burst, which we estimate by integrating the model flux density over frequency and time. Inserting our broken power law spectrum and the dependence of ν_⊕,m on t_⊕ yields equation 60, where ν_1 is an arbitrary reference frequency, and t_⊕,m(ν_1) is defined as the moment when ν_⊕,m = ν_1. Setting ν_1 = 6 × 10^14 Hz (corresponding to wavelength 0.5 µm), F_ν,m,⊕ = 30 µJy, t_⊕,m(ν_1) = 2 days, and p ≈ 2.85 (corresponding to a t_⊕^(−1.4) light curve) yields Q_opt = 3.0 × 10^−7 (ν_max/ν_1)^(1/3) erg cm^−2. (A similar calculation using p = 2.2 and accounting for the additional break at the cooling frequency yields a similar fluence, 3.8 × 10^−7 erg cm^−2, for ν_max = ν_1 = 6 × 10^14 Hz. This is the value used in Rhoads 1999b.) Taking luminosity distance 4.82 Gpc and considering only optical and longer wavelength afterglow, the smaller optical fluence estimate implies E = 4.5 × 10^50 (Ω/4π) erg ≳ 2.8 × 10^49 erg. We have applied our beaming limit, ζ_m ≳ 0.5 radian, to derive the lower limit here. If we take ν_max corresponding to the soft X-ray afterglow, the energy rises by another factor of ∼ 10. These fluence-based energy needs are dangerously close to exceeding the energy requirements from equation 58. Since the latter equation is based on an energy-conserving model, this comparison shows that ξ_B must be substantially below 1 and/or the density substantially below 10^−24 g/cm³ if the model is to be self-consistent. Otherwise, the total energy radiated is comparable to or greater than the total energy available. Reassuringly, the density and magnetic energy fraction found by Wijers & Galama (1998) are roughly consistent with these requirements. This consistency check could be refined by replacing equation 60 with a more detailed fluence calculation.
Layered jet models
In this class of models, considered for example by Panaitescu, Mészáros, & Rees (1998), the material dominating the emission changes continuously through a range of initial Lorentz factors. We can approximate such models as a superposition of many "hard-edged" jet models. We have tried developing such models while minimizing the energy requirements. To do this, we build a sequence of adiabatic hard-edged jets, enforcing either the condition ν_⊕,m = ν_⊕ or the condition t_⊕,b = t_⊕ throughout the afterglow evolution, and then adjusting the input energy requirement to match the observed light curve. (Here ν_⊕ denotes the fixed frequency at which our data was taken.) A preliminary exploration of such models has not yielded any drastic reduction in energy requirements. A more thorough study may be needed to make this conclusion firm.
Inhomogeneous environments
The predictions of fireball models change somewhat if the ambient medium is not uniform. To date, investigations of variable density environments have concentrated on density laws ρ ∝ r −g (Vietri 1997b;Panaitescu, Mészáros, & Rees 1998). The best motivated choices of g are g = 0 and g = 2, which correspond to a uniform density medium and the density profile expected from a constant speed wind from the burst progenitor expanding into a vacuum (or an ambient medium of much lower density). When the ambient density decreases with increasing distance from the burster, the general result for spherical symmetry is a faster decay of the afterglow flux (e.g. Panaitescu et al 1998) (though the duration of the afterglow should increase correspondingly). We therefore infer that a decreasing density profile will also steepen the light curve decline in the beamed case. This only exacerbates the disagreement between the observed slow afterglow decay and the model predictions for beamed bursts, given the observed spectral energy distribution.
It is worth asking how the exponential regime of burst remnant evolution (section 2.2.2) will change if the ambient density is nonuniform. The exponential scale length is r_Γ ∝ ρ^-1/3 in the uniform density case. For a density profile ρ = ρ_1 (r/r_1)^-g, we therefore conjecture that a solution of the same exponential character may be possible, with an effective scale length r_Γ = [r_Γ(ρ_1)³ r_1^-g]^(1/(3-g)), where r_1 and ρ_1 are a fiducial radius and the corresponding density.
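For the uniform-density case, the two regimes plotted in figure 1 can be mimicked with a toy Γ(r): a power-law decline Γ ∝ r^-3/2 until Γ falls to 1/ζ_m, then exponential decay beyond that radius. This is a minimal sketch of the qualitative behavior, not the paper's numerical integration; the normalization and the slope-matching choice for r_Γ are assumptions of this sketch.

```python
import numpy as np

def lorentz_factor(r, gamma0=300.0, r0=1.0, zeta_m=0.01):
    """Toy Gamma(r): Gamma ~ r^-3/2 until Gamma ~ 1/zeta_m, then exponential decay.
    Choosing r_gamma = (2/3) * r_break makes the logarithmic slope continuous
    at the transition (d ln Gamma / d ln r = -3/2 on both sides there)."""
    g_break = 1.0 / zeta_m
    r_break = r0 * (gamma0 / g_break) ** (2.0 / 3.0)  # where Gamma reaches 1/zeta_m
    r_gamma = (2.0 / 3.0) * r_break
    r = np.asarray(r, dtype=float)
    return np.where(
        r < r_break,
        gamma0 * (r / r0) ** (-1.5),
        g_break * np.exp(-(r - r_break) / r_gamma),
    )
```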
Observational concerns
Throughout this discussion, we have tacitly assumed that the spherical fireball does fit the GRB 970508 observations well, so that difficulties with the various beaming models offer support to the spherical model. This is open to question. In defense of the spherical model, Reichart (1997) has studied the largest available afterglow data set from a single optical observatory (that of Sokolov et al 1997) and finds that the afterglow of GRB 970508 is well fitted using a standard model with the addition of a modest amount of dust extinction at high redshift. By using a single data set, he minimizes many of the possible systematic errors, such as inconsistent zero points for absolute photometry.
On the other hand, if one examines the spectral slopes from mixed data sets over larger wavelength intervals (optical to near-infrared) and a larger time range, some worrying data points emerge. In particular, the HST observations (Pian et al 1998) exhibit a spectral slope α ≈ 1.5 ± 0.3 based on quoted R (0.7 µm) and H (1.6 µm) band magnitudes from the STIS and NICMOS instruments. The observed slope in Sokolov et al's data is 1.1, which Reichart interprets as a reddened spectrum with intrinsic slope 0.8. The HST data point is thus in mild conflict with the Sokolov et al observation. The significance of this conflict is unclear, since the error on the HST data is dominated by calibration uncertainties.
Likewise, if we compare the spectral slopes inferred from the Keck K_s band data (Morris et al 1997) and near-contemporaneous optical data (Keleman 1997), we find slopes of 0.24 ± 0.12 at t_⊕ = 4.35 days and 0.40 ± 0.10 at t_⊕ = 7.35 days. These are now substantially bluer than the value from Sokolov et al. The first of these may simply indicate that the K band flux has not yet passed its peak and entered the power law decay phase; scaling from the R band peak at t_⊕ ≈ 1.9 days gives a K band peak at ∼4.1 days. The second is harder to explain physically but easier observationally, because the R band flux estimate is based on an unfiltered observation that may have (for example) a substantial color term.
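The two-band slopes quoted here follow from α = log(F_red/F_blue)/log(λ_red/λ_blue) for F_ν ∝ ν^-α. A minimal sketch, assuming standard Vega zero points for the R and K_s bands (the zero points, effective wavelengths, and example magnitudes below are assumptions for illustration, not values from this paper):

```python
import math

# Assumed Vega zero points (Jy) and effective wavelengths (micron); NOT from the paper.
BANDS = {"R": (3064.0, 0.70), "Ks": (666.7, 2.16)}

def flux_jy(band, mag):
    """Convert a Vega magnitude to a flux density in janskys."""
    zero_point, _ = BANDS[band]
    return zero_point * 10 ** (-0.4 * mag)

def spectral_slope(blue_band, blue_mag, red_band, red_mag):
    """alpha in F_nu ~ nu^-alpha, estimated from two photometric bands."""
    f_blue = flux_jy(blue_band, blue_mag)
    f_red = flux_jy(red_band, red_mag)
    lam_blue = BANDS[blue_band][1]
    lam_red = BANDS[red_band][1]
    # nu_blue / nu_red = lam_red / lam_blue
    return math.log(f_red / f_blue) / math.log(lam_red / lam_blue)

print(spectral_slope("R", 21.2, "Ks", 17.9))  # placeholder magnitudes
```

The statistical error on α propagates as 0.4 ln(10) √(σ_1² + σ_2²)/ln(ν_1/ν_2), which is why slopes measured over short wavelength baselines carry large uncertainties.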
The net effect of such outlying data points is illustrated by the light curve fits in Rhoads 1999b. These fits achieve χ² per degree of freedom around 3.6 in fitting to a large compilation of R band data (Garcia et al 1998). It is not likely that any current model can do better without discarding either predictive power or outlying data points.
In summary, the spherical model does fit the GRB 970508 afterglow better than the beamed model developed in this paper for any beaming angle ζ_m < 0.5 radian. There are a few discrepancies in the measured spectral slopes. If these are real, they pose a challenge to standard fireball models. However, they could merely be indicative of calibration problems in inhomogeneous data. It is noteworthy that Sokolov et al (1997), who have the largest multiband data set from a single telescope, find no evidence for spectral slope evolution over the interval 2 days ≤ t_⊕ ≤ 5 days.
CONCLUSIONS
We have shown that under a simple model of beamed gamma-ray bursts, the dynamical evolution of the burst remnant changes at late times. This change introduces a break in the light curve, which is potentially observable. The afterglows of GRB 970508, 970228, and 971214 showed no convincing evidence for such breaks, and their combined spectral slopes and light curves are inconsistent with the predictions of this beamed model. This implies that beaming tests based on blind searches for afterglows must be prepared to identify transients whose properties differ appreciably from the properties of these approximately spherical afterglows. Some more recent afterglows (GRB 990123; GRB 980519) better match beamed burst models, and may provide more suitable templates for these searches.
Comparing our model with late time optical and radio observations, we suggest that GRB 970508 was effectively unbeamed. This implies energy requirements that are near the canonical isotropic values for cosmological distances, and are not greatly mitigated by strong beaming. No straightforward variation on our beamed fireball model seems likely to simultaneously explain the observed spectral slope and a pure power law light curve decaying at the observed rate. We conjecture that strongly beamed fireball models cannot explain all observed gamma ray burst afterglows without substantially altering at least one major ingredient of the models. We therefore obtain the first lower bound on GRB energy requirements that does not involve assumptions about beaming: E ≳ 3 × 10^49 erg. This limit will increase by an order of magnitude if the same material that gives rise to the optical afterglow causes either the X-ray afterglow or the gamma ray emission, and will rise further if the burst's energy is not converted to radiation with perfect efficiency.

I wish to thank Ralph Wijers, Jonathan Katz, Tsvi Piran, Eli Waxman, Daniel Reichart, Alexander Kopylov, David De Young, and Sangeeta Malhotra for helpful communications. I also wish to thank the Infrared Processing and Analysis Center for hospitality during the course of this work. Finally, I wish to thank all those who worked to achieve accurate gamma ray burst position measurements and so opened the way for afterglow studies. This work was supported by a Kitt Peak Postdoctoral Fellowship. Kitt Peak National Observatory is part of the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy.

Fig. 1. Dependence of the bulk Lorentz factor Γ on the burst expansion radius for an isotropic burst and a burst beamed into an opening angle ζ_m = 0.01 radian. Both bursts follow a Γ ∝ r^-3/2 evolution initially, but the beamed burst changes its behavior at Γ ≈ 100 ≈ 1/ζ_m, beyond which its Lorentz factor decays exponentially with radius.

Fig. 2. Dimensionless peak flux as a function of dimensionless observer-frame time. Points show the results of numerical integrations. Dashed lines show the analytic asymptotic forms from equations 30 and 40, scaled to fiducial values as described in section 3.2.4. The late-time flux density is below the prediction of equation 40 by a factor of ∼0.7. This stems from approximations in the exponential regime initial conditions (equations 16 to 19), which are derived by applying the power law regime results beyond their range of strict validity. The solid curve shows an empirical interpolation between the early and late-time analytic forms, incorporating a factor 0.7 correction to the late-time asymptotic form (see section 3.2.5).

Fig. 3. Dimensionless frequency at which the spectrum peaks as a function of dimensionless observer-frame time. The y-axis shows log_10(ν_⊕,m t_⊕²), so that a t_⊕^-2 decay of the peak frequency with time appears as a horizontal line. As in figure 2, points show the results of numerical integrations; dashed lines show analytic asymptotic forms (here from equations 28 and 34), and the solid line is an empirical interpolation. The late-time asymptotic frequency given by equation 34 is seen to be too large by a factor ∼1.35. This is due to the approximate initial conditions used to derive equation 34, and a correction factor has been applied in deriving the interpolated curve (see section 3.2.5).

Fig. 4. Sample light curves in dimensionless units.
The curves have been derived by combining a broken power law spectrum (with electron energy distribution slope p = 2) with the interpolated, dimensionless forms of the F_ν,m,⊕(t_⊕) and ν_⊕,m(t_⊕) relations (see section 3.2.5 for details). The light curves correspond to frequencies 10^8 ν_⊕,m,b, 10^1.5 ν_⊕,m,b, and 10^-8 ν_⊕,m,b, in order of decreasing flux density at the earliest time plotted. All the asymptotic behaviors described in equations 50 are exhibited here, though the additional cooling break (equation 51) is omitted. The transition between the behaviors for t_⊕ ≪ t_⊕,b and t_⊕ ≫ t_⊕,b is rather gradual. The other transition, between t_⊕ ≪ t_⊕,m(ν_⊕) and t_⊕ ≫ t_⊕,m(ν_⊕), is artificially sharp in these plots because the adopted spectrum has a discontinuous slope at ν_⊕,m. A more detailed treatment of the spectrum would smooth out this transition also. | 2014-10-01T00:00:00.000Z | 1999-03-25T00:00:00.000 | {
"year": 1999,
"sha1": "5eaa08df29ecd16f975b06dcdf5bfdb8bca113f4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/astro-ph/9903399",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d13b308c065aaee67a3b35b9313e5a5a79a897a4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258358716 | pes2o/s2orc | v3-fos-license | Epipactis bucegensis—A Separate Autogamous Species within the E. helleborine Alliance
A new species of Epipactis from Bucegi Natural Park ROSCI0013, Southern Carpathians, Central Romania is described. Three medium-sized populations of Epipactis bucegensis (65–70 individuals in total) were discovered in the south-eastern, subalpine area of the park. To properly describe and distinguish the newly found taxon from other Romanian Epipactis, 37 morphological characters were measured directly from living plants and flowers. Moreover, a detailed taxonomic treatment and description with corresponding colour photos and line-drawing illustrations of the holotype are also included. Epipactis bucegensis is an obligate autogamous species that partially resembles Epipactis muelleri, from which it differs in the basal distribution of leaves on the stem (vs. median distribution); near-erect leaf posture (vs. horizontally spread, arched downwards); lanceolate–acuminate, yellowish-green leaves (vs. oval–elongate, vivid-green leaves); bipartite labellum lacking the mesochile (vs. tripartite labellum); crimson-red, wide, ovoid–elongated, flattened hypochile (vs. dark-brown to black roundish hypochile); triangular, white epichile with a sharply tapering apex (vs. heart-shaped, greenish-yellow epichile with obtuse, roundish apex); and two wide-apart, purple, pyramidal calli (vs. two closely placed, attenuated, mildly wrinkled, greenish-yellow calli). Epipactis bucegensis is easily distinguished from all other European Epipactis taxa by the bipartite, wide labellum that lacks the mesochile. In addition, information regarding its distribution (maps), habitat, ecology, phenology and IUCN conservation assessments is provided.
Introduction
Genus Epipactis Zinn, 1757 belongs to Tribe Neottieae Lindl., 1826, Subfamily Epidendroideae Lindl., 1821, Family Orchidaceae Juss., 1789. The generic name, Epipactis, originates in the ancient Greek word epipaktís, a name given for the first time by Greek philosopher and botanist Theophrastus of Eresos (ca. 371-ca. 287 B.C.E.) to a herbaceous plant that curdled milk, possibly a member of the highly poisonous, unrelated genera of invasive plants Helleborus (Family Ranunculaceae, known as Hellebores) and Veratrum (Family Melanthiaceae, known as the False Hellebores). Since Epipactis orchids have been associated with the poisonous, invasive Hellebores from ancient times, the generic vernacular name of this genus remains, to this day, the Helleborines [1].
In this paper, we describe and illustrate a new autogamous species within the E. helleborine alliance named Epipactis bucegensis. The first encounter with Epipactis bucegensis took place in July 2009 during an orchid field study in the south-eastern part of Bucegi Natural Park, Southern Carpathians (Figure 1C, red dots). At first sight, in the harsh light of the melting-hot summer days, the plants looked rather like a peculiar group of yellowish, withered Epipactis helleborine (L.) Crantz, perfectly camouflaged among the brownish, grassy, surrounding vegetation. However, the most striking features were the elongated inflorescences bearing several inconspicuous, creamy-white, closed flowers hanging on the pendant, yellowish ovaries, a clear indication of an autogamous species, different from the common, allogamous Epipactis helleborine (L.) Crantz, which is sporadically found in the area. After a closer examination, which involved manually opening several flowers, we also noticed the unusual, unique structure of the nectarless labellum, which completely lacked the middle narrowing junction, a feature that differentiated it from any other Romanian Epipactis species. Furthermore, the absence of the viscidium and the crumbling, disintegrating pollinia reinforced our initial supposition of a distinct, autogamous taxon. Subsequently, in the summer of 2022, several expeditions to Bucegi Natural Park were made, and during these field trips, two new populations of the same taxon were discovered. The most distinctive morphological features proved to be highly preserved and consistent, the new specimens showing little to no variation regarding the yellowish aspect of the plants, the creamy-white closed flowers and the nectarless labellum completely lacking the middle junction. Undeniably, the newly discovered populations confirmed, once more, the occurrence of a persistent, new species, well-established within the south-eastern area of the park. Consequently, we chose to formally describe this new taxon as Epipactis bucegensis, with the confidence that, in the years to come, more undiscovered populations will be revealed within the park's greater area.
Additionally, we provide information about its geographical distribution, habitat, ecology, phenology and IUCN conservation status, together with illustrations and photographs based on living specimens (the holotype).
Sites Studied
The study sites were on wet to dry, calcareous substrates, next to deciduous to mixed woodland, at altitudes between 700 and 1100 m a.s.l. The populations occurred in sunny meadows and pasturelands, neighbouring the margins of mixed forests covering subalpine slopes, close to urban sites (Figure 1C, red dots).
Morphological Comparisons
Despite the availability of modern molecular techniques, a quick and simple tool to recognize a taxon in field conditions is still required; thus, morphological comparisons prevail in plant identification [23]. Meanwhile, taking into consideration the great phenotypic plasticity of the genus, the macro- and micromorphological features that can be used in taxon delimitation should be carefully assessed. A detailed comparison emphasizing the most significant morphological characters that distinguish Epipactis bucegensis from the related species is shown in Table 1.
Morphological Distinctness of Epipactis bucegensis
Epipactis bucegensis is morphologically comparable to the autogamous Epipactis muelleri Godfery and the allogamous Epipactis helleborine (L.) Crantz, but significantly differs from these species in several main characteristics of the vegetal and floral parts (Figures 5-8).
The lowermost leaf size and shape are species-specific features and thus pivotal in species delimitation/separation within the Epipactis genus [5,7,9]. As such, the lanceolate-elongated, tapering Epipactis bucegensis basal leaf, different from the characteristic roundish-oval basal leaf of Epipactis helleborine (L.) Crantz, clearly differentiates/separates the two taxa as separate species. Epipactis bucegensis leaves' distribution on the stem (phyllotaxis) is mainly basal, very different from the middle-stem distribution characteristic of Epipactis muelleri Godfery and Epipactis helleborine (L.) Crantz. The leaf posture is spreading to erect, sheathing to a subtended angle of ca. 30° relative to the stem, differentiating it from Epipactis muelleri Godfery, in which the elongated, arched leaves spread horizontally, curving downwards. The leaf shape is elongate-lanceolate, acuminate, tapering at the tip, vs. the ovoid-elongate, acuminate leaves of Epipactis muelleri Godfery and the broadly ovoid to ovoid-elongated, horizontally spread leaves of Epipactis helleborine (L.) Crantz. The leaf colour, especially in young individuals, is yellowish to yellowish-green, different from the light- to deep-green leaves of the compared species (Figures 2B and 4A,C,E). The colour of the sepals and petals is white to whitish-yellow, vs. the whitish-green to greenish-yellow tepals of Epipactis muelleri Godfery (Figures 2A-E, 4B, 5A and 7A,B).
Epipactis bucegensis's unique labellum structure represents its main distinctive feature, making it easily distinguishable, not only from Epipactis muelleri Godfery, but from all other European Epipactis species (Figures 2D,E, 3A,F,I,M, 5A and 7A). Specifically, the labellum is bipartite, formed of only two parts, the hypochile and epichile, with a completely absent mesochile. By comparison, Epipactis muelleri Godfery and Epipactis helleborine (L.) Crantz have tripartite labella, with a well-defined mesochile, the narrow junction between the hypochile and epichile. The complete absence of the mesochile is the most distinctive feature of the species (Figures 2D,E, 3I,M, 4A,C,E and 7A,B,D,E). The hypochile shape is characteristically wide, ovoid and flattened, vs. the orbicular, cup-like, deep labellum of Epipactis muelleri Godfery. The hypochile inner wall is shiny, dry, crimson-purple-coloured and unusually wrinkled, completely different from that of Epipactis muelleri Godfery, which is deep, cup-shaped, roundish, shiny, smooth, blackish-brown and mildly nectar-secreting. The epichile is reduced, triangular, flat, smooth and tapering, vs. the wide-obtuse, deeply wrinkled epichile of Epipactis muelleri Godfery and Epipactis helleborine (L.) Crantz. Its colour is invariably bright white vs. the greenish-yellow epichile of Epipactis muelleri Godfery. The two basal calli found at the base of the epichile are also highly specific, pyramid-shaped, tooth-like, prominent, wide apart, crimson-purple-coloured and smooth/non-wrinkled, vs. the significantly wrinkled and attenuated, greenish-yellow (very rarely pale-pinkish-washed) calli of Epipactis muelleri Godfery (Figures 3I, 4B, 5A and 7A,B).
The gynostemium (column) is specific, with the anther significantly angled relative to the stigma (typical of an obligate autogamous species), differentiating it from the erect gynostemium of the allogamous Epipactis helleborine (L.) Crantz. The stigma shape is rectangular, wider than long, bilobed, roof-like and entirely flat, vs. the quadrangular, bilobed, deeply V-shaped and concave one in Epipactis muelleri Godfery.
Epipactis bucegensis can be distinguished from Epipactis helleborine (L.) Crantz by its modified anther morphology associated with its pollination strategy, obligate autogamy ( Figures 6-8). The rostellum and viscidium are completely absent, which distinguishes Epipactis bucegensis from the allogamous Epipactis helleborine (L.) Crantz, in which the rostellum and viscidium structures are well-developed and functional ( Figures 5A,B,E,F and 6A,B,E,F). The clinandrium is absent, the highly friable pollinia lying free in the anther, crumbling onto the upper part of the stigmatic surface, vs. the more compact pollinia of Epipactis helleborine (L.) Crantz, enclosed in the clinandrium and well-separated from the stigmatic cavity by the roof-like rostellum ( Figures 5A,B,E,F and 6A,B,D,E).
The purple-pigmented flower pedicel base of Epipactis bucegensis clearly distinguishes it as a separate species from Epipactis muelleri Godfery, in which the pedicels' bases are yellowish to light-green ( Figure 8A-D,F). The pedicel-base pigmentation is an essential morphological feature (key) in Epipactis species delimitation/separation [5,7]. The fruit of Epipactis bucegensis is also specific, highly distinct from Epipactis muelleri Godfery. In mature stages, it is pear-shaped, dark-green, purple-washed and strongly ridged on the surface, vs. the elongated, light-green, smoother-surfaced fruit of Epipactis muelleri Godfery ( Figure 8A,B). Epipactis bucegensis was also closely compared to the European autogamous species described in detail in the comprehensive, abundantly illustrated database of the Arbeitskreis Heimische Orchideen Bayern e.V. [10], but no similar taxon was observed. The most important feature that distinguishes Epipactis bucegensis from all other European Epipactis taxa is the bipartite, wide labellum that totally lacks the mesochile ( Figures 3I, 4B, 5A and 7A). Therefore, given the significant morphological distinction, its reproductive isolation and its consistent establishment in Bucegi Natural Park, we consider Epipactis bucegensis to be a separate (obligate) autogamous species within the Epipactis helleborine alliance.
Morphological Changes to Autogamy
Orchids of the genus Epipactis that transition from allogamy to autogamy have to go through various overall morphological changes. To enable autogamy, the pollen should be able to reach the stigma. This is achieved by various adaptations of the flower morphology [30]. The transition from chasmogamous to cleistogamous flowers and some modifications in the architecture of the gynostemium and pollinia structure enable the flowers to switch the pollination strategy from allogamy to (near-) obligate autogamy [31,32]. Allogamous Epipactis species attract their specific pollinators with several floral signals, such as flower shape, coloration and complex floral scents (floral volatiles), and reward them with copious amounts of nectar [33]. Nectar is mainly secreted in the concave basal part of the labellum, known as the hypochile (Figures 4D, 5E and 7D). The transition from allogamy to autogamy/cleistogamy is regarded as a more efficient way for the plant to use its energetic/nutritional resources [34].
Epipactis bucegensis is an obligate autogamous species that does not require the presence of pollinators, showing all the particular morphological transformations of a typical selfing species. Its flowers are cleistogamous, pendant, scentless and inconspicuously coloured. By blocking the anthocyanin pigment synthesis, the flowers become whitish-creamy to yellowish-green, perfectly camouflaging the plant against the brownish-greenish background of the hot summer, sun-burnt, grassy vegetation characteristic of its preferred habitat (Figures 2 and 4A,B; note: in Figure 2D,E, for the purpose of this study, some of the flowers were hand/manually opened to clearly show the morphology of the floral parts).
However, despite being an obligate autogamous species, Epipactis bucegensis has not lost the ability to produce faint traces of floral nectar (Figures 2A-D, 3B,F,I,J,M, 4B, 5A and 7A,B). The finding was rather surprising since orchids commonly use nectar to attract their pollinators. We found only minute droplets of nectar that accumulated inside the hypochile of the one to two topmost, young flowers ( Figure 2E). Minute nectar production was reported several times in other autogams, such as Epipactis albensis Nováková and Rydlo [17,33,35], Epipactis muelleri Godfery [11] and Epipactis leptochila (Godfery) Godfery [30]. These obligate autogams are relatively young species that recently diverged from within the evolutionarily active Epipactis helleborine alliance [36]. Moreover, recent studies showed that the chemical composition of Epipactis albensis Nováková and Rydlo nectar and scent is partially similar to those of the closely related allogamous species Epipactis helleborine (L.) Crantz, further proving its evolutionary origin [33]. The above examples constitute indicative examples of species that transition from ancestral allogamous, insect-pollinating species to obligate autogamy. While still retaining some early features, such as nectar and scent production, these orchids became obligate autogamous/cleistogamous, making insects' visits nearly impossible [30]. The synthesis of floral attractants or stimuli, i.e., olfactive (scent, odours), food (nectar, food bodies, exudates) and visual stimuli (pigments, colours, shapes, sizes), is highly energy-costly for the plants [37,38]. Once their production is terminated/ceased, the spared nutrients are used by the plants to produce higher numbers of mature, fertile seeds, crucial for their survival and proliferation, a stage regarded as particularly difficult for newly emerged taxa (such as Epipactis bucegensis) in the full process of colonising new, nutrient-poor niches [39].
Further, the gynostemium also suffered several morphological transformations. Similar to other autogams, the gynostemium of Epipactis bucegensis completely lost the apical structure of the column, termed the clinandrium or anther-bed (Figures 6A,B and 7C,D), an indicative characteristic of autogamous species [40]. In allogamous species, this spacious, hollow structure, situated above the stigma, houses the pair of pollinia, preventing the pollen tetrads from falling off the anther (Figures 5E,F, 6E,F and 7A,B). At dehiscence, due to the lack of the clinandrium, the pollinia, which lay freely in the anther, are projected forwards, falling onto the underlying stigmatic cavity (Figures 3B,C,E-G, 5B, 6A,B and 7A,B). The sessile anther angles even more relative to the stigma, further inclining the pollinia, which can thus easily contact the stigmatic surface (Figures 3C,G, 5B, 6B and 7A,B). The same pollination strategy is employed by other autogams, e.g., Epipactis muelleri Godfery (Figures 4C,D, 5C,D and 6C,D).
Additionally, the pollinia of Epipactis bucegensis gradually lost coherence and became more friable (Figures 3H, 5B, 6A,B and 7C), disintegrating into individual tetrads or groups of tetrads [28,39]. More compact pollinia, e.g., the pollinia of Epipactis helleborine (L.) Crantz (Figures 5E,F, 6E,F and 7D,F), prevent the pollen from falling on the stigmatic cavity [41]. When the pollinia are less coherent, the pollen grains crumble on the stigmatic surface, enabling rapid self-pollination [42]. In the case of Epipactis bucegensis, the friability of pollinia is also environmentally dependent. Quite often, external factors, such as high temperatures, humidity and air currents, were reported to influence their friability [43]. In Romania, in July, the outside temperatures may reach 38-40 °C, which causes the pollinia to expand and become even more friable. Apart from the external factors, the flowers hang on fairly long and flexible pedicels, very sensitive to any externally generated movements, such as wind or water drops, which may swing the flowers in all directions. Such movements further increase the disintegration of the mealy pollinia, which crumble onto the viscous stigmatic cavity, situated just below the anther.
Selfing in Epipactis bucegensis is also efficiently promoted by the complete lack of a rostellum, the swollen apical part of the median stigmatic lobe [44], which is well-developed in allogamous Epipactis species (Figures 5F, 6E,F and 7E). According to Uphof (1968), 'a characteristic of the cleistogamic orchid flower is a very rudimentary rostellum or its absence' [45]. In allogamous Epipactis taxa, a well-developed rostellum creates the most important physical barrier between the male/pollinia and female/stigma parts of the flower, preventing self-fertilization [2,46,47]. In most self-pollinated orchids, however, this structure either does not develop, as in Epipactis bucegensis (Figure 6A,B), or, as in Epipactis muelleri Godfery (Figures 5C,D and 6C,D), it develops incompletely or sometimes disintegrates during flowering [48]. An important feature in autogams is that, in the absence of the rostellum, the stigmatic cavity usually becomes more active and hypersecreting, being covered in abundant, viscous stigmatic exudate. This is easily observed in Epipactis bucegensis, in which the stigma and, in particular, the lateral prominent stigmatic lobes are heavily loaded with viscous, translucent stigmatic exudate (Figure 2E). Just after the impregnation of the pollen grains with the stigmatic secretions, the pollen tetrads start to germinate, producing elongated tubes that grow, fertilizing the ovules (Figures 5B, 6A-D and 7A,B). The pollinia are thus fixed in the anther, immobile, continuously shedding tetrads, a feature that can be observed in many autogamous species [49]. Robatsch (1983) estimated that 60% of Epipactis orchids are autogamous, characterized by having powdery pollen that falls onto the stigma [50,51] due to degeneration of the rostellum and relatively low nectar and odour production. In cross-pollinated species, the tip of the rostellum produces adhesive substances, forming a viscidium [52]. In allogamous, insect-dependent Epipactis orchids, the viscidium is a protruding sphere-like extension composed of a milky, adhesive liquid, surrounded by a viscidial membrane (Figures 5E,F, 6E,F and 7D-F(a-c)), which connects the viscidium to the pollinarium [36,44]. The main role of the viscidium is to adhere to the pollinators' bodies and dislodge the pollinia from the anther during pollination (Figure 7F(a,b)). The presence of a large, viscous viscidium ensures that the pollinia are removed by pollinators and hence, the level of autogamy is decreased.
Pollination Monitoring
True Epipactis pollinators are usually large, strong insects capable of carrying the heavy load of pollinia. Our observations included various hymenopterans (wasps, family Vespidae; bees, family Apidae; bumblebees, mainly genus Bombus; and ants, family Formicidae), coleopterans (beetles of the families Cerambycidae and Oedemeridae) and large dipterans (forest flies, family Anthomyiidae). They usually feed on copious amounts of nectar secreted by allogamous Epipactis species such as Epipactis helleborine (L.) Crantz, Epipactis purpurata Sm., Epipactis distans Arv.-Touv. and Epipactis atrorubens (Hoffm.) Besser [19]. In Epipactis bucegensis, the viscidium is not formed as a consequence of the absence of the rostellum (Figures 6A,B and 8B). Similarly, the rostellum is absent in Epipactis muelleri Godfery (Figures 5D and 6C,D). Thus, the complete lack of the rostellum-viscidium structure(s), accompanied by the friable pollinia and hypersecreting stigma, resulted in very efficient self-pollination, consequently reducing the chances of pollen being transported by insects. As a result, the cleistogamous flowers of Epipactis bucegensis self-pollinate during the early stages or even before anthesis (in the bud stages). This was confirmed by the fact that, during the 10-12 days of field research, we did not observe any true pollinating insects visiting the flowers of Epipactis bucegensis. Nevertheless, the flowers were accidentally visited only by sporadic small forest flies of the family Drosophilidae (Figure 4B, white arrow) and red ants, Myrmica rubra (family Formicidae, Figure 8A, red arrow). These random visitors are only food foragers, searching for nectar or floral exudates during their visits. They are not true orchid pollinators, since they are too small to carry or displace the heavy pollinia from the anther. In one instance, a small female spider (Figure 8A, white arrow) was observed to reside in one of the inflorescences, using it as a hunting site for its small dipteran prey. Spiders (order Araneae) are the most common predators in orchids, found to inhabit the inflorescences of many orchid species, successfully preying on their pollinators [53].
Thus, the inconspicuously coloured, nectarless, scentless, cleistogamous flowers of Epipactis bucegensis show all the characteristic features of a typical obligate autogam capable of forming healthy, new populations, completely independent of the presence of pollinating insects. Nevertheless, autogamy is rarely absolute. There is always a chance that an insect of a suitable size, usually a food forager, either a true pollinator or a visitor, occasionally visits the nearly closed (cleistogamous) flowers of Epipactis bucegensis. Because the species does not produce a viscidium, even when the flowers are penetrated by insects, the pollinia do not attach to their bodies. Instead, due to the insects' disturbance and movements, the pollinia disintegrate even more, spreading onto the stigmatic surface, and thus self-pollinating the flowers.
The early swelling of the ovaries is also a clear indication of early autogamy [54][55][56][57]. Even before the topmost flowers reach maturity, the basal ovaries are already swollen, while still keeping the withered flowers hanging on the capsules. Within 2-5 days, almost all ovaries develop into dark-green, purple-tinted, pear-shaped, swollen fruits (Figures 2A,B and 8A,B). The fruit set is very high, up to 90-98% (in 65 counts), a characteristic of autogamous species. In a few individuals, the upper 1-2 flowers remain non-self-pollinated, being eventually aborted by the plant. Once the fruits start to swell, the initial yellowish-green colour of the ovaries and leaves gradually changes to dark green (Figure 8A,B). This indicates a significant increase in the photosynthetic activity of the plants, which start to produce higher amounts of carbohydrates to accomplish the maturation of the fruits and seeds, thus assuring their successful reproduction and proliferation. Similar quick and efficient self-pollination strategies were observed in other autogamous Epipactis species, such as Epipactis muelleri Godfery, Epipactis albensis Nováková and Rydlo, Epipactis leptochila (Godfery) Godfery and Epipactis phyllanthes G.E.Sm. [28,29,36,58].
Active Speciation within the Epipactis Genus
Epipactis is regarded as an evolutionarily young genus that has recently undergone a rapid process of diversification and speciation [23,59], with numerous new (mostly) autogamous species being described. According to Delforge (2006), during the last glaciation, these species had their distribution restricted to the south, to the Iberian, Italian and Balkan peninsulas, as well as the Caucasus. With the amelioration of the climate, which began at around 10,000 B.C.E., the beechwoods moved slowly northwest, reaching Scandinavia at around 500 C.E. This recent arrival in mid-Europe may explain why Epipactis seems to be in the process of evolutionary radiation and why the taxonomic treatment of the genus is rather challenging [5]. Based on extensive phylogenetic analyses, it was suggested that the newly emerged, near-obligate autogams had repeatedly radiated across Europe from within the more widespread, putative universal ancestral species, the predominantly allogamous Epipactis helleborine sensu stricto (s.s.). According to Sramkó et al. (2019), Epipactis helleborine (L.) Crantz is, most probably, the direct ancestor of at least ten recently derived species, the majority of them near-obligate autogams, such as Epipactis leptochila (Godfery) Godfery, Epipactis greuteri H.Baumann and Künkele, Epipactis muelleri Godfery, Epipactis albensis Nováková and Rydlo and Epipactis dunensis (T.Stephenson and T.A.Stephenson) Godfery. In evolutionary terms, these facultative/near-obligate autogams were supposed to have undergone a fairly recent, rapid separation from their ancestral genetic background [36]. Authentic speciation events can lead to the formation of autogams from allogams, although autogams are believed to constitute evolutionary dead-ends, no autogam ever being able to generate further autogamous species, as reported previously [23,29,[60][61][62][63]. Consequently, this excludes the possibility of an eventual radiation/emergence of Epipactis bucegensis from obligate autogams, such as Epipactis muelleri Godfery. Nevertheless, further detailed phylogenetic analyses are needed to elucidate the potential direct ancestral species of Epipactis bucegensis, the time of radiation and its phylogenetic relationships within the aggregate. As such, the Epipactis helleborine alliance represents an example of an active evolutionary clade, within which speciation events have occurred comparatively recently, mainly through transitions from allogamy to autogamy [24,26,64].
It is well-known that self-compatible Epipactis orchids are well adapted to switch from allogamy to autogamy, depending on the degree of adversity of the environmental factors [2,36,47]. Thus, the natural pressure imposed by external factors may accelerate this transition process, causing autogamy to occur with increasingly high frequency in successive flowering seasons, ultimately leading to genetic drift, i.e., the change in the frequency of an existing gene variant (allele) in a population due to random chance [65], also known as allelic drift or the Wright effect [66]. There are many examples of species that can act as both cross-pollinating (pollinator-dependent) and auto-pollinating, depending on various external factors of their natural habitats. Thus, even in the obligately allogamous species, autogamy was shown to incidentally take place [31,33,67]. Both autogamous and allogamous flowers within the same Epipactis helleborine (L.) Crantz plant were reported several times [3,42,50,68,69]. Additionally, it was reported that, as an adaptation to extreme conditions, obligate allogams, such as Epipactis helleborine ssp. neerlandica (Verm.) Buttler and Epipactis helleborine subsp. orbicularis (K.Richt.) E.Klein (now Epipactis distans Arv.-Touv.), can change their mode of pollination from allogamy towards autogamy [54]. In temperate regions, they are allogamous and well-visited by insects [35,70]. However, in xerophilous regions, they may become facultative autogams even before anthesis [54][55][56][57]. Therefore, the actual pollination syndrome or the reproductive strategy can be significantly influenced by floral ontogeny (age of the flowers), environment (temperature, high or low humidity, drying winds, etc.) or both [30,61,62]. Nevertheless, the evolutionary (morphological) transition from obligate allogams to obligate autogams is the result of a combination of developmental genetic, epigenetic and ecophenotypic factors, as a consequence of both prolonged natural selection pressure and genetic drift [36].
It must be mentioned that the evolutionary shift from cross-fertilization to self-fertilization is one of the most frequent evolutionary transitions in plants. It is believed that autogamy is employed by approximately 10-15% of flowering plants [71] as an adaptation to growing in harsh, unfamiliar habitats where, usually, the specific pollinating insects are lacking [31]. There have also been numerous reports of autogamy in the orchid family [72]. Among the temperate orchids, apart from the Epipactis genus, self-pollination (facultative and/or obligate) has been found in several other genera such as Ophrys L., Pseudorchis Ség., Neottia Guett., Cephalanthera Rich., Chamorchis Rich. and Corallorhiza Gagnebin [19,39,73]. The more extreme the conditions in which an orchid grows (biotope, habitat and/or climate changes, presence/absence of pollinators, etc.), the higher the chances that it will turn towards autogamy as a survival strategy. Anthropogenic factors, mainly the destruction and loss of the original habitats (agriculture, urban expansion, deforestation, etc.), leaving only small suitable patches for the orchids, probably also contributed to the switch of pollination mode and reproductive strategy [30]. Regardless of the presence or absence of pollinators, independence from insects offers orchids an opportunity to conquer new habitats, assuring unconditional, certain reproductive success [71,[74][75][76]. Shady woodlands with comparatively impoverished ground floras, where pollinator visits are likely to be less frequent, are the preferred habitats of most of the autogams. Hence, the increased ability of self-pollinating orchids to colonise new ecological niches may explain the large geographic area that the newly formed autogamous Epipactis species can occupy [36].
Inbreeding-Friend or Foe?
In nature, most plant and animal species have evolved various mechanisms to avoid inbreeding. Inbreeding increases the homozygosity of partially deleterious recessive mutations, and, by chance, in small populations, such as isolated populations of autogamous plants, these alleles can become fixed [77]. Repetitive autogamy leads to population inbreeding depression, generally expressed by an increased frequency and accumulation of recessive lethal or mildly deleterious mutations. Consequently, the individuals experience significantly reduced viability and fecundity, which ultimately leads to a sudden decline in population numbers [75]. In the 19th century, Darwin argued that outcrossed offspring of plants are usually fitter and better adapted to survive than those produced by self-fertilization [78,79]. He considered that flowering plants evolved well-adapted features to enable outcrossing, thus avoiding the inbreeding depression that selfing, as a predominant mode of reproduction, would cause [80][81][82].
Despite the commonly believed disadvantages of inbreeding, studies/observations of dominantly allogamous species vs. the dominantly autogamous species within the Epipactis section revealed that there is no noticeable deleterious effect of selfing in the recently formed autogams [39]. According to Sramkó et al. (2019), inbreeding depression in Epipactis lineages may be either counterbalanced by outbreeding or cleared out from the autogams by natural selection that acts on the unmasked deleterious recessives. At the same time, the average distributional areas or population sizes/counts proved not to be significantly different between the already established allogams and recently radiated autogams. Thus, it was suggested that the great genetic diversity of Epipactis helleborine (L.) Crantz, together with its greater phylogenetic range, enabled it to function rather successfully as a source of the future novel (autogamous, cleistogamous) species [36].
The Role of Cleistogamy in Active Speciation
In the case of geographically localized populations that suffer genetic isolation from their progenitors, active speciation may take place, generating new lineages, mostly with a tendency towards producing cleistogamous flowers, i.e., flowers that do not open and are self-fertilized in the bud [2,42], a tendency strongly expressed by Epipactis bucegensis. Cleistogamy prevents the access of insects, invariably leading to obligate autogamy. Nevertheless, some authors further suggested that this transition in the breeding system was unidirectional, the allogams never arising from autogams, which makes the autogamous Epipactis species potentially evolutionary dead-ends [29,[60][61][62]. Varying degrees of autogamy were reported in several other groups, e.g., the Spiranthes sinensis (Pers.) Ames species complex, in which autogamy has contributed to intraspecific morphological variability and, in some instances, speciation [63]. A typical feature of obligately self-pollinating taxa is that the newly emerged group(s) are highly homogenous, while there are considerable differences between different populations [27][28][29]47]. Squirrell et al. (2002) noted that: 'With each generation of complete selfing, homozygosity increases by 50%. In this fashion, a large genetic distance arises rapidly between progenitor and derivative species' [28]. This has led to an increase in speciation, mostly represented by local (micro)endemic forms, demonstrating the plasticity of the genus and the dynamics of its evolution [11].
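The figure quoted from Squirrell et al. (2002) is the textbook expectation that heterozygosity halves with each generation of complete self-fertilization, H_t = H_0 (1/2)^t. A minimal sketch of that decay (the starting value is illustrative, not data from this study):

```python
def expected_heterozygosity(h0, generations):
    """Expected fraction of heterozygous loci after t generations of complete
    selfing: H_t = H_0 * (1/2)^t."""
    return [h0 * 0.5 ** t for t in range(generations + 1)]

# Starting from H0 = 0.5, heterozygosity falls below 2% of loci within five generations:
print(expected_heterozygosity(0.5, 5))
# [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
```

This is why a completely selfing lineage becomes genetically homogeneous, and distinct from its progenitor, within a comparatively small number of generations.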
The cleistogamous, micro-endemic Epipactis bucegensis may represent an example of a recently genetically separated autogam that eventually colonized new habitats and successfully reproduced and proliferated, independent of the pollinators' presence. Discovered 14 years ago, Epipactis bucegensis proved to form stable, large, healthy populations in the south-eastern part of Bucegi Natural Park, at the same time presenting highly preserved specific characters that showed little to no variability. The essential morphological features (keys) in Epipactis species separation, such as the creamy-white, pendant, cleistogamous flowers; the unique structure of the labellum lacking the mesochile; the distinctive pyramidal/triangular purple-coloured labellar calli; and the purple-pigmented base of the pedicel and fruit represent species-specific characters, which significantly distinguish it from the related Epipactis taxa.
Therefore, our thorough approach strongly supports the recognition of Epipactis bucegensis as a morphologically, phenologically and ecologically distinct species within the Epipactis helleborine aggregate.
Sites Studied
The studies were conducted in three subalpine areas within the Bucegi Natural Park, a protected area included within Natura 2000 site ROSCI0013, IUCN category V (Protected Landscape, Law No. 5, 6.03.2000), covering Prahova, Dâmbovița and Brașov Counties, Southern Carpathians, Central Romania, with an area of ca. 32,663 ha (326.63 km²) and the highest elevation (elev.) at Omu Peak of 2505-2514 m a.s.l. (above sea level).
Populations Counts
The first population of Epipactis bucegensis, counting a total of 5-6 individuals, was discovered by NEA on 26 July 2009 in Prahova County, Bucegi Natural Park, elev. 810-960 m a.s.l. Its occurrence was subsequently monitored in 2010 and 2011, counting 3-4 and 6-7 individual plants, respectively. Several digital photographs were taken, but neither detailed measurements nor formal descriptions were performed at the time. Unfortunately, further monitoring of the first Epipactis bucegensis population was not possible, as the area was destroyed and most of the present flora was lost due to real estate development. Nevertheless, on 17 July 2022, during a botanical field study, two new populations were discovered by LB and MB in the south-eastern part of the park, in Dâmbovița County, elev. 820-980 m a.s.l. Together, the two newly discovered populations contained a total of ca. 60-75 individuals (ca. 45-55 and 10-15 individuals/population). The plants were found occurring individually or in groups of 2-6 siblings. The initial population numbers might have been higher, since the areas were used as cattle fields and part of the vegetation was already destroyed by the grazing animals.
Extent of Occurrence (EOO)
The populations were found growing nearby, at a distance of 3-5 km, with an EOO of ca. 10-15 km 2 each ( Figure 1C, red dots).
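For reference, EOO under the IUCN guidelines is conventionally measured as the area of the minimum convex polygon enclosing all known occurrences. A sketch of that computation (the coordinates below are made-up placeholders, not the actual localities):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Placeholder occurrence coordinates in a projected system (km); NOT real localities.
points = np.array([[0.0, 0.0], [3.2, 1.1], [4.8, 3.9], [1.5, 4.2], [2.4, 2.0]])

hull = ConvexHull(points)
# In 2-D, ConvexHull.volume is the polygon's area (ConvexHull.area is its perimeter).
print(f"EOO (minimum convex polygon area): {hull.volume:.1f} km^2")
```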
Morphological Comparisons
Measurements of the vegetative and floral parts were made from living plants and fresh flowers. To describe this newly found population as comprehensively as possible, a total of 117 morphological characters were compared, out of which 37 were measured directly from living plants and flowers. The morphological characters used for the study included most of the characters used previously [83]. Special attention was given to the characters that proved to be taxonomically informative and those that involve the differentiating details in the morphology of the leaves, gynostemium, labellum, pollinia, ovary and fruit. Measurements were taken from examples of the new taxon, Epipactis bucegensis, and its related species, the autogamous Epipactis muelleri Godfery and the allogamous Epipactis helleborine (L.) Crantz.
Pollination Monitoring
Monitoring was conducted for a total of 4-6 h per day, between 17 and 21 July 2022, when most of the flowers were in full anthesis. Nevertheless, the cleistogamous flowers never fully opened; hence, pollinator presence/attraction was rather scarce. The observer (NA) was initially located approximately 2-3 m from the subjects (groups or individual plants). Once various insects were observed to patrol and/or approach the flowers, they were to be recorded in digital photographs (note: no insects were collected or harmed in any way during the study).
Digital Photographic Equipment
Digital images of individual plants and floral parts were taken using Nikon D3 and Nikon D850 camera bodies equipped with Nikon Micro NIKKOR 60 mm and NIKKOR 24.0-70.0 mm lenses. Additional equipment included a Manfrotto Tripod and Litra Torches 2.0s. An adapted Helion FB tube was used for automated focus bracketing. The images were analysed using Adobe Photoshop ® CC 2023, Zerene Stacker Software, Vers.2021-11-16 [84].
Maps
The map was created using ArcGIS Pro 3.1 software; the maps and elevation services were provided by the entities mentioned in the copyright.
Conclusions
Autogamy is a common reproductive mechanism used by many species of flowering plants, including the complex orchid genus Epipactis, as an adaptation to colonise new habitats. This monophyletic clade, with numerous, mostly newly evolved autogamous species, is presently undergoing evolutionary radiation driven by a large spectrum of genotypic (genetic and/or epigenetic factors, genetic drift), phenotypic (ecophenotypic) and environmental factors (habitat changes, climate changes, presence/absence of true pollinators and specific mycorrhizae). Ancestor species, such as Epipactis helleborine s.l., have been shown to have, rather frequently and recently, generated many isolated, local autogamous (often cleistogamous) forms. These, generally viewed as examples of incipient speciation from within the parental genetic background, are often as widespread and ecologically successful as allogams, a result of a high level of initial/incipient genetic variation [29], which gives them the potential to evolve into new taxa [36].
Thus, due to the great phenotypic plasticity of the genus in response to environmental requirements, the formation of micro-endemic populations with different reproductive mechanisms led, in recent years, to noticeable, fast changes within the taxonomy of the Epipactis genus [85]. Novel morphological adaptations to new, isolated habitats are constantly described, often making the recently emerged taxa the subject of much discussion [60,86] and Epipactis one of the most taxonomically complex and dynamic orchid genera in Europe.
Cytogenetics: Chromosome numbers are very variable within the genus, with a basic chromosome number x = 10 [84]. The species might be similar in chromosome number to its relative Epipactis muelleri Godfery, 2n = 40 [5]; nevertheless, this still needs to be determined.
Flowering period: The species has been observed exclusively in its natural habitat flowering from the beginning to mid-July. The flowers' longevity is very short to absent, self-pollination/autogamy occurring before the anther dehiscence, while still in the bud stages (cleistogamy). Nevertheless, we noticed closed flowers still hanging on the developing/swollen fruit capsules for several days before showing clear signs of flower senescence (flower wilting or shedding of the floral parts).
Habitat: Epipactis bucegensis prefers a cool subalpine climate, with moderate humidity, in full sun to partial shade, on dry to moist, neutral to calcareous/alkaline substrates. It also grows in open woodland, next to forest edges, in mixed (deciduous and coniferous) forests, grasslands, shrublands and anthropogenic habitats, such as rural and urban roadsides, lawns or private estates.
Ecology: Individuals of the species have been found occurring either as isolated adult plants, separated by a distance of ca. 10-30 m, or aggregated, forming small- to medium-sized groups (usually n < 10) composed of several siblings and one to three adult plants. Our field observations suggest that plants usually synchronize their blooming, most of them flowering during the hottest summer season, around mid-July. Further exploration of the subalpine areas of the park may lead to discovering new populations of Epipactis bucegensis. In recent years, Bucegi Natural Park proved to harbour undiscovered taxa, such as the newly discovered Nigritella nigra subsp. bucegiana Hedrén, Anghel. and R.Lorenz, subsp. nov. [87]. At the same time, our future research includes several similar habitats outside the Bucegi Mountains Natural Park protected area that may be suitable for Epipactis bucegensis occurrence, since they are important biological reserves for threatened species [88]. It must, however, be emphasized that this micro-endemism is restricted to an area subject to rapid deforestation due to abrupt urban expansion and increased anthropogenic activities, such as cattle farming, agriculture, tourism and real estate development. According to the EU Biodiversity Strategy (2020-2050), which works towards restoring natural environments by stopping the destruction of ecosystems and loss of biodiversity [89], effective measures should be implemented in order to protect and preserve these fragile habitats that harbour rare endemic species. Consequently, we propose that this taxon, which is restricted exclusively to one mountain range, be treated as 'Endangered' (EN) following the Red List criteria of the IUCN Standards and Petitions Committee [90]. | 2023-04-28T15:11:40.354Z | 2023-04-25T00:00:00.000 | {
"year": 2023,
"sha1": "0f4a42526392c59f5e60e2e33d49d183ee415dfd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/12/9/1761/pdf?version=1682421829",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df7cf9761fd9f955cf53a30ab4688beab256bccf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73420894 | pes2o/s2orc | v3-fos-license | Emphysema quantification using hybrid versus model-based generations of iterative reconstruction
Abstract To compare 2 incompatible generations of iterative reconstruction from the same raw dataset in terms of automatic emphysema quantification and noise reduction: a hybrid algorithm called sinogram affirmed iterative reconstruction (SAFIRE) versus a model-based algorithm called advanced modeled iterative reconstruction (ADMIRE). Raw datasets of 40 non-contrast thoracic computed tomography scans, each obtained from a single acquisition on a SOMATOM Definition Flash unit (Siemens Healthcare, Forchheim), were reconstructed with 3 levels of the SAFIRE and ADMIRE algorithms, resulting in a total of 240 datasets. Emphysema index (EI) and image noise were compared using repeated-measures analysis of variance (ANOVA), with a P value <.05 considered statistically significant. EI and image noise were stable between both generations of IR when reconstructed with the same level (P ≥.31 and P ≥.06, respectively). SAFIRE and ADMIRE perform equally in terms of emphysema quantification and noise reduction.
Introduction
The widespread use of computed tomography (CT) has been contributing to the increase in radiation dose to the population since its inception in the 1970s. The number of CT scans increased from 3 to 32 million between 1980 and 2007. [1] Lowering the tube current-time product [2,3] or tube potential [4,5] were some of the strategies introduced to reduce the radiation dose delivered by CT scans. Technological advances and the increase in computational power allowed a renaissance of iterative reconstruction (IR), which was the initially proposed method for data reconstruction. [6] IR has been shown to be a promising tool to lower radiation dose while maintaining diagnostic accuracy and image quality. [7] Sinogram affirmed iterative reconstruction (SAFIRE) and advanced modeled iterative reconstruction (ADMIRE) are the 2 latest IR algorithms released by Siemens Healthcare (Forchheim, Germany), in 2010 and 2015, respectively. Iterative reconstruction in image space (IRIS) is their first-generation IR algorithm and will not be discussed in this paper. [8] Studies on the impact of IR showed differing results in the field of quantitative imaging, such as emphysema assessment. [9,10] IR has been proven to influence emphysema quantification. Some studies evaluated the added value of the IR technique in association with a dose-reduced protocol. [11,12] Model-based reconstructions are offered as an alternative to, or a replacement of, earlier generations based on a hybrid reconstruction technique. [8] Studies comparing hybrid and model-based techniques have already demonstrated that emphysema quantification is altered even more by the latter, at least for ASiR and MBIR (GE Healthcare, Waukesha, Wis.), respectively. [13] As far as we know, no studies have been conducted to compare the 2 latest iterative algorithms from Siemens, namely SAFIRE and ADMIRE. This is probably because it is an almost impossible comparison: once a system has been updated with ADMIRE, SAFIRE is no longer accessible. The comparison is only feasible on a prototype allowing raw data to be reconstructed with both SAFIRE and ADMIRE. Hence, the primary goal of this study was to compare emphysema quantification using the 2 IR algorithms. The secondary goals were to study image noise and segmentation on both IRs.
Materials and methods
The local Ethics Committee on research involving humans approved this prospective study (CCER 15-048). Oral and written information was given and signed declarations of consent were obtained from all patients before examination.
Patients
Enrolment started on June 9 and finished on August 12, 2015. All consecutive patients undergoing a clinically required non-contrast thoracic CT scan on the Somatom Definition Flash unit (Siemens Healthcare, Forchheim, Germany) of our department were included. Patients under 18 years of age and those who required intravenous contrast injection were excluded. The total study sample consisted of 58 patients. Four patients refused to participate. Fourteen CT examinations were excluded from the 3D quantitative analysis database due to image quality limiting automatic segmentation of the lungs. The final study sample consisted of 40 patients (M:F ratio 13:7, mean age 60 years [range 18-89]).
Technical acquisition and reconstruction parameters and radiation dose
A single acquisition was performed craniocaudally during full inspiration from the apices to the bases of the lungs with the following parameters: collimation 64 × 2 × 0.6 mm, pitch 0.6, gantry rotation period 0.28 second, tube voltage 100 kV (CARE kV), tube current 120 mAs ref. (CARE Dose4D), slice thickness/interval 1/0.7 mm.
The raw data acquired on the VA44 system were reconstructed with 3 levels of IR of the 2 latest generations of algorithms, that is, SAFIRE 1, 3, 5 and ADMIRE 1, 3, 5.
Dose-length product (DLP) and CT dose index volume (CTDIvol) were obtained on the basis of a well-calibrated CT with a 32 cm phantom. Size-specific dose estimates (SSDE) were obtained via Bayer's Radimetrics™ Enterprise Platform.
Image analysis
Automatic segmentation of the lungs and emphysema quantification were performed using Pulmo3D (syngo.via VA30, Siemens), the reading and visualization software provided by the vendor. The volume of each lung was automatically calculated after lung segmentation. A threshold of −950 Hounsfield units (HU) was applied to this volume to calculate the emphysema index (EI).
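As a minimal sketch of this quantification step (the actual analysis was performed by the vendor's Pulmo3D software; the function and variable names below are ours, for illustration only), the EI is simply the percentage of segmented lung voxels at or below the threshold:

```python
# Illustrative emphysema index computation from an HU volume and a lung mask.
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950.0):
    """Percentage of lung voxels with attenuation at or below `threshold` HU."""
    lung_voxels = hu_volume[lung_mask.astype(bool)]
    return 100.0 * np.mean(lung_voxels <= threshold)

# Synthetic example: a 50x50x50 volume with plausible lung attenuation values.
rng = np.random.default_rng(0)
volume = rng.normal(-850.0, 80.0, size=(50, 50, 50))
mask = np.ones(volume.shape, dtype=bool)
print(f"EI = {emphysema_index(volume, mask):.1f}%")
```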
Electronic noise was assessed by collecting standard deviation values in HU within 3 standardized regions of interest (ROIs) of ∼1 cm² each: 1 inside the trachea, 1 in the anterior extracorporeal air and 1 in the pectoral muscles. The mean value of the 3 measures was then considered as the image noise. ROIs were carefully placed to avoid artefacts and clothing around the patients. Automatic propagation of the ROIs was performed in a copy-paste mode to ensure reproducibility of their locations between the different reconstruction techniques.
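Under the same caveat (an illustrative sketch, not the software actually used), the noise measure reduces to the mean of the per-ROI standard deviations:

```python
import numpy as np

def image_noise(roi_trachea, roi_air, roi_muscle):
    """Mean of the per-ROI standard deviations; each argument is an array of HU values."""
    return float(np.mean([np.std(roi) for roi in (roi_trachea, roi_air, roi_muscle)]))
```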
Statistical analysis
The Gaussian distribution of the continuous variables (lung volume in liters, EI in percentage and image noise in HU) was evaluated by the D'Agostino-Pearson omnibus normality test. When normality was confirmed, statistical differences were analyzed using a pairwise repeated-measures (RM) 1-way analysis of variance (ANOVA) with the Greenhouse-Geisser correction and Tukey's multiple comparisons test. When normality was not confirmed, variables were analyzed using a pairwise Friedman test with Dunn's multiple comparisons test. A P value less than .05 was considered statistically significant. All variables are reported as means and standard errors of the mean.
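A minimal sketch of this pipeline on toy data (scipy.stats.normaltest implements the D'Agostino-Pearson omnibus test; the Greenhouse-Geisser-corrected RM-ANOVA and the post hoc tests would require additional packages such as statsmodels or pingouin, and the data below are simulated placeholders):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Toy EI values (%): 40 patients x 6 reconstructions (SAFIRE 1/3/5, ADMIRE 1/3/5).
ei = rng.normal(10.0, 3.0, size=(40, 6))

columns = [ei[:, j] for j in range(ei.shape[1])]
gaussian = all(stats.normaltest(col).pvalue > 0.05 for col in columns)
if gaussian:
    print("normality confirmed: use RM-ANOVA with Greenhouse-Geisser correction")
else:
    stat, p = stats.friedmanchisquare(*columns)
    print(f"Friedman test: chi2 = {stat:.2f}, P = {p:.3f}")
```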
Quantitative analysis
Lung volume, EI and image noise are summarized in Table 1 and illustrated in Figures 1 to 3.
Lung volume comparison between the 3 levels of the same IR technique demonstrated no significant difference for ADMIRE (P ≥ .10). In contrast, among the 3 levels of SAFIRE, only the comparison between levels 1 and 3 showed no significant difference (P = .93).
The EI was not statistically different between SAFIRE and ADMIRE when reconstructed with the same level of IR (P >.99).
There was no significant difference in image noise when comparing the same levels of IR of SAFIRE and ADMIRE (P ≥ .06).
Discussion
Direct comparison of SAFIRE and ADMIRE is impossible on a clinical unit. Each IR algorithm was developed for its own version of the system (VA44 for SAFIRE, VA48 for ADMIRE), and these are incompatible. Once the CT unit has been updated to VA48, ADMIRE is accessible, but SAFIRE no longer runs on that system. Raw data produced on a VA48 system cannot be loaded on a VA44 system and vice versa. It is thereby impossible to compare SAFIRE and ADMIRE reconstructions without introducing a bias in acquisition or reconstruction. For this study, we used a VA44-compatible ADMIRE version.
The purpose of the IR technique is to reduce image noise in order to allow a reduction in radiation dose while maintaining image quality. The dose reduction then needs to be matched to a certain level of IR to obtain an image quality similar to the gold standard obtained from a standard radiation dose and classical reconstruction technique. [7,[10][11][12] The objective of our study was to independently evaluate the impact of the new model-based technique compared to the previous hybrid technique. Therefore, the design of our study did not require multiple acquisitions at different radiation doses.
Quantitative CT parameters, including lung volume and EI, play a relevant clinical role as predictors of mortality and morbidity. A study using mortality data collected over 8 years from the Norwegian Cause of Death Registry demonstrated that EI is a strong independent predictor of mortality, with survival shortened by 19 months in patients with an EI ≥3%. [14] CT phenotypes can also help to classify patients with chronic obstructive pulmonary disease (COPD) at higher risk for exacerbations. [15] Our study demonstrated that SAFIRE, the hybrid technique, and ADMIRE, the model-based technique, quantify emphysema equally when compared at the same strength of IR (Fig. 3). Keeping in mind the need for standardized CT protocols in the follow-up of patients with pulmonary emphysema and in longitudinal studies, [16,17] it is relevant to point out that it is more important to be consistent with the level of IR than with the type or generation of algorithm when using either of the 2 latest versions from Siemens.
ADMIRE had a denoising effect as effective as SAFIRE when the same level of IR was compared.
According to our study, ADMIRE had a lesser impact than SAFIRE on lung segmentation, with lung volumes showing no statistical differences between its levels. The major limitation of our study, already identified above, is the use of a VA44-compatible ADMIRE version, counterbalanced by the absence of acquisition bias.
Conclusion
No statistically significant differences in emphysema quantification or image noise were found between the 2 latest generations of IR algorithms. The added value of ADMIRE compared to SAFIRE lay in a statistically more robust segmentation of the lung. In other words, acquisition and reconstruction parameters do not need to be modified when upgrading from SAFIRE to ADMIRE in terms of emphysema quantification. | 2019-03-08T14:17:19.588Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "ef9524a08c98f334fe1602298cc2159f82eb10b6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000014450",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef9524a08c98f334fe1602298cc2159f82eb10b6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
36232417 | pes2o/s2orc | v3-fos-license | Unobtrusive electromyography-based eating detection in daily life: A new tool to address underreporting?
Research on eating behavior is limited by an overreliance on self-report. It is well known that actual food intake is frequently underreported, and it is likely that this problem is overrepresented in vulnerable populations. The present research tested a chewing detection method that could assist self-report methods. A trained sample of 15 participants (usable data from 14 participants) kept detailed eating records during one day and one night while carrying a recording device. Signals recorded from electromyography sensors unobtrusively placed behind the right ear were used to develop a chewing detection algorithm. Results showed that eating could be detected with high accuracy (sensitivity, specificity > 90%) compared to trained self-report. Thus, electromyography-based eating detection might usefully complement future food intake studies in healthy and vulnerable populations.
Introduction
Eating behavior research has mainly relied on dietary self-report, including food records, 24-h recall, food frequency questionnaires and diet history. Although frequently utilized, these methods come with several disadvantages in that they require high compliance and motivation and are subject to self-presentation and memory biases. Thus, unsurprisingly, when comparing subjective measures with more objective measures of energy intake (e.g., intake in controlled, residential programs, or energy expenditure measures such as the Goldberg cut-off (Goldberg et al., 1991) or doubly labeled water methods), reported calories are frequently underestimated in a range from 4% to 37% (Livingstone & Black, 2003; Stice, Palmrose, & Burger, 2015; Thompson & Subar, 2008). A recent review even classified self-report-based energy intake as 'wholly unacceptable for scientific research' (Dhurandhar et al., 2015). These limitations and the advent of mobile measurement technology have sparked the use of smartphone devices and ambulatory psychophysiological measurements for assessing food intake. Many apps equip the user with databases for selecting food and portion size, possibilities to take photographs of their foods (Lieffers & Hanning, 2012), audio-recording, barcode scanning (Illner et al., 2012) or even automated food identification and portion size estimation (Boushey et al., 2017). While these approaches result in better self-monitoring adherence (Lieffers & Hanning, 2012) and control over temporal compliance (Shiffman, Stone, & Hufford, 2008), thereby outperforming paper-based methods, they still rely on user activity: one needs to be aware of an eating episode and record it precisely (its start and end, and any leftovers in the case of photos).
Another group of methods therefore tries to bypass such user compliance. Laboratory measures include video-based (Cunha, Pádua, Costa, & Trigueiros, 2014) or scale-based approaches (Manton, Magerowski, Patriarca, & Alonso-Alonso, 2016; Zhou et al., 2015) and have reported good precision, but they are not (entirely) mobile and can thus not be used in free-roaming individuals. Other measures can be recorded in a natural environment and focus on eating episodes instead of calorie intake. For example, 'bite counters' are based on the assumption that eating always involves characteristic dominant-hand movements (to the mouth); hence an accelerometer-based wrist band might be able to capture bites taken (Dong, Hoover, Scisco, & Muth, 2012; Salley, Hoover, Wilson, & Muth, 2016; Scisco, Muth, & Hoover, 2014; Thomaz, Essa, & Abowd, 2015; Ye, Chen, Gao, Wang, & Cao, 2016). Apart from the limitation that eating with the non-dominant hand will be missed, most bite counters still rely on the user to press a start button before eating episodes in naturalistic environments. Other approaches aim at detecting eating episodes based on continuous measurements of swallowing and/or chewing activities: for example, audio recording at the inner ear has been used (Amft, Kusserow, & Tröster, 2009; Bedri, Verlekar, Thomaz, Avva, & Starner, 2015; Nishimura & Kuroda, 2008; Papapanagiotou, Diou, Zhou, van den Boer et al., 2016; Päßler & Fischer, 2014). Because of the specialized algorithms needed to process the acoustic signals, most devices achieve acceptable results in laboratory settings with restricted food types and eating episodes; however, their accuracy in unrestricted, more challenging environments needs to be established. Privacy protection implications arise because voices in the vicinity are recorded as well. In this respect, non-audio-based physiological measures can be useful alternatives. While photoplethysmography (PPG) detects muscle-related blood flow in the ear concha during chewing (Papapanagiotou, Diou, Zhou, Boer, et al., 2016), electroglottography (EGG) is used to measure impedance changes at the neck when a bolus of food passes through the larynx to detect swallowing (Farooq, Fontana, & Sazonov, 2014). However, the most common physiological measures used at present utilize electromyography (EMG) to detect swallowing. Despite elaborate approaches to discriminate ingestive behavior from various interferences and confounds (environmental sounds, speaking, laughing, coughing, sneezing, yawning, head movements, whistling, smoking), only a few have been examined under real-life conditions. Such real-life proofs of concept are crucial because the long recordings in varied environments increase the potential sources of false positives due to artefactual EMG measurements, which the detection algorithm needs to reject. Night recordings seem important, as jaw movements are likely to occur during sleep (Po, Gallo, Michelotti, & Farella, 2013), particularly, but not only, in individuals with bruxism. Long-term recordings also require high individual and social acceptability of the devices (e.g., through low obtrusiveness and visibility of sensors), which is crucial for any practical application in larger populations. Furthermore, high accuracy might be achieved in the laboratory but not generalize to the natural environment: accuracy decreased from 81% to 62% when applying laboratory-based models of chewing behavior to free-roaming data (Fontana, Farooq, & Sazonov, 2014).
The present research focused on indirect, continuous recordings of chewing episodes based on mobile EMG in free-roaming individuals. Instead of targeting precise calorie intake or macronutrient composition (what and how much is eaten), our approach focused on the occurrence of eating episodes (when and how long, episode frequency) indicated by chewing activity. This choice is based on the reasoning that any fully automatic classification of food content and amount will always be imprecise and that omission of eating episodes is a key contributor. Underreporting can, for example, be due to unconscious omission of eating occasions, recording fatigue or conscious misreporting (e.g., denial of consumption) (Maurer et al., 2006). Further suggesting that especially missed eating episodes contribute to underreporting, Poppitt and Prentice (1996) and Poppitt, Swann, Black, and Prentice (1998) found that although main meals were well reported, between-meal snacks were omitted from participants' 24-h reports, with more than one third of snack consumption being absent. Similarly, Johansson, Wikman, Åhrén, Hallmans, and Johansson (2001) found that underreporters (relative to their food intake level) seem to selectively underreport unhealthy snacks (less so healthy foods). In sum, although our EMG-based chewing detection approach misses food content and amount, it captures important eating episode characteristics: time, duration and frequency throughout the day.
We took advantage of EMG recordings from miniature, non-invasive electrodes behind the ear, which are dominated by activity of the lateral pterygoid muscle (the only muscle of mastication involved in opening the jaw). This simple measurement, along with mobile lightweight amplifiers, allows for long recording periods (including during the night) and a low risk of sensor detachment, and is relatively unobtrusive for most users. However, the detected eating episodes have to be compared to a 'gold standard' of food intake.
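To make the signal chain concrete, the following is a generic envelope-threshold sketch of chewing-bout detection. It illustrates the general class of approach only, not the authors' classification algorithm, and all parameter values are placeholders:

```python
import numpy as np

def detect_chewing(emg, fs, win_s=0.25, k=3.0, min_dur_s=2.0):
    """Return (start, end) sample indices of candidate chewing episodes."""
    win = max(1, int(win_s * fs))
    # Rectify and smooth the raw EMG into an amplitude envelope.
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    above = envelope > k * np.median(envelope)      # robust adaptive threshold
    d = np.diff(np.r_[0, above.astype(int), 0])     # pad so border runs are closed
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    min_len = int(min_dur_s * fs)
    # Keep only supra-threshold runs long enough to be plausible chewing bouts.
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]
```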
Although the most precise method might be doubly labeled water, it seems inappropriate here since individual eating episodes cannot be identified. Thus, we test this method against (app- and device-assisted) self-report in a sample that was specifically trained to report every single eating episode. We expect that this EMG-based method, alongside sophisticated data analysis, will be able to capture eating episodes with high sensitivity. However, specificity is also of key importance: confusion of speaking, drinking, laughing, yawning, head movements, smoking or bruxism with eating episodes could lead to an overestimation of eating. Previous jaw-motion sensor/EMG research reviewed above has demonstrated excellent sensitivities but did not record continuously over the day and night in natural environments and can thus not speak to specificity. Hence, in our proof-of-principle research, 24-h recordings were obtained from 15 well-trained 'calibration participants' in their daily life to obtain valid measures of sensitivity and specificity of EMG-based meal detection relative to self-report.
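For concreteness, sensitivity and specificity can be computed by discretizing the recording into fixed windows and comparing detector output against the self-report reference. The 30-s window length below is our assumption for illustration, not the study's analysis parameter:

```python
import numpy as np

def windowed_sens_spec(ref_intervals, det_intervals, total_s, win_s=30):
    """Window-wise sensitivity/specificity; intervals are (start_s, end_s) pairs."""
    n = int(np.ceil(total_s / win_s))
    ref, det = np.zeros(n, dtype=bool), np.zeros(n, dtype=bool)
    for flags, intervals in ((ref, ref_intervals), (det, det_intervals)):
        for a, b in intervals:
            flags[int(a // win_s):int(np.ceil(b / win_s))] = True
    tp, fn = np.sum(ref & det), np.sum(ref & ~det)
    tn, fp = np.sum(~ref & ~det), np.sum(~ref & det)
    return tp / (tp + fn), tn / (tn + fp)

# One reported meal vs. one detected episode over a 24-h recording.
sens, spec = windowed_sens_spec([(3600, 4200)], [(3650, 4300)], total_s=86400)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.95, 1.00
```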
Participants
Participants were recruited from the master's students in clinical and health psychology at the University of Salzburg because these individuals could be expected to demonstrate the level of background knowledge and high motivation needed to comply with the self-recording instructions (described below). Participants had a mean age of 21.7 years (SD = 2.13, range = 18-25), a healthy BMI (M = 22.0 kg/m², SD = 2.9, range = 17.5-26.7) and normal-range scores on the Eating Behavior and Weight Problems Inventory, EWI (Diehl, 1999). A brief interview enquired about the presence of nail biting or bruxism. 1 Participation in the 24-h protocol was remunerated with €12. One participant was excluded due to technical problems during the ambulatory recording, leaving 14 participants (six women). Ethical approval for the measurement protocol was granted by the local ethics committee. | 2018-04-03T03:39:50.514Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "256e9af70eb5cc211675f766982e29dd0f8de3e2",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/846235/files/chewing_emg_accepted_manuscript.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5c918af0fb9919b39501c092514fa60611377e7f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231882715 | pes2o/s2orc | v3-fos-license | Drawing lessons from the standard treatment guidelines and essential medicines list concept in South Africa as the country moves towards national health insurance
The essential medicines concept is recognised as an instrument to improve medicines access and to promote cost-effective use of health resources. South Africa adopted the concept and implemented the Standard Treatment Guidelines and Essential Medicines List (STGs/EML) in 1996 when the National Drug Policy for South Africa was launched. The STGs/EML was meant to address the inequities in medicines access and use and to ensure a standard of care to all citizens, yet these inequities still exist. The implementation of the new National Health Insurance (NHI) scheme is envisaged to relieve this healthcare inequity. The STGs/EML still forms the basis of care in the public sector, but a critique of implementing this tool and lessons that can be applied from this implementation for NHI are lacking. This piece addresses these shortfalls and highlights questions surrounding the implementation of the STGs/EML.
Introduction
The essential medicines concept is internationally recognised as an instrument to improve medicines access and use and to promote the cost-effective use of health resources. It guides countries with limited resources to better manage healthcare services and provides a means to maximise the limited available resources. 1 The careful selection of a limited number of essential medicines leads to better medicine management and improved quality of care. The essential medicines list is a fundamental tool that guides countries in the procurement and distribution processes, which ultimately reduces costs to both the healthcare system and the patient. Representative essential medicines policies are key to supporting health and attaining sustainable development. 1,2

The essential medicines concept and South Africa

South Africa adopted the essential medicines concept in 1996, when the country's new democratic government launched the National Drug Policy for South Africa. One of the objectives of this policy was for the ministerially appointed Drug Policy Committee to create an essential medicines list for use in the public sector and to prepare treatment guidelines for healthcare personnel. By 1994, a total of about 2600 medicines were being purchased in the public sector. 3 The list contained a large number of examples from the same pharmacological classes, which reflected the personal preferences of those responsible for selection rather than a deliberate choice between interchangeable options. The creation of the first National Essential Drugs List Committee (later renamed the 'National Essential Medicines List Committee', or NEMLC) actually preceded the publication of the national medicines policy. 4 One of the first tasks of the NEMLC was to rationalise the list of medicines in the public sector and to guide procurement according to evidence-based treatment guidelines.
The resultant Standard Treatment Guidelines and Essential Medicines List (STGs/EML) was first published in 1996 at the primary healthcare level, then at secondary and tertiary hospital levels in subsequent years, with separate editions for adults and paediatrics. The STGs/EML was meant to address the health objectives of the National Drug Policy, which essentially were intended to ensure the availability and accessibility of safe, efficacious and quality essential medicines to all citizens. By 1998, a far more restricted list of 337 medicines (in 422 dosage forms) had been
compiled, which compared well with the WHO Model List of Essential Medicines. 4 It was envisaged that the implementation of the STGs/EML would address the inequities in access to healthcare and medicines inherited from the apartheid government by ensuring a standard level of care to all citizens seeking care in a public sector healthcare facility, 5 but this has not happened and healthcare is still inequitable. However, the concept of the EML is still relevant and has been incorporated into universal health coverage (UHC) principles as well.
Access to medicines and universal health coverage
Equitable access to medicines and healthcare is enshrined in the World Health Organization's UHC concept, which is defined as follows 6 : Universal health coverage means that all people have access to the health services they need, when and where they need them, without financial hardship. It includes the full range of essential health services, from health promotion to prevention, treatment, rehabilitation, and palliative care. (n.p.) The concept of health systems strengthening comprises the policy tools required to attain this goal of UHC. 7 Currently, South Africa is an upper-middle-income country 8 moving towards UHC through the National Health Insurance (NHI) policy funding initiative. The NHI is meant to bridge the healthcare inequity gap still experienced in South Africa because of the existence of a two-tiered public and private healthcare system. The NHI bill 9 was passed on 08 August 2019. For South Africa, the policy dialogue now needs to shift from critiquing the contents of the NHI policy to a conversation around the reasons why there is still inequity in medicines access in South Africa. The shortfalls of the existing healthcare and essential medicines policies need to be investigated in order to make recommendations towards achieving access to medicines under UHC. One of the National Drug Policy tools meant to address inequity in access to medicines is the STGs/EML, which still features prominently in the new NHI package of benefits. 9 However, since its implementation in 1996, an evaluation of the impact of the STGs/EML on the availability, affordability and pricing of medicines in South Africa has not been done. Studies conducted thus far that quantitatively analysed changes in the South African STGs/EML (2016) and interviewed members of the NEMLC on the selection of medicines (2017) found that the monitoring and evaluation of the South African STGs/EML policies and processes have been wanting over the years. 10,11 This means that the successes and/or failures of the essential medicines policies and STGs/EML, and their impacts on end users or patients in the public sector, have possibly not been highlighted. Strengthening health systems comprises 'a significant, purposeful effort to improve performance' 12 and goes further than investing inputs, as it also involves reforming how the healthcare system currently operates. 13 Despite the efforts of the new democratic government to ensure equitable access to medicines for all by introducing the STGs/EML into the public healthcare system, many questions surrounding the South African STGs/EML processes and their impact remain unanswered and warrant investigation. This is especially important in light of the fact that the STGs/EML is to be effectively utilised in the NHI financing system.
Lessons for national health insurance in South Africa
The following knowledge gaps exist and must be filled to strengthen South Africa's healthcare system. These suggestions are designed to improve transparency in the policies and processes currently in operation and to further provide recommendations for possible improvement where required.
An investigation into how the healthcare budget is calculated for the burden of disease, and how this is translated into pharmaceutical expenditure for STGs/EML as well as for non-STGs/EML items, must be done. Currently, national and provincial departments of health have autonomy in terms of their budget allocation and expenditure for medicines, but this is neither explicit nor public knowledge. Understanding the allocation process for medicines expenditure will assist the government and other stakeholders in developing cost-benefit packages for NHI. Another model would be to ring-fence the medicines budget from benefit packages that consider all other interventions and care pathways.
The essential drugs programme is a branch of the affordable medicines directorate responsible for the selection and review of essential medicines (done by the NEMLC) and the rational use of medicines. 14 The financial implications of having STGs/EMLs in South Africa must also be investigated to make transparent the actual costs of reviewing and updating STGs/EMLs and how this process is funded. This needs to be researched, as it is not known if the processes and policies used in the daily functioning of the essential drugs programme are cost-efficient and/or sustainable. What are the capacity-building and succession-planning initiatives and policies of the NEMLC surrounding the running of the essential drugs programme? Should training be instituted across health sciences programmes in academic institutions at undergraduate or postgraduate level in order to build a community of practitioners around evidence-based guideline development? Moreover, are the desired effects of the essential drugs programme policies being achieved? Is the NEMLC monitoring and evaluating its functioning against their operational plans and policies and timeously implementing effective corrective measures? Widespread media reports of medicine shortages at public health facilities and the rising costs of medicines make monitoring of the essential drugs programme critical. In an NHI environment, this would need to be considered in terms of whether a national committee structure would be more efficient and cost-effective in terms of reviewing guidelines for treatment under benefit packages. Lessons from the current NEMLC approach can be used to guide NHI structures and their development. With an increasing emphasis on health technology assessment (HTA) use in UHC roll-out, the application of HTA principles by the NEMLC and its related task teams will need to be identified. The practical application of HTA in a South African environment, with information and data constraints, will be useful in NHI. Plans will need to be developed, more data gathered, registries constructed and transparency extended about assumptions made in modelling under HTA. Principles used to balance the outcomes of HTA with budget impact analyses can also be formulated based on the NEMLC experiences.
The gaps that have been identified therefore require further investigation, and recommendations should be made to improve the efficiency of the country's essential drugs programme. An updated trend analysis of changes to the South African STGs/EML and an evaluation of their impacts may aid this process. Furthermore, to provide a global picture, it would be interesting to investigate how South Africa's EML compares with those of other countries (in Africa and around the world). Comparisons could also be made regarding progress with UHC in terms of policies and processes for medicine listings, review and updating, and monitoring and evaluation of EMLs. These country case studies could be possible exemplars for South Africa or, to the contrary, the South African situation could provide lessons for other countries attempting to use the essential medicines concept and expand it under UHC policies.
Conclusion
With the NHI bill having just been passed, much research is required to fill these gaps, provide new knowledge in the South African context and offer recommendations to inform not only pharmaceutical and essential medicines policy development but also NHI policy implementation. Currently, this type of policy evaluation is scarce in South Africa, and without greater investment in strengthening the health system from a policy perspective, it may be difficult to achieve the goals of the NHI and ultimately those of UHC. | 2021-02-12T06:16:22.894Z | 2020-12-17T00:00:00.000 | {
"year": 2021,
"sha1": "e55d02a6c3baca69b0b86818c69197eec9aed3d2",
"oa_license": "CCBY",
"oa_url": "https://safpj.co.za/index.php/safpj/article/download/5145/6564",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3abaed33014f9eda401d4fbe824c79ce746d292",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207061432 | pes2o/s2orc | v3-fos-license | Face Numbers of Certain Cohen-Macaulay Flag Complexes
We show that if a $d$-dimensional Cohen-Macaulay complex is, in a certain sense, sufficiently"close"to being balanced, then there is a $d$-dimensional balanced Cohen-Macaulay complex having the same $f$-vector. This in turn provides some partial evidence for a conjecture of Kalai on the $f$-vectors of Cohen-Macaulay flag complexes.
Introduction
1.1. Background. One of the fundamental invariants of a simplicial complex ∆ is its f-vector, f(∆) = (f_{-1}, f_0, . . . , f_{dim(∆)}), which lists the number of faces ∆ has in each dimension (i.e., f_i is the number of i-dimensional faces of ∆). Characterizing the possible f-vectors of various classes of simplicial complexes is one of the central problems of geometric combinatorics. Of particular recent interest are flag complexes and balanced complexes; it was conjectured by Kalai and proven by Frohmader [8] that the f-vector of an arbitrary flag complex is also the f-vector of some balanced complex (though the reverse does not hold). Kalai further made the following conjecture, which remains open:

Conjecture 1.1. If ∆ is a Cohen-Macaulay flag complex, then f(∆) is the f-vector of some balanced Cohen-Macaulay complex of the same dimension.

Our main theorem provides some partial evidence for this conjecture. Theorem 1.2. Let Γ_1, Γ_2, . . . , Γ_k be 0- or 1-dimensional flag complexes (i.e., triangle-free graphs) such that for each i, either Γ_i is bipartite or Γ_i − e is bipartite for some edge e of Γ_i. Let Γ = Γ_1 * Γ_2 * · · · * Γ_k (where * denotes the simplicial join). Then for ∆ any full-dimensional Cohen-Macaulay subcomplex of Γ, f(∆) is the f-vector of some balanced complex of the same dimension.
Notice that the complexes described in Theorem 1.2 are in some sense "close" to balanced; they can be made balanced by deleting an appropriate edge from each of the terms in the join which are not bipartite. In Section 3 we will see that the theorem applies to a large class of examples of flag complexes arising as independence complexes of graphs with certain properties. Note, however, that the theorem applies to many complexes which are not flag; while the complex Γ is flag, the subcomplexes described need not be.
Preliminaries.
We begin by reviewing some basic concepts and notation from the study of simplicial complexes. For further details, [14] is a good reference.
Simplicial Complexes and Multicomplexes A simplicial complex ∆ on a finite vertex set V is a set of subsets of V which is closed under inclusion. An element of ∆ is called a face; the faces which are maximal with respect to inclusion are called facets. The dimension of a face γ is dim(γ) := |γ| − 1, and the dimension of the complex is dim(∆) := max{dim(τ) : τ ∈ ∆}. Faces of dimension 0 and 1 are called vertices and edges, respectively. The complex is pure if all of its facets have the same dimension. For i ≤ dim(∆), the i-skeleton of ∆ is the subcomplex of ∆ consisting of all the faces of ∆ with dimension no greater than i. In particular, the 1-skeleton of ∆ may be thought of as a graph. For τ ∈ ∆, the link of τ in ∆ is lk_∆(τ) := {σ ∈ ∆ : σ ∩ τ = ∅ and σ ∪ τ ∈ ∆}. The f-vector of a simplicial complex ∆ is defined to be f(∆) = (f_{-1}, f_0, . . . , f_{d−1}), where d − 1 is the dimension of ∆ and f_i is the number of i-dimensional faces of ∆ (these f_i are known as the face numbers of ∆). Notice that f_{-1} = 1 for any non-empty ∆, as the empty set is the unique (−1)-dimensional face.
In practice it is often more convenient to study the face numbers of the complex in terms of the h-vector of the complex, h(∆) := (h_0, h_1, . . . , h_d), where h_j = Σ_{i=0}^{j} (−1)^{j−i} C(d−i, j−i) f_{i−1}. It is clear that the f-vector of ∆ completely determines its h-vector and vice versa.
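For concreteness, the conversion in this direction can be computed as follows (a small illustration of the binomial formula stated above):

```python
from math import comb

def h_vector(f):
    """f = [f_{-1}, f_0, ..., f_{d-1}]; returns [h_0, ..., h_d]."""
    d = len(f) - 1
    return [sum((-1) ** (j - i) * comb(d - i, j - i) * f[i] for i in range(j + 1))
            for j in range(d + 1)]

# The boundary of a triangle has f = (1, 3, 3) and h = (1, 1, 1).
print(h_vector([1, 3, 3]))
```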
Similarly, let X be a finite set of variables, and define a multicomplex on X to be a collection of monomials in X which is closed under divisibility (we include 1 as the unique degree-0 element of any non-empty multicomplex). Notice that if M is a multicomplex on X such that every element of M is squarefree, then M corresponds to a simplicial complex in the obvious way. The F-vector of a multicomplex M is F(M) := (F_0, F_1, . . .), where F_i is the number of elements in M of degree i (if M is also a simplicial complex, then the F-vector is just the f-vector up to a shift in index).
For S ⊆ X and m a monomial on X, let m_S denote the part of m supported in S (i.e., the unique monomial such that m = m_S m_{X−S}, where m_{X−S} is divisible by no element of S). The Stanley-Reisner ring of ∆ over a field k is k[∆] := k[X]/I_∆, where I_∆ is the ideal in k[X] generated by the squarefree monomials x_{v_1} x_{v_2} · · · x_{v_k} such that {v_1, v_2, . . . , v_k} ∉ ∆. We call I_∆ the Stanley-Reisner ideal of ∆; it is easy to see that it is generated by the monomials corresponding to the minimal non-faces of ∆.
We will define ∆ to be k-Cohen-Macaulay (k-CM) if for some (equivalently, every) l.s.o.p. θ_1, θ_2, . . . , θ_d of k[∆], the quotient k[∆]/(θ_1, . . . , θ_d) has Hilbert function given by the h-vector of ∆, that is, dim_k (k[∆]/(θ_1, . . . , θ_d))_i = h_i(∆) for all i. When the field k is understood, we will simply say such a complex is Cohen-Macaulay (CM). Note that if ∆ is k-CM, then ∆ is K-CM for each field K with the same characteristic as k.
There are many equivalent definitions of the Cohen-Macaulay property, see, for example [14]. Of particular note is Reisner's characterization [11] of CM complexes in terms of the vanishing of certain homologies. In particular, it follows from Reisner's result that every CM complex is pure. Many interesting complexes are CM, including all shellable complexes and all triangulations of balls and spheres.
Balanced Complexes and Flag Complexes A simplicial complex ∆ is flag if every clique in the 1-skeleton of ∆ forms a face of ∆. In particular ∆ is completely determined by its set of edges, and I ∆ is generated in degree two. In this case ∆ is both the clique complex of its 1-skeleton and the independence complex of the graph complement of its 1-skeleton.
For ∆ a simplicial complex on V, a map κ : V → [k] is called a proper k-coloring of ∆ if whenever distinct vertices v_1 and v_2 are contained in a common face of ∆, κ(v_1) ≠ κ(v_2) (in other words, κ is a proper coloring of the 1-skeleton of ∆ in the graph-theoretic sense). A complex which has a proper k-coloring is called k-colorable, and a (d − 1)-dimensional complex which is d-colorable is called balanced. (Sometimes these complexes are called completely balanced if more general types of balance are in play.) A result of Stanley [13] (necessity) and Björner, Frankl and Stanley [1] (sufficiency) completely characterized the h-vectors of balanced Cohen-Macaulay complexes.
Theorem 1.3 ([1,13]). Let h = (h_0, h_1, . . . , h_d) be a vector of non-negative integers. The following are equivalent: (i) h is the h-vector of some (d − 1)-dimensional balanced Cohen-Macaulay complex; (ii) h is the f-vector of some d-colorable simplicial complex (that is, h_i = f_{i−1} for each i).

It is furthermore worth noting that a purely numerical characterization of the f-vectors of d-colorable simplicial complexes was found in [7]; if Conjecture 1.1 is true, it would imply that the h-vector of any Cohen-Macaulay flag complex is the f-vector of such a complex. Now suppose there is some 2-dimensional flag complex Ω with f(Ω) = f(∆). The 1-skeleton of Ω is then a graph on 7 vertices which contains no K_4 (as Ω is 2-dimensional) and has 16 edges. Turán's Theorem [15] tells us this is the maximum number of edges in a K_4-free graph on 7 vertices, and in particular that the 1-skeleton of Ω must in fact be the Turán graph T(7, 3). But T(7, 3) contains 12 triangles, all of which must be faces of Ω, a contradiction (a brute-force verification of these counts is given after the outline below). Thus there is no 2-dimensional flag complex having the same f-vector as ∆, so the reverse of Conjecture 1.1 does not hold.

Outline. We first outline our approach, which is adapted from that used in [10] and [2]. Throughout the following, let ∆ be a (d − 1)-dimensional Cohen-Macaulay complex on vertex set V, with X = {x_v : v ∈ V} the corresponding set of variables. Suppose we fix a total order on X and let ≺ denote the corresponding reverse lexicographical (revlex) order on the monomials in X. Let T_≺ denote the last d elements of X with respect to ≺, and (T_≺) the ideal in k[X] these elements generate. Further suppose we pick a graded automorphism g of k[X] (considered as a matrix with entries in k); then k[X]/gI_∆ is isomorphic to k[∆]. Furthermore, gI_∆ + (T_≺) is a homogeneous ideal, so its revlex initial ideal In(gI_∆ + (T_≺)) is well-defined, and Theorem 15.3 of [3] asserts that the set B_g(∆) of monomials in X not in In(gI_∆ + (T_≺)) forms a k-basis of k[X]/(gI_∆ + (T_≺)); in particular, if T_≺ is an l.s.o.p. for k[X]/gI_∆, then the F-vector of B_g(∆) is the h-vector of ∆. Thus, in light of Theorem 1.3, to prove Theorem 1.2 it will suffice to show that for ∆ a complex as in the statement of the theorem, we may choose an order on X, an automorphism g, and a partition of X − T_≺ into disjoint sets X_1, X_2, . . . , X_d such that T_≺ is an l.s.o.p. for k[X]/gI_∆, and for each m ∈ B_g(∆) and 1 ≤ i ≤ d, deg(m_{X_i}) ≤ 1 (the last condition ensures that B_g(∆) is a simplicial complex with d-coloring corresponding to the partition of X − T_≺).
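Returning to the Turán-graph counts used in the example above, a brute-force check confirms that T(7, 3) = K_{3,2,2} has 16 edges and 12 triangles:

```python
from itertools import combinations

parts = [{0, 1, 2}, {3, 4}, {5, 6}]                       # the Turan graph T(7, 3)
part_of = {v: i for i, p in enumerate(parts) for v in p}
edges = {e for e in combinations(range(7), 2) if part_of[e[0]] != part_of[e[1]]}
triangles = [t for t in combinations(range(7), 3)
             if all(e in edges for e in combinations(t, 2))]
print(len(edges), len(triangles))                          # 16 12
```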
Finally, to verify that T_≺ is an l.s.o.p. for k[X]/gI_∆, it will suffice to check that g satisfies the Kind-Kleinschmidt condition [9]: • For every facet {x_{v_1}, x_{v_2}, . . . , x_{v_k}} of ∆, the submatrix of g^{-1} given by the intersection of the last d columns of g^{-1} with the rows corresponding to v_1, v_2, . . . , v_k has rank k. It will be convenient in our arguments to replace the field k with a larger field K, defined to be the field of rational functions over k in indeterminates z_1, z_2, z_3, z_4. A complex which is k-CM is also K-CM, and passing between the two will not affect the enumerative consequences of our arguments.
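Numerically, the Kind-Kleinschmidt condition is a straightforward rank check; the sketch below verifies it for a given matrix and facet list, using a random numeric specialization in place of the indeterminates z_1, . . . , z_4 (a fully symbolic check would use a computer algebra system instead):

```python
import numpy as np

def kind_kleinschmidt(g_inv, facets, d):
    """For every facet, the rows of the last d columns of g_inv indexed by that
    facet must form a matrix of full rank (equal to the facet's size)."""
    last = np.asarray(g_inv)[:, -d:]
    return all(np.linalg.matrix_rank(last[list(f), :]) == len(f) for f in facets)

rng = np.random.default_rng(2)
facets = [(0, 1), (1, 2), (2, 3), (3, 4)]   # facets of a path on 5 vertices
print(kind_kleinschmidt(rng.normal(size=(5, 5)), facets, d=2))   # True (generically)
```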
Proof of Theorem 1.2
Let ≺ be a total order on X and g a graded automorphism of K[X]. We will call (≺, g) a balancing pair for ∆ if there exists a partition of X − T_≺ into disjoint sets X_1, X_2, . . . , X_d such that (1) g satisfies the Kind-Kleinschmidt condition for ∆.
(2) If m ∈ B_g(∆) and 1 ≤ i ≤ d, then deg(m_{X_i}) ≤ 1. Then to prove Theorem 1.2, it suffices to show that there exist balancing pairs for the complexes in question. We will induct from the k = 1 case with the aid of some lemmas.
First suppose we have balancing pairs (g_1, ≺_1) and (g_2, ≺_2) for complexes ∆_1 and ∆_2, respectively, where ∆_i has dimension d_i − 1, vertex set V_i, and corresponding set of variables X_i. Let X^i_1, . . . , X^i_{d_i} denote the corresponding partitions of X_i − T_{≺_i}, for i = 1, 2.
Further suppose that for i = 1, 2, g_i is of the form Now let ≺ be the order on X_1 ∪ X_2 given by x ≺ y if and only if either • x, y ∈ X_i and x ≺_i y, • x ∉ T_{≺_i} for i = 1, 2 and y ∈ T_{≺_i} for some i. Lemma 2.1. The pair (≺, g) defined above is a balancing pair for ∆_1 * ∆_2.
Proof. Let X = X_1 ∪ X_2. It is clear that g is a graded automorphism of K[X]. The dimension of ∆_1 * ∆_2 is d_1 + d_2 − 1, and any facet τ of ∆_1 * ∆_2 is of the form τ = τ_1 ∪ τ_2, where τ_i is a facet of ∆_i. Then the submatrix of g^{-1} given by the intersection of the last d_1 + d_2 columns of g^{-1} with the rows indexed by τ is just the block-diagonal matrix M = diag(M_1, M_2), where for i = 1, 2, M_i is the submatrix of g_i^{-1} given by the intersection of the last d_i columns of g_i^{-1} with the rows indexed by τ_i. Thus the rank of M is rank(M_1) + rank(M_2) = |τ_1| + |τ_2| = |τ|, so g satisfies the Kind-Kleinschmidt condition for ∆_1 * ∆_2.
For our partition of X − T_≺, we will simply use the one inherited from the partitions of X_1 − T_{≺_1} and X_2 − T_{≺_2} (noticing that T_≺ = T_{≺_1} ∪ T_{≺_2}).
Suppose that for some j = 1, 2 and i ∈ {1, 2, . . . , d_j}, m is a monomial on X^j_i of degree greater than 1. Then m ∉ B_{g_j}(∆_j), so there is some ν ∈ I_{∆_j} such that m = In(g_j ν). But then it is clear that ν ∈ I_{∆_1 * ∆_2}, and gν = g_j ν. As g_j ν involves only variables in X_j and ≺ restricts to ≺_j on X_j, In(gν) = m, so m ∉ B_g(∆_1 * ∆_2). Then, as B_g(∆_1 * ∆_2) is a multicomplex, no monomial in B_g(∆_1 * ∆_2) has degree greater than 1 in X^j_i.
Lemma 2.2.
Suppose ∆ is a full-dimensional subcomplex of Γ and (≺, g) is a balancing pair for Γ. Then (≺, g) is a balancing pair for ∆.
Proof. As each face of ∆ is a face of Γ, it follows immediately that g satisfies Kind-Kleinschmidt for ∆. Now suppose m is a monomial in K[X] such that for some i ∈ {1, 2, . . . , d}, deg(m_{X_i}) > 1. Then m ∉ B_g(Γ), so m ∈ In(gI_Γ). In other words, there is an element ν of I_Γ such that In(gν) = m. But as ∆ is a subcomplex of Γ, I_Γ ⊆ I_∆, so m ∈ In(gI_∆).

Lemma 2.3. Let Γ be a complex as in the statement of Theorem 1.2 with k = 1. Then there is a balancing pair for Γ.

Proof. If d = 1, take any total order ≺ on X and let g be a graded automorphism whose inverse has every entry of its last column nonzero; the Kind-Kleinschmidt condition is then immediate, as the facets of Γ are single vertices. Our partition of X − T_≺ must simply be X_1 = X − {x_w}, where x_w is the last element of X with respect to ≺. As Γ is 0-dimensional, for any distinct vertices v_1, v_2 of Γ we have x_{v_1} x_{v_2} ∈ I_Γ, and for a suitable choice of g each square x_v² also lies in In(gI_Γ).
Thus no monomial in B_g(Γ) has degree greater than 1.

If d = 2 and Γ is bipartite, i.e., 2-colorable, let V_1 and V_2 be the color classes of Γ (for some proper 2-coloring of Γ). Then, identifying V_1 and V_2 with the 0-dimensional complexes on them, Γ is a full-dimensional subcomplex of V_1 * V_2, so the conclusion follows from the d = 1 case and our lemmas.

Finally, suppose d = 2 and Γ is not 2-colorable, but Γ − e is 2-colorable for some edge e of Γ. Let e = {y, z}, and let A and B be the color classes of Γ − e. Note that y and z must be in the same color class; we may assume that both are in A. Take ≺ to be a total order on X such that the elements of B come before all the elements of A, and y and z are the second-to-last and last elements, respectively, so that T_≺ = {y, z}. Our partition of X − T_≺ will be X_1 = {x_v : v ∈ A − {y, z}} and X_2 = {x_v : v ∈ B}. Now, let C be the (n − 2) × 2 matrix whose first column is all ones and whose second column has a one in its first |B| rows and zeroes elsewhere, let Z = ( z_1 z_2 ; z_3 z_4 ), and define g by requiring that g^{-1} have the block form ( I_{n−2} −CZ ; 0 Z ) with respect to the ordering ≺ of the variables. We first check the Kind-Kleinschmidt condition. Notice that the rows of −CZ corresponding to variables in B are exactly (−(z_1 + z_3), −(z_2 + z_4)), and the rows corresponding to variables in A are (−z_1, −z_2).
A facet of Γ is a pair {v_1, v_2} where either v_1 = y and v_2 = z, or v_1 ∈ A and v_2 ∈ B. In the first case, the submatrix of g^{-1} defined by the intersection of the last two columns with the rows indexed by v_1 and v_2 is Z; in the second, it will always have first row (−(z_1 + z_3), −(z_2 + z_4)), while the second row will be either (−z_1, −z_2), (z_1, z_2), or (z_3, z_4). In any case, the rows are linearly independent, so the submatrix has rank 2, and Kind-Kleinschmidt is satisfied.
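This linear independence can be checked symbolically; each relevant 2 × 2 determinant below equals ±(z_1 z_4 − z_2 z_3), a nonzero element of K:

```python
import sympy as sp

z1, z2, z3, z4 = sp.symbols("z1 z2 z3 z4")
top = [-(z1 + z3), -(z2 + z4)]                    # row of a vertex in B
for bottom in ([-z1, -z2], [z1, z2], [z3, z4]):   # rows for A - {y, z}, y, and z
    print(sp.expand(sp.Matrix([top, bottom]).det()))
# Each determinant is +/-(z1*z4 - z2*z3), hence nonzero in K.
```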
To complete the proof that (≺, g) is a balancing pair, it suffices to show that any degree two monomial in X 1 or X 2 lies in In(gI Γ ).
First, if x_i ≠ x_j are both elements of X_l for some l ∈ {1, 2}, then they correspond to vertices of the same color class, so x_i x_j ∈ I_Γ and g(x_i x_j) = x_i x_j, and so x_i x_j ∈ In(gI_Γ).
Now suppose x_i ∈ X_2, so x_i = x_v where v is a vertex in B. As Γ is flag and 1-dimensional, it contains no triangles, so as {y, z} ∈ Γ, at least one of {v, y} or {v, z} is not in Γ. In particular, x_i x_w ∈ I_Γ, where w is either y or z. In either case, g(x_i x_w) = x_i² + S up to a nonzero scalar, where S is a sum of degree-two monomials occurring later than x_i² in the revlex order. As x_j x_i ∈ gI_Γ for all j < i, we then have x_i² + S ∈ gI_Γ, so x_i² ∈ In(gI_Γ). Finally, suppose x_i ∈ X_1, so x_i = x_v where v ∈ A − {y, z}. Then x_i x_y and x_i x_z are both in I_Γ, and so gI_Γ contains g(x_i x_y − x_i x_z) = x_i² + S_1 − S_2 up to a nonzero scalar, where S_1 and S_2 consist of monomials occurring after x_i² in the revlex order. In particular, x_i² ∈ In(gI_Γ).
Independence Complexes of Graphs with Large Girth
Recall that the independence complex of a graph G on vertex set V is the simplicial complex I(G) whose faces are exactly the independent sets of G, that is, subsets τ of V such that no two elements of τ are adjacent in G. A simplicial complex is flag if and only if it is the independence complex of some graph. The aim of this section is to show that Conjecture 1.1 holds for CM flag complexes arising as independence complexes of graphs of sufficient girth.
Suppose ∆ = I(G) for some graph G. Define β(G) to be the maximum size of an independent set of G, so that dim(I(G)) = β(G) − 1. If ∆ is Cohen-Macaulay, then ∆ is in particular pure, so all of the maximal independent sets of G have size β(G). Such a graph is called well-covered. Finbow and Hartnell (see [4,5]) gave a characterization of well-covered graphs of large girth in terms of the following notions. Let G be a graph on vertex set V. A pendant edge of G is an edge which is incident to a vertex of degree 1. A perfect matching in G is a set of edges M of G such that each vertex of G is in exactly one edge in M. If we allow smaller girths, however, things become more interesting. Following [4], define a 5-cycle in G to be basic if it contains no adjacent vertices of degree greater than or equal to 3 in G. Let PG be the set of graphs G such that the vertex set of G may be partitioned into two disjoint subsets P and C such that: • P contains the vertices in G adjacent to pendant edges of G, and the pendant edges form a perfect matching of P, and • C contains the vertices of the basic 5-cycles of G, and the vertices of these 5-cycles give a partition of C. A simple example of a graph in PG is given in Figure 2.

Theorem 3.2 ([4,5]). Let G be a connected well-covered graph of girth at least 5. Then either G ∈ PG or G is one of K_1, C_7, P_10, P_13, P_14, or Q_13.

Our aim is to show that CM complexes arising from graphs of girth at least 5 satisfy Conjecture 1.1. We first address the exceptional cases: Proposition 3.3. If G is one of C_7, P_10, P_13, P_14, or Q_13, then I(G) is not Cohen-Macaulay.
Proof. It is shown in [6] that I(C_n) is Cohen-Macaulay if and only if n is 3 or 5; in particular, I(C_7) is not Cohen-Macaulay (this may also be seen by explicitly computing its homology).
Finally, we note that I(P 14 ) and I(Q 13 ) both have dimension 4, but it may be computed (we used the Sage computer algebra system) that each has non-vanishing homology in degree 3. As CM complexes may only have non-vanishing homology in their top degree, neither of these complexes is Cohen-Macaulay.
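As an aside, well-coveredness itself is easy to test computationally for small graphs, since the maximal independent sets of G are exactly the maximal cliques of its complement (a brute-force sketch; exponential time in general):

```python
import networkx as nx

def is_well_covered(G):
    """True iff all maximal independent sets of G have the same size."""
    return len({len(c) for c in nx.find_cliques(nx.complement(G))}) == 1

for n in (5, 6, 7):
    print(n, is_well_covered(nx.cycle_graph(n)))   # 5 True, 6 False, 7 True
```

Consistent with the discussion above, C_7 is well-covered even though I(C_7) is not Cohen-Macaulay.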
We are now in a position to prove the main result of this section:

Theorem 3.4. Let G be a graph of girth at least 5 such that I(G) is Cohen-Macaulay. Then f(I(G)) is the f-vector of some balanced complex of the same dimension.

Proof. Suppose that G has connected components G_1, G_2, . . . , G_r. It may easily be seen that I(G) = I(G_1) * I(G_2) * · · · * I(G_r). It is known [12] that the join of two complexes is CM if and only if both complexes themselves are CM; in particular, each I(G_i) must be CM. Furthermore, each G_i must have girth at least 5, so by Theorem 3.2 and Proposition 3.3 each G_i is either K_1 or in PG. Now, suppose G_i ∈ PG. Let γ_1, γ_2, . . . , γ_j be the basic 5-cycles of G_i and e_1, e_2, . . . , e_l the pendant edges of G_i (so each vertex of G_i is in exactly one γ_s or e_s). Then I(G_i) is a subcomplex of I(γ_1) * I(γ_2) * · · · * I(γ_j) * I(e_1) * · · · * I(e_l). Note that each I(γ_s) is a 1-complex isomorphic to C_5, while each I(e_s) is 0-dimensional. Furthermore, dim I(G_i) = 2j + l − 1, so I(G_i) is a full-dimensional subcomplex of this join.
Thus, we see that I(G) is a full-dimensional subcomplex of Γ_1 * Γ_2 * · · · * Γ_k where each Γ_j is either 0-dimensional or C_5, so we may apply Theorem 1.2 to complete the proof.
Finally, we note that this class of flag complexes contains a large number of examples: Corollary 3.5. Suppose G is a well-covered graph such that any induced cycle in G has length 5 and β(G) = d. Then there is a balanced (d − 1)-complex ∆ such that f (I(G)) = f (∆).
Proof. Clearly the girth of G is either 5 or ∞. In [16], Woodroofe showed that if G is well-covered and contains no induced cycles of length other than 5 or 3, then I(G) is CM. Hence we may apply Theorem 3.4. Example 3.6. We conclude with an example of a flag complex that does not satisfy the conditions of Theorem 1.2. Let ∆ be the flag complex whose 1-skeleton is the graph G pictured in Figure 4.
Note that ∆ is shellable and hence Cohen-Macaulay (in fact, ∆ is a sphere). The dimension of ∆ is 2.
Suppose ∆ is a full-dimensional subcomplex of some Γ of the type described in Theorem 1.2. Then Γ is of dimension 2, so either Γ = Γ 1 * Γ 2 * Γ 3 where each Γ i is 0-dimensional, or Γ = Γ 1 * Γ 2 where Γ 1 is 0-dimensional and Γ 2 is 1-dimensional. In the former case, it would follow that G is 3-colorable, which it is not. So suppose Γ = Γ 1 * Γ 2 where Γ 1 is 0-dimensional and Γ 2 is 1-dimensional. As ∆ is a pure 2-dimensional subcomplex of Γ, Γ 1 must consist of a set of vertices which are pairwise disjoint in ∆ such that every facet of ∆ contains an element of Γ 1 . In other words, Γ 1 must be an independent set of G that intersects every facet of ∆. One may check by hand that no such independent set exists, and thus ∆ does not satisfy the condition of Theorem 1.2.
Acknowledgements
I would like to thank Isabella Novik for many enlightening discussions. This research was partially supported by VIGRE NSF grant DMS-0354131. | 2010-10-11T22:23:06.000Z | 2010-10-11T00:00:00.000 | {
"year": 2010,
"sha1": "f44ee9f1c313ebef5daff6784c8cf6521f6896b8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1010.2253.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "582bc4002ad761cc43b8360cc3be0be91821a896",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
258861282 | pes2o/s2orc | v3-fos-license | Occupational models from 42 million unstructured job postings
Summary Structuring jobs into occupations is the first step for analysis tasks in many fields of research, including economics and public health, as well as for practical applications like matching job seekers to available jobs. We present a data resource, derived with natural language processing techniques from over 42 million unstructured job postings in the National Labor Exchange, that empirically models the associations between occupation codes (estimated initially by the Standardized Occupation Coding for Computer-assisted Epidemiological Research method), skill keywords, job titles, and full-text job descriptions in the United States during the years 2019 and 2021. We model the probability that a job title is associated with an occupation code and that a job description is associated with skill keywords and occupation codes. Our models are openly available in the sockit python package, which can assign occupation codes to job titles, parse skills from and assign occupation codes to job postings and resumes, and estimate occupational similarity among job postings, resumes, and occupation codes.
In brief
We present an open data resource and software tool for understanding the associations between occupations, job titles, and skills in the United States labor market. These associations are used in several fields of research, including economics and public health, as well as for practical applications like matching job seekers to available jobs.
Structured occupational codes have been in use in the United States since 1977 with the release of the Standard Occupational Classification (SOC) system, 1 which is now the federal statistical standard for defining occupations. 2 Official statistics on workforce participation from the United States Bureau of Labor Statistics, the United States Census Bureau, and other federal agencies are structured in terms of these codes, of which there are 867 at the most detailed level in the 2018 version. However, the titles and descriptions that workers and employers use for particular jobs vary widely. Likewise, the functional descriptions and skill keywords associated with particular jobs vary, even though there are commonalities in the skills required among jobs within the same occupation or across similar occupations.
Occupational codes are central to many research studies. For example, recent studies of the labor market's response to the COVID-19 pandemic examined the dynamics of supply and demand shocks by occupation 3 and the feasibility of remote work by occupation. 4 Similarly, studies of occupational hazards in the public health literature often use SOC codes to proxy for exposure to hazards, for example in studying the differential risks to healthcare workers during the pandemic. 5 Assigning SOC codes by hand is time-consuming and does not scale to large datasets or to real-time applications. Several tools for automatically assigning SOC codes to job titles are available 6 but are limited in their model transparency and software accessibility. The National Institute for Occupational Safety and Health developed the NIOCCS system based on hand-coded SOC assignments to survey data, 7 but access to the system currently requires account registration and approval. 8 Similarly, the National Cancer Institute created a tool called Standardized Occupation Coding for Computer-assisted Epidemiological Research (SOCcer) by modeling expert-coded job titles, 9 but it is only accessible through a web interface, and results are retrieved later by e-mail. 10 The United States Department of Labor provides another web-based tool, the O*NET Code Connector. 11 Another web-based tool, Occupational Self-Coding and Automatic Recording, requires self-reporting by research participants. 12 There are also commercially licensed options, including the Lightcast Titles API 13 and the O*NET-SOC AutoEncoder. 14 Existing approaches either do not provide the parameters underlying their models, cannot run offline (e.g., to efficiently process large amounts of job title data), or do not adhere to FAIR principles for research software. 15

We present a reusable data resource and software toolkit that models the occupational structure in unstructured job titles and job descriptions derived from a comprehensive sample of online job postings. There are over 3 million distinct job titles in the approximately 42 million job postings underlying our models. In contrast to existing methods, our model parameters are openly available and reproducible. Our models are pre-packaged in the downloadable sockit python package, 16 as well as in a hosted web application, 17 for convenient reuse with minimal dependencies.
Beyond their applications in scientific research, occupation codes also have important practical uses for policy makers and in real-world applications. The use case that motivated this data resource was a practical application to extract structured occupational information from available unstructured data. Specifically, sockit was implemented in a recommendation system that helps job seekers discover new careers, recently deployed by labor departments in Rhode Island, Hawai'i, New Jersey, Colorado, and Maryland of the United States. 18 The entry point for job seekers to these applications is a resume upload or manual entry of previous job titles, which are unstructured data. The algorithm for recommending careers, however, requires structured SOC codes and skill keywords that are estimated from the unstructured input using the methods described in this article. With the increasing volume of unstructured job and resume data available online, automatic processing with methods like sockit will be increasingly important for a broader range of both research and policy applications.
RESULTS
Our primary data come from the NLx Research Hub, 19 a real-time and historical data warehouse representing the diversity of jobs available in the United States labor market, which is accessible at no cost for approved research projects. Job postings in the Research Hub originate in the National Labor Exchange, 20 which is a partnership between the National Association of State Workforce Agencies 21 and the DirectEmployers Association 22 to collect and distribute online job postings from corporate career websites, state job banks, and the United States federal jobs portal. 23 At the time of writing, the National Labor Exchange advertises that they collect job postings for 300,000 employers with a daily volume of 3.7 million active (both new and existing) job postings. 20 We accessed 42,298,617 historical records in the NLx Research Hub for the years 2019 (13,241,134 records) and 2021 (29,057,483 records). We chose these two years because they represent the United States labor market before and after the COVID-19 economic crisis, but not during the beginning of the COVID economic crisis itself in 2020 Q2. Each record contains unstructured fields for job title and a full-text job description, and structured fields for acquisition date, city, state, and postal code.
We make use of prefix trees (also known as tries) throughout the data processing pipeline. Briefly, a prefix tree is a data structure that allows efficient lookups of strings and is frequently used to solve string searching, spellchecking, and autocompletion tasks. We developed the open-source wordtrie python package 24 specifically to implement substring matching in sockit and the data processing pipeline described below, but we released it as its own package due to its generality.
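As an illustration of the idea, a minimal word-level trie can be written in a few lines of Python. This sketch is ours: the class name, API, and payloads are illustrative and do not reproduce the actual wordtrie package.

```python
class WordTrie:
    """A minimal word-level prefix tree: keys are sequences of words."""

    def __init__(self):
        self.children = {}  # word -> WordTrie
        self.value = None   # payload stored at the end of an inserted key

    def insert(self, words, value):
        node = self
        for word in words:
            node = node.children.setdefault(word, WordTrie())
        node.value = value

    def longest_match(self, words):
        """Return the value of the longest inserted prefix of `words`."""
        node, best = self, None
        for word in words:
            node = node.children.get(word)
            if node is None:
                break
            if node.value is not None:
                best = node.value
        return best

trie = WordTrie()
trie.insert(["registered", "nurse"], "29-1141")  # illustrative SOC code
print(trie.longest_match("registered nurse supervisor".split()))  # 29-1141
```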
Research Hub job postings contain 849,284 distinct job titles after normalization

In practice, job titles are often written as a series of adjectives that add specificity to a principal noun. For example, a ''licensed practical nurse'' is a specific type of nurse, and a ''pizza delivery driver'' is a specific type of driver. In these cases, nurse and driver are the principal nouns that encode the most general meaning of the job. Common exceptions to this adjective-noun ordering are supervisory job titles, such as ''director of nursing'' or ''supervisor of delivery drivers,'' and assistant job titles, such as ''special assistant to the vice president.'' However, we can normalize those types of titles to adjective-noun ordering by pivoting them around the prepositions ''of,'' ''for,'' or ''to'' so that, for example, ''director of nursing'' becomes ''nursing director.'' Based on these insights, we started by identifying suitable principal nouns from existing datasets with job titles. We applied a natural language processing technique called part-of-speech tagging to identify nouns in all of the sample job titles available in the O*NET 27.0 Database, 25 including the 1,016 titles in the ''Occupation Data'' file and the 52,742 titles in the ''Alternative Titles'' file, as well as the 6,520 titles in the 2018 SOC Direct Match Title File from the United States Bureau of Labor Statistics. 26 We manually reviewed all words identified as nouns and curated them into a list of 2,514 principal nouns. At the same time, we curated a list of 259 unambiguous acronyms by extracting and reviewing all acronyms occurring in parentheses in the job titles, e.g., ''RN'' in ''Registered Nurse (RN),'' and retaining only the acronyms that mapped to a distinct SOC code in the files above.
Next, we extracted 3,179,805 distinct job titles occurring in the 42,298,617 records from the Research Hub after converting job titles to lowercase, removing extraneous text, and retaining alphabetical characters (implemented in the sockit.title.clean method). We further processed these titles to filter employer names using a prefix tree of 999 members of DirectEmployers, 27 United States place names using a prefix tree of state names and abbreviations as well as 330 large cities, 28 and a smaller set of 26 phrases and abbreviations that denote work schedule (e.g. ''part time'' or ''evenings'') and often occur in job posting titles. Of the 3,179,805 distinct job titles, 578,745 titles (representing 3,828,432 job postings) had one or more of these employer names, place names, or scheduling terms filtered out, and 433,764 titles (representing 3,138,421 job postings) were normalized to adjective-noun ordering by pivoting around a preposition. We retained 2,605,739 titles (representing 36,951,252 job postings) containing at least one of the 2,514 principal nouns.
Finally, we truncated 944,562 titles (representing 5,999,760 job postings) containing more than three words to retain only the principal noun and up to two preceding adjectives. This process is visualized in Figure 1, with an example title of ''Senior Director of Research (remote).'' Truncating the number of words represented in each title helps control the long tail of singleton titles corresponding to a single job posting. Table 1 shows how varying the threshold on the number of words affects the counts of distinct titles and singleton titles. While there is no optimal threshold given that every increase in the threshold also increases the number of unique titles and the proportion of singleton titles, moving from a threshold of two to three words results in the largest marginal return in terms of increasing the number of distinct titles and the proportion of non-singleton titles.
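The cleaning, pivoting, and truncation steps just described can be sketched as follows. This is a simplified stand-in with toy word lists, not the actual sockit.title.clean implementation; the employer- and place-name filters are omitted.

```python
import re

PRINCIPAL_NOUNS = {"nurse", "driver", "director", "manager"}  # toy subset of 2,514
SCHEDULE_TERMS = {"remote", "part", "time", "evenings"}       # toy subset of 26
PIVOTS = {"of", "for", "to"}

def normalize_title(title, max_words=3):
    # Lowercase and keep alphabetical characters only.
    words = re.sub(r"[^a-z ]", " ", title.lower()).split()
    words = [w for w in words if w not in SCHEDULE_TERMS]
    # Pivot supervisory titles: "director of nursing" -> "nursing director".
    for i, w in enumerate(words):
        if w in PIVOTS and 0 < i < len(words) - 1:
            words = words[i + 1:] + words[:i]
            break
    # Keep only the rightmost principal noun and up to two preceding words.
    for i in range(len(words) - 1, -1, -1):
        if words[i] in PRINCIPAL_NOUNS:
            return " ".join(words[max(0, i - (max_words - 1)):i + 1])
    return None  # no principal noun: the title is dropped

print(normalize_title("Senior Director of Research (remote)"))
# -> "research senior director"
```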
Our approach yielded a final list of 849,284 distinct job titles (representing 36,951,252 job postings) in normalized adjective-noun ordering with between one and three words, where the last word is a principal noun. We submitted these titles to the SOCcer web application 10 to obtain probabilities for the 10 most likely SOC 2010 codes associated with each distinct job title. We converted the SOC 2010 codes to SOC 2018 codes using a crosswalk provided by the United States Bureau of Labor Statistics. 29 We constructed a job title prefix tree by inserting the distinct job titles with their counts weighted by the SOCcer probability of each SOC code, excluding probabilities below 0.02. This threshold is meant to control for false positives and reduce the number of SOC codes assigned to each job title. While there is no way to determine an optimal probability threshold since there is no ground truth available, the threshold value of 0.02 controls the number of jobs that would be assigned multiple SOC codes that differ at the 2-digit level (Table 2), which arguably should be applicable to only a small proportion of jobs. With no threshold, approximately two-thirds of job titles would be assigned a second SOC code that differs at the 2-digit level. In contrast, a threshold of 0.10 would have less than 1% of jobs with a second 2-digit SOC code but would eliminate almost three-quarters of job titles. The threshold of 0.02 allows for 5% of jobs to be assigned a second 2-digit SOC code while retaining approximately half of the job titles.
Because the titles are normalized to end with the principal noun, we inserted the titles into the prefix tree in reverse word order (e.g., from right to left). Therefore, the more general principal nouns occur at the top of the tree, and the leaf nodes are the more specific titles that include adjectives. Each leaf node in the tree contains a histogram of SOC code counts, which we aggregated across parent nodes so that we can assign SOC probabilities to partial title matches, all the way down to the root nodes that contain a single principal noun. Figure 2 illustrates the structure of the job title prefix tree with examples of job title families for nurses and drivers. The sockit package includes a ''title'' module that can search for titles within a longer query string in reverse word order so that all matches start from a principal noun.
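A reversed-order title trie with per-node SOC histograms can be sketched as below. The class is our own simplification of the structure described above, and the SOC codes and probabilities are illustrative only.

```python
from collections import Counter

class TitleNode:
    """Node in a reversed-title trie; each node keeps a SOC histogram."""

    def __init__(self):
        self.children = {}
        self.soc_counts = Counter()

    def insert(self, words, soc_weights, count=1):
        """Insert a normalized title in reverse word order, aggregating
        weighted SOC counts at every node along the path."""
        node = self
        for word in reversed(words):
            node = node.children.setdefault(word, TitleNode())
            for soc, p in soc_weights.items():
                node.soc_counts[soc] += p * count

    def lookup(self, words):
        """Walk from the principal noun leftward; return the histogram of
        the deepest matching node (partial matches fall back to parents)."""
        node, best = self, None
        for word in reversed(words):
            node = node.children.get(word)
            if node is None:
                break
            best = node.soc_counts
        return best

root = TitleNode()
root.insert(["registered", "nurse"], {"29-1141": 0.95})
root.insert(["practical", "nurse"], {"29-2061": 0.90})
# "pediatric nurse" is unseen, but falls back to the aggregated "nurse" node.
print(root.lookup(["pediatric", "nurse"]))
```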
In practice, we found that management titles containing the principal nouns ''manager,'' ''director,'' ''supervisor,'' ''vp,'' and ''president'' were difficult to classify correctly with the job title prefix tree because of variation in their adjectives and modifiers. Therefore, we built a separate management title prefix tree that maps 6,150 such titles to the one or two most relevant SOC codes using search results from the O*NET Code Connector. 11

Term-weighted job descriptions estimate skill probabilities by occupation

To study skills in job descriptions, we began by manually sampling 1,075 skill keywords from CareerOneStop. 30 Six reviewers manually edited these keywords and suggested groupings of similar keywords. One of the reviewers used the others' edits and groupings to curate a final list of 755 keywords, and we constructed a skills prefix tree to map the original 1,075 keywords plus 254 alternative forms (e.g., plural vs. singular) to the curated 755 skill keywords.
Next, we counted the occurrence of skill keywords in the 42,298,617 records from the NLx Research Hub using the skills prefix tree and estimated SOC probabilities for their corresponding job titles using the job titles prefix tree. We found that the records contained 26,953,261 distinct job descriptions (see Table S1), and 24,009,146 of those (89.1%) contained at least one skill keyword and had at least one SOC code with ≥0.1 probability in their title. We represented these associations as a sparse ''job-skill'' matrix with the dimensions 24,009,146 × 755 and a sparse (transposed) ''SOC-job'' matrix with the dimensions 867 × 24,009,146.
Because the skill keywords vary from general to specific and technical, we applied a natural language processing technique called Term Frequency-Inverse Document Frequency (TF-IDF) 31 to reweight the occurrences of skill keywords in the job-skill matrix to better approximate the relevance of individual skill keywords for determining occupation. 32 We calculated the matrix product of the SOC-job matrix and the TF-IDF-weighted job-skill matrix to produce a dense SOC-skill matrix with dimensions of 867 × 755. We normalized the rows of the SOC-skill matrix, which can be interpreted as probability distributions over skills for each SOC code. Figure S1 visualizes the structure of this matrix.
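In outline, the matrix pipeline of this step looks like the following sketch, using scipy and scikit-learn with random stand-in data. The array shapes mirror the text, but all variable names and values here are ours, and the job count is shrunk from 24 million to a toy size.

```python
import numpy as np
from scipy import sparse
from sklearn.feature_extraction.text import TfidfTransformer

rng = np.random.default_rng(0)
n_jobs, n_skills, n_socs = 1000, 755, 867  # toy job count

# Sparse job-skill keyword counts and sparse SOC-job probabilities (stand-ins).
job_skill = sparse.random(n_jobs, n_skills, density=0.01, format="csr",
                          random_state=0,
                          data_rvs=lambda n: rng.integers(1, 5, n).astype(float))
soc_job = sparse.random(n_socs, n_jobs, density=0.005, format="csr",
                        random_state=1)

# Reweight skill counts with TF-IDF, then project onto SOC codes.
job_skill_tfidf = TfidfTransformer().fit_transform(job_skill)
soc_skill = np.asarray((soc_job @ job_skill_tfidf).todense())

# Normalize rows so each SOC code is a probability distribution over skills.
soc_skill /= np.maximum(soc_skill.sum(axis=1, keepdims=True), 1e-12)
print(soc_skill.shape)  # (867, 755)
```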
Cosine distance between skill probabilities captures occupational similarity in the SOC code hierarchy

We estimated occupational similarity by computing pairwise distances between vectors of skill probabilities and using the inverse of the distance as a similarity measure. To compare occupations, we computed distances between all pairs of SOC code rows in the SOC-skill matrix to produce a ''SOC-SOC'' similarity matrix. We tested four distance measures for this matrix: the Euclidean (L 2 ) metric, the Manhattan (L 1 ) metric, the cosine metric, and the Kullback-Leibler divergence. 33 We found that the cosine metric best captured the block-diagonal structure of occupations at the 2-digit SOC code level ( Figure S2). To quantitatively assess this, we grouped SOC code pairs by whether they share the same first 2 digits and calculated the ratio of the mean similarity score within these two groupings. We found that the highest ratio was for cosine similarity (2.083), followed by Manhattan (1.467), Kullback-Leibler (1.113), and Euclidean (1.099). Therefore, cosine distance, on average, assigns higher similarity between SOC codes within the same 2-digit SOC code level.
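The within- versus between-group comparison can be expressed compactly as below. This sketch assumes a SOC-by-skill matrix like the one in the previous sketch and toy data; it is not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def two_digit_similarity_ratio(soc_skill, soc_codes):
    """Ratio of mean pairwise cosine similarity between SOC codes sharing
    the same 2-digit prefix vs. pairs that do not."""
    sim = cosine_similarity(soc_skill)
    prefixes = np.array([code[:2] for code in soc_codes])
    same = prefixes[:, None] == prefixes[None, :]
    off_diag = ~np.eye(len(soc_codes), dtype=bool)
    within = sim[same & off_diag].mean()
    between = sim[~same].mean()
    return within / between

# Toy example with four codes in two 2-digit groups.
codes = ["29-1141", "29-2061", "53-3032", "53-3033"]
mat = np.random.default_rng(0).random((4, 755))
print(two_digit_similarity_ratio(mat, codes))
```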
Skill distributions occur in many sources, including job descriptions and resumes. Therefore, we extended this method to be able to count skill keywords in arbitrary documents, apply the same TF-IDF transformation from the SOC-skill matrix above, and compare similarity between two documents or between a document and all SOC code rows in the SOC-skill matrix. These functions are provided in the ''parse'' and ''compare'' modules ( Figure 3) of the sockit package.
Research Hub job postings sample approximately 12% of United States job openings
We filtered the NLx Research Hub job postings using their acquisition date by removing jobs that were exact duplicates on job description content within an acquisition month. We aggregated the probability-weighted SOC counts at the month, year, and United States state level. These counts are a proxy for job openings, and we compared the counts both nationally and for the largest five states in the United States against official job opening estimates from the United States Bureau of Labor Statistics' Job Openings and Labor Turnover Survey 34 (JOLTS).
We found that, on average, across all months in 2019 and 2021, there were 8.3 times as many job openings reported in JOLTS as job postings in the NLx Research Hub, suggesting that it represents a 12% sample of job openings in the United States. We scaled the NLx Research Hub counts by a factor of 8.3 and compared it at the month level with the JOLTS estimates and found that these are closely related (Figure 4) and likely reflect the job market recovery to pre-pandemic levels. 35 However, the same comparison for the five largest states showed that California is under-represented, especially in the year 2019 (which is consistent with a known technical issue regarding California's data in the NLx Research Hub), and that New York is consistently over-represented. Therefore, our job posting data appear to be representative of job openings at the national level but not at the level of individual states in the United States.
We also found that occupational representation in job postings differs from actual United States employment by comparing the proportion of job postings at the 2-digit SOC code level with estimates of employment levels in the United States in 2019 and 2021. The employment estimates at the 2-digit SOC code level come from the United States Census Bureau's American Community Survey, 36 including estimates of all employed workers and of non-seasonal full-time workers, and from the United States Bureau of Labor Statistics' Occupational Employment and Wage
Statistics. 37 This comparison examines which occupations in our data are over- or under-represented due to a combination of actual demand in the labor market and potential occupational biases in our job postings. We found that NLx Research Hub job postings are over-represented in computer and healthcare occupations and under-represented in legal, food service, farming, and construction occupations relative to actual employment levels ( Figure 5).
Accuracy of matching job titles to SOC codes varies by occupation
We tested the accuracy of estimating SOC codes from job titles and job postings using the title and parse modules in sockit. The title module estimates the most probable SOC codes for a job title using the job title prefix tree. The parse module estimates the most similar SOC codes for a job posting from the cosine similarity between a TF-IDF-weighted skill keyword vector parsed from the job posting vs. each row in the SOC-skill matrix.

[Figure 2: The prefix tree data structure used for substring matching of job titles to SOC code frequencies.]
To establish a ground truth for our tests, we used the O*NET 27.0 Database. 25 We tested the 7,541 titles in the ''Sample of Reported Titles'' file against their corresponding SOC codes. We tested synthetic job postings that we constructed for 818 SOC codes by concatenating all their entries in the ''Task Statements'' and ''Detailed Work Activities'' files, which approximate the language used to describe qualifications in a job posting for these occupations.
We defined accuracy as the fraction of cases where the correct SOC code was contained in the three most probable codes (for titles) or in the three most similar codes (for postings) returned by sockit. Overall, titles matched at the 6-digit level with 56.7% accuracy and at the 2-digit level with 81.7% accuracy, while postings matched at the 6-digit level with 27.8% accuracy and at the 2-digit level with 78.9% accuracy. Accuracy varied by SOC code levels and by the occupational group of the test SOC codes ( Figure 6).
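The accuracy criterion can be stated precisely in a few lines; the prediction data below are hypothetical and serve only to pin down the definition.

```python
def top3_accuracy(cases, digits):
    """Fraction of cases whose true SOC code matches one of the three most
    probable predicted codes on the first `digits` digits (hyphen ignored)."""
    def prefix(code):
        return code.replace("-", "")[:digits]
    hits = sum(
        any(prefix(true) == prefix(pred) for pred in preds[:3])
        for true, preds in cases
    )
    return hits / len(cases)

# Hypothetical (true code, ranked predictions) pairs.
cases = [
    ("29-1141", ["29-1141", "29-2061", "31-1131"]),  # 6-digit hit
    ("53-3032", ["53-3033", "53-3052", "41-2031"]),  # 2-digit hit only
]
print(top3_accuracy(cases, digits=6), top3_accuracy(cases, digits=2))
```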
DISCUSSION
Job postings contain a rich source of information on the associations between job titles, skill keywords, and occupational codes. In our sample of job postings from the NLx Research Hub, these empirical associations accurately recovered 2-digit SOC codes from job titles, although matching at the 6-digit SOC code level was less accurate for most occupations. This is consistent with previous findings. 6 Variation in accuracy may be due to varying effectiveness of our title cleaning methods for certain occupations and specialized job titles. Overall accuracy might be improved by supplementing our title cleaning methods and job title prefix tree with information from the job descriptions. Variation may also be due to sampling bias in either the SOCcer model used in our initial estimates or in the reported titles and task statements in the O*NET survey data used for testing. In this case, collecting additional labeled training and testing data would help, for example using an active learning strategy that targets label acquisition according to which occupational groups have lower accuracy. 38 The distribution of job postings by state in our sample is biased relative to official statistics on job openings by state. Further corrections or supplemental data may be required for job posting frequencies to serve as accurate proxies for actual job openings at the state or city level. However, the overall frequency of job postings in our sample is consistent with a 12% month-to-month sampling rate among national job openings.
Our sample could be biased in terms of occupational representation, although this is more difficult to test. Our comparison of job postings with actual employment levels by occupation is not ideal since it conflates sampling bias with actual demand in the labor market. A preferable comparison would have been between job postings and job openings at the 2-digit SOC level, but JOLTS estimates of job openings are not available at this level. We expect more job postings relative to actual employment levels for occupations that are in high demand, for example in healthcare, where we observe roughly twice as many job postings as currently employed workers ( Figure 5). This over-representation in healthcare job postings is greater post-pandemic and could be driven by increased demand for healthcare workers following turnover during the pandemic. Legal, construction, and farming job postings have similar under-representation pre-and post-pandemic, which could be driven by preferences in those industries to post jobs offline or on specialized sites.
A limitation of our use of keyword analysis is that 10.9% of the 26,953,261 distinct job descriptions in our data are dropped because they are concise and list few skills or qualifications. In future work, an alternative approach might examine all occurring unigrams, bigrams, or trigrams that are putative skills and cluster them into a skills taxonomy with topic modeling. This approach might be able to retain all job postings in our data but would also introduce greater model complexity and potential noise from ambiguous job postings that are currently dropped in our analysis. Alternatively, additional keyword analysis could capture educational, licensing, and certification requirements that are sometimes used in place of skills in concise descriptions.
The associations between skills and occupations in our data provide a level of detail not currently available in official statistics. Through natural language processing of skill keywords and their associations with occupational codes, we found that occupations can be modeled as probability distributions over term-weighted skills and that cosine distance between these distributions captures the existing SOC-code hierarchy of occupations.

[Figure 6: Accuracy of title-to-SOC models and job-posting-to-SOC models, measured as the percentage of test cases where the three most probable SOC codes match at the 2-, 3-, 5-, or 6-digit SOC level.]

We have applied these models in a real-world application to recommend career transitions to job seekers based on skill similarity to their previous occupations. 18 Using these methods, researchers can parse unstructured job description, resume, or job title data in order to conduct analyses that rely on structured SOC codes, which could open up new lines of research that were previously not possible.
In future work, we hope to improve our sampling through additional sources of job postings, in particular to address the lower accuracy of under-represented SOC codes in the Research Hub data. Improved sampling may reduce occupational and regional biases and increase the accuracy of matching SOC codes to job titles and approximating job openings from job posting frequencies.
Resource availability
The NLx Research Hub data reported in this study cannot be deposited in a public repository because it is accessible only by authorized users under agreement with the National Association of State Workforce Agencies. For more information, see the NLx Research Hub's request process at https://nlxresearchhub.org/ request-nlx-data. Datasets reported in this study that were derived from the NLx Research Hub data have been deposited at Zenodo and are publicly available as of the date of publication. 39 All original code has been deposited at GitHub and Zenodo. 40,41 These datasets and software are openly available for academic use, including reverse engineering and derivative works, under a custom license. | 2023-05-24T15:10:24.140Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "4cb03b121af9b60b8860fee612a7779e69cc10a6",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2666389923001022/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c29d314b80ccbda35fef9035764e4c9bb9a8a9d",
"s2fieldsofstudy": [
"Economics",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
1000722 | pes2o/s2orc | v3-fos-license | Energy-efficient 8-point DCT Approximations: Theory and Hardware Architectures
Due to its remarkable energy compaction properties, the discrete cosine transform (DCT) is employed in a multitude of compression standards, such as JPEG and H.265/HEVC. Several low-complexity integer approximations for the DCT have been proposed for both 1-D and 2-D signal analysis. The increasing demand for low-complexity, energy efficient methods require algorithms with even lower computational costs. In this paper, new 8-point DCT approximations with very low arithmetic complexity are presented. The new transforms are proposed based on pruning state-of-the-art DCT approximations. The proposed algorithms were assessed in terms of arithmetic complexity, energy retention capability, and image compression performance. In addition, a metric combining performance and computational complexity measures was proposed. Results showed good performance and extremely low computational complexity. Introduced algorithms were mapped into systolic-array digital architectures and physically realized as digital prototype circuits using FPGA technology and mapped to 45nm CMOS technology. All hardware-related metrics showed low resource consumption of the proposed pruned approximate transforms. The best proposed transform according to the introduced metric presents a reduction in power consumption of 21--25%.
In [48], Meher et al. proposed an HEVC architecture where the wordlength was maintained fixed by means of discarding least significant bits. In that context, the goal was the minimization of the computational complexity at the expense of wordlength truncation. Such an approach was also termed 'pruning'. However, it is fundamentally different from the approach discussed in the current paper. This terminology distinction is worth observing.
Thus, in response to the growing need for high compression of images and moving pictures for various applications [12], we propose a further reduction of the computational cost of the 8-point DCT computation in the context of JPEG-like compression and HEVC processing. In this work, we introduce pruned DCT approximations for image and video compression. Essentially, DCT-like pruning consists of extracting from a given approximate DCT matrix a submatrix that aims at furnishing similar mathematical properties. We advance the application of pruning techniques to several DCT approximations listed in the recent literature. In this paper, we aim at identifying adequate pruned approximations for image compression applications. VLSI realizations of both the 1-D and 2-D versions of the proposed methods are also sought.
This paper is organized as follows. In Section 2, a mathematical review of DCT approximation and pruning methods is furnished. Exact and approximate DCT are presented and the pruning procedure is mathematically described. In Section 3, we propose several pruned methods for approximate DCT computation and assess them by means of arithmetic complexity, coefficient energy distribution in transform-domain, and image compression performance. A combined figure of merit considering performance and complexity is introduced. In Section 4, a VLSI realization of the optimum pruned method according to the suggested figure of merit is proposed. Both FPGA and ASIC realizations are assessed in terms of area, time, frequency, and power consumption. Section 5 concludes the paper.
2 Mathematical Background
Discrete Cosine Transform
Let $\mathbf{x} = \begin{bmatrix} x_0 & x_1 & \cdots & x_{N-1} \end{bmatrix}^\top$ be an $N$-point input vector. The one-dimensional DCT is a linear transformation that maps $\mathbf{x}$ into an output vector $\mathbf{X} = \begin{bmatrix} X_0 & X_1 & \cdots & X_{N-1} \end{bmatrix}^\top$ of transform coefficients, according to the following expression [49]:
$$X_k = \alpha_k \cdot \sqrt{\frac{2}{N}} \cdot \sum_{n=0}^{N-1} x_n \cdot \cos\!\left[ \frac{\left(n+\tfrac{1}{2}\right) k \pi}{N} \right], \qquad (1)$$
where $k = 0, 1, \ldots, N-1$, $\alpha_0 = 1/\sqrt{2}$, and $\alpha_k = 1$ for $k > 0$. In matrix formalism, (1) is given by $\mathbf{X} = \mathbf{C} \cdot \mathbf{x}$, where $\mathbf{C}$ is the $N$-point DCT matrix whose entries are expressed according to $c_{m,n} = \alpha_m \cdot \sqrt{2/N} \cdot \cos\!\left[ (n+\tfrac{1}{2}) m \pi / N \right]$, $m, n = 0, 1, \ldots, N-1$ [23]. Being an orthogonal transform, the inverse transformation is given by $\mathbf{x} = \mathbf{C}^\top \cdot \mathbf{X}$. Because the DCT satisfies the kernel separability property, the 2-D DCT can be expressed in terms of the 1-D DCT. Let $\mathbf{A}$ be an $N \times N$ matrix. The forward 2-D DCT operation applied to $\mathbf{A}$ yields a transform-domain image $\mathbf{B}$ furnished by $\mathbf{B} = \mathbf{C} \cdot \mathbf{A} \cdot \mathbf{C}^\top$. In fact, the 2-D DCT can be computed after eight column-wise calls of the 1-D DCT on $\mathbf{A}$; then the resulting intermediate image is submitted to eight row-wise calls of the 1-D DCT. In this paper, we devote our attention to the case $N = 8$.
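For reference, the matrix C and the properties just stated can be checked numerically; the following NumPy snippet is a direct transcription of the equations above, not code from the paper.

```python
import numpy as np

def dct_matrix(N=8):
    """N-point DCT-II matrix with entries c[m, n] as defined above."""
    n = np.arange(N)
    alpha = np.where(n == 0, 1 / np.sqrt(2), 1.0)
    return alpha[:, None] * np.sqrt(2 / N) * np.cos(
        (n[None, :] + 0.5) * n[:, None] * np.pi / N)

C = dct_matrix(8)
assert np.allclose(C @ C.T, np.eye(8))      # orthogonality: C^T is the inverse
A = np.random.default_rng(0).random((8, 8))
B = C @ A @ C.T                             # separable 2-D DCT of a block
assert np.allclose(A, C.T @ B @ C)          # perfect reconstruction
```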
DCT Approximations
In general terms, a DCT approximation Ĉ is constituted of the product of a low-complexity matrix T and a scaling diagonal matrix S that ensures orthogonality or quasi-orthogonality [31]. Thus, we have Ĉ = S · T [16,27,28,50]. The entries of the low-complexity matrix are defined over the set {0, ±1, ±2}, which results in a multiplierless operator: only addition and bit-shifting operations are required. Usually possessing irrational elements, the scaling diagonal matrix S does not pose any extra computational overhead for image and video compression applications. This is due to the fact that the matrix S can be conveniently merged into the quantization step of compression algorithms [16,27,29,50].
Among the various DCT approximations archived in the literature, we single out the following methods: (i) the signed DCT (SDCT), which is the seminal method in the DCT approximation field [25]; (ii) the Bouguezel-Ahmad-Swamy (BAS) approximations [27,29,30]; (iii) the rounded DCT (RDCT) [28]; and (iv) the modified RDCT (MRDCT) [50]. These approximations were selected because they collectively exhibit a wide range of complexity vs. performance trade-off figures [50]. Moreover, such approximations have been demonstrated to be useful in image compression. The low-complexity matrices of the above methods are shown in Table 1. Additionally, we also considered the 8-point naturally ordered Walsh-Hadamard transform (WHT), which is a well-known low-complexity transform with applications in image processing [30,51].
Pruned Exact and Approximate DCT
Essentially, DCT pruning consists of extracting from the 8×8 DCT matrix C a submatrix that aims at furnishing similar mathematical properties as C. Pruning is often realized on the transform-domain by means of computing fewer transform coefficients than prescribed by the full transformation. Usually, only the K < N coefficients that retain more energy are preserved. For the DCT, this corresponds to the first K rows of the DCT matrix. Therefore, this particular type of pruning implies the following $K \times 8$ matrix:
$$\mathbf{C}_K = \begin{bmatrix} c_{0,0} & c_{0,1} & \cdots & c_{0,7} \\ \vdots & & & \vdots \\ c_{K-1,0} & c_{K-1,1} & \cdots & c_{K-1,7} \end{bmatrix},$$
where 0 < K ≤ 8 and $c_{m,n}$, m, n = 0, 1, . . . , 7, are the entries of C. The case K = 8 corresponds to the original transformation. Such a procedure was proposed in [32,46] for the DCT in the context of wireless sensor networks. For the 2-D case, we have that the pruned DCT is given by $\tilde{\mathbf{B}} = \mathbf{C}_K \cdot \mathbf{A} \cdot \mathbf{C}_K^\top$. Notice that $\tilde{\mathbf{B}}$ is a K × K matrix over the transform-domain. Lecuire et al. [46] showed that retaining the transform-domain coefficients in a K × K square pattern at the upper-left corner leads to a better energy-distortion trade-off when compared to the alternative triangle pattern [32].
The pruning approach can be applied to DCT approximations. By discarding the lower rows of the low-complexity matrix T, we obtain the following K × N pruned matrix transformation:
$$\mathbf{T}_K = \begin{bmatrix} t_{0,0} & t_{0,1} & \cdots & t_{0,7} \\ \vdots & & & \vdots \\ t_{K-1,0} & t_{K-1,1} & \cdots & t_{K-1,7} \end{bmatrix},$$
where $t_{m,n}$, m, n = 0, 1, . . . , 7, are the entries of T (cf. Table 1). Considering the orthogonalization method described in [31], the K×8 pruned approximate DCT is given by:
$$\hat{\mathbf{C}}_K = \mathbf{S}_K \cdot \mathbf{T}_K,$$
where $\mathbf{S}_K = \sqrt{\operatorname{diag}\{(\mathbf{T}_K \cdot \mathbf{T}_K^\top)^{-1}\}}$ is a K × K diagonal matrix and diag(·) returns a diagonal matrix with the diagonal elements of its argument. If T is orthogonal, then T K satisfies semi-orthogonality [52, p. 84].

The 2-D pruned DCT of a matrix A is given by $\tilde{\mathbf{B}} = \mathbf{T}_K \cdot \mathbf{A} \cdot \mathbf{T}_K^\top$. The resulting transform-domain matrix $\tilde{\mathbf{B}}$ is sized K × K.
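A small numerical sketch of the pruning construction follows. As the low-complexity matrix we use T = round(2C), which yields a {0, ±1} matrix of the RDCT type; this is our shortcut for illustration, and the exact matrices used in the paper are those of Table 1.

```python
import numpy as np

def dct_matrix(N=8):  # as in the previous sketch
    n = np.arange(N)
    alpha = np.where(n == 0, 1 / np.sqrt(2), 1.0)
    return alpha[:, None] * np.sqrt(2 / N) * np.cos(
        (n[None, :] + 0.5) * n[:, None] * np.pi / N)

def pruned_approximation(T, K):
    """First K rows of a low-complexity matrix T with the diagonal scaling
    S_K = sqrt(diag{(T_K T_K^T)^{-1}}) of the orthogonalization method."""
    T_K = T[:K, :]
    S_K = np.sqrt(np.diag(np.diag(np.linalg.inv(T_K @ T_K.T))))
    return S_K @ T_K, T_K

T = np.round(2 * dct_matrix(8))            # {0, +/-1} RDCT-style matrix
C_hat_K, T_K = pruned_approximation(T, K=6)
assert np.allclose(C_hat_K @ C_hat_K.T, np.eye(6))  # semi-orthogonality

# The 2-D pruned transform of an 8x8 block keeps only a KxK coefficient block.
A = np.random.default_rng(0).random((8, 8))
B_tilde = T_K @ A @ T_K.T
print(B_tilde.shape)                       # (6, 6)
```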
Complexity and Performance Assessment
In this section, we analyze the arithmetic complexity of the selected pruned DCT approximations. We also assess their performance in terms of energy retention and image compression for each value of K.
Arithmetic complexity
Because all considered approximate DCTs are natively multiplierless operators, the pruned DCT approximations inherit this property. Therefore, the arithmetic complexity of the pruned approximations is simply given by the number of additions and bit-shifting operations required by their respective fast algorithms.
To illustrate the complexity assessment, we focus on the MRDCT [50], whose fast algorithm signal flow graph (SFG) is shown in Figure 1(a). The full computation of the MRDCT requires 14 additions. By judiciously considering the computational cost of only the first K transform-domain components, we derived fast algorithms for the pruned MRDCT matrices as shown in Figure 1.
The same procedure was applied to each of the discussed approximations based on their fast algorithms [25,[27][28][29][30]50,51]. The obtained arithmetic additive complexity is presented in Table 2. We notice that the pruned MRDCT exhibited the lowest computational complexity for all values of K. Such mathematical properties of the MRDCT are translated into good hardware designs. Indeed, in [16], several DCT approximations were physically realized in FPGA devices. Hardware and performance assessments revealed that the MRDCT outperformed several competitors, including BAS 2008 [27] and RDCT [28], in terms of speed, hardware resource consumption, and power consumption [16].
An examination of (6) reveals that the 2-D pruned approximate DCT is computed after eight column-wise calls of the 1-D pruned approximate DCT and K row-wise calls of the 1-D pruned approximate DCT. Let $\mathcal{A}_{1\text{-D}}(\mathbf{T}_K)$ be the additive complexity of $\mathbf{T}_K$. Therefore, the additive complexity of the 2-D pruned approximate DCT is given by:
$$\mathcal{A}_{2\text{-D}}(\mathbf{T}_K) = (8 + K) \cdot \mathcal{A}_{1\text{-D}}(\mathbf{T}_K).$$
For the particular case of the pruned MRDCT, the corresponding expressions for K = 1, 2, . . . , 8 follow directly from the 1-D operation counts in Table 2.
Retained energy
To further examine the performance of the pruned approximations, we investigate the signal energy distribution in the transform-domain for each value of K. This analysis is relevant because higher energy concentration implies that K can be reduced without severely degrading the transform coding performance [23]. In fact, higher energy concentration yields a large number of zeros in the transform-domain after quantization.
In turn, a large number of zeros translates into longer runs of zeros, which are beneficial for the subsequent run-length encoding and Huffman coding stages [53]. We analyzed a set of fifty 512 × 512 256-level grayscale standard images from [54]. Original color images were converted to grayscale by extracting the luminance. Image types included textures, satellite images, landscapes, portraits, and natural images. Such variety is to ensure that selection bias is not introduced in our experiments; thus our results are expected to be robust in this sense. Images were split into 8×8 subimages. Resulting subimages were submitted to each of the discussed pruned DCT approximations for all values of K. Subsequently, the relative amount of retained energy in the transform-domain was computed.
Obtained values are displayed in Table 2.
Image Compression
Proposed methods were submitted to an image compression simulation to evaluate their performance as image/video coding tools. We based our experiments on the image compression simulation described in [25,27,36,53,55], which is briefly outlined next. We considered the same above-mentioned set of images, sub-image decomposition, and 2-D pruned transformation, as detailed in the previous sub-section. Resulting data were quantized by dividing each term of the transformed matrix by elements of the standard quantization matrix for luminance [53, p. 153]. Differently from [25,27,28], we included the quantization step in the image compression simulation. This is a more realistic and suitable approach for pruned methods, which take advantage of the quantization step.
An inverse procedure was applied to reconstruct images considering the 2-D inverse transform operation. Recovered images were assessed for image degradation by means of the peak signal-to-noise ratio (PSNR) [53, p. 9], the structural similarity index (SSIM) [56], and the spectral residual based similarity (SR-SIM) [57]. The SSIM compares an original image I with the recovered image R according to the following expression:
$$\mathrm{SSIM}(\mathbf{I}, \mathbf{R}) = \frac{(2\mu_I \mu_R + C_1)(2\sigma_{IR} + C_2)}{(\mu_I^2 + \mu_R^2 + C_1)(\sigma_I^2 + \sigma_R^2 + C_2)},$$
where $C_1 = (0.01 L)^2$ and $C_2 = (0.03 L)^2$, L = 255 is the dynamic range of pixel values, and the local means $\mu$, variances $\sigma^2$, and covariance $\sigma_{IR}$ are computed with weights $\omega_{i,j}$, the entries of a Gaussian weighting function $\mathbf{w} = [\omega_{i,j}]$, i, j = 1, 2, . . . , 8, with standard deviation of 1.5 and normalized to unit sum. The SR-SIM between the original image I and the recovered image R is calculated as described in [57].
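For concreteness, the windowed SSIM of a single 8 × 8 block can be computed as below; this is a sketch of the standard Wang et al. formulation with the usual default constants, not the authors' exact evaluation script.

```python
import numpy as np

def local_ssim(I, R, L=255.0, sigma=1.5):
    """SSIM of one 8x8 block with a normalized Gaussian weighting window."""
    k = np.arange(8) - 3.5
    g = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / (2 * sigma ** 2))
    w = g / g.sum()                       # normalized to unit sum
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_i, mu_r = (w * I).sum(), (w * R).sum()
    var_i = (w * (I - mu_i) ** 2).sum()
    var_r = (w * (R - mu_r) ** 2).sum()
    cov = (w * (I - mu_i) * (R - mu_r)).sum()
    return ((2 * mu_i * mu_r + C1) * (2 * cov + C2)) / (
        (mu_i ** 2 + mu_r ** 2 + C1) * (var_i + var_r + C2))

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
print(local_ssim(block, block))  # identical blocks give SSIM = 1.0
```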
Average PSNR, SSIM, and SR-SIM values of all images were computed and are shown in Table 2. For a qualitative analysis, Figure 2 displays the reconstructed Lena image computed via the MRDCT for all values of K. Associated PSNR, SSIM, and SR-SIM values are also shown. Visual inspection suggests K = 6 as a good compromise between quality and complexity. Indeed, we notice that the PSNR improvement from K = 5 to K = 6 is 3.92 dB, while the PSNR difference from K = 6 to K = 7 is just 0.4 dB.
Combined analysis
In order to compare the discussed approximations, we consider a combined figure of merit that takes into account some of the previously discussed measures. Although popular and worth reporting, mean retained energy and PSNR are closely related measures. Similarly, the SR-SIM is a derivative of SSIM. For a combined figure of merit, we aim at selecting unrelated measures; thus we separated the 2-D additive complexity, PSNR, and SSIM values, whose numerical values are listed in Table 2. Such a combined measure is proposed as a linear cost function that weights the normalized 2-D additive complexity against the normalized mean SSIM (NMSSIM) and the normalized mean PSNR (NMPSNR), where α 1 , α 2 ∈ [0, 1] are weights, and NMSSIM and NMPSNR are computed over all considered images submitted to a particular approximation T K . The above cost function is a multi-objective function of the type commonly found in optimization problems. For mid-range values of α 1 , we have less trivial scenarios. In Figure 3(a), considering the optimal transform, we notice that for mid-range values of α 1 the MRDCT and the BAS-2008 occupy most of the central area of the discussed region. Around the same region in Figure 3(b), for the MRDCT, we obtain mostly K = 6; whereas for the BAS-2008 we have K = 6, 8. We emphasize that the proposed pruned MRDCT with K = 6 requires only 12 additions. The fast algorithm for this particular case is presented in Figure 1(c).
HEVC Simulation
Taking into account the previous combined analysis, we embedded the proposed pruned MRDCT (K = 6), the BAS-2008 approximation (K = 8), and the pruned BAS-2008 (K = 6) in the widely employed HEVC reference software HM 10.0 [59]. This embedding consisted of substituting the original 8-point integer-based DCT transform present in the codec for each of the above-mentioned approximations. We considered nine CIF video sequences with 300 frames at 25 frames per second from a public video bank [60]. Such sequences were submitted to encoding according to: (i) the original software, and (ii) the modified software. We assessed mean PSNR metrics for luminance by varying the quantization parameter (QP) from 10 to 50. Figure 5 shows the relative percent PSNR of each approximate method compared to the original HEVC according to QP and bitrate values. The curves show very close performance to the original codec. In Figure 5(a), for low QP values, the approximations show even higher PSNR, i.e., more than 100% relative PSNR, suggesting better compaction capability at low compression rates. However, the same QP values do not necessarily generate the same compression ratio for each method, since distinct coefficients are derived from each transformation and submitted to the same quantization table. Figure 5(b) indicates that the approximations possess slightly lower coding performance compared to the original HEVC when compared at the same bitrate. At the same time, the approximate methods present considerably lower computational cost, and the loss of performance is smaller than 1%. Figure 6 shows a qualitative comparison considering the first frame of the standard "Foreman" video sequence at QP = 30. The degradation is hardly perceived.
VLSI Architectures
We aim at the physical realization of pruned designs based on the MRDCT, BAS-2008, and BAS-2013. The MRDCT and BAS-2008 were selected in accordance with the discussion in the previous section. The BAS-2013 was also included because it is the basis for the only pruned approximate DCT competitor in the literature [47]. Such designs were realized in a separable 2-D block transform using two 1-D transform blocks with a transpose buffer between them. Such blocks were designed and simulated, using bit-true cycle-accurate modeling, in Matlab/Simulink. Thereafter, the proposed architecture was ported to a Xilinx Virtex-6 field programmable gate array (FPGA) as well as to a custom CMOS standard-cell integrated circuit (IC) design. The transform was applied in a row-parallel fashion to the blocks of data, and all blocks were 8 × 8, irrespective of pruning. When K decreases, the number of null elements in the blocks increases. The row-transformed data were subject to transposition, and then the same pruned algorithm was applied, albeit in the column direction.
FPGA Rapid Prototypes
The pruned architectures were physically realized on a Xilinx Virtex-6 XC6VLX240T-1FFG1156 FPGA device with fine-grain pipelining for increased throughput. The FPGA realizations were verified using hardware-in-the-loop testing, which was achieved through a JTAG interface. Proposed approximations were verified using more than 10000 test vectors with complete agreement with theoretical values. Evaluation of hardware complexity and real-time performance considered the following metrics: the number of employed configurable logic blocks (CLB), flip-flop (FF) count, critical path delay (T cpd ), and the maximum operating frequency (F max ) in MHz. The xflow.results report file, from the Xilinx FPGA tool flow, led to the reported results. Frequency normalized dynamic power (D p , in mW/MHz) was estimated using the Xilinx XPower Analyzer software tool. The above measurements are shown in Table 3.
ASIC Synthesis
For the ASIC synthesis, the hardware description language code from the Xilinx System Generator FPGA design flow was ported to 45 nm CMOS technology and subject to synthesis using Cadence Encounter.
Standard ASIC cells from the FreePDK, a free open-source cell library at the 45 nm node, were used; results are shown in Table 4.
Discussion
The FPGA realization of the proposed pruned MRDCT showed drastic reductions in both area (measured from the number of CLBs) and frequency-normalized dynamic power consumption, compared to the full MRDCT. Table 5 shows the percentage reduction of area and frequency-normalized dynamic power for both the FPGA implementation and the CMOS synthesis for different pruning values. All metrics indicate lower hardware resource consumption when the number of outputs is reduced from 8 to 1. In particular, for K = 6, which minimizes the proposed cost function, we notice a power consumption reduction of approximately 20-25%.
Comparing the hardware resource consumption figures reported in Tables 3 and 4, it can be seen that the proposed transform outperforms both the pruned BAS-2008 and the pruned BAS-2013 in terms of hardware resource consumption and power consumption, while being on par in terms of speed.
Conclusion
In this paper, we present a set of 8-point pruned DCT approximations derived from state-of-the-art methods. We have embedded the proposed methods into the standard HEVC reference software [59]. Results presented very low qualitative and quantitative degradation at a considerably lower computational cost.
Additionally, low-complexity designs are required in several contexts where very high quality imagery is not a strong requirement, such as: environmental monitoring, habitat monitoring, surveillance, structural monitoring, equipment diagnostics, disaster management, and emergency response [61]. All of the above contexts can benefit from the proposed tools when embedded into wireless sensors with low-complexity codecs and low-power hardware [62].
We summarize the contributions of the present work:
• The pruning approach for DCT approximations was generalized by not only considering all possible pruning variations but also investigating a wide range of DCT approximations;
• An analysis covering all cases under different figures of merit, including arithmetic complexity and image quality measures, was presented;
• A combined figure of merit to guide the decision-making process in terms of hardware realization was introduced;
• The 2-D case was also analyzed, and we concluded that the pruning approach is even better suited for 2-D transforms;
• The considered pruned DCT approximation was implemented using Xilinx FPGA tools and synthesized using CMOS 45 nm ASIC technology. Such implementations demonstrated the low resource consumption of the proposed pruned transform. | 2016-12-02T19:47:28.000Z | 2015-12-30T00:00:00.000 | {
"year": 2016,
"sha1": "a5d54bbefc1df46c181f107c65e446bbd5f475dc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1612.00807",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a5d54bbefc1df46c181f107c65e446bbd5f475dc",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
4639346 | pes2o/s2orc | v3-fos-license | Use of Esophageal Impedance beyond Diagnosis of GERD
Combined multichannel intraesophageal impedance-pH (MII-pH) monitoring is currently the gold standard method for the diagnosis of gastroesophageal reflux disease (GERD). A standard MII-pH catheter has six pairs of impedance electrodes. Measuring impedance at multiple sites within the esophagus allows determination of the direction of bolus movement, so MII also serves as a useful esophageal function test in the evaluation of swallowing, belching, aerophagia, and regurgitation.
Introduction
Since its advent in the early 1990s, MII-pH monitoring has been the gold standard for diagnosis of GERD [1]. This widely adopted technique has superseded the use of isolated intraesophageal pH monitoring by its ability to detect and localise intraesophageal boluses, and further classify reflux activity beyond conventional acid reflux, by differentiating reflux events into gas versus liquid episodes.
The modus operandi of MII monitoring is the measurement of electrical conductivity. An alternating electric current is passed between pairs of electrodes mounted onto a specialised nasogastric catheter. The adjacent intraluminal material conducts the current. In the empty, collapsed esophagus, the esophageal mucosa is the agent providing electrical resistance as it lies in direct contact with the catheter. Ionic liquid conducts electricity well, thus its intraesophageal presence generates lower impedance readings to reflect reduced electrical resistance. In contrast, gaseous material is an electrical insulator which translates into high intraluminal impedance when it passes by said electrodes. Liquid boluses can be further characterised into acid versus non-acid episodes by the combination of MII with pH sensing.
Furthermore, impedance measurement can ascertain the direction of bolus travel within the esophagus. This is due to the presence of multiple pairs of electrodes conducting the aforementioned current placed at standardised intervals. A standard MII-pH catheter has six pairs of impedance electrodes. Antegrade bolus movement, i.e. what happens on swallowing, is detected by changes in impedance progressing chronologically from the proximal sensors to their distal counterparts. Conversely, retrograde bolus transit (i.e. reflux) manifests as changes in impedance progressing proximally.
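To make the direction-of-travel logic concrete, the following toy sketch classifies bolus movement from the order in which impedance drops reach each channel; the channel depths, threshold, and traces are hypothetical and chosen only to mirror the principle described above.

```python
def bolus_direction(traces, baseline=2000, drop_fraction=0.5):
    """Classify bolus travel from multichannel impedance traces.

    `traces` maps channel depth (cm from the nares, proximal < distal) to a
    list of impedance samples. A liquid bolus is 'detected' at a channel
    when impedance first falls below drop_fraction * baseline.
    """
    onsets = {}
    for depth, samples in traces.items():
        for t, z in enumerate(samples):
            if z < drop_fraction * baseline:
                onsets[depth] = t
                break
    ordered = [onsets[d] for d in sorted(onsets)]  # proximal -> distal
    if ordered == sorted(ordered):
        return "antegrade (swallow)"
    if ordered == sorted(ordered, reverse=True):
        return "retrograde (reflux)"
    return "indeterminate"

# Impedance falls first distally and propagates proximally: reflux.
traces = {17: [2100, 2000, 1900, 800],   # proximal channel
          22: [2100, 2000, 900, 850],
          27: [2000, 900, 850, 800]}     # distal channel
print(bolus_direction(traces))
```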
The versatile diagnostic capabilities of impedance measurement render it a potentially valuable tool in evaluating many other common esophageal disorders. In this article, I review its clinical applications beyond the diagnosis of GERD.
Belching: Gastric Versus Supragastric
Not all retrograde flow patterns of gaseous boluses are equal. The advanced study of impedance monitoring has revealed that there are two types of belching: the gastric belch and the supragastric belch.
The gastric belch is a vagally generated reflex leading to relaxation of the lower esophageal sphincter. Intragastric air is expelled through the esophagus and out through the mouth. It is accepted that gastric belches are physiological events. A gastric belch shows up on impedance monitoring as a one-way incline progressing distal to proximal, from left to right. During the supragastric belch, pharyngeal air is subconsciously sucked or injected into the esophagus, then expelled again without reaching the stomach. This is not to be confused with aerophagia, where the subject swallows air into the stomach. Supragastric belches can be observed on impedance tracings whereby there is antegrade movement of air down into the distal esophagus, followed seconds later by venting of this same gaseous bolus back up through the esophagus and out through the mouth. This results in the characteristic 'V' shaped pattern of flow on impedance monitoring.
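Building on the direction sketch above, the two belch types can be told apart by the sequence of gas movements on the tracing. The following toy sketch (hypothetical time window and event encoding, not a validated algorithm) encodes the patterns just described.

```python
def classify_belch(gas_events, window_s=5.0):
    """Classify a belch from a list of (time_s, direction) gas-bolus events,
    with direction being 'antegrade' or 'retrograde'.

    Gastric belch: isolated retrograde gas flow (air vented from the
    stomach). Supragastric belch: antegrade gas flow followed within a few
    seconds by retrograde flow of the same air (the 'V' pattern).
    """
    for (t1, d1), (t2, d2) in zip(gas_events, gas_events[1:]):
        if d1 == "antegrade" and d2 == "retrograde" and t2 - t1 <= window_s:
            return "supragastric belch"
    if any(d == "retrograde" for _, d in gas_events):
        return "gastric belch"
    return "no belch"

print(classify_belch([(10.0, "antegrade"), (11.5, "retrograde")]))  # supragastric
print(classify_belch([(42.0, "retrograde")]))                       # gastric
```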
In 2004, Bredenoord et al. were the first to demonstrate the difference between the two belching subtypes [2]. Prior to this, conventional wisdom dictated that excessive belching was purely the venting of air from the stomach after a period of excessive air swallowing. This study examined 14 healthy volunteers and 14 patients with complaints of excessive belching. The rate of swallowing and the incidence of air swallowing were similar in patients and controls. While gastric belches were found both in patients and healthy volunteers, supragastric belching was only observed in patients and not in controls.
Accurate diagnosis guides treatment
The distinction between gastric and supragastric belching is important because it is now understood that the therapeutic approach to the two disorders are different. This can be intuited from their disparate aetiologies. Excessive and problematic gastric belching is relieved by agents that inhibit transient LES relaxation, the commonest being baclofen. Supragastric belching will not respond the same way as it is not a consequence of LES dysfunction. Instead, supragastric belching arises from pathological air sucking, thus its correction requires behavioural modification. This can be achieved by biofeedback therapy. The patient can be trained to be aware, by watching their impedance monitoring in real-time, of their habit of sucking air into their esophagus, and is able to take steps to consciously repress this tendency.
The combination of MII with high-resolution manometry (HRiM) allows synchronous measurement of bolus transport and esophageal clearance without the use of radiation. HRiM is also useful in evaluating aerophagia. With this technique, aerophagia is classified as a swallow accompanied by a rapid impedance increase of 1000 ohms. Combined with manometry, the impedance technique allows a better temporal definition between increased abdominal pressure and regurgitation events, so that reflux, rumination, belching, and aerophagia can be readily distinguished. Blondeau et al. performed an HRiM study in 12 patients with clinically suspected rumination or supragastric belching [3]. They examined the effect of baclofen (10 mg, 3 times daily) on reflux, rumination, supragastric belching, and aerophagia. In this study, the number of flow events, 473 at baseline (42 reflux, 192 rumination, 188 supragastric belching, and 42 aerophagia), was significantly reduced to 282 (32 reflux, 99 rumination, 123 supragastric belching, and 13 aerophagia) during baclofen therapy (P=0.02). They suggested that baclofen is an effective treatment for patients with rumination or supragastric belching/aerophagia.
Aetiology of supragastric belching
Meals: The average rate of supragastric belching in Bredenoord's study was shown to be lower preprandially than postprandially, but this was not a statistically significant difference (40.9 vs 67.7). The investigators concluded that meals do not influence supragastric belches.
Psychological factors: Attention, or the lack thereof (i.e. distraction), also appears to impact upon the frequency of belches in symptomatic patients, which highlights the relevance of psychological factors in supragastric belching [4]. A Greek study showed that gastric belching is not affected by diurnal variation, but supragastric belches almost cease at night, suggesting the presence of a behavioral disorder [5].
Motility: Silva et al. [6] studied esophageal motility in 16 patients with troublesome belching and 15 controls, on the hypothesis that symptomatic patients demonstrate aberrant patterns of esophageal contractions and bolus transit. The study disproved the former premise (there was no difference in esophageal contractions between patients and controls) but did identify abnormal bolus transit in patients compared to controls, whereby the ingested bolus travelled slower through the proximal and middle esophageal body, then crossed the distal esophageal body faster [6].
Belching and GERD
Patients with GERD often have an increased frequency of belching. It has been reported that air swallowing promotes belching but does not facilitate acid reflux in healthy volunteers [7]. Bredenoord et al. [8] studied 12 controls and 12 patients with GERD, before and after intragastric inflation of 600 mL of air. There was a higher frequency of air swallowing in the patient group compared to healthy controls, and the consequent larger intragastric air bubble also led to more frequent belching [8]. The proposed mechanism is that patients with GERD swallow more often than healthy subjects in response to perceived reflux events [9]. However, no relationship was found between the occurrence of acid reflux and the number of belches, from which we understand that gastric belching and acid reflux are not causally related.
Hemmink et al. investigated the relationship between the number and type of reflux episodes and supragastric belches during ambulatory 24-h MII-pH monitoring off proton pump inhibitor therapy in 50 patients with typical reflux symptoms and 10 healthy volunteers. They found that patients with reflux symptoms were more prone to supragastric belching, and that 48% of supragastric belches occurred in close temporal association with reflux episodes [10]. The authors suggested that supragastric belching may precipitate reflux, either as a result of abdominal straining or by provoking transient LES relaxations (TLESRs).
Low baseline impedance in identification of esophageal disorders
The impedance between two electrodes depends not only upon luminal contact but also on mucosal integrity, wall thickness and cross-sectional area. The baseline impedance value is considered a reasonable surrogate of transepithelial resistance measured in vitro, which itself represents underlying esophageal epithelial integrity [11]. Distal baseline impedance values have been found to correlate inversely with esophageal acid exposure in GERD patients: more acid exposure leads to lower baseline impedance. The relationship appears to be causal, and evidence for this lies in the ability of PPI therapy to significantly increase baseline impedance [12].
Impedance levels in patients with ineffective esophageal motility are also lower than in healthy controls [13], as has been shown in patients with eosinophilic esophagitis, various esophageal motor disorders, and previous radiofrequency ablation treatment in Barrett's esophagus [12,14-16]. As alluded to earlier, the baseline impedance value as a marker of esophageal epithelial integrity is dependent on the characteristics of the collapsed esophageal wall. Blonski et al. analyzed MII and manometry studies in patients with abnormal manometry: nutcracker esophagus (n=20), distal esophageal spasm (n=20), ineffective esophageal motility (IEM, n=20), achalasia (n=20), and systemic sclerosis affecting the esophagus (n=10) [13]. They calculated average values of esophageal impedance measured at 5 and 10 cm above the lower esophageal sphincter before liquid swallows [distal baseline impedance (DBI)], after 10 liquid swallows [distal liquid impedance (DLI)], and after 10 viscous swallows [distal viscous impedance (DVI)].
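Computationally, these summary measures reduce to averaging the impedance recorded at the two distal channels over each epoch. The sketch below illustrates this under an assumed data layout (per-channel lists of samples in ohms); the pooling of the two channels and all numeric values are illustrative assumptions rather than details from the paper.

def mean_distal_impedance(samples_5cm, samples_10cm):
    """Average impedance (ohms) pooled across the 5-cm and 10-cm channels."""
    pooled = list(samples_5cm) + list(samples_10cm)
    return sum(pooled) / len(pooled)

# Hypothetical per-epoch recordings (ohms) for one patient.
epochs = {
    "DBI": ([1800, 1750, 1820], [2100, 2050, 2080]),  # before liquid swallows
    "DLI": ([1500, 1480, 1510], [1700, 1690, 1720]),  # after 10 liquid swallows
    "DVI": ([1300, 1320, 1280], [1450, 1440, 1460]),  # after 10 viscous swallows
}
for name, (ch5, ch10) in epochs.items():
    print(f"{name}: {mean_distal_impedance(ch5, ch10):.0f} ohm")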
DBI, DLI, and DVI were significantly lower in patients with achalasia and systemic sclerosis than in healthy volunteers with normal esophageal manometry. The authors also found that patients with IEM had significantly lower DBI, DLI, and DVI than healthy volunteers or patients with nutcracker esophagus, and significantly higher DVI than patients with achalasia [13].
Interestingly, the mean DBI, DLI, and DVI in patients with IEM were not significantly different from those found in patients with systemic sclerosis. These results might suggest some level of fluid retention within the esophagus in patients with IEM, similar to that found in achalasia. Furthermore, the low distal esophageal impedance values in patients with IEM and achalasia are speculated to reflect the inflammation caused by fluid retention within esophageal mucosa.
The mean DBI, DLI, and DVI values in patients with DES were not significantly different from those observed in healthy volunteers. This might be explained by the heterogeneity within the DES group with regard to esophageal pressure and bolus transit.
In their discussion, the authors suggested that esophageal impedance might be a useful parameter to evaluate fluid retention and may assist in the diagnosis of esophageal motility abnormalities. This recommendation has been echoed by other groups that showed decreased distal esophageal baseline impedance levels in achalasia that may help identify chronic fluid retention [17-19]. In other words, low baseline impedance values help identify a diseased esophagus, whether that is due to altered esophageal motility, mucosal inflammation, or chronic fluid retention as in achalasia.
Impedance in dynamic assessment of esophageal clearance
Nguyen et al. [18] also explored the potential clinical utility of impedance monitoring in assessing esophageal emptying in achalasia. Their study found failed bolus transport through the esophagus, regurgitation of luminal content in 35% of swallows, and impedance evidence of pathological air movement within the proximal esophagus during deglutition in 38% of swallows. In addition, a good correlation has been established between esophageal impedance measurements and videofluoroscopic assessment in evaluating esophageal clearance [20].
Mainie et al. performed combined MII-manometry on patients with systemic sclerosis (n=15) and achalasia (n=20), and recruited subjects with a poorly relaxing lower esophageal sphincter (LES) but normal esophageal body function (n=20) as a control group [21]. They found that overall bolus transit was impaired in both patients with achalasia and those with systemic sclerosis, as a result of abnormal esophageal body contraction and not abnormal LES relaxation. In this study, segmental bolus stasis accounted for the bolus transit abnormalities in patients with achalasia and scleroderma.
Eosinophilic esophagitis and impedance
Eosinophilic esophagitis (EoE) is a chronic inflammatory disease of the esophagus that leads to fibrosis and structural changes within the esophagus. Patients with EoE most frequently report symptoms of dysphagia, food impaction, chest pain and sometimes heartburn. It has been postulated that esophageal mucosal integrity is impaired in patients with EoE [22,23]. van Rhijn et al. [14] studied esophageal baseline impedance levels in EoE patients and in controls; the relationship between baseline impedance levels and esophageal acid exposure was also examined as a potential causal mechanism. Eleven adult patients with histologically confirmed EoE and a history of dysphagia and/or food impaction were included, along with 11 controls matched to the EoE patients by total acid exposure time. Baseline impedance levels were assessed every 2 hours during a 30-second time period. The median baseline impedance level across all 2-hour periods was considered to be the baseline impedance level for the measurement.
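The protocol just described maps onto a short computation: sample a 30-second window every 2 hours, summarize each window, and report the median across windows. The sketch below assumes a 1 Hz impedance trace and uses the window mean as the per-window summary; both choices are illustrative assumptions rather than details reported by the study.

from statistics import median

SAMPLE_HZ = 1                        # assumed sampling rate of the stored trace
WINDOW_SAMPLES = 30 * SAMPLE_HZ      # 30-second window
STEP_SAMPLES = 2 * 3600 * SAMPLE_HZ  # one window every 2 hours

def baseline_impedance(trace):
    """Median of 2-hourly 30-s window means over a 24-h impedance trace (ohms)."""
    window_means = []
    for start in range(0, len(trace) - WINDOW_SAMPLES + 1, STEP_SAMPLES):
        window = trace[start:start + WINDOW_SAMPLES]
        window_means.append(sum(window) / len(window))
    return median(window_means)

# Toy 24-h trace: constant 2200 ohm.
trace = [2200.0] * (24 * 3600 * SAMPLE_HZ)
print(baseline_impedance(trace))     # 2200.0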
Baseline impedance levels in EoE patients were markedly lower compared to controls in the distal, mid- and proximal esophagus (p=0.005). While baseline impedance decreased from proximal to distal in healthy subjects, there was no such gradient in patients with EoE. Because baseline impedance values are decreased throughout the esophagus in patients with EoE, without favouring the distal esophagus, the authors concluded that the impaired mucosal integrity in EoE is likely to be a function of factors beyond pure acid reflux. However, baseline impedance monitoring remains clinically advantageous, both as a marker of disease activity and as a tool for therapeutic monitoring.
Rumination
The rumination syndrome is a functional gastroduodenal disorder that is characterized by near-immediate regurgitation of ingested food and the rechewing and reswallowing of said food. Rumination events are induced by a rise in intra-gastric pressure generated by a voluntary but unintentional contraction of the abdominal wall musculature.
The utility of HRiM in delineating esophageal motility and bolus transit has extended to elucidating the rumination syndrome. Rommel et al. [24] subjected 16 patients with clinically suspected rumination to HRiM for one hour after a solid-liquid meal. Only 50% (8/16) were proven on HRiM to have actual rumination; the others were found to have postprandial belching and regurgitation.
A novel diagnostic classification for the rumination syndrome that utilises HRiM has been proposed [25], based on the investigation of 12 patients with rumination syndrome and 12 patients with GERD who presented with predominant symptoms of regurgitation. In this study, abdominal pressure peaks exceeding 30 mmHg during proximal reflux episodes were not observed in any patients with GERD, but were seen in all of the rumination group. Furthermore, amplitudes over 30 mmHg were observed in 70% of individual gastric pressure peaks during proximal reflux events in the rumination cohort. This paper describes three different mechanisms of rumination. The first mechanism, 'primary rumination', is denoted by a rise in intraabdominal pressure that precedes retrograde flow; this occurred in 100% of patients with rumination. 'Secondary rumination', affecting 45% of patients, is similar to a primary rumination event, but the increase in abdominal pressure occurs after the onset of a reflux event. The third mechanism is termed supragastric belch-associated rumination, seen in 36% of patients. Barba et al. recently reported that rumination can be effectively corrected by biofeedback-guided control of abdominothoracic muscular activity [26]. They prospectively studied 28 patients fulfilling the Rome criteria for rumination syndrome who then had their diagnoses confirmed on intestinal manometry (showing abdominal compression associated with regurgitation). These patients underwent three electromyography (EMG)-guided biofeedback training sessions within a 10-day period, complemented by instructions for daily home exercises, with good results.
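The three mechanisms above amount to a timing rule applied to paired pressure and impedance events. The sketch below encodes that rule for a single regurgitation event; only the 30 mmHg cutoff comes from the study, while the function signature, the event-time representation and the way events are paired are illustrative assumptions rather than the published criteria in full.

PRESSURE_CUTOFF_MMHG = 30   # gastric pressure peak threshold during proximal reflux

def classify_rumination(peak_mmhg, pressure_rise_t, flow_onset_t, belch_before=False):
    """Classify one regurgitation event by the three mechanisms described above."""
    if peak_mmhg <= PRESSURE_CUTOFF_MMHG:
        return "not rumination (pressure peak <= 30 mmHg)"
    if belch_before:
        return "supragastric belch-associated rumination"
    if pressure_rise_t < flow_onset_t:
        return "primary rumination"    # strain precedes retrograde flow
    return "secondary rumination"      # strain follows the onset of reflux

print(classify_rumination(45, pressure_rise_t=10.0, flow_onset_t=11.5))  # primary
print(classify_rumination(38, pressure_rise_t=12.0, flow_onset_t=11.5))  # secondary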
Barrett's Esophagus and Impedance
In patients with Barrett's esophagus, it is recognized that analysis of esophageal impedance tracings is hampered by low esophageal baseline levels, impeding reliable assessment of reflux episodes [27]. Very low baseline impedance values are likely to reflect the abnormal esophageal mucosa in patients with Barrett's esophagus, as baseline impedance has recently been considered to be related to esophageal mucosal integrity. Another explanation is that the large number of reflux episodes occurring in patients with Barrett's esophagus can increase mucosal conductivity and therefore decrease impedance. Hemmink et al. [16] examined the effect of radiofrequency ablation (RFA) on esophageal baseline impedance in 15 patients with Barrett's esophagus. They found that RFA increased baseline impedance in all recording segments in the upright position and, although the change did not reach statistical significance, in the supine position. They thus showed that baseline impedance levels increase after conversion of the Barrett's segment into neosquamous epithelium.
Functional Heartburn and Impedance
Functional heartburn (FH) is a diagnosis of exclusion, defined by the Rome III criteria as a burning retrosternal discomfort in which GERD and esophageal motility disorders have been excluded as the cause of the symptom. The advent of MII-pH monitoring has allowed us to subdivide the heterogeneous subgroups of patients within the group of nonerosive reflux disease: the absence of visible lesions on endoscopy, normal distal esophageal acid exposure, and the absence of a troublesome association between symptoms and reflux (acid, weakly acidic or non-acid) subclassify a patient as having FH. Martinucci et al. studied baseline impedance levels in patients with FH divided into two groups on the basis of symptom relief after PPIs [28]. In this study, 30 patients with symptom relief higher than 50% after PPIs composed Group A, and 30 patients, matched for sex and age, without symptom relief composed Group B; a group of 20 healthy volunteers (HVs) was also enrolled. Group A (vs Group B) showed an increase in mean acid exposure time, mean reflux number, proximal reflux number, and acid reflux number. Baseline impedance levels were lower in Group A than in Group B and in HVs (p < 0.001). The authors concluded that evaluating baseline impedance levels could improve the distinction between FH and hypersensitive esophagus. Kohata et al. also showed that among patients with PPI-refractory nonerosive reflux disease, the acid-reflux type is associated with lower baseline impedance compared with the nonacid-reflux type and functional heartburn [29]. They suggested that the baseline impedance value may be useful for the classification of PPI-refractory patients.
"year": 2017,
"sha1": "dc33314b1c31ddef84716aa4ac9dad34322ab443",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/GHOA/GHOA-06-00208.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "769b616fc7866a8105345668c760e2e2a2573990",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Systematic literature review of basal bolus insulin analogue therapy during pregnancy
Introduction
The maternal, fetal and neonatal complications of having diabetes during pregnancy, whether pre-gestational (PGDM) or gestational (GDM), have been well established in the literature, prompting recommendations that women achieve and maintain strict glycemic control before and during pregnancy. 1 DM-associated maternal complications include progression of retinopathy, nephropathy, hypertension, and preeclampsia. DM-associated fetal and neonatal complications include congenital malformations, stillbirth, macrosomia, shoulder dystocia, and intrauterine growth restriction (IUGR). 2,3 Poor glycemic control (glycated hemoglobin (A1c) ≥7%) in the first trimester (T1) is associated with an increased rate of major congenital malformations and spontaneous abortions. In the third trimester (T3), poor glycemic control is associated with increased rates of preterm birth, preeclampsia and perinatal mortality. 2,4 However, simply targeting A1c levels is inadequate to prevent such complications. Insulin requirements vary throughout pregnancy: they increase in early pregnancy (to a mean requirement of 0.7 units/kg), may decrease in the second half of T1, likely due to nausea and vomiting, then increase in the second trimester (T2) (mean 0.8 units/kg) and T3 (mean 0.9-1.0 units/kg), and lastly either plateau or decrease beyond 32 weeks' gestation. 5 Insulin dosing must be continually managed and altered on an individual basis. Therefore, stringent glycemic control during pregnancy strives to minimize glucose excursions throughout the day. 6

Physiologic pancreatic beta cell secretion of insulin consists of a continuous rate of basal insulin and stimulated bolus insulin, released in response to an exogenous glucose load. Basal insulin secretion regulates lipolysis and hepatic glucose production in between meals. 7 Conventional insulin therapy during pregnancy includes human regular insulin and neutral protamine Hagedorn (NPH); these fail to optimally simulate physiologic insulin release. 8 Through the advent of recombinant DNA techniques, both rapid- and long-acting analogues were used to create products that mimic physiologic insulin profiles. 6 Without the use of continuous subcutaneous insulin infusion, basal-bolus therapy (BBT) most closely simulates physiologic insulin profiles, with a long-acting insulin analogue used as basal insulin coupled with a rapid-acting analogue given preprandially. 1 It is known that GDM increases a woman's risk of developing future type 2 diabetes mellitus (T2DM). 9 Having patients become familiar with BBT during pregnancy could improve continuity of care for women with PGDM and those with GDM, who may later develop T2DM.
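As a rough illustration of the trimester-dependent requirements quoted above (0.7, 0.8 and 0.9-1.0 units/kg total daily dose), the sketch below estimates a basal-bolus split. The 50% basal share and the division of the prandial portion across three meals are common-practice assumptions added for illustration, not recommendations from this review.

UNITS_PER_KG = {"T1": 0.7, "T2": 0.8, "T3": 0.9}   # T3 may rise toward 1.0 units/kg

def basal_bolus_estimate(weight_kg, trimester):
    """Estimated total daily dose split into basal insulin and three prandial boluses."""
    total = UNITS_PER_KG[trimester] * weight_kg
    basal = total / 2                     # assumed 50% given as a long-acting basal analogue
    bolus_per_meal = (total - basal) / 3  # remainder split across three meals
    return round(total), round(basal), round(bolus_per_meal)

total, basal, bolus = basal_bolus_estimate(70, "T3")
print(f"TDD {total} u: {basal} u basal + 3 x {bolus} u prandial")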
Conventional regular insulin has a 30-minute onset of action (Table 1). This delayed onset requires that it be administered 30 minutes prior to meals. 8 In clinical practice, however, rapid-acting analogues are substituted for regular insulin in the conventional insulin regimen. Patients administer regular insulin immediately prior to meals and consequently experience postprandial hyperglycemia, followed by hypoglycemia 30 minutes later. 1 Rapid-acting insulin analogues, like insulin aspart, lispro and glulisine, are better options for simulating physiologic postprandial insulin action, allowing for more convenient dosing immediately preprandially. Compared to regular insulin, rapid-acting insulin is associated with lower rates of 1- and 2-hour postprandial hyperglycemia. This may be important for perinatal outcome, as postprandial hyperglycemia is more predictive of neonatal complications than are elevated fasting blood glucose levels. 6,7 Rapid-acting analogues also reduce the risk of late postprandial hypoglycemia, helping to minimize daily glucose excursions. 10

NPH is an intermediate-acting insulin used as a basal insulin that fails to mimic physiologic insulin secretion. It has a delayed onset of 2-4 hours, peaking in 4-10 hours and lasting 12-18 hours (Table 1), whereas glargine and detemir successfully create a near-peakless action profile. 7,8 Glargine has a duration of action of approximately 24 hours, whereas detemir has a shorter duration of action of 20 hours (Table 1). Clinically, this requires the use of multiple daily doses of NPH that rarely achieve a steady state of basal insulin, whereas steady-state basal insulin is achieved with the use of long-acting insulin analogues. Basal insulin detemir requires twice-daily dosing, while glargine necessitates once-daily dosing. 8 Benefits of long-acting analogues include lower fasting serum glucose levels, reduced risk of nocturnal hypoglycemia, and less glycemic variability. 10

BBT using insulin analogues is standard care for non-pregnant individuals affected by T2DM. However, due to the lack of adequately powered randomized clinical trials evaluating the antenatal use of BBT, its routine use during pregnancy is not recommended by most experts. A review of the literature suggests that, when compared to the current standard therapy (an NPH and regular insulin regimen), BBT with insulin analogues used as therapy for pregnancies complicated by PGDM and GDM has been associated with improved patient satisfaction and glycemic control. 8,11 Given these reported benefits, as well as the therapeutic continuity enabled for gravidas with PGDM, we suggest that insulin analogues become the standard of insulin care during pregnancy. The widespread, often inadvertent use of BBT during pregnancy will likely prevent conduction of an adequately powered randomized trial to compare the perinatal risks and benefits of BBT versus the standard perinatal insulin regimen; therefore, we conducted a systematic literature review of insulin analogue use during pregnancy to evaluate the evidence concerning BBT use in pregnancy.
Methods
Electronic literature searches of PubMed, Medline, and the Cochrane Library were performed for randomized clinical trials, prospective, retrospective, and observational studies, case reports and case series, using the search terms insulin analogues, lispro, aspart, detemir, glargine, glulisine, Humalog, NovoLog, Lantus, Levemir, Apidra, pregnancy, and diabetes, published between 1996 and 2014. Studies included case reports and series, randomized clinical trials, and prospective and retrospective observational studies evaluating rapid- and long-acting insulin analogues in pregnancies complicated by either PGDM or GDM. Studies assessing efficacy, safety, and historical comparisons between analogues were included. Review articles were excluded. The search was limited to human studies published in the English language. The principal efficacy endpoint examined was glycemic control as measured by A1c levels. Maternal safety outcomes were the incidence of hypoglycemic events and fasting and postprandial hyperglycemia. Neonatal outcomes included the rate of large for gestational age (LGA) or macrosomic (>4000 g birth weight) infants, mean birth weight, rate of congenital malformations, and incidence of hypoglycemic events.
Results
Nineteen reports were identified that assessed the use of the rapid-acting insulin analogues lispro and aspart as hypoglycemic agents during pregnancy. Their prenatal use was associated with significant decreases in A1c and in the incidence of maternal hypoglycemia, and with comparable rates of neonatal hypoglycemia and congenital malformations when compared to regular insulin use. Twenty-three reports assessed the safety and efficacy of the long-acting insulin analogues glargine and detemir. These agents were associated with significant decreases in A1c and in maternal and neonatal hypoglycemia, with no increase in the rates of LGA/macrosomic infants or congenital malformations.
A total of 42 studies were included. Of those, 3 were uncontrolled studies of lispro and 10 were either observational or randomized studies comparing insulin lispro to regular insulin. Mecacci et al. 12 compared lispro and regular insulin to healthy controls, rather than to each other. Three publications on randomized controlled trials comparing insulin aspart to regular insulin were included; of these, Mathiesen et al. 13 and Hod et al. 14 reported on the same study. One study compared both rapid-acting insulin analogues (lispro and aspart) to regular insulin. Four uncontrolled observational studies of glargine were included, 7 studies compared glargine use to NPH, and one study compared glargine to detemir. Of the 7 studies comparing glargine to NPH, Imbergamo et al. 15 compared glargine users to healthy controls for fetal and neonatal outcomes. Two uncontrolled studies of detemir were included, along with 2 publications by Mathiesen et al. 57 and Hod et al. 58 reporting on the same randomized controlled trial comparing detemir to NPH.
Rapid-acting insulin analogues
Glulisine: There are no published reports of the use of insulin glulisine in pregnancy.
Lispro: Lispro is the most well-studied insulin analogue used during pregnancy. An early case report by Diamond & Kormas 16 reported possible adverse fetal effects of lispro in 2 diabetic mothers. One pregnancy was terminated and the fetus was found to have multiple heart abnormalities, polysplenia, and abdominal situs inversus. The second infant was born at full term, died suddenly at 3 weeks', and was found to have congenital diaphragmatic hernia and bilateral undescended testes. 16

Three uncontrolled studies evaluated the effects of lispro during pregnancies complicated by PGDM (Tables 2 & 3). Masson et al. 17 included data from 71 subjects with T1DM and reported decreased A1c from the preconception period to T3 (7.4±1.7 to 6.17±0.85). Early during gestation, 12 women experienced 12 episodes of severe hypoglycemia. In terms of neonatal outcomes, 2 had congenital malformations, 29 neonates had hypoglycemia, and 35% had birth weights ≥90th percentile for gestational age. 17 Garg et al. 18 studied 62 parturients with T1DM treated with lispro, who had a mean preconception A1c of 7.2±0.2 that decreased to 5.8±0.1 at delivery. Fourteen gravidas experienced severe hypoglycemic episodes, or 0.61 events/patient. Two neonates had congenital malformations and 24% were LGA. 18 A retrospective cohort analysis from Wyatt et al. 19 of 496 women with PGDM treated with lispro found a mean A1c value at the first prenatal visit of 8.9±4.2, significantly reduced to 6.2±2.4 in T3 (p<0.001). The rates of major congenital abnormalities and LGA infants, and the mean birth weight, were 5.4% (95% CI [3.45-7.44%]), 23.4%, and 3464±765 g, respectively. 19

Twelve studies (4 randomized trials and 8 observational studies) comparing both maternal and neonatal outcomes in PGDM and GDM treated with lispro versus human insulin are reported in Tables 2 & 3. Jovanovic et al. 20 randomized 42 women diagnosed with GDM between 14 and 32 weeks' gestation, who were insulin naïve, to lispro (19 women) and regular insulin (23 women). A test meal consisting of 20% of daily caloric need was given before breakfast and followed by a subcutaneous injection of the calculated initial daily insulin dose of either lispro or regular insulin. Glucose concentrations were measured fasting and at 60, 120, and 180 min following the meal. The authors reported a significantly smaller area under the curve for lispro (p=0.025). However, A1c measured 6 weeks later showed no significant differences between groups (p=0.7508). The group treated with lispro experienced fewer pre-breakfast hypoglycemic events (<55 mg/dL), but not before lunch or dinner.
Mean birth weight, frequency of macrosomia, intrauterine growth restriction (IUGR), and fetal abnormalities did not significantly differ. 20 Buchbinder et al. 21 retrospectively analyzed patients with T1DM treated with lispro (n=12) and regular insulin (n=42). Overall, there were no statistically significant differences in glycemic control between the 2 groups during pregnancy and postpartum. 21 Similarly, Bhattacharyya et al. 22 conducted a retrospective cohort study of PGDM and GDM patients and demonstrated no differences in fetal outcomes. However, pre-delivery A1c levels in the lispro group were significantly lower than in the regular insulin and diet-controlled groups (p<0.05). In addition, patient satisfaction was significantly higher in the lispro group compared to regular insulin (26.3±2.3 vs. 18±8.9, p=0.0005). 22 Loukovaara et al. 23 conducted a randomized trial in women with T1DM (lispro, n=36 and regular insulin, n=33). They found lower, but non-significant, rates of maternal hypoglycemia (9 vs. 11 women, p=0.42) throughout pregnancy. A1c levels were significantly lower in the lispro group in T2 and T3, but not in T1 (p=0.022). 23 No significant difference in A1c levels was demonstrated in any trimester in the prospective observational study by Cypryk et al. 24 or the retrospective observational study by Aydin et al. 25 By contrast, Lapolla et al. 26 reported a significantly lower A1c level in T1 in the lispro versus regular group (6.7±1.4 vs. 7.3±1.4, p<0.001), but no difference in A1c in T3 (6.0±1.0 lispro vs. 6.2±1.2 regular). 26 Rates of maternal hypoglycemia were lower, but not significantly so, in the lispro group (2.8% vs. 5.4%). Incidences of congenital malformations were similar (4.3% vs. 4.5%). However, the incidence of LGA was significantly higher in the lispro group (55.1% vs. 39.2%, p=0.0267). Cypryk et al. 24 demonstrated a non-significant decrease in the rate of neonatal hypoglycemia (17.4% in the lispro group vs. 23.3% in the regular insulin group). There were no reports of congenital malformations in either study.
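The area-under-the-curve endpoint used by Jovanovic et al. (and later by Pettitt et al. for aspart) is a trapezoid-rule integral of the postprandial glucose profile. A minimal version of that computation is sketched below; the time grid follows the Jovanovic measurement schedule, while the glucose values are invented for illustration.

def glucose_auc(times_min, glucose_mgdl):
    """Area under the glucose curve (mg/dL x min) by the trapezoid rule."""
    auc = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_min, glucose_mgdl),
                                  zip(times_min[1:], glucose_mgdl[1:])):
        auc += (g0 + g1) / 2 * (t1 - t0)
    return auc

times = [0, 60, 120, 180]     # fasting, then 60/120/180 min after the test meal
lispro = [88, 130, 112, 95]   # hypothetical postprandial profiles (mg/dL)
regular = [90, 158, 140, 104]
print(glucose_auc(times, lispro) < glucose_auc(times, regular))   # True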
In 2002, a randomized controlled trial conducted by Persson et al. 27 assigned 16 women to lispro and 17 to regular insulin at 14 weeks' gestational age. Postprandial breakfast glycemic control was significantly better in the lispro group compared to the regular insulin group (p<0.01). Only one congenital malformation, with regular insulin, was reported. Although there was an associated increase in the rate of maternal biochemical hypoglycemic events (<54 mg/dL) in those treated with insulin lispro (5.5 vs. 3.9%, p<0.05), only the regular insulin group experienced severe hypoglycemia episodes. 27 Mecacci et al. 12 enrolled 25 patients on lispro and 24 on regular insulin in a randomized trial of women with GDM, and compared them to 50 healthy controls. The 1-hour postprandial blood glucose concentrations were significantly higher in the regular insulin group, but similar for the 2 study groups at 2 hours. The rate of neonates born with a cranio-thoracic circumference ratio between the 10th and 25th percentile was significantly higher in the regular insulin group. The authors found no differences in neonatal outcomes between treatment groups. 12 In 2005, Plank et al. 28 found no significant differences in fetal or maternal outcome between pregnant patients using insulin lispro and aspart. 28 A prospective observational study by Durnwald et al. 29 demonstrated significantly improved A1c over T2 and T3 in those treated with insulin lispro versus regular insulin (5.9±1.0 vs 6.7±1.3, p=0.009). In addition, while the mean birth weight was significantly higher in the lispro group (3569±526 g vs. 3246±764 g, p=0.01), the incidences of LGA (32.8% vs. 20.4%, p=0.15) and neonatal hypoglycemia (41.4% vs. 38.8%) in the study groups were not statistically different. Congenital malformation was identified in one patient exposed to lispro and in two in the regular insulin group. 29 In 2011, Garcia-Dominguez et al. 30 retrospectively evaluated 241 parturients treated with regular insulin and NPH and 110 treated with various combinations of insulin analogues (most commonly, a combination of lispro and NPH). 30 The group treated with insulin analogues had slightly higher mean A1c during T1 (6.9±1.0 vs. 6.6±1.0, p=0.022), but required smaller insulin doses throughout pregnancy. The frequency of severe hypoglycemia was significantly less among rapid-acting analogue users (2.3 vs. 10.0%, p=0.025). The treatment groups had similar rates of congenital malformations. In contrast, neonatal hypoglycemia was significantly more frequent in the rapid-acting analogue group (34.9 vs. 23.6%, p=0.043), but the authors attributed this to the concomitant use of an insulin pump.
Di Cianni et al. 31 conducted a smaller clinical trial of women with GDM who were randomized to receive lispro, aspart or regular insulin, with NPH as the basal insulin. The groups were matched for age, parity, BMI, gestational age and weight gain. One-hour postprandial glucose levels after breakfast were significantly higher in the regular insulin group when compared to the lispro and aspart groups (135±23.4, 118.8±18.9, and 121.5±20.16 mg/dL, respectively; ANOVA p<0.05). Birth weight was higher in the regular insulin group (p<0.04), and LGA rates were 15.6%, 12.1%, and 9.6% in the regular insulin, lispro, and aspart groups, respectively.
To date, two published meta-analyses have compared lispro to regular insulin. One, by Gonzalez Blanco et al., 32 looked only at T1DM; it included four studies and revealed no differences in maternal or neonatal outcomes with the exception of a higher rate of LGA newborns (RR 1.38 [1.14-1.68]). 32 A larger and more recent meta-analysis by Lv et al. 33 assessed the safety of four insulin analogues: lispro, aspart, glargine, and detemir. Twenty-four studies involving both pre-gestational and gestational diabetic pregnant patients met their inclusion criteria. They reported no significant differences in maternal and neonatal outcomes in patients using aspart, glargine, and detemir. However, insulin lispro was associated with higher birth weight and an increased rate of LGA newborns, similar to the findings of the previous meta-analysis. 33

Aspart: Aspart use in pregnancy has not been as well studied as lispro. We have included four studies that evaluate the use of aspart in pregnancy (Tables 4 & 5).
The first published study was authored by Pettitt et al. 34 in 2003 (Table 4). Twenty-seven women with GDM were enrolled in a prospective cross-over trial. The area under the curve of glucose concentrations from the preprandial period to 240 minutes after a test meal consisting of 20% of daily caloric needs was measured. Either aspart or regular insulin was given the first day, followed by administration of the other insulin the next day. The area under the curve was significantly lower with aspart (p=0.018) but not with regular insulin (p=0.997), compared to no insulin use. One congenital malformation was reported in the aspart group. The authors reported no macrosomic infant in either group. 34 A large open-label, randomized, parallel-group trial by Mathiesen et al. 13 and Hod et al. 14 investigated 322 women with T1DM who had been treated with insulin for more than 12 months and had preconception A1c <8%. 13,14 Subjects were randomized to aspart or regular insulin, coupled with NPH used as basal insulin. The primary study endpoint was major maternal hypoglycemia (requiring assistance, with glucose <55.8 mg/dL or reversal of symptoms after food, glucagon, or intravenous glucose); this was not significantly lower in the group treated with aspart (RR 0.720 [95% CI 0.36-1.46]). Both groups showed decreased A1c during T1 and T2, followed by an increase at delivery and at 6 weeks' postpartum. No statistically significant difference in glycemic control was reported in T2 and T3. The incidence of congenital malformations was 4.3% and 6.6% for aspart and regular insulin, respectively. There were no significant differences between treatment arms in birth weight or rates of neonatal hypoglycemia requiring treatment (33.6% aspart vs. 39.7% regular insulin). In addition, the aspart treatment group had greater overall treatment satisfaction due to increased flexibility in treatment, as demonstrated by their willingness to continue on the study regimen after delivery.
Heller et al. 35 published a randomized, multicenter trial of 322 women affected by T1DM in 2010; 99 of these patients were randomized pre-pregnancy and 223 randomized early during pregnancy to either aspart or regular insulin for bolus therapy, in combination with NPH for basal therapy. They found no differences in A1c levels between treatment groups, but demonstrated fewer, though not statistically significant, maternal hypoglycemic episodes with aspart use versus regular insulin (0.9 vs. 2.4 events per patient per year in the first half of pregnancy, and 0.3 vs. 1.2 events per patient per year in the second half of pregnancy). 35 See the Lispro section above for results from Di Cianni et al. 31
Long-acting insulin analogues
Glargine: To date, there are no randomized trials studying the safety and efficacy of glargine use during pregnancy. However, glargine is the long-acting insulin analogue that has been well studied through case reports and observational studies (Tables 6 & 7).
The first reports of glargine use in pregnancy stem from a set of case reports published between 2002 and 2007. A few pregnant patients were switched from NPH to glargine due to recurrent hypoglycemic episodes. Others had either willingly continued using glargine despite its unknown effects or were not aware of their pregnancy until 6-12 weeks' gestation. No congenital malformations were reported, although two infants were LGA. 36-42
In 2008, Gallen et al. 43 published an uncontrolled prospective audit of 115 pregnant women with T1DM treated with glargine. Of these, 69% were treated with glargine during the preconception period. Lispro was the bolus insulin in 42%, aspart in 45%, and regular insulin in 8%. Glycemic control assessed by changes in A1c levels improved from enrollment (8.1±1.7) to T3 (6.8±0.1). Severe maternal hypoglycemia, defined as requiring third-party assistance, was reported in 22%. In terms of neonatal outcomes, 47% (50/114) of neonates experienced hypoglycemia, the mean birth weight was 3500±600 g, and three had congenital abnormalities. 43 Similarly, Lepercq et al. 44 studied 102 women with T1DM treated with glargine and demonstrated improvement in mean A1c from T1 to T3 (6.7±1.2 to 6.2±0.9). Two congenital malformations were noted and the LGA rate was 30%. 44 Henderson et al. 45 conducted a retrospective review of 240 women with T2DM and GDM and found a mean glucose level of 112±14.8 mg/dL, a mean birth weight of 3142±606 g, and 4 macrosomic infants. 45 Di Cianni et al. 46 retrospectively observed a cohort of 107 women who used glargine for at least one month preconception and during pregnancy. Six pregnancies ended in abortion, 4 of which were spontaneous. The comparison groups were 43 women who had continued glargine throughout pregnancy and 58 women who started glargine in early pregnancy and switched to NPH, based on individual center policies. All patients showed improvement in glycemic control during pregnancy measured as A1c, from 7.7±1.32 at the first prenatal visit to 6.5±0.79 at the end of pregnancy in the glargine group and from 7.6±1.09 to 6.5±0.91 in the NPH group. Maternal hypoglycemia was reported in 9.3% and 12.1% of the glargine and NPH groups, respectively. The rates of LGA, macrosomia, congenital malformations and neonatal hypoglycemia were comparable between groups. 46 Price et al. 47 conducted a case-control study in women with T1DM and GDM. Thirty-two women previously treated with glargine were selected (10 T1DM, 22 GDM), using either lispro or aspart for bolus therapy. Cases were matched for type and duration of diabetes, duration of insulin therapy during pregnancy, parity, maternal height, weight at first prenatal visit, gestational age at delivery, fetal sex, and glycemic control. Third trimester glycemic control and the incidence of daytime and nocturnal hypoglycemia were not significantly different between groups. Birth weight and the rate of congenital abnormalities were similar. The overall incidence of macrosomia was also similar: 37.5% in the glargine group and 40.6% in the control group (p>0.05). 47 Poyhonen-Alho et al. 48 conducted a case-control study of women with T1DM; forty-two received glargine, while 49 were treated with NPH. The glargine group demonstrated a significantly greater improvement in mean A1c from T1 to T3. 48 Rates of mild and severe hypoglycemia did not differ between glargine and NPH in a smaller study from Imbergamo et al. 15 The glargine group had significantly improved fasting and 2-hour post-breakfast glucose levels during T1 and T2. However, the incidence of LGA was 46.7% and 27.6% in the glargine and NPH groups, respectively. No differences in maternal or neonatal outcomes were found in a retrospective analysis by Smith et al. 49 Fang et al. 50 studied 112 women with PGDM or GDM, comparing glargine and NPH use separately in the PGDM and GDM populations. Regular insulin or lispro was used as the prandial insulin.
They found no significant differences in the rates of maternal hypoglycemia, T3 A1c levels, or mean birth weight. No congenital abnormalities were noted. In PGDM treated with glargine, 18.9% had LGA infants versus 50% in the NPH group (RR 0.38, 95% CI [0.17-0.87], p=0.04). There was only one case of maternal hypoglycemia in the PGDM group treated with glargine. There were no cases of neonatal hypoglycemia in the PGDM glargine group, while 25% of the neonates exposed in utero to NPH experienced neonatal hypoglycemia (p=0.01). Subgroup analysis found no differences in the rates of LGA or neonatal hypoglycemia between insulin glargine and NPH use in the GDM groups. 50 Egerman et al. 51 evaluated outcomes for 114 women with PGDM or GDM treated with NPH or insulin glargine. The only significant difference in maternal and neonatal outcomes was an increased incidence of shoulder dystocia in the NPH group (p=0.03). 51 A study published in 2010 was a prospective cohort of 138 women with either PGDM or GDM; the PGDM and GDM groups were analyzed separately. Maternal hypoglycemia rates were increased with NPH versus glargine: the PGDM group treated with glargine experienced no hypoglycemic episodes, while 23% of the NPH group experienced hypoglycemia (p<0.0001). The neonatal outcomes for the study groups were not different. 52 Callesen et al. 53 retrospectively observed 113 women with T1DM and compared 2 basal insulin analogues, glargine and detemir. Median A1c levels at 8 weeks' and 33 weeks' for the 2 study groups were comparable. In both groups, 23% experienced at least one occurrence of severe hypoglycemia. Lower mean birth weight (p=0.05) and a lower incidence of LGA infants (p=0.046) were demonstrated in the glargine group. 53 One meta-analysis of maternal and neonatal outcomes comparing the use of glargine versus NPH insulin, incorporating eight studies, was conducted by Lepercq et al. 54 in 2012. They reported no significantly increased risk of any neonatal or maternal outcome. They also found no difference in glycemic control as measured by first and third trimester A1c between insulin glargine and NPH insulin. 54

Detemir: Detemir is a long-acting insulin analogue that is not as well studied as glargine. The first published report of detemir use in pregnancy was a study of 10 women with T1DM who were treated with detemir for a minimum of 3 months preconception. One patient experienced maternal hypoglycemia, 2 infants were LGA, one experienced neonatal hypoglycemia, and none had congenital malformations. Improved A1c was documented from the beginning of pregnancy (8.1±1.9) to the end (5.9±0.7) (no p-value reported). 55 A retrospective case series of 18 women with T1DM and T2DM was published by Shenoy et al. 56 (Tables 5a & 5b). The authors reported only one event of severe maternal hypoglycemia. Half of the infants were LGA, and 13 of 18 infants experienced neonatal hypoglycemia. Maternal glycemic control improved from preconception (8.6%) to T3 (7.0%). 56 A large open-label, randomized, parallel-group study of women with T1DM had results published by Mathiesen et al. 57 and Hod et al. 58 (Tables 8 & 9). Patients were randomized to detemir (n=152) or NPH (n=158) up to 12 months preconception or at 8-12 weeks' gestational age, and continued until 6 weeks postpartum. Inclusion criteria included A1c <8% at confirmation of pregnancy. Insulin aspart was used as the bolus insulin.
The primary endpoint was A1c at 36 weeks' gestation; secondary endpoints included fasting glucose, major and minor hypoglycemia, and adverse events. The mean A1c at 36 weeks' was 6.27% and 6.33% for the detemir and NPH groups, respectively (difference -0.06 [95% CI -0.21 to 0.08]). Mean fasting glucose was significantly lower in the detemir group at 24 (p=0.012) and 36 weeks' gestation (p=0.017). Major hypoglycemia events were not significantly different (16% in the detemir group, 21% in the NPH group). See the Glargine section above for results of Callesen et al. 53
Discussion

Lispro
Insulin lispro has an increased affinity for the insulin-like growth factor (IGF-1) receptor versus human insulin, which raises concern for potential fetal growth-stimulating effects. 59 Lispro has the potential to pass across the placenta if it forms immune complexes with immunoglobulins. However, Jovanovic et al. 20 have shown no placental transfer in umbilical blood samples following intravenous administration of lispro during labor and, similarly, Holcberg et al. 60 conducted an in vitro perfusion study that demonstrated no lispro in the umbilical cord. 20,58,59 The data from the observational studies and small randomized trials that we have reviewed suggest that lispro is safe to use in pregnancy. The majority of studies did not report significant differences in the birth weight of neonates delivered from women having either PGDM or GDM when comparing lispro to regular insulin exposure.
The study by Durnwald et al. 29 is the only publication we identified that demonstrated significantly greater birth weight in the lispro group versus regular insulin. 29 Similarly, Lapolla et al. 26 reported a significantly higher rate of LGA infants in the lispro group (55.1%) versus regular insulin (39.2%) (p=0.0267). They also found a higher rate of macrosomia in the lispro group (14.5%) compared to regular insulin (11.5%), although the increase was not significant. 26 The two meta-analyses published by Gonzalez et al. 32 and Lv et al. 33 both reported an increased rate of LGA newborns, along with increased birth weight in the latter. The earlier study included only four studies and was limited to T1DM, which does not provide fully compelling evidence. The meta-analysis conducted by Lv et al., 33 which included 24 studies with a total of 3724 women, provided stronger data to support the possibility that insulin lispro is associated with increased birth weight and rate of LGA newborns. However, this significance is not apparent with the use of insulin aspart in comparison to regular insulin. Considering the larger body of studies on insulin lispro in pregnancy compared to aspart, the meta-analysis may not have been sufficiently powered to detect the same effect for aspart. By contrast, the small randomized trial by Di Cianni et al. 31 reported birth weight to be significantly increased in the regular insulin group. 31 The majority of published data, however, show no difference in birth weight or rates of macrosomia or LGA infants.
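For readers less familiar with the effect measure quoted above, a risk ratio and its 95% confidence interval are computed from a 2x2 table via the standard log-normal approximation; meta-analyses such as Lv et al. pool such log risk ratios across studies. The counts below are invented purely for illustration.

from math import exp, log, sqrt

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio and 95% CI via the log-normal approximation."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)  # SE of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Hypothetical LGA counts: 120/400 on the analogue vs 90/410 on regular insulin.
rr, lo, hi = risk_ratio_ci(120, 400, 90, 410)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")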
There was no apparent correlation between the duration of exposure to lispro and the size of infants, since the rate of LGA was similar between those who continued lispro throughout pregnancy and those who discontinued it during T1. 26 Lapolla et al. 26 further speculated that these findings may be due to the short-lived glycemic peaks of lispro, although these data were not recorded in their study. Garcia-Dominguez et al. 30 demonstrated a significant increase in neonatal hypoglycemia with lispro use, but the authors suggested that this may have been due to the concomitant use of an insulin pump. 30 The remaining data we found did not indicate any difference in the rates of neonatal hypoglycemia between those exposed in utero to regular insulin versus lispro.
Another concern regarding lispro comes from a case series of 3 women from Kitzmiller et al., 60 who reported the development of diabetic retinopathy. However, Loukovaara et al. 23 demonstrated that lispro had no impact on the progression of diabetic retinopathy. 23 Masson et al. 17 similarly reported that none of their patients developed retinopathy de novo, although 6 patients with established retinopathy required laser therapy during pregnancy. 17 Likewise, Buchbinder et al. 21 showed no change in retinopathy status in patients using lispro, but demonstrated change in 6 patients using regular insulin (extensive proliferative retinopathy developed in one patient, mild background retinopathy developed in 3 patients, and progression of retinopathy occurred in 2 patients). 21 Persson et al. 27 showed development of proliferative retinopathy in one patient in the regular insulin group and similar development of mild to moderate background retinopathy in both groups. 27 In terms of rates of congenital malformations, none of the studies we reviewed demonstrated a difference between lispro and regular insulin use. Persson et al. 27 reported one congenital malformation with human insulin, but since randomization did not occur until 14 weeks' gestation, this finding should not be attributed to the treatments used in the study. 27 Also, Aydin et al. 25 demonstrated congenital anomalies in 9 infants born to women who were treated with regular insulin versus none in the lispro group, although the difference was not significant. 25 There appears to be a clear advantage of using lispro for its reduction in the incidence of maternal hypoglycemia. Garcia-Dominguez et al. 30 demonstrated significantly lower rates of maternal hypoglycemia in the lispro group versus regular insulin. 30 Interestingly, Persson et al. 27 showed a significant increase in biochemical hypoglycemia (<54 mg/dL) in the lispro group, although no episodes of severe hypoglycemia were demonstrated. 27 Across the studies, postprandial glycemia consistently decreased with lispro, particularly at 1 hour postprandially. 12,20,27,31 Of the studies that measured fasting glycemia, there were no differences between lispro and regular insulin. 12,20,27,31 Consistent with these results, glycemic control measured by A1c was improved in the lispro groups versus regular insulin in multiple studies, 22,23,26,29 except for the Garcia-Dominguez et al. 30 report showing greater T1 A1c levels in the lispro group (p=0.022). 30
Aspart
Unlike lispro, aspart exhibits the same affinity for the IGF-1 receptor as human insulin. 6 However, only a small body of literature exists on the use of aspart in women with either PGDM or GDM. The smallest of the 3 randomized studies, by Pettitt et al., 34 found aspart to be superior in terms of the glycemic profile over 240 minutes. Another report demonstrated a significant improvement in 1-hour postprandial glucose levels with aspart when compared to regular insulin, suggesting a similar advantage in the control of postprandial glycemia as is provided by lispro. 31 A larger randomized trial reported by Mathiesen et al. 13 and Hod et al. 14 found no difference in the incidences of major maternal hypoglycemia and congenital malformations. 13,14 Glycemic control was also comparable between groups. Aspart use resulted in greater overall treatment satisfaction and willingness to continue with the treatment, primarily due to the increased flexibility of treatment. Thus, this regimen appears to facilitate treatment compliance. This may be particularly important when considering treating women with insulin, a management decision with the potential to affect disease progression and maternal and neonatal morbidity and mortality.
See the Lispro section above for the results of the meta-analysis by Lv et al. 33

Glargine

Similar to lispro, glargine has an increased affinity for the IGF-1 receptor compared to human insulin. 6 IGF-1 may have a role in the development of diabetic retinopathy, as well as in the development of bone, mammary and ovarian tumors, as supported by in vitro studies on human osteosarcoma cells showing the mitogenic potential of glargine. 7 Hence the concern for glargine interfering with fetal development. 62 However, studies in rats and mice failed to show an increase in tumor formation with prolonged glargine exposure. 49,63 Furthermore, Pollex et al. 62 carried out an in vitro transplacental transfer study and demonstrated undetectable transfer of glargine at therapeutic concentrations of 150 pmol/L. 64 The studies we have reviewed that investigated glargine use during pregnancy support its safety, with possible advantages of improved efficacy in achieving fasting and postprandial glycemic targets. In addition, our review found support for a glargine-associated reduction in maternal hypoglycemia. Negrato et al. 52 demonstrated that maternal complications tended to occur in women with PGDM; these included progression of retinopathy, nephropathy, preeclampsia, proteinuria and hypoglycemia. 52 A statistically significant improvement in mean A1c change from T1 to T3 was demonstrated by Poyhonen-Alho et al. 48 All studies reviewed reported either no difference in the rate of maternal hypoglycemia or a significantly decreased rate compared to NPH. 15,46-48,50,52,53 The data we identified also suggest that there is no difference in neonatal outcomes, including birth weight, neonatal hypoglycemia, and the incidence of congenital malformations. The one exception was the significant increase in shoulder dystocia in the NPH group in the findings reported by Egerman et al. 51 The authors speculated that the pharmacokinetic profile of glargine may reduce the transfer of metabolic fuel across the trophoblast, although evidence to support this hypothesis is lacking. 51
Detemir
Detemir has reduced affinity for the IGF-1 receptor. 59 Its use in pregnancy has not been widely studied. The only comparative analysis published compared maternal and neonatal outcomes against glargine. 53 Callesen et al. established comparable glycemic and pregnancy outcomes, except for a significantly lower mean birth weight and incidence of LGA infants in the glargine group. This suggests that the peakless glycemic profile of glargine may confer some advantage in pregnancy.
The only randomized trial, of over 300 subjects, compared detemir to NPH and found glycemic control and major hypoglycemic events to be comparable at 36 weeks' gestational age. The only statistically significant finding was a lower mean fasting glucose in the detemir group. 57,58 The limited body of data on detemir use during pregnancy thus precludes firm conclusions.
Basal-bolus therapy with rapid- and long-acting insulin analogues
Umpierrez et al. 65 demonstrated the superiority of glycemic control (regarding hyperglycemia and hypoglycemia episodes) with BBT using glargine and glulisine over sliding-scale therapy with regular insulin, in a randomized trial of 130 adults with T2DM. 65 The comparative studies we reviewed that utilized BBT, comprising rapid-acting analogues with a long-acting analogue, in their treatment regimens during pregnancy demonstrated no significant increase in maternal and neonatal risks compared to conventional therapy.
We have identified evidence indicating that the use of rapid- and long-acting insulin analogues is safe in pregnancy. Studies have established that these hypoglycemic agents do not cross the placenta. 16,59,60,64 There appear to be clinical advantages to the prenatal use of insulin analogues, such as the association of lispro use with improved postprandial glucose levels. Timely initiation of insulin treatment to achieve preprandial glucose targets of <90 mg/dL and 2-hour postprandial glucose targets of <120 mg/dL has been associated with lower risks of fetal and maternal complications. 2,4,6,7 Studies in non-pregnant patients show that BBT, compared to NPH plus regular insulin or sliding-scale insulin replacement, is superior in terms of achieving glycemic targets with a lower risk of severe hypoglycemia. 65 BBT is standard in non-pregnant individuals affected by T1DM or T2DM. Advantages of BBT compared to conventional insulin therapy include the convenience of flexible pre- and post-meal dosing of bolus therapy with once-daily dosing of basal insulin. Furthermore, the use of insulin analogues allows for stringent glycemic control during pregnancy with minimal glucose excursions throughout the day. 6 Our review provides support for the use of BBT with insulin analogues when glycemic targets are not met with nutrition therapy or oral hypoglycemic agents. The use of BBT during pregnancy is not only associated with improved perinatal outcomes, but will also allow for continuing standard medical treatment from preconception until after the postpartum period. The only opposing evidence is from one meta-analysis suggesting that insulin lispro is associated with increased birth weight and LGA rates in newborns, while finding no other differences in any other maternal or neonatal outcome for lispro, aspart, glargine, and detemir in comparison to regular and NPH insulin. While our review found a wealth of evidence to suggest otherwise, this underscores the need for rigorous, high-quality, and sufficiently powered randomized clinical trials.
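As a concrete reference for the targets quoted above, the short helper below converts glucose readings between the units used across the reviewed studies (1 mmol/L = 18 mg/dL) and checks them against the pregnancy targets; it is purely illustrative, not clinical software.

MGDL_PER_MMOL = 18.0
TARGETS_MGDL = {"preprandial": 90, "postprandial_2h": 120}

def mmol_to_mgdl(mmol_per_l):
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * MGDL_PER_MMOL

def at_target(reading_mgdl, timing):
    """True if the reading meets the pregnancy target for its timing."""
    return reading_mgdl < TARGETS_MGDL[timing]

print(mmol_to_mgdl(5.0))                                 # 90.0 mg/dL
print(at_target(85, "preprandial"))                      # True
print(at_target(mmol_to_mgdl(7.5), "postprandial_2h"))   # 135 mg/dL -> False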
Conclusion
1. Insulin analogue use in pregnancy complicated by diabetes has been associated with decreases in A1C and less maternal and neonatal hypoglycemia when compared to conventional insulins (regular and NPH).
2. Although these studies are inadequately powered to definitively conclude an absence of adverse effects associated with insulin analogue therapy in pregnancy, rapid- and long-acting insulin analogue therapy during pregnancy has not been associated with any adverse maternal or neonatal outcome. Further high-quality and adequately powered randomized clinical trials need to be conducted to determine the true associations of insulin analogues with adverse outcomes.
3. Adoption of BBT would facilitate continuity of standard insulin treatment from preconception through the postpartum periods. | 2019-03-15T02:58:27.922Z | 2015-11-23T00:00:00.000 | {
"year": 2015,
"sha1": "cae58746af813f16cdb65304f83f4e26ebf98fe3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.15406/ogij.2015.03.00079",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f45a169e1ad00df20f16be962c796000e9a99c3a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236513593 | pes2o/s2orc | v3-fos-license | Case Series on Endogenous Klebsiella pneumoniae Endophthalmitis: More Than Meets the Eye
Endogenous endophthalmitis (EE) is a rare but potentially sight-threatening disease with an appreciable mortality rate. Diabetes mellitus remains the most frequently associated condition, especially in the Asian population, which potentiates Klebsiella pneumoniae involvement. Endogenous Klebsiella pneumoniae endophthalmitis (EKE) usually has a poor final visual outcome despite treatment with intravitreal and systemic antibiotics. We report three cases of EKE with systemic involvement, namely Klebsiella pneumoniae invasive syndrome (KPIS). KPIS was diagnosed in three patients with multiple comorbidities who presented with blurring of vision and eye redness. Patient 1 was a 63-year-old Malay man diagnosed with left eye panophthalmitis with multifocal liver and prostate abscesses. He underwent drainage of the liver abscess and eventually evisceration of the left eye due to scleral perforation. Patient 2 was a 66-year-old Malay woman diagnosed with left eye endophthalmitis. Due to hemodynamic instability, vitrectomy was delayed; the eye eventually sustained corneal perforation and was eviscerated. The patient eventually succumbed to the infection. Patient 3 was a 42-year-old Malay woman diagnosed with KPIS, renal abscess, lung abscess, and left endogenous endophthalmitis. She underwent a vitrectomy but her postoperative vision remained poor. All patients received multiple intravitreal antibiotics and systemic antibiotics. KPIS is frequently associated with catastrophic disabilities. Our cases highlight the importance of an early suspicion of systemic involvement in patients presenting with EKE. Prompt diagnosis, emergent radiographic evaluation, early adequate drainage, and appropriate treatment with antibiotics potentially improve survival and visual prognosis.
Introduction
Endogenous endophthalmitis (EE) is a rare but potentially sight-threatening disease with an appreciable mortality rate [1][2]. It is a sequela of hematogenous spread from a distant source that breaches the blood-ocular barrier, causing infection of the intraocular tissue [2]. A gram-negative organism, particularly Klebsiella pneumoniae, is the most common causative organism for EE among the Asian population, particularly East Asians [2][3]. Endogenous Klebsiella pneumoniae endophthalmitis (EKE) usually has a poor final visual outcome despite treatment with intravitreal and systemic antibiotics [4][5].
We herein report three cases of EKE in which visual acuity (VA) did not improve after treatment. We discuss the clinical characteristics and treatment outcomes of each patient.
Case 1
A 63-year-old Malay gentleman with underlying diabetes mellitus, hypertension, and chronic kidney disease presented with a one-week history of left eye blurring of vision, redness, and fever. Upon presentation, visual acuity was 6/12 in the right eye and light perception (PL) in the left eye with a positive relative afferent pupillary defect (RAPD). The left eye was chemosed with an oedematous cornea and fibrin in the anterior chamber (Figure 1). Intraocular pressure (IOP) was 44 mmHg. One week later, his left eye was complicated by scleral perforation and underwent evisceration (Figure 3).
Case 2
A 66-year-old Malay woman with underlying diabetes mellitus, hypertension, rectosigmoid carcinoma, and chronic liver cirrhosis presented to the hospital with a three-day history of left eye redness, blurring of vision, and intermittent fever.
Vision was 6/12 in the right eye and hand movement (HM) in the left eye. RAPD was negative. The left conjunctiva was injected, the cornea was oedematous, and the anterior chamber was deep with cells 4+ and fibrin (Figure 4). There was no view of the left fundus.
FIGURE 4: Injected conjunctiva, oedematous cornea, and fibrin with subtotal corneal epithelial defect
An ultrasound scan of the left eye revealed vitritis with loculation. Contrast-enhanced computed tomography (CECT) of the brain and orbit showed left periorbital soft tissue thickening and enhancement suggestive of inflammatory/infective changes. Lens dislocation was also noted (Figure 5). Abdominal ultrasonography (USG) showed liver cirrhosis, but no abscess was detected. Her left eye eventually progressed to corneal perforation and was glued with tarsorrhaphy, as she refused all recommended procedures. She was eventually scheduled for left eye evisceration after hemodynamic stabilization. Unfortunately, the patient succumbed to the infection.
Case 3
A 42-year-old Malay woman with underlying diabetes mellitus and bronchial asthma presented to the hospital with a sudden blurring of vision in the left eye for two days without any systemic symptoms. Visual acuity was 6/9 and PL in the right and left eyes, respectively.
RAPD was negative. Assessment of the left anterior segment revealed cells 4+ and a 1 mm hypopyon level. IOP of the left eye was 8 mmHg. There was no view of the fundus; however, ultrasound of the eye revealed the presence of loculation with a flat retina. Vitreous and blood cultures showed no growth. Klebsiella pneumoniae was eventually detected from lung biopsy tissue culture. CECT of the brain and orbit showed left periorbital soft tissue thickening and enhancement suggestive of inflammatory/infective changes (Figure 6). CECT of the chest and renal system revealed multiple cavitating lung abscesses and a renal collection (Figure 7). She was diagnosed with KPIS with a renal abscess and left endogenous endophthalmitis. Systemic ceftriaxone was escalated to meropenem, as it has good coverage against most resistant gram-negative organisms. Fortified topical antibiotics were also commenced. The anterior segment inflammation reduced and the fibrin contracted following treatment (Figure 8).
FIGURE 8: Contracting debris and fibrin following treatment.
She underwent a left vitrectomy; however, her left vision remained poor, with no perception of light (NPL) in all four quadrants.
Discussion
Klebsiella pneumoniae is a well-known human nosocomial pathogen. Klebsiella pneumoniae is now the main cause of liver abscess reported in Hong Kong, Singapore, South Korea, and Taiwan [6]. In the past decades, the prevalence of Klebsiella pneumoniae invasive syndrome (KPIS) with extrahepatic complications has increased in Asia [7].
Endogenous Klebsiella endophthalmitis is a devastating ocular infection, with most cases resulting in a final visual acuity of light perception or worse, or requiring evisceration or enucleation [5].
All of our patients presented with mostly eye complaints rather than systemic symptoms. They were referred to the respective teams for co-management of systemic involvement upon further investigation. We would like to highlight that despite our patients presenting with eye symptoms as their chief complaint, we were alert to the possible systemic relationship and investigated and managed all of our patients both ophthalmologically and systemically. Despite our maximum effort, one of our patients still succumbed to multiple organ dysfunction syndrome (MODS) due to the virulence of Klebsiella pneumoniae.
Two of our patients underwent evisceration, whereas one patient had NPL vision despite being given comprehensive treatment comprising vitrectomy, multiple systemic antibiotics, intravitreal injections, and drainage of the primary foci. Over the past three decades, the overall visual outcome in patients with EKE has remained dismal despite early recognition and aggressive treatment. The overall rate of vision recovery surpassing counting fingers was around 22.64%-34% [12][13]. However, Hsieh MC et al. found that with early recognition, better outcomes were obtained, with a good prognosis related to initial VA, female gender, and the number of intravitreal injections. Early intervention with pars plana vitrectomy did not change the visual outcome [13]. Our patients had delayed surgical intervention, as they were hemodynamically unstable and unfit for surgery.
Conclusions
Klebsiella pneumoniae invasive syndrome (KPIS) is typically seen in East Asians, with diabetes as a risk factor. We discussed three cases, including the risk factors, prognosis, and management of this debilitating condition. A high index of suspicion should be maintained for this demographic of patients. Our case discussion highlights the importance of early suspicion of systemic involvement in patients presenting with endogenous endophthalmitis. Prompt diagnosis, emergent radiographic evaluation, early adequate drainage, and appropriate treatment with antibiotics potentially improve survival and visual prognosis.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-07-31T05:15:50.176Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "1a9f70317e9e21fd163dda9ef502ad99bd00639c",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/61733-case-series-on-endogenous-klebsiella-pneumoniae-endophthalmitis-more-than-meets-the-eye.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a9f70317e9e21fd163dda9ef502ad99bd00639c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249834577 | pes2o/s2orc | v3-fos-license | Thin-Film Transistors from Electrochemically Exfoliated In2Se3 Nanosheets
The wafer-scale fabrication of two-dimensional (2D) semiconductor thin films is the key to the preparation of large-area electronic devices. Although chemical vapor deposition (CVD) solves this problem to a certain extent, complex processes are required to realize the transfer of thin films from the growth substrate to the device substrate, not to mention its harsh reaction conditions. The solution-based synthesis and assembly of 2D semiconductors could realize the large-scale preparation of 2D semiconductor thin films economically. In this work, indium selenide (In2Se3) nanosheets with uniform sizes and thicknesses were prepared by the electrochemical intercalation of quaternary ammonium ions into bulk crystals. Layer-by-layer (LbL) assembly was used to fabricate scalable and uniform In2Se3 thin films by coordinating In2Se3 with poly(diallyldimethylammonium chloride) (PDDA). Field-effect transistors (FETs) made from a single In2Se3 flake and In2Se3 thin films showed mobilities of 12.8 cm^2·V^−1·s^−1 and 0.4 cm^2·V^−1·s^−1, respectively, and on/off ratios of >10^3. The solution self-assembly of In2Se3 thin films enriches the research on wafer-scale 2D semiconductor thin films for electronics and optoelectronics and holds broad prospects in high-performance, large-area flexible electronics.
Introduction
Two-dimensional semiconductors have promoted the rapid development of electronic and optoelectronic devices due to their excellent charge-transport properties and mechanical characteristics [1,2]. The wafer-scale fabrication of well-ordered and uniform 2D semiconductor thin films is indispensable for large-area electronics and crucial to the practical application and development of 2D semiconductors [3][4][5]. High-quality 2D semiconductor thin films can be grown by chemical vapor deposition (CVD) [5][6][7]. However, the reaction conditions are stringent, and arduous transfer procedures are required to move the thin films from the growth substrates to the targeted substrates. The transfer process is not only cumbersome and time-consuming, but it may also cause irreversible damage to the semiconductor device performance and reduce device yield.
Alternatively, uniform 2D semiconductor thin films can be economically prepared by a solution method from solution-processed 2D semiconductor colloidal inks [8,9]. Solution-processable 2D semiconductor electronics is an emerging research area, and substantial progress has been made. It has been reported that the electrochemical intercalation of quaternary ammonium ions is powerful in preparing stable 2D semiconductor inks from materials such as graphite, black phosphorus, MoS 2 , In 2 Se 3 , and NbSe 2 [10][11][12][13][14]. Their thin films have a wide range of applications in the fields of superconductors, field-effect transistors (FETs), photodetectors, and spin electronics. The spin-coated and LbL-assembled MoS 2 thin films from electrochemically exfoliated MoS 2 nanosheets used as FET channels show mobilities of ≈10 cm^2·V^−1·s^−1 and on/off ratios of >10^5 [13,15]. Such device performance is superior to that of other solution-processed 2D semiconductor thin-film devices. The vacuum-filtrated In 2 Se 3 thin films with thicknesses of 10 µm show ultrafast response, with rise and decay times of 41 and 39 ms, respectively, and efficient photoresponsivity (1 mA·W^−1) [16]. However, conventional, solution-based, thin-film deposition approaches confront the problems of uncontrollable film thickness, uneven deposition, and the coffee-ring effect.
In this work, we propose LbL assembly as an effective method of fabricating scalable 2D thin films from electrochemically exfoliated nanosheets. LbL assembly is based on the alternating assembly of two species with complementary interactions (such as electrostatic attraction, hydrophobic interactions, or hydrogen bonds) and can prepare thin films, patterns, and heterostructures on any substrate. Various low-dimensional electronic nanofilms, including gold nanoparticles, single-walled carbon nanotubes, boron nitride, clay nanosheets, and MoS 2 , have been successfully assembled by LbL assembly with precise thickness control [15,[17][18][19][20][21]. Here, we assembled uniform In 2 Se 3 thin films by electrostatic adsorption between poly(diallyldimethylammonium chloride) (PDDA) and electrochemically exfoliated In 2 Se 3 . The single In 2 Se 3 flake and LbL-assembled In 2 Se 3 thin films, serving as active channel materials in FETs, possessed excellent device performance. The mobility and on/off ratio of the LbL-assembled In 2 Se 3 thin films were even better than those of the CVD-grown In 2 Se 3 thin film, showing the robustness of solution-processed electronics.
Synthesis of In 2 Se 3 Nanosheets
In 2 Se 3 nanosheets were synthesized by the electrochemical intercalation of quaternary ammonium ions. The electrochemical intercalation was performed in a 5 mg·mL^−1 tetraheptylammonium bromide (THAB) acetonitrile solution, with the bulk In 2 Se 3 and a carbon rod serving as cathode and anode, respectively. The intercalation voltage was 8 V. After the intercalation, the THAB-intercalated In 2 Se 3 was collected and sonicated in a 0.2 M polyvinyl pyrrolidone (PVP) solution (PVP: molecular weight of about 10,000) for 30 min to form a brown dispersion of In 2 Se 3 nanosheets. The In 2 Se 3 dispersion was subsequently centrifuged and washed with DMF several times to remove excessive PVP. The final In 2 Se 3 dispersion was centrifuged at 1000 rpm for 5 min, and precipitates were discarded. The supernatant was concentrated in DMF for characterization and thin-film assembly.
Fabrication of In 2 Se 3 Thin Films
The In 2 Se 3 thin films were assembled by LbL assembly. Before LbL assembly, the SiO 2 /Si substrates were pre-cleaned with acetone as well as ethanol and isopropyl alcohol. The substrates were treated with oxygen plasma at 100 W for 5 min to produce a superwetting surface. The substrates were firstly immersed in a positively charged PDDA solution (0.1 wt %) for 2 min to deposit single-layer PDDA chains and were then rinsed by ultrapure water and gently dried with the use of an air gun. The substrates attached with PDDA chains were then immersed in negatively charged In 2 Se 3 dispersion for 5 min, and In 2 Se 3 nanosheets were assembled in order on the substrates by the electrostatic interaction between the PDDA and In 2 Se 3 . Finally, the substrates were rinsed by ultrapure water to remove the loosely attached In 2 Se 3 nanosheets and were dried by air gun. The first cycle of the LbL assembly of the In 2 Se 3 thin film was then completed. It is worth noting that the concentration of the In 2 Se 3 dispersion was monitored by optical absorbance in order to assemble high-quality thin films. The single-layer, assembled In 2 Se 3 thin films were dense when the characteristic absorbance at 450 nm was about 0.6 after the In 2 Se 3 inks were diluted 500 times.
Fabrication of In 2 Se 3 FETs
The channels of the FETs were fabricated by nanofiber masks. Aligned polyurethane nanofibers, whose diameters were maintained at ~500 nm, were printed on the substrates covered by In 2 Se 3 single flakes or thin films. Subsequently, metal coatings (5 nm/45 nm Cr/Au) with a 200 × 200 µm metal mask were deposited by thermal evaporation to create source and drain electrodes. Finally, the SiO 2 /Si substrates were immersed in the DMF solvent for 30 min and sonicated for 5 min to remove the polyurethane fibers, and the channels of the FET devices could be successfully prepared. In order to improve the contact between the electrodes and the In 2 Se 3 , the devices were annealed at 200 °C in vacuum for 2 h before the test.
Synthesis and Characterizations of In 2 Se 3 Nanosheets
In 2 Se 3 nanosheets were synthesized by the electrochemical intercalation of quaternary ammonium ions, as shown in Figure 1a. The electrochemical intercalation was performed in a 5 mg·mL^−1 tetraheptylammonium bromide (THAB) acetonitrile solution, with the bulk In 2 Se 3 and the carbon rod serving as cathode and anode, respectively. Driven by the external voltage, the positively charged THA + ions were inserted into the bulk In 2 Se 3 , which became fluffy and fell off from the cathode. The THAB-intercalated In 2 Se 3 was collected and sonicated in a 0.2 M polyvinyl pyrrolidone (PVP) solution (PVP: molecular weight of about 10,000) for 30 min to form a brown dispersion of In 2 Se 3 nanosheets (Figure 1b). A new XRD peak appears at a diffraction angle of about 6° in the THAB-intercalated In 2 Se 3 , indicating that the interlayer spacing of the THAB-intercalated In 2 Se 3 increased from 9.7 Å to 17 Å, which further proves the successful insertion of the THA + ions (Figure 1c). PVP acts as a surfactant to stabilize the In 2 Se 3 nanosheet solution and prevent agglomeration and sedimentation. The excessive PVP was removed by repeated washing with N,N-dimethylformamide (DMF). The final In 2 Se 3 dispersion was centrifuged at 1000 rpm for 3 min to sort the nanosheets. The sediments containing unexfoliated layered crystallites were discarded. The supernatant was concentrated in DMF for characterization and thin-film assembly. AFM showed that the exfoliated In 2 Se 3 nanosheets had micron-level lateral dimensions (Figure 1d). A total of 90% of the In 2 Se 3 nanosheets have thicknesses of 2.2 nm, further confirming the few-layer nature of the In 2 Se 3 nanosheets and the uniformity of their thicknesses (Figure 1e). The lamellar structure of the In 2 Se 3 nanosheet was verified by a transmission electron microscopy (TEM) image (Figure 2a), and the selected area electron diffraction (SAED) patterns indicated the single-crystalline character of the In 2 Se 3 nanosheet (Figure 2b). The In 3d and Se 3d binding energy peaks of the electrochemically intercalated In 2 Se 3 shifted to higher values as compared with the bulk In 2 Se 3 due to the n-type doping induced by the insertion of the THA + ions (Figure 2c,d).
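As a quick plausibility check of the quoted spacings, Bragg's law, d = λ/(2 sin θ), can be inverted for the peak positions. The sketch below assumes first-order diffraction and Cu Kα radiation (λ = 1.5406 Å); the X-ray source is not stated in the text, so the wavelength is an assumption:

```python
import math

# Bragg's law: n * lam = 2 * d * sin(theta); here n = 1 (first-order peak).
# The Cu K-alpha wavelength is an assumption -- the X-ray source is not stated.
LAMBDA_CU_KALPHA = 1.5406  # angstroms

def interlayer_spacing(two_theta_deg: float, wavelength: float = LAMBDA_CU_KALPHA) -> float:
    """First-order d-spacing (angstroms) from a 2-theta peak position (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# A peak near 2-theta ~ 5.2 deg corresponds to the ~17 A spacing of the
# THAB-intercalated In2Se3; the pristine ~9.7 A spacing sits near ~9.1 deg.
print(f"d(5.2 deg) = {interlayer_spacing(5.2):.1f} A")  # ~17.0 A
print(f"d(9.1 deg) = {interlayer_spacing(9.1):.1f} A")  # ~9.7 A
```

Under this assumption the "about 6°" peak is consistent with the reported ~17 Å interlayer spacing.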
LbL Assembled In 2 Se 3 Thin Films
Well-ordered and uniform 2D semiconductor thin films are of vital importance to device performance. We chose LbL assembly to fabricate In 2 Se 3 thin films by sequentially adsorbing the PDDA solution and the In 2 Se 3 dispersion on SiO 2 /Si substrates through electrostatic interactions (Figure 3a). The zeta potential of the In 2 Se 3 dispersion was found to be −18.1 mV (Figure 3b). The intercalation of tetraheptylammonium ions led to the injection of electrons into the In 2 Se 3 crystal structure and the slightly negatively charged In 2 Se 3 nanosheets [22]. Before LbL assembly, the SiO 2 /Si substrates were pre-cleaned with acetone as well as ethanol and isopropyl alcohol and then treated with oxygen plasma at 100 W for 5 min to produce a superwetting surface. The substrates were alternately immersed in the PDDA solution (0.1 wt %) and the In 2 Se 3 dispersion, with rinsing by ultrapure water and drying by air gun after each adsorption. The Raman characteristic peak originating from the A 1 (LO + TO) mode of the In 2 Se 3 thin film was consistent with the bulk In 2 Se 3 , while the A 1 (LO) phonon mode of the In 2 Se 3 thin film exhibited a small shift toward lower wavenumbers arising from the smaller vibration coherence length along the c-axis as a result of the weak van der Waals interaction (Figure 3c). The optical microscope image of the LbL-assembled In 2 Se 3 thin film revealed that the nanosheets over a wide area of the film were evenly stacked and assembled into a homogeneous thin film (Figure 3d). From the local AFM and SEM images of the LbL-assembled In 2 Se 3 thin film, we can deduce that the adjacent nanosheets were assembled on the substrate through broad-area, plane-to-plane van der Waals contacts (Figures 3e and 4). The TEM images of the LbL-assembled In 2 Se 3 thin film show that the adjacent nanosheets are stacked tightly together with mixed crystalline lattices on the boundaries and further demonstrate decent interfaces (Figure 5). The number of in-plane grain boundaries in the LbL-assembled 2D semiconductor thin films was greatly reduced, which will significantly improve charge-transport performance.
Performance of FETs from Electrochemically Exfoliated In 2 Se 3 Nanosheets
To investigate the electric properties of solution-processed In 2 Se 3 , we further prepared In 2 Se 3 single-flake and In 2 Se 3 thin-film FETs on a 300 nm SiO 2 /Si substrate. The channels of the FETs were fabricated by nanofiber masks (Figure 6) [23]. First, we used the diluted and concentrated In 2 Se 3 dispersions to adsorb sparse In 2 Se 3 nanosheets and dense In 2 Se 3 thin films on a pre-treated SiO 2 /Si substrate by LbL assembly. Then, polyurethane nanofibers were printed and metal electrodes were deposited in order to fabricate FET devices. Figure 7a shows the scanning electron microscope (SEM) image of the In 2 Se 3 single-flake FET with a channel length of 506 nm and an average width of 600 nm. The In 2 Se 3 single flake lies perfectly flat on the channel to ensure good contact between the electrode and the In 2 Se 3 nanosheet. The I sd -V sd output characteristics of the In 2 Se 3 single-flake FET showed a linear trend, indicating ohmic contacts between the In 2 Se 3 single flake and the electrodes (Figure 7b). The forward and reverse I sd -V g transfer characteristics of the In 2 Se 3 single-flake FET showed a typical n-type behavior with an on/off ratio of 1.5 × 10^3 at V sd = 1 V (Figure 7c). The electron mobility of individual In 2 Se 3 nanosheets can be calculated to be 12.8 cm^2 V^−1 s^−1 from the linear-regime transfer characteristics using the equation μ = [L/(W·C s ·V sd )]·(dI sd /dV g ), where L and W are the channel length and width, and C s is the areal capacitance of 300 nm SiO 2 /Si. The channel length and width of the In 2 Se 3 thin-film FET device are 549 nm and 200 µm, respectively (Figure 7d). The I sd -V sd output characteristics of the In 2 Se 3 thin-film FET exhibited non-linear dependence on V sd due to the pinch-off effect of the FET channel (Figure 7e). The electron mobility of the In 2 Se 3 thin films reached 0.2 cm^2 V^−1 s^−1 , with an on/off ratio of 7 × 10^4 at V sd = 1 V (Figure 7f). The carrier mobility of the In 2 Se 3 single flake is much higher than that of the In 2 Se 3 thin film due to the sheet-to-sheet contact resistance, and the device performance may average out in the percolating thin films. The observed clockwise hysteresis in the transfer characteristics of the In 2 Se 3 single-flake and thin-film FETs was attributed to charge trapping and detrapping at the interface between the In 2 Se 3 and the SiO 2 (Figure 7c,f) [24]. To further understand the relative effects of PDDA, spin-coated In 2 Se 3 thin-film FETs were fabricated. The PDDA doping caused a positive shift of the threshold voltage and a lower maximum on-current in the LbL-assembled device as compared with the spin-coated In 2 Se 3 thin-film FET (Figure 8a,b). The performance of the electrochemically exfoliated In 2 Se 3 single-flake and thin-film FETs was comparable, in terms of mobilities and on/off ratios, to devices made by other methods; a comprehensive comparison is provided in Table 1 [25][26][27][28][29]. The electrochemically exfoliated In 2 Se 3 single-flake FET performance was superior to some mechanically exfoliated In 2 Se 3 flake FETs [27]. The electron mobilities and on/off ratios of the LbL-assembled In 2 Se 3 thin film were close to those of the spin-coated In 2 Se 3 thin films and CVD-grown In 2 Se 3 thin films [12,28]. The outstanding device performance of both single flakes and thin films is attributed to the high-quality nanosheets with uniform sizes and thicknesses prepared by electrochemical intercalation.
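A minimal numerical sketch of this mobility extraction is given below, using the single-flake geometry quoted above and the parallel-plate capacitance of 300 nm SiO 2 . The transfer slope dI sd /dV g is a hypothetical value chosen to reproduce the reported ~12.8 cm^2 V^−1 s^−1, not a measured number from the paper:

```python
# Linear-regime FET mobility: mu = (L / (W * Cs * Vsd)) * dIsd/dVg.
EPS0 = 8.854e-12            # vacuum permittivity, F/m
EPS_R_SIO2 = 3.9            # relative permittivity of SiO2
T_OX = 300e-9               # oxide thickness, m
Cs = EPS0 * EPS_R_SIO2 / T_OX   # areal capacitance, ~1.15e-4 F/m^2

L = 506e-9                  # channel length, m (single-flake device)
W = 600e-9                  # average channel width, m
Vsd = 1.0                   # source-drain bias, V
gm = 1.75e-7                # hypothetical transfer slope dIsd/dVg, A/V

mu = (L / (W * Cs * Vsd)) * gm                 # m^2 V^-1 s^-1
print(f"mu = {mu * 1e4:.1f} cm^2 V^-1 s^-1")   # ~12.8
```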
Conclusions
In conclusion, we prepared high-quality In 2 Se 3 nanosheets through an electrochemical intercalation approach. Homogeneous In 2 Se 3 thin films were assembled by the alternate adsorption of PDDA and a nanosheet solution, driven by electrostatic attraction. FETs from solution-processed In 2 Se 3 single flakes and thin films showed satisfactory performance and were comparable to those from CVD-grown In 2 Se 3 thin films, mechanically exfoliated In 2 Se 3 flakes, and spin-coated In 2 Se 3 thin films. LbL-assembled 2D semiconductor thin films are promising candidates for emerging large-area, flexible, and wearable electronic applications. | 2022-06-19T15:23:21.079Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "46e1d60d02df31d25ccf47bd18a7d10ec9fcaf50",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-666X/13/6/956/pdf?version=1655431610",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9615756fc6534cfc16bfc547d055c087cdd963d1",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15448934 | pes2o/s2orc | v3-fos-license | How do spin waves pass through a bend?
Spin-wave devices hold great promise for use in future information processing. Manipulation of spin-wave propagation inside submicrometer waveguides is at the core of promoting the practical application of these devices. Just as in today's silicon-based chips, bending of the building blocks cannot be avoided in real spin-wave circuits. Here, we examine spin-wave transport in bended magnonic waveguides at the submicron scale using micromagnetic simulations. It is seen that the impact of the bend depends on the frequency of the passing spin wave. At the lowest frequencies, the spin wave continuously follows the waveguide in the propagation process. At the higher frequencies, however, the bend acts as a mode converter for the passing spin wave, causing a zigzag-like propagation path to form in the waveguide behind the bend. Additionally, we demonstrate a logic-NOT gate based on such a waveguide, which could be combined to perform a logic-NAND operation.
Spin-wave devices [1][2][3][4][5] are deemed the most promising replacements for the present electronic ones 6 . Mach-Zehnder type spin-wave logic gates are made of long magnonic waveguides, where spin-wave propagation is manipulated to implement logic operations [1][2][3][4] . Real spin-wave chips will inevitably integrate numerous functional units to do powerful computation. Bended components will have to be used to save space 7 . Consequently, spin-wave propagation in bended magnonic waveguides should be addressed.
Bance et al. reported the transmission of backward-volume (BV) spin-wave packets through a 90° circular bend without losses 8 ; nevertheless, they did not identify the mode distribution of these spin waves, which is essential to understand the spin-wave propagation inside the bend. Quite recently, Dvornik et al. demonstrated that BV spin waves follow a curved waveguide without noticeable losses at the bend 7 , and thus claimed that the spin waves adapt to the bend with their wave vectors parallel to the local magnetization. By contrast, Clausen et al. observed the multimode excitation of Damon-Eshbach (DE) spin waves in the post-skew arm of a skewed waveguide 9 . Considering the similarity between the bended and skewed waveguides, the results reported by Dvornik et al. should be a special instance of spin-wave transport in bended waveguides, since only a single frequency is inspected there 7 . Vogt et al. realized efficient transmission of the DE spin waves through an analogous bend by introducing a novel method to align the magnetization inside the bend 10 , but they did not measure the mode distribution of the spin waves past the bend. To be used in nanoscale logic circuits, the DE spin waves are inferior to the BV ones, because they require an extra magnet to align the magnetization 7 .
In Mach-Zehnder type logical gates, an external local field is frequently used to tailor the parameters of the propagating spin waves, as done in Refs. 2-4. Unfortunately, Vasiliev et al. 11 pointed out that spin-wave logic devices using an external field to control spin-wave propagation would face intrinsic limitations, with respect to the efficiency of operation by the applied field, as the waveguide is further downscaled. Therefore, it is necessary to find fresh means of constituting nanoscale interferometric logic gates. Hertel et al. 1 explored a novel route to topologically induce a phase shift of spin waves by introducing a domain wall, and utilized it to construct a Mach-Zehnder type logic-XOR gate. However, it is not easy to precisely manipulate a domain wall with the desirable structure in nanoscale magnetic waveguides 2 .
In this Letter, we study the propagation of spin waves in bended magnonic waveguides with the magnetization aligned along the waveguide. We find that the spin waves do not always adapt to the waveguide, and only at some frequencies can the spin waves follow the waveguide without significant losses. At the other frequencies, antisymmetric width modes 9,12 are excited inside the bend once the symmetric modes emitted from the antenna pass through, and then multimode spin waves superpose in the horizontal arm of the waveguide, causing a spatial beating of these modes 13 . The beaten spin waves can be injected into two separate waveguide branches, where the injected beams have a phase shift of ~π at some specific frequencies. Further, we demonstrate an interferometric logic-NOT gate of the Mach-Zehnder type, whose logical input is encoded by the frequency of the carrying spin wave itself.
Results
The sketch of the present study is shown in Fig. 1. The magnonic waveguide is a two-dimensional submicrometer magnetic strip including a 90° circular bend. The size of the target system is marked in the figure. In this work, two kinds of waveguides with widths equal to 200 and 100 nm are considered (i.e. w = 100 and 200 nm). No bias field is applied to the waveguide, and thus the magnetization is aligned along the waveguide by the shape anisotropy, resulting in the BV geometry 14 for spin-wave propagation. A microstrip antenna for spin-wave excitation is placed in front of the bended section of the waveguide. A radio-frequency field in the form of h ac (t) = H 0 ·sin(2πft), with the field amplitude H 0 ranging from 5 to 100 Oe and the frequency f in the gigahertz range, is supplied to the antenna, where H 0 is parallel to the x- or z-axis. (In practical devices, a microwave voltage is loaded onto the antenna, where the accompanying rf Oersted field couples with the magnetization and excites spin waves). Near both ends of the waveguide, the damping parameter is increased to reduce the back reflection of spin waves 8 . Micromagnetic simulations were adopted to investigate the spin-wave dynamics in these bended waveguides (see Methods section for details). Figure 2 shows the snapshots of spin-wave distribution at a time after the steady state has been achieved. At low frequencies, the spin wave continuously follows the waveguide, with the wave vector finely adapting to the local magnetization 7 . The spin wave gone past the bend has the same spatial structure as that newly radiated from the antenna, except for the decreased amplitude due to the intrinsic damping. At a slightly higher frequency, the scenario is substantially changed. Here, a zigzag-like propagation pattern is formed in the horizontal arm of the waveguide after the spin wave leaves the bend, totally different from the continuous propagation picture for the spin wave before injection into the bend. This fact suggests that the spin wave does not always adapt to the waveguide. As the propagation manner of the spin wave changes from continuous to zigzag-like, the back reflection of the spin wave from the bend is enhanced. Actually, only for certain frequencies can the spin wave follow the waveguide without serious reflection loss. These findings reveal the limited transmission bandwidth of spin-wave devices comprising folded components. For the three lowest frequencies shown in Fig. 2(a) and 2(b), the spin waves before injection into the bend have the same feature of spatial distribution, that is, they are nearly uniform across the waveguide width and continuous along the waveguide. The reduction of the spin-wave wavelength with the increase in frequency arises from the dipole-exchange nature of the dispersion relation of the spin waves in such submicrometer waveguides 15 . When the spin wave gets into the bend, the curvature of the bend distorts the spatial profile of the spin wave. Subsequently, the spin-wave propagation behind the bend is affected.
As the frequency increases, the situation becomes more complex. At first, the symmetric width modes with higher order numbers are stimulated by the antenna to accompany the fundamental mode, leading to self-focusing 16,17 of them in front of the bend. Next, the asymmetric modes are activated when the self-focused spin waves arrive at the bend. As a consequence, at least three branches of spin waves with different mode numbers coexist in the horizontal arm of the bended waveguide. The mutual interference among these spin waves produces fine mode patterns 13 . Remarkably, at particular frequencies (e.g., 10 GHz for the 200 nm wide waveguide and 20 GHz for the 100 nm wide one), the beaten spin waves have a very simple mode structure. Note that Figs. 2(a) and 2(b) correspond to two waveguides with different sizes. Thus, it is expected that the above-stated findings represent a common feature of such submicron systems. Figure 3 illustrates the Fourier amplitude maps of the spin waves with various frequencies propagating in a bended waveguide. The top panels correspond to the spin waves with the lowest frequencies, where the input antenna only excites the fundamental mode, and in addition the mode amplitude decreases with the propagation distance in front of the bend due to the natural damping of the spin wave. Inside the bended section, the amplitude distribution for the three spin waves with distinct frequencies differs substantially from each other. With the increase in frequency, the spin-wave beam becomes narrower and meanwhile gets closer to the outer edge of the bend. It is also seen that the higher the frequency, the stronger the reflection of the incident beam at the entry of the bend, causing the spin-wave beam in front of the bend to present wavy borders at the frequency of 7 GHz. The bottom panels are for the spin waves with higher frequencies. Here, the mode patterns typical of self-focusing are clearly seen in the vertical arm. The transverse modulation of the beam strength complicates the spatial distribution of the spin-wave amplitude inside the bended section, and consequently the excitation of the secondary spin waves at the exit of the bend is equally intricate, so that the mode structure of those spin waves behind the bend is beyond the reach of prediction.
Discussion
To see how the mode patterns are formed, the dispersion relation of the spin waves existing in the waveguide is derived, as plotted in Fig. 4. The mode structures for spin waves at each frequency can be easily resolved, although the decay of the spin waves with the propagation distance is not compensated. In the vertical arm, only the modes with the number equal to 1 and 3 are present, as expected. These modes, being symmetric about the axis of the waveguide 9,12 , can be strongly coupled to the antenna's field and acquire the largest spectral weights. In the horizontal arm, the 2nd-order mode emerges in addition to the already existing symmetric modes, clearly indicating that this mode is excited due to the passage of the original spin waves through the bend. Note that, the higher the mode number, the higher the frequency at which the related mode starts to occur. In the low-frequency range, only the fundamental mode is stimulated by the antenna, and no additional modes are excited when it goes through the bend; meanwhile, the reflection of the original spin wave from the entry of the bend is negligible. As a result, the original spin wave accommodates to the whole waveguide, as seen in Ref. 7. In the medium-frequency range, although still only the fundamental mode is driven by the antenna, the asymmetric 2nd-order mode is activated when the original spin wave flows into the bend, resulting in their coexistence in the horizontal arm of the waveguide. At the same time, the original spin wave is partially reflected from the bend's entry. As a consequence, the hybrid spin wave cannot always adapt to the waveguide with the wave vector parallel to the local magnetization. In the high-frequency range, more higher-order modes appear and cause multimode superposition, giving rise to finer interference patterns (see bottom panels of Figs. 2 and 3).
In the horizontal arm, the travelling waveguide modes can be formulated as m n (x, y, t) = C n sin(nπy/w*) cos(ωt − k n x x + Q n ) exp(−x/D) (1), where the sine function describes the transverse profile of each mode, the cosine part the propagation character 16 , and the exponential part the spatial attenuation 18 . The parameter w* denotes the effective waveguide width 19,20 for all the modes, ω the excitation frequency, k n x the longitudinal wave number of the n th -order mode, Q n the excitation phase, C n the relative excitation efficiency, and D the decay length. As reflected in Fig. 4, for certain ranges of frequency, multiple modes with different order numbers exist simultaneously. These modes interfere mutually 13 and engender distinctive beating patterns (see Fig. 3). Averaging the modulus of the sum of m n over a large time interval τ, one gets A(x, y) = (1/τ) ∫ 0 τ |Σ n m n (x, y, t)| dt (2), which depicts the spatial distribution of the total amplitude of the involved coherent modes 16 . The resulting theoretical mode patterns are in good agreement with those from micromagnetic simulations [as shown in Fig. 5(a)], proving that the zigzag-like patterns originate from the beating between the original symmetric modes and the re-excited asymmetric ones (see Methods for technical details). For the lowest frequencies, the antenna simply sends out the fundamental-mode spin wave. When this spin wave reaches the bend, it has homogeneous phase and group velocity across the waveguide width [see top panels of Fig. 2(a) and 2(b)]. Subsequently, it sees different magnetization orientations at the inner and outer sides of the waveguide once entering the bend [see Fig. 5(b)]. Furthermore, the exchange field due to the curled magnetization is stronger at the inner side than it is at the outer side of the bend (see Ref. 21). The dispersion relation of the spin waves inside the bended section is accordingly changed compared to that outside. The altered dispersion relation distorts the spatial profile of the spin wave injected into the bend, and in turn creates the asymmetry of the dynamic field inside the bend, which makes the excitation of asymmetric modes possible. From the point of view of mode conversion, the bend simply plays the role of a kind of defect in a regular waveguide 22 , which couples the symmetric modes radiated from the antenna to all the other modes of the waveguide available at a given frequency. For those frequencies slightly higher than the turn-on value, the spin-wave beam inside the bend spreads almost all over the waveguide width, so that the antisymmetric modes cannot be excited. Comparatively, for those higher frequencies, the beam entering the bend occupies only a portion of the waveguide width, inducing an asymmetric distribution of the spin-wave amplitude, and thus is capable of driving the modes with the same symmetry. This explains why the spin waves with the lowest frequencies can continuously follow the waveguide, but those with higher frequencies cannot (cf. top and bottom panels in Fig. 3).
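As a numerical sketch of Eqs. (1) and (2), the beating map in the horizontal arm can be evaluated directly. The values of w*, C n , and Q n follow the 7 GHz fit quoted in the Methods; the wave numbers k n x and the decay length D are illustrative placeholders, since the paper reads them off the simulated dispersion relation (Fig. 4) and a separate exponential fit:

```python
import numpy as np

# Time-averaged modulus of two superposed width modes, Eqs. (1)-(2).
w_star = 240e-9                          # effective waveguide width, m (fit)
C = [1.0, 0.8]                           # relative excitation efficiencies (7 GHz fit)
Q = [0.0, np.deg2rad(15.0)]              # excitation phases (7 GHz fit)
k = [2 * np.pi / 500e-9, 2 * np.pi / 900e-9]  # assumed longitudinal wave numbers, 1/m
D = 2e-6                                 # assumed decay length, m
omega = 2 * np.pi * 7e9                  # excitation frequency, rad/s

x = np.linspace(0.0, 2e-6, 400)          # distance along the horizontal arm
y = np.linspace(0.0, w_star, 120)        # transverse coordinate
X, Y = np.meshgrid(x, y)
times = np.linspace(0.0, 5e-9, 1250)     # 5 ns window in 4 ps steps, as in Methods

total = np.zeros_like(X)
for t in times:
    m = sum(C[n] * np.sin((n + 1) * np.pi * Y / w_star)
            * np.cos(omega * t - k[n] * X + Q[n]) * np.exp(-X / D)
            for n in range(2))
    total += np.abs(m)
pattern = total / len(times)             # zigzag-like beating map, shape (120, 400)
print(pattern.shape, float(pattern.max()))
```

Plotting `pattern` reproduces the kind of zigzag interference structure discussed above; the period of the zigzag is set by the difference k 1 x − k 2 x of the two wave numbers.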
Combining the bended section with a double-branch unit, a Mach-Zehnder type logic-NOT gate is formed [cf. Fig. 6(a)]. In this architecture, the antennae for inductive excitation and detection of the propagating spin waves are directly used as the input and output terminals of binary data. In real devices, the frequency of the excitation microwave is encoded as an input: 0/1, with 0 being represented by a low frequency and 1 by a high frequency. The amplitude of the induced voltage is coded as an output: 0/1, with 0 represented by a low amplitude and 1 by a high amplitude.
Micromagnetic simulations verify the functionality of the NOT gate [cf. Fig. 6(b)]. At the frequency of 6.8 GHz, the spin-wave beams injected into the double branches have a slight phase shift. When they converge in the right arm, constructive interference 23,24 occurs and gives rise to a high amplitude, i.e. logical 1. At 10 GHz, the spin-wave beams ramified into the double branches retain a phase difference of ~π, causing destructive interference 20 of these spin waves in the collection strip and giving a low amplitude, i.e. logical 0. Note that the amplitude ratio of the two logic output signals is m z (1)/m z (0) ~ 3, which can be further increased by optimizing pertinent factors. It is seen that the bend itself in the waveguide behaves as a control element in the gate, so that no external control modules [1][2][3][4][5][25][26][27][28] are required any more, greatly simplifying the structure of the gate. Its operation is achieved solely by the bended waveguide as well as the excitation and probe antennae, efficiently improving the device speed by eliminating the external action cycle. We would like to point out that via frequency coding, the spin-wave filter developed by Kim et al. 29 can directly realize a logic-NOT gate, which also does not need an extra control unit for operation, thus representing a simple and even more compact architecture.
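The gate contrast can be rationalized with a simple two-beam model: for equal beam amplitudes (an idealization; the simulated ratio of ~3 reflects unequal branch amplitudes and losses), the recombined amplitude scales as |cos(Δφ/2)|. The phase values below are illustrative stand-ins for the near-zero and near-π shifts at 6.8 and 10 GHz:

```python
import numpy as np

def output_amplitude(delta_phi: float) -> float:
    """Relative amplitude of two equal unit beams with phase difference delta_phi."""
    # |exp(i*0) + exp(i*dphi)| = 2*|cos(dphi/2)|; normalized to the in-phase case.
    return abs(np.cos(delta_phi / 2.0))

for label, dphi_deg in [("6.8 GHz (logic 1, ~in phase)", 15.0),
                        ("10 GHz (logic 0, ~pi shift)", 170.0)]:
    print(f"{label}: relative output = {output_amplitude(np.deg2rad(dphi_deg)):.2f}")
```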
In principle, arbitrary logic operations can be performed by using a logic-NOT gate and at least one additional two-input logical gate 2,5,30 . Several such integrated gates have been acquired in previous works by combining the basic NOT gates in a suitable manner 2,3,5 .
In conclusion, we examine the spin-wave propagation in a bended magnetic waveguide on the submicrometer scale by means of micromagnetic simulations. The transmission of the spin waves through the bend is found to depend on the used frequency. For the lowest frequencies, the traveling spin waves continuously follow the entire waveguide 7 . For the higher frequencies, the spin waves propagate along complex zigzag-like path 9 formed in the waveguide when they pass through the bended section. In the later case, the bend functions as a mode converter for the transmitting spin waves. By use of the frequency-dependent property, we numerically demonstrate the functionality of a Mach-Zehnder interferometric logic-NOT gate based on the bended waveguide. We also suggest that it is possible to construct a logical NAND gate by combining such NOT gates 2,5 . These findings would be of importance for the design and optimization of magnonic devices involving bended waveguiding components.
Methods
Micromagnetic simulations. The LLG Micromagnetics Simulator, a finite-difference code 31 , was adopted to numerically solve the Landau-Lifshitz-Gilbert equation, the equation of motion of magnetization 32 , by which the linear spin-wave dynamics 33 can be well described. The target waveguide was divided into a regular array of cubic meshes with a size of 5 or 2.5 nm. Material parameters typical of Permalloy were used, i.e., the saturation magnetization is 860 emu/cm^3, the exchange stiffness is 1.3 × 10^−6 erg/cm, and the magnetocrystalline anisotropy was neglected. The Gilbert damping parameter is as high as 0.1 and 0.5 in the gray and blue end regions, respectively, and it is 0.01 in the region outside the absorbing ends. The equilibrium configuration of the static magnetization in the bended waveguide under a zero external field is extracted by relaxing the system from artificially given states with saturated magnetization. In the ground state, the magnetization well follows the waveguide [see Fig. 5(b)] except in the end regions, where the magnetization deviates from the long axis of the waveguide due to demagnetization.
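For reference, the equation integrated by such finite-difference codes is the Landau-Lifshitz-Gilbert equation, written below in its standard Gilbert form (a generic statement of the equation, not the simulator's internal convention):

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma \,\mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha \,\mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t},
\qquad \mathbf{m} = \mathbf{M}/M_{\mathrm{s}},
```

where γ is the gyromagnetic ratio, α the Gilbert damping (0.01 outside the absorbing ends here), and H_eff the effective field combining the exchange and demagnetizing contributions (no Zeeman term, since no bias field is applied).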
A series of dynamic simulations using different values of the antenna width (5 to 60 nm), the amplitude of the excitation field (H 0 , 5 to 100 Oe), and the mesh size (2.5 or 5 nm, both smaller than the exchange length of Py) has been run to check the influence of these parameters on the transmission characteristics of the spin waves. Finally, no qualitative difference has been found among the simulation results. We thus estimate that the observed spin-wave dynamics lies well within the linear regime.
Experimentally, edge roughness in the real samples of a bended waveguide cannot be avoided for current microfabrication technologies. However, the real roughness can be numerically represented by the staircase-like edge steps along the curved boundary of a bended waveguide due to the rectangular mesh used in the finite-difference micromagnetic simulations (see Ref. 34). The fact that various simulations in our study with meshes of different sizes give the same results suggests that the edge roughness does not play a significant role for the addressed phenomenon.
Fig. 6: (a) The horizontal arm situated between the bend and the bifurcation is 500 nm in length. The double branches made of two 60°-arc-shaped strips are 600 nm long in the x-axis, and each of them is 80 nm wide. The strip connected to the double branches is ~430 nm long and 80 nm wide. Each component of the gate is 5 nm thick. The red- and green-colored microstrips are data input and output terminals, respectively. Inset: truth table for the gate. (b) Snapshots of spin-wave distribution inside the gate for the frequencies of (left panel) 6.8 GHz and (right panel) 10 GHz. Insets: each left subpanel shows the input signal loaded onto the radiation antenna, and each right subpanel plots the corresponding output signal detected by the reception antenna. If a low-frequency/high-frequency microwave signal (logic 0/1) is sent into the waveguide via the input antenna, the output antenna will receive a high-amplitude/low-amplitude spin-wave signal (logic 1/0) due to the constructive/destructive interference of the spin waves in the collection strip. (c) The logical NAND gate composed of two NOT gates. The output arms of these NOT gates are connected together, forming the single output terminal of the NAND gate. The spin-wave injection antennae are used as the input terminals.
Theory. In Eq. (1), we assume that all the spin waves have the same values of D and w*, regardless of the mode number and frequency. The value of D is extracted by fitting the decay of the simulated FFT amplitude of the 5 GHz spin wave behind the bend (the top-left panel in Fig. 3) against the propagation distance using an exponential function. The values of k n x are derived from the simulated dispersion relation (Fig. 4) for the given frequencies (ω), and w* and C n are left as free parameters. It is found that the value of w* = 240 nm makes the theoretical amplitude maps agree well with the simulated ones. For the frequency of 7 GHz, considering only the 1st and 2nd modes, one finds that C 1 = 1, C 2 = 0.8, Q 1 = 0, and Q 2 = (15/180)π allows the best agreement between the theoretical and simulation results. For 9 GHz, the 3rd mode is additionally taken into account, and then C 1 = 1, C 2 = 1.2, C 3 = 0.8, Q 1 = 0, Q 2 = π, and Q 3 = (165/180)π leads to the best consistency. In the time-averaging procedure, the integration expressed by Eq. (2) is replaced by a summation, running over a time interval of τ = 5 ns discretized into 1250 time steps (i.e., the step length is 4 ps), to mimic the finite-difference nature of the simulations 31 . | 2016-05-12T22:15:10.714Z | 2013-10-16T00:00:00.000 | {
"year": 2013,
"sha1": "9f1012dcbb421fac0912aa1a73465857757542fa",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/srep02958.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f1012dcbb421fac0912aa1a73465857757542fa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
231707591 | pes2o/s2orc | v3-fos-license | DEVELOPMENT OF EMOTIONAL INTELLIGENCE IN STUDENTS WITH SPECIAL EDUCATIONAL NEEDS
The aim of the study is to analyse teachers’ opinion about the development of emotional intelligence in students who have special educational needs in inclusive schools. The method of the questionnaire survey was chosen to achieve the research aim. The study employed two developed questionnaires based on the principle of Likert scale (1-5). The questionnaires were developed for teachers according to the parts of the structure of trait emotional intelligence for students aged 7 to 12 and 13 to 17 years. The questionnaires potentially reveal teachers’ opinion about several aspects of emotional intelligence development: how and how often teachers assess, organize targeted and non-targeted educational activities for individual parts of the structure of students’ emotional intelligence and promote awareness of their importance. After conducting the research, it was found that the development of parts of the structure of emotional intelligence was carried out differently depending on students’ age. Differences were recorded both between the developed parts of the structure and between applied educational activities. Both statistically significant and insignificant research data were obtained and it was found that the development of emotional intelligence for younger students was more oriented to their inner self-knowledge; while for older students, to the aspects of socialization.
Introduction
Emotions play a very significant role in people's lives. First, it must be mentioned that emotions manifest themselves constantly and differently. Different reasons and contexts can cause positive or negative emotions, determining changes in people's behaviour, the effectiveness and direction of decision-making, relationship formation, and other aspects of daily life (Siegling, Saklofske, & Petrides, 2015; Salovey, Mayer, & Caruso, 2016). It should be emphasized that emotions, in contrast to feelings, are experienced unconsciously; their manifestation is involuntary and caused by external factors, which makes it almost impossible to conceal them (Pettinelli, 2012). In other words, emotions are an inevitable reflection of a person's state. Having realised the importance of emotions and emotional powers for people's quality of life, the recent decades witnessed a very strong focus on emotional intelligence as a separate construct (Dhani & Sharma, 2016).
It is precisely emotional intelligence and its structure that allow a person not only to properly perceive one's own and other people's emotions and their consequences but also to properly use that information for various purposes (Salovey et al., 2014; Petrides & Mavroveli, 2018; Rodrigues & Machado, 2019). Although the functions and main ideas of emotional intelligence do not differ substantially, there are several basic theories and models with different structures (Petrides & Furnham, 2001; Bar-On, 1997; Goleman, 1995; Salovey et al., 1990): 1) Goleman's theory; 2) Salovey-Mayer-Caruso (ability) theory; 3) Bar-On (mixed) theory; 4) Petrides (trait) theory. The scientific literature reveals that the biggest difference between these theories of emotional intelligence lies not in their structure but in their research instruments (Petrides, 2011; Mayer, Caruso, & Salovey, 2016). This means that, when choosing between different theories of emotional intelligence, members of the academic community and other researchers working in the field of emotional intelligence must first consider the research instruments. Therefore, the essential theory of the article, which served as a basis for conducting the survey of research participants, was the trait emotional intelligence theory, chosen due to several main criteria.
Contrary to other models of emotional intelligence, the structure of the trait model is the least debatable with regard to its validity at both the theoretical and empirical levels (Petrides & Mavroveli, 2018). A large body of scientific research (Dehghanan, Abdollahi, & Rezaei, 2014; Indradevi, 2015; Alegre, Perez-Escoda, & Lopez-Cassa, 2019) reveals that the parts of the structure of this model correlate with the main personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism), which means that the model has clearly defined links not with subjectively understood abilities and experiences, as in other models, but with a specific structure of the personality (the Big Five personality traits). Other models lack similar substantiation and links, because identifying the structure of other models with abilities, competencies, and the like is debatable. Researchers (Locke, 2005; Petrides, 2011; Siegling et al., 2015) question whether other emotional intelligence models can be identified with, for example, abilities or competencies due to their highly abstract structure and lack of integrity. It is also worth mentioning that research based on the trait emotional intelligence model reveals the structure of emotional intelligence as it manifests itself under normal everyday conditions, whereas other models typically measure only the maximum potential of emotional intelligence. This means that the results of research on trait emotional intelligence reveal not only the aspects of individual parts of the structure of emotional intelligence but also general aspects of its manifestation, both under normal life conditions and at the maximum need for its manifestation. Finally, the research instruments of the trait emotional intelligence model measure not only the overall emotional intelligence quotient (EQ) but also various aspects related to success in life (the propensity to certain forms of behaviour, mental parameters, individual strengths, etc.). This is because the model has validated and scientifically grounded research instruments for different age groups and different structures of emotional intelligence for children, adolescents, and adults. For these reasons, the trait emotional intelligence model is also described as one of the most widely used models for determining potential success in life. Research based on other models of emotional intelligence lacks such variation in research instruments; therefore, most research related to emotional intelligence is conducted with adults or older people (Petrides, 2018; Siegling et al., 2015).
These days, emotional intelligence is often treated as a more important determinant of success in life than general intelligence. Such a statement is formulated for several reasons. First, emotional intelligence is one of the key indicators in predicting success in life. Scientific research data demonstrate that people with high emotional intelligence are more successful in various aspects of life (relationships, the workplace, school, and the like) (Bhootrani & Junejo, 2016; Ozer, Hamarta, & Deniz, 2016). The second reason is the manifestation of emotional intelligence in everyday life and its application in various fields: clinical, educational, organizational, etc. (Petrides & Mavroveli, 2018). This means that the manifestation of emotional intelligence takes place in many different contexts and at all age periods. It can be stated that the development of emotional intelligence is particularly important for students who have special educational needs. As stated by Kumar (2013), students who have special educational needs are inclined to emotional instability. According to the researcher, such students also face emotional problems when learning in inclusive settings: they often compare themselves with other students and assess themselves negatively, find it difficult to concentrate, and are unable to express feelings and calm themselves. However, such conditions are favourable for the development of emotional intelligence due to ongoing social interactions, different activities, and challenges. Students who have special educational needs and high emotional intelligence distinguish themselves by better academic performance and learning motivation, better planning and problem-solving abilities, a strong sense of friendship, appropriate behaviour, and a positive attitude towards school and learning (Kumar, 2013). Similar insights are provided by Biswal (2015), who states that inclusive education for students who have special educational needs is a great condition for developing their emotional intelligence and exercising their emotional powers.
The development of emotional intelligence in children who have special educational needs in inclusive schools can take place in two ways: through specialized programmes and activities aimed at developing emotional intelligence, and by experiencing various emotions during lessons and interacting with teachers and other students. Scientific research data (Taylor, Oberle, Durlak, & Weissberg, 2017; Gershon & Pellitter, 2018; Carissoli & Villani, 2019) reveal that the integration of programmes developing social, behavioural, and emotional powers into school curricula results in manifold outcomes. Students who have completed these programmes distinguish themselves by better mental health, social abilities, and academic performance, are more positive about themselves, and are less prone to antisocial behaviour and harmful habits. It can be assumed that the integration of such programmes into the curriculum can have a positive effect on the emotional intelligence of students who have special educational needs. Emotional intelligence can be developed in inclusive schools even when similar targeted programmes are not applied, owing to the different emotions constantly experienced at school as well as the social interactions with teachers and other students during the (self-)educational process. Such links occur constantly: during individual and group work, during breaks, during trips, and the like. Therefore, proper student-teacher communication is of great importance for the manifestation and management of students' emotions (Grams & Jurowetzki, 2015). Teachers' ability to manage students' emotions enables them to create classroom conditions favourable not only for teaching but also for learning, and allows orientation to target students (including students who have special educational needs), their academic performance, proper communication with classmates, and their emotional powers (Mainhard, Oudman, Hornstra, Bosker, & Goetz, 2018). Such emotional intelligence development programmes and perspectives of communication with teachers are particularly important for students who have special educational needs and learn in inclusive classrooms. It can be stated that in the school context, emotional intelligence can be developed both during its targeted development, by applying various specialized programmes, and during non-targeted education, by interacting with students and helping them manage their emotional powers throughout the whole (self-)educational process.
The relevance of the study is grounded in the conception that emotional intelligence is one of the key factors in creating a successful life. Its development is especially important for students who have special educational needs (due to their emotional sensitivity, frequent social exclusion, and the like). Higher emotional intelligence allows students to function more successfully not only at school but also in other areas of life. For these reasons, it is important to identify how the development of emotional intelligence of students of different ages takes place in inclusive schools.
The research object is the development of emotional intelligence in students who have special educational needs, based on the trait theory.
The research aim is to analyse teachers' opinion about the development of emotional intelligence in students who have special educational needs in inclusive schools.
Problem questions: What parts of the structure of emotional intelligence are most often developed in inclusive schools according to teachers? What educational activities do teachers apply while developing emotional intelligence in students who have special educational needs? Does the development of emotional intelligence differ depending on students' age?
Research methodology and organization
The questionnaire survey method was chosen to achieve the aim of the study. Two questionnaires developed according to the trait emotional intelligence theory were applied during the research. The questionnaires were developed for teachers according to the parts of the structure of emotional intelligence defined by this theory for students of two age groups: students aged 7 to 12 years and students aged 13 to 17 years. The questionnaires reveal teachers' opinions on several aspects of developing emotional intelligence: 1) how and how often teachers assess individual parts of the structure of students' emotional intelligence; 2) what targeted educational activities for different parts of the structure of students' emotional intelligence teachers organise and how often they organise them; 3) what non-targeted educational activities for different structural parts of students' emotional intelligence teachers organise and how often they organise them; 4) how and how often teachers promote students' awareness of the importance of emotional intelligence and its parts. In order to determine teachers' opinions on the above-mentioned aspects, both questionnaires were designed based on the principle of the Likert scale (1-5). Following this principle, 4 questions are attributed to every structural part of emotional intelligence, revealing, for example, how often assessment, targeted education, non-targeted education, and promotion of awareness of importance were applied by teachers to a given structural part of emotional intelligence, such as adaptation.
The first questionnaire was designed to identify the opinion of teachers educating students aged 7 to 12 years about the development of students' emotional intelligence. Given that the structure of emotional intelligence of students of this age consists of 9 parts, the questionnaire consisted of 36 items (4 questions for each part of the structure of emotional intelligence). The same principle was applied for compiling the second questionnaire intended for revealing the opinions of teachers working with students aged 13 to 17 years. The emotional intelligence structure of students of this age consists of 15 parts; therefore, 60 items were formed in this questionnaire.
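As an illustration of this item layout, the sketch below aggregates one teacher's 36 Likert answers from the younger-age questionnaire into per-part scores. The part names are taken from the results section below; the block ordering and the scoring-by-summation are illustrative assumptions, not the authors' documented procedure.

```python
# Hypothetical scoring sketch for the 7-12 questionnaire (36 items).
# Assumption (not from the source): items come in consecutive blocks of 4
# per structural part (assessment, targeted, non-targeted, awareness).

PARTS_7_12 = [
    "adaptation", "relationships with peers", "regulation of emotions",
    "low impulsivity", "perception of emotions", "self-motivation",
    "self-esteem", "affect management", "expression of emotions",
]
ITEMS_PER_PART = 4

def part_scores(responses):
    """Sum each consecutive block of 4 Likert values (1-5) into one part score."""
    assert len(responses) == len(PARTS_7_12) * ITEMS_PER_PART
    return {
        part: sum(responses[i * ITEMS_PER_PART:(i + 1) * ITEMS_PER_PART])
        for i, part in enumerate(PARTS_7_12)
    }

# One teacher answering "4" (often) to every item:
print(part_scores([4] * 36))  # every part scores 16
```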
Teachers' opinions about the development of structural parts of emotional intelligence that are characteristic solely of a certain age group are assessed without comparison with the opinions of other teachers (working with children of different ages). Teachers' opinions are compared with each other for the parts of the structure of emotional intelligence (adaptation, expression, perception, regulation of emotions, self-esteem, and self-motivation) that are present in both age groups. All items are based on Petrides' and Mavroveli's (2001; …) conceptions of the parts of the structure of the trait emotional intelligence theory. During the research, the following data collection methods were used: 1) the method of the analysis of scientific literature, which was applied to reveal theoretical insights and to form the research design; 2) the questionnaire survey method for the collection of empirical research data. Data analysis methods: 1) methods of descriptive and inferential statistics with the SPSS 21.0 programme (Mann-Whitney U test, Wilcoxon signed-rank test, standard deviation of the mean).
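The kinds of comparisons reported below can be reproduced with standard statistical libraries. A minimal sketch follows, using SciPy in place of SPSS (which the authors actually used) and entirely made-up per-teacher part scores:

```python
# Illustrative group comparisons with SciPy; all data are invented.
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical "adaptation" part scores from the two teacher groups:
teachers_7_12 = [14, 16, 12, 15, 13, 17, 14]   # teach 7-12-year-olds
teachers_13_17 = [11, 12, 10, 13, 12, 11]      # teach 13-17-year-olds

# Mann-Whitney U: independent samples (between the two age groups).
u_stat, p_u = mannwhitneyu(teachers_7_12, teachers_13_17, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_u:.3f}")  # p < 0.05 => significant

# Wilcoxon signed-rank: paired samples (e.g., targeted vs non-targeted
# education scores from the same teachers).
targeted = [15, 14, 16, 13, 15, 14, 16]
non_targeted = [13, 15, 14, 12, 14, 13, 15]
w_stat, p_w = wilcoxon(targeted, non_targeted)
print(f"Wilcoxon W = {w_stat}, p = {p_w:.3f}")
```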
The study involved 130 teachers-specialists (N=130); 92 (N=92) of them were teachers educating students aged 7 to 12 years, of whom 37% (N=34) were primary education subject teachers and 63% (N=58) were teachers-specialists (2 primary education teachers-speech therapists, 19 primary education subject teachers who also worked as non-formal education specialists, 37 teachers-tutors). There were 38 (N=38) teachers educating students aged 13 to 17 years, of whom 66% (N=25) were basic education subject teachers and 34% (N=13) were basic education subject teachers-specialists (4 teachers-special educators, 9 teachers who also worked as non-formal education specialists). The research sample was formed by applying purposive sampling according to the following criteria: 1) research participants have contact hours with students who have special educational needs; 2) the participants of the research are teachers of formal education subjects; and 3) they participate in the study voluntarily. The research sample was oriented to teachers and teachers-specialists, as they spend the most contact time with students who have special educational needs; besides, their activities during lessons can be richer and more diverse due to the possibilities to differentiate the curriculum and include targeted activities developing emotional intelligence.
To conduct the research, verbal consents of the heads of institutions and research participants were obtained. The ethics of research was ensured following several principles: 1) the names of the institutions that participated in the research are not mentioned either in the results or in other texts of this article; 2) the personal data of the research participants or the encryptions of those data are not provided in the texts of the study; 3) the research results are presented only as general statistical indicators, without revealing the indicators of different institutions or their differences; and 4) the results of the study are published only in scientific articles and presented at scientific conferences.
Research results
First, the parts of the structure of emotional intelligence that are characteristic of students of both age groups are analysed. All results are analysed according to individual parts of the model rather than based on the general construct of emotional intelligence. The research participants assessed the six parts characteristic of the structure of emotional intelligence of students in both age groups. All parts are related to the above-mentioned educational activities. The frequency of the research participants' approval revealed how often the manifestation of structural parts of emotional intelligence is assessed; then, how often they are developed through targeted activities; how often, through non-targeted activities; and, finally, how often students who have special educational needs are encouraged to understand the importance of the respective part. It is important to emphasize that these peculiarities of emotional intelligence development were identified for students who had special educational needs and learned in inclusive schools.
The results of the study related to teachers' assessment of the manifestation of parts of the structure of emotional intelligence are presented in Table 1. It is emphasized that assessment in this study is understood as a subjective and informal phenomenon; it is presented as the first activity of emotional intelligence development, because such an assessment may determine other perspectives of developing the part of the structure. Analysing the results in terms of assessment, it was noted that teachers assessed 4 out of 6 parts with higher scores for students aged 7-12 years. A statistically significant difference between the age groups was recorded in the adaptation part. Teachers who took part in the study indicated that greater expression of emotions and self-motivation was assessed among older students (13-17 years old). This means that older students are potentially more emotional and more able to motivate themselves for a respective activity. The smallest difference between the parts of emotional intelligence in the aspect of teachers' assessment was found in the parts of self-esteem (3.97) and self-motivation (5.13); the largest, in the parts of adaptation (11.15) and regulation of emotions (8.89). According to the teachers, students of different ages are most similar in their self-esteem and the ability to motivate themselves and most different in their abilities to adapt and regulate emotions. Based on the results of the research, it can be stated that the majority of structural parts of emotional intelligence, in the aspect of teachers' assessment, manifest themselves more among students aged 7 to 12 years.
After identifying how the research participants assess the manifestation of parts of the structure of emotional intelligence, the analysis was undertaken to find out how targeted activities for the development of the said parts were organized. These research results are also compared across students of different ages (see Table 2). It is accentuated that in this study, targeted education is defined as the organization or performance of certain targeted activities intended for the development of emotional intelligence. The analysis of the research results revealed an analogy with the peculiarities of teachers' assessment in the previous analysis: it was found that most parts of the structure of emotional intelligence are developed through targeted activities for younger students (7-12 years old). Self-esteem and self-motivation are more developed through targeted activities in older students. This means that self-motivation is possibly one of the priority abilities in inclusive schools for 13-17-year-olds. A statistically significant difference was recorded only in the part of self-esteem. Analysing the results of the study, it was observed that the smallest differences with regard to targeted education were found in perception of emotions (4.22) and self-motivation (0.82), and the largest in self-esteem (13.61) and regulation of emotions (12.35). Based on these results, it can be stated that the targeted development of the parts of emotional intelligence in students of different ages is most similar in the parts of perception of emotions and self-motivation and least similar in the parts of self-esteem and regulation of emotions. This means that older students are given more attention in developing their self-esteem, while younger students, in developing the regulation of their emotions. It can be stated that emotional intelligence in inclusive schools is more often purposefully developed in younger (7-12 years old) students.
The third part of the questionnaire was oriented to the development of parts of emotional intelligence through non-targeted activities; its essence is to find out how the research participants develop emotional intelligence through usual educational (lesson) activities (see Table 3). Analysing the results of the study, it was noticed that two statistically significant differences were identified in this part, in the parts of expression of emotions and perception of emotions. The results reveal that the parts of the structure of emotional intelligence, in the aspect of non-targeted education, are more often developed in younger students (7-12 years old). Based on the results of the study, it was found that the smallest differences in the aspect of the non-targeted development of emotional intelligence are in self-esteem (6.00) and regulation of emotions (4.79), and the largest in perception of emotions (17.07) and expression of emotions (14.58). This means that the parts of the structure of emotional intelligence of students of different ages, in terms of non-targeted education, are most similar in the parts of self-esteem and regulation of emotions and least similar in the parts of perception of emotions and expression of emotions. This may mean that in the lower grades, more creative educational activities are carried out, allowing younger students to self-develop their perception and expression of emotions. Based on the results of the study, it can be stated that the teachers who participated in the study more often organize non-targeted activities that possibly develop younger students' emotional intelligence.
The fourth part of the analysis identified how often the research participants encouraged students to perceive the personal benefit obtained from different structural parts of emotional intelligence as individual constructs and why it was important to strengthen these parts (see Table 4). It is highlighted that the promotion of the perception of importance is described as teachers' targeted actions increasing students' needs and willingness to develop their emotional intelligence. The analysis of the results revealed indicators that are almost analogous to the ones in the previous analysis: the perception of the importance of the parts of the structure of emotional intelligence is more often promoted in younger students (7-12 years old). It was observed that statistically significant data were recorded only in the adaptation part. Analysing the results of the study, the smallest statistical differences were found in the parts of self-esteem (0.34) and perception of emotions (8.46), and the largest in the parts of regulation of emotions (11.77) and adaptation (11.15). This means that the frequency of promoting the importance of parts of the structure of emotional intelligence in students of different ages is most similar in the aspects of self-esteem and perception of emotions, and most different with regard to regulation of emotions and adaptation.
In the fifth part of the analysis, the research results are presented by rank. After evaluating every part of the structure of emotional intelligence according to the sums of scores of the selected response variants, a total score is obtained for each part; a higher total determines a higher position in the ranking (a higher total means that education is applied to that part most). Such a ranking reveals the most frequently and the most rarely developed parts of the structure of emotional intelligence (see Table 5). The results are presented in descending order of total scores, having identified which parts of the structure of emotional intelligence are most often assessed, purposefully and non-purposefully developed, and the perception of whose importance is promoted most often. The parts with the highest frequency of education are marked by a lower rank number; those with the lowest frequency, by a higher one.

Table 5. Distribution of parts of the structure of emotional intelligence by rank in the aspect of both age groups of students

Rank  Part of the structure of emotional intelligence
1     Self-motivation
2     Self-esteem
3     Perception of emotions
4     Regulation of emotions
5     Adaptation
6     Expression of emotions

Analysing the results of the study, a distinct distribution of parts of the structure of emotional intelligence in both age groups is observed. Based on the results of the research, the following ranking of parts of the structure was determined: 1) self-motivation; 2) self-esteem; 3) perception of emotions; 4) regulation of emotions; 5) adaptation; 6) expression of emotions. Such a ranking reveals that students who have special educational needs and learn in inclusive schools are most educated, in the aspect of emotional intelligence, towards internal self-encouragement and self-suggestion to perform certain activities.
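The ranking logic described above reduces to sorting the parts by their summed scores. A small sketch follows, with invented totals (the paper reports only the resulting ranks):

```python
# Rank parts of the structure by summed frequency scores, highest first.
# The totals are fabricated for illustration; only the ordering mirrors Table 5.
totals = {
    "self-motivation": 842, "self-esteem": 830, "perception of emotions": 801,
    "regulation of emotions": 780, "adaptation": 765, "expression of emotions": 740,
}

for rank, part in enumerate(sorted(totals, key=totals.get, reverse=True), start=1):
    print(rank, part)
```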
Based on the results of the study, the most frequently developed parts of the structure of emotional intelligence of younger students are ranked in the following sequence: 1) adaptation; 2) relationships with peers; 3) regulation of emotions; 4) low impulsivity; 5) perception of emotions; 6) self-motivation; 7) self-esteem; 8) affect management; 9) expression of emotions. The distribution of the peculiarities of education is also presented: 1) non-targeted development of emotional intelligence; 2) promoting awareness of the importance of parts of its structure; 3) targeted development of emotional intelligence; 4) subjective assessment of parts of its structure. The same aspects were analysed with regard to older students. The most developed parts of trait emotional intelligence are ranked as follows: 1) self-esteem; 2) impulsivity control; 3) social awareness; 4) self-motivation; 5) management of other persons' emotions; 6) optimism; 7) assurance; 8) stress management; 9) expression of emotions; 10) perception of emotions; 11) regulation of emotions; 12) adaptation; 13) interpersonal relationships; 14) empathy; 15) happiness. The distribution of the peculiarities of education for older students is ranked in the following order: 1) organization of targeted activities developing emotional intelligence; 2) non-targeted development of emotional intelligence; 3) promoting awareness of the importance of parts of the structure of emotional intelligence; 4) subjective assessment of parts of the structure of emotional intelligence.
The analysis of the research results has revealed that the most frequently used activities developing emotional intelligence in inclusive schools are related to the targeted and non-targeted development of its structural parts. It can be assumed that, for these reasons, the development of emotional intelligence in inclusive schools takes place through all activities, and that the development of emotional intelligence in students who have special educational needs in inclusive schools possibly takes place independently of its assessment or of activities promoting the perception of its importance.
Discussion
The analysis of the research results has revealed that the teachers who participated in the study more often develop emotional intelligence in younger students. It is emphasized that such education is also applied to older students, but the comparison across ages shows that for older students it takes place less often and less holistically. This can be determined by several factors. First, as students grow older, the demand for higher academic performance also increases. Therefore, when educating older students, more attention is paid not to the development of emotional intelligence but to academic performance and traditional subjects. Such educational activities are targeted, oriented to academic performance, and more homogeneous, in contrast to the educational processes of younger students, where highly creative and manifold education is applied. Another reason is that emotional intelligence is most favourably developed at a younger age (Uzsalyné & Pécsi, 2016). It is especially important to create a safe environment for students, in which they could freely express their emotions and talk about them not only with each other but also with older persons, adults (Firestone, 2016). It can be assumed that the results of this study relate to the data provided by Uzsalyné and Pécsi (2016): more frequent development of emotional intelligence was found in lower, primary education classes. Based on the frequency of the research participants' approval, it can be stated that their activities are a favourable condition for developing the emotional intelligence of students who have special educational needs, particularly younger students. As stated by Firestone (2016), such conditions can be individual and group work, learning through play, a wide range of non-traditional lessons, and the like.
The results of the survey concerning older students have revealed that the main focus is on those parts of the structure of emotional intelligence that are more oriented to interpersonal relationships and socialization: social awareness and empathy. Similar results were reported by Mavroveli, Petrides, Rieffe, and Bakker (2007), who revealed that at this age, students have highly expressed competences and parts of the structure of emotional intelligence related to cooperation and relationships with peers. It has also been found that considerable attention is paid to stress management. Basically, such perspectives are a positive aspect for several reasons. First, according to Smetana, Robinson, and Rote (2015), socialization in the second decade of life is a very important part of human life, because it is precisely during this period that people are exposed to a large number of various conditions determining further and multifaceted development. Students of this age experience constant and diverse changes: physiological and mental as well as changes in communication and relationships with the environment. Students' abilities to adapt to ongoing change can also be revealed through socialization, constant communication with peers and older people, and learning to accept that change (Grusec & Hastings, 2015). This means that the development of the parts of the emotional intelligence structure associable with socialization processes is very important for students of this age. The factor of developing stress management abilities also has a substantial impact on adolescents. According to the data of the World Health Organization (2019), various factors causing stress are encountered during adolescence, including a greater need for autonomy and recognition among peers, the perception of gender, and easier access to various kinds of information and technologies. This means that the stress coping abilities developed in inclusive schools are of particular importance. The results of the study revealed that the most developed parts of the structure of emotional intelligence among older students encompassed some of the most important aspects of the lives of people of this age.
Several difficulties were encountered during the study. One of them was the researcher's limited involvement in the data collection process: due to the sample of the study and the participants' workload, it was not possible for the researcher to be present during the surveys and answer the research participants' questions directly. Another difficulty was forming a suitable research sample. Not all inclusive school teachers educate students who have special educational needs, and not all schools that have such students readily give permission to conduct the research. A large share of the research sample works not only as teachers but also holds other positions in the same institution (for example, as form tutors, social pedagogues, and the like). It is important to emphasize that research participants were selected according to the respective selection criteria. Part of the research sample was reached through the administration of the educational institution after obtaining the consents of prospective research participants.
The conducted study has certain weaknesses. First, the research instrument is not a validated questionnaire, although it is designed according to the structure of trait emotional intelligence for different age groups. There is a possibility that not all teachers who took part in the research understood the questionnaire concepts or the given examples properly, which could have affected the results. Another weakness is that the research sample consisted only of teachers or teachers-specialists in direct contact with students who have special educational needs. The decision to include only teachers and to exclude other specialists who did not teach lessons was mainly determined by the orientation of the research: the activities organized during lessons. However, there remains a possibility that other professionals would provide different data, little related to lesson activities.
When preparing similar studies in the future, it is recommended to use more diverse data collection and analysis methods and to select additional research instruments (e.g., structured interviews, content analysis, and the like) related to the chosen emotional intelligence theory for even greater data saturation. It is also advised to ensure a more active participation of the researcher in the data collection process: when investigating emotional intelligence not as an integral phenomenon but through its separate structural parts, it is very important to clearly perceive and distinguish their differences in order to give clear answers to the questions arising for the research participants, this way ensuring the informativeness of the research data. The study sample should also include child support professionals and other school staff who may provide useful data. Finally, it is recommended that similar studies be conducted in other cities, towns, and the like, this way revealing general similarities and differences between results. It is advised that educational institutions interested in the development of students' emotional intelligence should clearly and purposefully name the theory and the model of emotional intelligence according to which education will be carried out, because different models of emotional intelligence have different structures and research instruments, and in the long run, these are very important factors.
Summary
Simas Garbenis, Renata Geležinienė, Šiauliai University, Lithuania
Greta Šiaučiulytė, Šiauliai Region Association of Social Pedagogues, Lithuania

Emotions are becoming an increasingly relevant construct for professionals, academicians, and researchers in various fields. It is namely emotions that are the determining factor initiating changes in people's behaviour, the effectiveness and direction of decision-making, the formation of relationships, and other aspects of daily life (Siegling, Saklofske, & Petrides, 2015; Salovey, Mayer, & Caruso, 2016). In other words, emotions are an inevitable reflection of a person's state. As the importance of emotions and emotional powers for people's quality of life came to be recognised, recent decades witnessed a very strong focus on emotional intelligence as a separate construct (Dhani & Sharma, 2016).
Although the functions and basic ideas of emotional intelligence principally do not differ, there are several basic theories and models with different structures (Petrides & Furnham, 2001; Bar-On, 1997; Goleman, 1995; Salovey et al., 1990): 1) Goleman's theory; 2) the Salovey-Mayer-Caruso (ability) theory; 3) the Bar-On (mixed) theory; and 4) the Petrides (trait) theory. The scientific literature reveals that the biggest difference between these emotional intelligence theories lies not in their structure but in their research instruments (Petrides, 2011; Mayer, Caruso, & Salovey, 2016). The essential theory of the article, which served as the basis for the survey of the research participants, was the theory of trait emotional intelligence. It was chosen according to several criteria: 1) the parts of the structure of this model correlate with the basic personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism) making up the Big Five personality structure; 2) in contrast to other emotional intelligence models, the trait model reveals the structure of emotional intelligence as it manifests itself under normal everyday conditions rather than in situations where emotional powers are used to the maximum; 3) the research instruments of this model measure not only the total emotional intelligence quotient (EQ) but also various aspects related to success in life.
These days, emotional intelligence is often treated as a more important aspect determining success in life than general intelligence. Based on scientific research data, it can be stated that people with high emotional intelligence are more successful in various aspects of life (Bhootrani & Junejo, 2016; Ozer, Hamarta, & Deniz, 2016). The manifestation of emotional intelligence in everyday life and its application in various fields (clinical, educational, organizational, etc.) are perceived as everyday phenomena (Petrides & Mavroveli, 2018).
It can be stated that the development of emotional intelligence is particularly important for students who have special educational needs. Such students also face emotional problems when learning in inclusive settings: they often compare themselves with other students and assess themselves negatively, have difficulty collecting their thoughts, and are unable to express feelings and emotions or calm themselves (Kumar, 2013). However, such conditions are favourable for the development of emotional intelligence due to ongoing social interactions, different activities, and challenges. Students who have special educational needs and high emotional intelligence distinguish themselves by better academic performance and learning motivation, better planning and problem-solving abilities, a strong sense of friendship, appropriate behaviour, and a positive attitude towards school and learning (Kumar, 2013; Biswal, 2015).
Based on these statements, the main aim of the research was set: to analyse teachers' opinions about the development of emotional intelligence of students who have special educational needs in inclusive schools.
After conducting the research, it was found that the development of emotional intelligence was carried out differently depending on students' age. Differences were recorded both between the developed parts of the structure of emotional intelligence and between the applied educational activities. Both statistically significant and insignificant research data were obtained, and it was identified that the development of emotional intelligence in younger students was more oriented to their inner self-knowledge, while in older students, to the aspects of socialization. In inclusive schools, the main activities of emotional intelligence development are carried out through its targeted and non-targeted development.
"year": 2020,
"sha1": "485237ca9fe657aeb59bdb1b976b162d62ff3463",
"oa_license": "CCBY",
"oa_url": "http://socialwelfare.eu/index.php/sw/article/download/525/388",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d1fd88c84bdb4e33097332518a5ba5c6edf09696",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": []
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.