Two-step synthesis of fatty acid ethyl ester from soybean oil catalyzed by Yarrowia lipolytica lipase

Background
Enzymatic biodiesel production by transesterification in solvent media has been investigated intensively, but glycerol, as a by-product, can block the immobilized enzyme, and excess n-hexane, as a solubilizing aid, reduces the productivity of the enzyme. Esterification, a solvent-free system for biodiesel production that releases no glycerol, has been developed, and two-step catalysis of soybean oil, hydrolysis followed by esterification, with Yarrowia lipolytica lipase is reported in this paper.

Results
First, soybean oil was hydrolyzed at 40°C by 100 U of lipase broth per 1 g of oil with approximately 30% to 60% (vol/vol) water. The free fatty acid (FFA) distilled from this hydrolysis mixture was used for the esterification of FFA to fatty acid ethyl ester by immobilized lipase. A mixture of 2.82 g of FFA and equimolar ethanol (added in three steps) was shaken at 30°C with 18 U of lipase per 1 g of FFA. The degree of esterification reached 85% after 3 hours. The lipase membranes were then taken out, dehydrated and reused in fresh esterification; over 82% esterification was maintained even when the reaction was repeated every 3 hours for 25 batches.

Conclusion
The two-step enzymatic process, which is solvent-free and releases no glycerol, demonstrated higher efficiency and safety than enzymatic transesterification and seems very promising for lipase-catalyzed, large-scale production of biodiesel, especially from high acid value waste oil.

Background
Biodiesel fuel (fatty acid methyl ester or ethyl ester) is renewable and biodegradable and has "environmentally friendly" features; for example, it can be produced from animal and vegetable oils, and the number of carbon atoms present in the exhaust is equal to that initially fixed from the atmosphere [1]. The number of researchers studying biodiesel has steadily increased during the past decade, and methods of large-scale biodiesel production based on acid or alkaline catalysis have been widely used. However, they have many well-known drawbacks, including the difficulty of recycling glycerol, the need to eliminate catalyst and salt, and their energy-intensive nature. Designed to overcome these drawbacks, enzymatic methods of producing fatty acid methyl ester (FAME) or fatty acid ethyl ester (FAEE) from soybean oil and alcohol have been developed and are dominated by the transesterification reaction. The advantages of transesterification are that the enzyme can be reused and that the operating temperature (40°C) is lower than that in other techniques. Its disadvantages are that the catalytic activity of lipase is inhibited by alcohol, the immobilized enzyme is blocked by the by-product glycerol, and the productivity of lipase is decreased by organic solvents such as n-hexane [2] and tert-butyl alcohol [3,4]. Safer and more environmentally friendly enzymatic methods can be developed on the basis of solvent- and glycerol-free catalysis. When the catalytic process does not involve an organic solvent, lower cost, higher substrate concentration and greater production volume can be achieved. For example, Du et al. [5] studied Novozyme 435-catalyzed transesterification of soybean oil and methyl acetate directly for biodiesel production, and a yield of 92% was obtained.
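For reference, the overall stoichiometry underlying such two-step conversions is standard (shown here with ethanol as the acyl acceptor, as used later in this paper):

    triglyceride + 3 H2O  --(lipase)-->  3 FFA + glycerol      (hydrolysis)
    FFA + C2H5OH  <--(lipase)-->  FAEE + H2O                   (esterification)

Glycerol is released only in the aqueous hydrolysis step, so the subsequent esterification step is inherently glycerol-free and produces water as its sole by-product.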
Moreover, Watanabe et al. [6,7] transferred acid oil to FAME without use of an organic solvent by a two-step conversion process involving hydrolysis of acylglycerol by C. rugosa lipase followed by esterification of free fatty acid (FFA) to FAME by immobilized C. antarctica lipase. The inhibition of immobilized lipase by glycerol was eliminated during hydrolysis, and a high degree of esterification (98%) was achieved, with the immobilized lipase reused for 40 cycles. In the studies by Watanabe et al., the esterification itself was also carried out in two steps. The first esterification step was performed at 30°C with 1.0% (by weight) lipase in the reaction mixture. The second esterification step was performed in the presence of an absorbing solvent under conditions similar to those in the first step, following dehydration of the first esterification product. Previous studies have documented the reaction conditions and the stability of immobilized lipase in solvent-free catalytic systems, but production-level issues, such as the high cost of lipase immobilization and the long time required for esterification (each batch reaction took 24 hours), have not been fully considered. Here we describe a promising two-step enzymatic process for the conversion of soybean oil to FAEE: (1) soybean oil is hydrolyzed to fatty acid catalyzed by Yarrowia lipolytica lipase crude broth, and FFA is distilled from the hydrolyzed soybean oil mixture; and (2) FAEE is produced from FFA and ethanol by esterification with Y. lipolytica lipase immobilized on fabric membranes. The entire esterification process takes only 3 hours without organic solvent, uses only one kind of comparatively cheap lipase, and is free of glycerol inhibition.

Hydrolysis reaction
The efficiency of hydrolysis in lipase-catalyzed systems is affected by many factors, including the lipase, the ratio of oil to water and the degree of mixing of oil and water. To find optimal conditions for hydrolysis of acylglycerols, using sodium stearate as an emulsifier to promote mixing, we studied the effects of lipase concentration, the amount of water in the reaction solution and the course of the hydrolysis reaction.

Effect of emulsifier
Sodium stearate is an emulsifier that promotes the mixing of oil and water. Since sodium stearate can be converted to stearic acid by the addition of acid at the end of the reaction, it hardly affects the results of hydrolysis. Various quantities of sodium stearate solution were mixed with soybean oil, and the reaction was started by adding lipase broth. The degree of hydrolysis after 36 hours was >90% with 5% (wt/vol) sodium stearate addition, in contrast to 60% without sodium stearate addition (Figure 1A), and the oil and aqueous phases mixed well.

Figure 1: Effect of emulsifier, water content and lipase concentration on hydrolysis reaction. (A) Effect of emulsifier on hydrolysis reaction. The reaction system comprised 2 g of soybean oil, 200 Uw lipase broth, 1.2 g of water and 5% (wt/vol) added sodium stearate at 40°C and spun at 130 rpm. No emulsifier was added for the control reaction. (B) Effect of water content on hydrolysis reaction. The reaction system comprised 2 g of soybean oil, 200 Uw lipase broth, various quantities of water to produce the water:oil ratios indicated and 0.1 g of sodium stearate at 40°C for 36 hours and spun at 130 rpm. (C) Effect of lipase concentration on hydrolysis reaction.
The reaction system comprised 2 g of soybean oil, various quantities of lipase broth to produce the concentrations indicated, 1.2 g of water and 0.1 g of sodium stearate at 40°C for 36 hours and spun at 130 rpm.

Effect of water content
Excess water slows the velocity of hydrolysis because it reduces the concentration of lipase. Maximal efficiency was observed for water-to-oil volume ratios of 0.25 to 1.5. Larger ratios decreased the degree of hydrolysis and the efficiency of lipase (Figure 1B).

Effect of lipase concentration
The degree of hydrolysis was determined for lipase concentrations ranging from 25 to 400 Uw per 1 g of soybean oil. For 25 Uw/g lipase, the hydrolysis degree was low. As the lipase concentration was increased, the degree of hydrolysis also increased, reaching a maximal value of >90% at an enzyme concentration of 100 Uw/g (Figure 1C). Further increases in enzyme concentration did not increase the degree of hydrolysis.

Course of hydrolysis reaction
For large-scale hydrolysis, lipase broth (20,000 Uw; 100 Uw/g oil) was added to a system containing 200 g of soybean oil, 120 ml of water and 10 g of sodium stearate. The acid value and hydrolysis degree are shown in Figure 2A. In the early phase, the hydrolysis velocity was high; the hydrolysis degree reached 65% after 10 hours, 90% after 36 hours and 92.5% at 48 hours. The total acid value at 48 hours was 185 mg of KOH per 1 g of oil. The composition of triglycerides, diglycerides, glycerol monoesters and fatty acids was analyzed by thin layer chromatography (TLC) (Figure 2B).

Esterification reaction
Before the catalyzed esterification, the viscosity of the reaction mixture was analyzed. The fatty acid alone and the mixture of 2.82 g of fatty acid with 587 μl of ethanol had the lowest viscosities (8.75 cP each), compared with soybean oil alone and the mixture of 2.82 g of soybean oil with 587 μl of ethanol (46.5 cP and 18.75 cP, respectively). These viscosity data suggest that fatty acid and ethanol mix well and can be esterified by lipase catalysis in the absence of organic solvent, as Watanabe et al. reported [7]. Therefore, fatty acids distilled from the hydrolysis mixture, with a water content of approximately 147 to 153 ppm, were prepared for esterification. The amount of immobilized lipase, the ratio of ethanol to fatty acid, the manner of ethanol addition and the water content were all examined in the lipase-catalyzed esterification reaction to find the optimal conditions.

Effect of amount of immobilized lipase
The esterification activity of immobilized lipase, prepared as described in the Lipase activity determination section, was 150 Ue per 1 g of membrane. The amount of immobilized lipase was expressed as the weight percentage of lipase to fatty acid. As the lipase content increased from 0% to 18%, the esterification degree increased gradually (Figure 3A). Increasing the lipase content beyond 12% did not result in a significant increase in esterification degree during a 3-hour reaction, indicating that this value is appropriate for catalysis in this system.

Effect of ratio of ethanol to fatty acid
The addition of excess alcohol to the reaction mixture is often used to enhance reaction velocity and esterification degree. However, in this study, maximal esterification was obtained with an ethanol:fatty acid molar ratio of 1:1, and higher ratios produced a lower degree of esterification (Figure 3B).
This finding suggests that excessive ethanol suppresses lipase activity.

Effect of manner of ethanol addition
Ethanol serves as a reaction substrate; however, at high concentrations it also denatures proteins, including enzymes. We tried adding ethanol in a series of steps to minimize its denaturing effect. A 2:1 molar ratio of ethanol to fatty acid was achieved by 1-step, 3-step, 5-step and 10-step addition methods. The esterification reaction was inhibited when ethanol was added by the one-step method: the esterification degree was about 50% and did not increase with extended reaction time. However, the esterification degree increased to 81.6% as larger numbers of steps were used (Table 1).

Effect of water
Water has important dual roles in esterification systems: (1) it is essential for maintaining lipase conformation and catalytic activity, and (2) it is the product of esterification and affects the equilibrium state of the esterification or hydrolysis reactions. Therefore, the effects of the addition or removal of water were investigated in the esterification system. Figure 4 shows a direct comparison of the yield of FAEE with the addition of water at concentrations from approximately 0% to 10% (vol/vol). The results showed that a water content of <0.5% had almost no effect on esterification. On the other hand, the efficiency of esterification was low for water contents >1% (Figure 4A). However, if the immobilized lipase membrane was taken out, dried at 40°C and reused in fresh esterification, its catalytic activity recovered (Figure 4A). This finding suggests that excess water (>0.5%) inhibits esterification by affecting the reaction equilibrium and that the inhibition is abolished by removing the water released in the esterification process. Molecular sieves were added to the esterification system to absorb and remove the released water. Molecular sieve contents ranging from 0% to 27% (wt/vol) were tested. The addition of 9% molecular sieves increased the degree of esterification from 85% to 90% (Figure 4B).

Course of esterification reaction
Esterification was performed at 30°C in a 50-ml screw-cap tube containing 2.82 g of FFA distilled from soybean oil hydrolysis products and 0.33 g of immobilized lipase membrane, with 580 μl of ethanol added by the three-step method. The esterification degree and acid value were determined over the course of the process (Figure 5A), and each sample was developed on a TLC plate using petroleum ether, ethyl ether and acetic acid (Figure 5B). The results indicate that the reaction velocity was fast, the degree of esterification reached 85% after 3 hours and the product was pure, that is, it consisted solely of FAEE.

Stability of immobilized lipase catalysis system
Immobilized lipase membrane prepared as described in the Methods section displayed good long-term stability. The degree of esterification was still 82% after 25 batches and then declined rapidly, to 42% by the 30th batch (Figure 6). A total of 66.6 g of FAEE was produced in 29 batches catalyzed by the immobilized lipase membrane. The collected reaction solution was treated as described in the Esterification reaction section.
The final product was obtained with 95% recovery, with the following composition as determined by gas chromatography (GC): 15.4 g of palmitic acid ethyl ester, 4.4 g of stearic acid ethyl ester, 18.8 g of oleic acid ethyl ester, 55.2 g of linoleic acid ethyl ester and 6.2 g of α-linolenic acid ethyl ester per 100 g of FAEE.

Discussion
Y. lipolytica lipase has been applied frequently [8,9] for the degradation of hydrocarbons and the hydrolysis of esters. It retains its activity in many organic solvents and can catalyze esterification, transesterification and the resolution of racemic mixtures [10-13]. These characteristics make Y. lipolytica lipase a good candidate for catalyzing FAEE (that is, biodiesel) production. Here, a more efficient system for producing biodiesel was achieved by performing hydrolysis followed by esterification, both catalyzed by Y. lipolytica lipase. The conversion rate for the esterification reaction between fatty acids and ethanol catalyzed by Y. lipolytica lipase was 85% in a 3-hour reaction. In contrast, the conversion rate for esterification by C. antarctica lipase was only 70% in a 3-hour reaction; 5 hours were needed to reach 85% conversion. Moreover, when the method described in the Methods section was used to analyze esterification with immobilized C. antarctica lipase using a substrate of lauryl alcohol and lauric acid, an esterification activity of 4,000 Ue/g was observed. A loading of 60 Ue per 1 g of fatty acid was used to catalyze esterification in the report by Watanabe et al. [7]. In contrast, the lipase loading in the present paper was 18 Ue per 1 g of fatty acid, with 150 Ue per 1 g of fabric membrane. Consequently, a high-speed reaction and low consumption of lipase are realized when Y. lipolytica lipase is used. In the esterification reaction, Y. lipolytica lipase activity is inhibited by ethanol. When the molar ratio of ethanol to fatty acid was higher than 1:1, the esterification rate was reduced from 72.7% to 54.4% or less (Figure 3B). The inhibitory effect of ethanol was reduced significantly when a multistep addition method was used (Table 1). We compared the inhibition of esterification by methanol and ethanol using the same molar concentrations and the same three-step addition method. After 3 hours, the conversion rates with methanol and ethanol, as measured by acid-base titration, were 53.6% and 82.6%, respectively. This finding differs from that of Watanabe et al. [7], who found that the inhibitory effect of methanol on immobilized enzyme in a lipase-catalyzed esterification system was much higher than that of ethanol; the difference might be due to the difference in lipase source. The esterification reaction can also be promoted by controlling the water content of the mixture. The conversion rate was increased by approximately 5% when a molecular sieve was added to remove water, and it was lowered by approximately 5% when 3% (vol/vol) water was retained during the reaction (Figure 4). The inhibition of esterification by water could be reversed by removing the water, either with the molecular sieve or by drying the lipase membranes after esterification. We therefore suggest that water had no effect on enzyme activity but affected only the esterification equilibrium. In contrast, in the Candida sp. lipase-catalyzed transesterification system studied by Tan et al. [2], a certain amount of water promoted the reaction.
One possible explanation is that water is first involved in the hydrolysis of triglycerides, after which the lipase catalyzes esterification of the hydrolyzed fatty acids. Further study is needed to determine whether water is really involved in the transesterification reaction. Interestingly, unsaturated fatty acids are the preferred substrates of Y. lipolytica lipase. We found that reactions with oleic acid as the substrate could be conducted for 91 batches in the same esterification system [14], whereas fatty acids distilled from the hydrolysis mixture reached only 25 batches. This preference is also reflected in the composition of the fatty acids and fatty acid ethyl esters resulting from the two-step process. The fatty acid composition revealed by GC analysis was 20.0% palmitic acid, 4.9% stearic acid, 24.2% oleic acid, 47.0% linoleic acid and 3.9% linolenic acid, whereas the fatty acid ethyl esters consisted of 15.4% ethyl palmitate, 4.4% ethyl stearate, 18.8% ethyl oleate, 55.2% ethyl linoleate and 6.2% ethyl linolenate. The proportions of saturated and unsaturated fatty acid ethyl esters were 19.8% and 80.2%, respectively, in contrast to corresponding proportions of 24.9% and 75.1% for the fatty acids. Although the two-step catalysis system has resolved the inhibition by glycerol and removed the reaction solvents, the hydrolysis efficiency is low. In the present study, the major parameters affecting the hydrolysis reaction catalyzed by Y. lipolytica lipase were optimized and the oil/water interface was increased by the addition of an emulsifier; even so, a hydrolysis degree of only 91.9% was obtained within 48 hours. A possible reason is that Y. lipolytica lipase is a 1,3-specific lipase and does not act on the ester bond at the 2-position of triglycerides. Hydrolysis can continue only after the acyl group at the 2-position migrates spontaneously to the 1- or 3-position [15]. In practice, such spontaneous shifts happen slowly in aqueous media. Thus, a nonspecific lipase must be obtained, either by screening other kinds of lipase or by molecular biological methods.

Conclusions
In comparison to transesterification methods, the enzymatic method described here has several advantages [2]. It shows reduced inhibition of lipase activity and independence from petroleum sources, since ethanol is used as a substrate. Fatty acids and ethanol mix well on the basis of their similar polarity, and esterification reactions can be completed within 3 hours. It improves the safety of production by avoiding the use of low-boiling-point organic solvents such as petroleum ether and tert-butyl alcohol. The two-step method produces no glycerol during esterification, so the enzyme is not blocked and the life of the immobilized enzyme is extended. More efficient industrial production is realized, since only one low-cost lipase is used. The acid-base titration method was used to track the course of esterification, which is convenient for inspection in an industrial setting. These advantages indicate that the two-step protocol used in this study may be applicable to an industrial process for the production of biodiesel fuel from vegetable oil, especially from high acid value waste oil.

Methods
Raw materials
Soybean oil was purchased from Qinhuangdao Jinhai Grain & Oil Industrial Co., Ltd. (Qinhuangdao, China). Its fatty acid composition was 20.03% palmitic acid, 4.85% stearic acid, 24.17% oleic acid, 47.03% linoleic acid and 3.92% α-linolenic acid.
Olive oil was purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China). Heptadecanoic acid methyl ester (chromatographically pure) was from Sigma (USA). The Yarrowia lipolytica strain was from China Agricultural University (the strain was deposited at the China General Microbiological Culture Collection as CGMCC 2707); its lipase was produced by Qinhuangdao Leading Science & Technology Co., Ltd. (Qinhuangdao, China). Soybean powder was obtained from a local market. All other reagents were obtained commercially and were of analytical grade.

Lipase preparation
The Y. lipolytica strain CGMCC 2707 was stored at -80°C in tubes containing 25% (vol/vol) glycerol solution. For the preparation of inoculum, cells were transferred twice to YPD medium (20 g of tryptone, 10 g of yeast extract and 20 g of dextrose per liter, autoclaved for 15 minutes at 121°C) for activation and then incubated at 28°C. Activated cells were inoculated into fermentation medium, which contained 60 g of soybean powder, 90 g of soybean oil, 2.5 g of K2HPO4, 0.5 g of MgSO4·7H2O and 2 g of (NH4)2SO4 per liter of distilled water. Thirty-liter cultures were grown in a 50-l fermentor with agitation at 500 rpm and 1:1 vvm air flow at 28°C, and the pH was adjusted to 6.5 by using 10 N KOH. The lipase produced reached 8,000 Uw/ml after 90 to 110 hours of fermentation (Uw refers to the hydrolysis activity of lipase). Lipase solution was obtained by the removal of cells by centrifugation (4,000 × g for 20 minutes). Lipase in the supernatant was precipitated by the addition of three volumes of acetone. The precipitate was washed with acetone and dried at room temperature. The activity of the enzyme powder was 140,000 Uw/g.

Immobilization of lipase on fabric membrane
Lipase was immobilized by using an established immobilization procedure [16]. Briefly, 0.1 g of fabric (approximately 9 cm²) was presoaked for 1 hour in 10 ml of coimmobilization solution consisting of 0.5 g of gluten, 0.2 g of lecithin, 0.2 g of polyethylene glycol 6000 and 0.1 g of magnesium chloride. Fabric membranes were dried at room temperature and used as supports for the immobilization of lipase. Membranes were added to 10 ml of enzyme solution (5,000 to 10,000 Uw/ml), stirred for 2 to 3 hours, taken out and dried at room temperature under vacuum. The activity of the immobilized lipase, determined by using the olive oil emulsion method after grinding at 0°C, was 10,000 Uw/g of membrane.

Lipase activity determination
The hydrolysis activity of lipase (Uw) was determined by using the olive oil emulsion method. One hydrolysis activity unit was defined as the amount of enzyme required to release 1 μmol of fatty acid per minute under the assay conditions [6]. The esterification activity of lipase (Ue) was determined by using a lauric acid and lauryl alcohol reaction system. One esterification activity unit was defined as the amount of enzyme required to produce 1 μmol of lauryl laurate per minute under the assay conditions. The substrate was an equimolar mixture of lauric acid and lauryl alcohol at a final concentration of 0.1 mM in n-hexane solvent. The reaction was initiated by adding 0.01 g of lipase (pure or diluted, depending on the activity of the lipase), continued by incubation for 20 minutes at 40°C and stopped by the addition of 15 ml of ethanol. Enzyme activity was determined by titration of the remaining lauric acid with 100 mM sodium hydroxide.
Esterification activity was calculated on the basis of the formation of lauryl laurate using the following formula:

    Ue = (V0 − VNaOH) × C_NaOH / (t × m),

where V0 and VNaOH are, respectively, the volumes of NaOH consumed by titration of the mixture at the beginning (0 minutes) and end (20 minutes) of the reaction, C_NaOH is the concentration of the NaOH titrant (100 mM, i.e., 100 μmol/ml), t is the reaction time (20 minutes) and m is the mass of lipase added.

Hydrolysis reaction
Small-scale hydrolysis was conducted at 40°C in a 50-ml screw-cap tube containing 2 g of soybean oil, 200 Uw lipase broth and 1.2 ml of water, with agitation on an orbital shaker (180 rpm) for 28 to 48 hours. At defined intervals, 0.8 ml of the reaction mixture was removed and separated into oil and water phases by centrifugation (10,000 × g for 5 minutes). The oil phase was analyzed as described in the Analytical methods section. Large-scale hydrolysis was conducted by filling a 1-l reaction vessel with 200 g of soybean oil, 20,000 Uw lipase broth, 120 ml of water and 10 g of sodium stearate and then agitating the mixture at 180 rpm at 40°C. When the degree of hydrolysis reached 90%, the reaction mixture was acidified with 3 N sulfuric acid until the pH of the water layer was 4.5. The water layer was then removed, and the remaining oil layer was washed twice with hot water (70°C to 80°C). The oil layer was vacuum-distilled at 93 to 98 kPa, and the distillates were collected at 220°C to 260°C. The resulting fatty acid fraction was used for esterification as described below.

Esterification reaction
Fatty acid was esterified using immobilized lipase membrane in 50-ml stoppered flasks without organic solvent. The reaction was performed with 2.82 g of oleic acid or FFA and 0.33 g of lipase membrane, and 192 μl of ethanol was added every hour (oleic acid:ethanol molar ratio, 1:0.3) until the theoretical molar ratio was reached. (2.82 g of oleic acid corresponds to approximately 10 mmol, and each 192-μl portion of ethanol to approximately 3.3 mmol, so three additions give an approximately equimolar mixture.) The mixture was incubated with agitation at 130 rpm at 30°C. Molecular sieves (Figure 5A) were added for 1 hour to eliminate water. The immobilized lipase and fatty acid were preheated in a 30°C incubator for 30 minutes, and the reaction was started by the addition of ethanol to the system. Experiments were replicated three or more times, and the results are presented as mean values. Adsorbed water and lipase membranes were recovered from the reaction solution by filtration, and 15% (wt/vol) NaOH solution was added according to the amount of remaining fatty acid. The solution was stirred slowly for 30 minutes and then left undisturbed so that the aqueous and organic phases could separate. The organic phase was washed twice with two volumes of water to remove unreacted ethanol and dehydrated by reduced-pressure distillation. The final product, ethyl ester (biodiesel), was obtained with 95% recovery.

Analytical methods
TLC
Silica gel plates (Whatman Inc., Shanghai, China) were heated at 110°C for 1 hour prior to use. Oil phase samples obtained as described in the small-scale hydrolysis section were dissolved in acetone to form a 10 mg/ml solution, and 10-μl capillary spots were subjected to TLC analysis. The plates were developed with petroleum ether/ethyl ether/acetic acid (80:30:1, by volume), sprayed with a 20% (vol/vol) solution of sulfuric acid in ethanol and visualized by heating at 100°C for 30 to 50 seconds.

GC
Gas chromatography was conducted to quantify the composition of fatty acids and FAEEs. At predefined times, 20-μl samples were taken and centrifuged.
A quantity of 5 μl of the upper phase thus obtained was dissolved in n-hexane and analyzed using a GC-2010 gas chromatograph (Shimadzu, Kyoto, Japan) equipped with a capillary column (HP-INNOWax, 30 m × 0.25 mm × 0.25 μm; J & W Scientific, Agilent Technologies, Palo Alto, CA, USA) and a flame ionization detector. Injection was performed in split mode (1:30), with injection and detection temperatures of 260°C and 280°C, respectively. Samples (1 μl) were injected at an oven temperature of 240°C and held for 10 minutes. The carrier gas was nitrogen at a flow rate of 30 ml/min. The hydrolysis degree was calculated as the acid value of the hydrolyzed oil sample expressed as a percentage of the saponification value of soybean oil. The degree of esterification was calculated as the reduction in acid value (obtained by titration of aliquots of the mixture taken at the beginning and end of the reaction) expressed as a percentage of the initial acid value of the fatty acid. The viscosity of the oil or reaction mixture was measured using a viscometer (RVDV-II+PRO; Brookfield Engineering Laboratories, Middleboro, MA, USA).
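To make these titration-based definitions concrete, the following minimal Python sketch computes the three quantities (the function names and sample numbers are ours for illustration; only the 185 mg KOH/g acid value and the 100 mM NaOH titrant come from the text):

```python
# Illustrative sketch of the titration-based quantities defined above.
# Function names and example numbers are hypothetical unless noted.

def hydrolysis_degree(acid_value, saponification_value):
    """Acid value of the hydrolyzed sample as a percentage of the
    saponification value of the oil (both in mg KOH per g)."""
    return 100.0 * acid_value / saponification_value

def esterification_degree(initial_acid_value, final_acid_value):
    """Reduction in acid value as a percentage of the initial acid value."""
    return 100.0 * (initial_acid_value - final_acid_value) / initial_acid_value

def esterification_activity(v0_ml, v_naoh_ml, c_naoh_umol_per_ml=100.0,
                            t_min=20.0, lipase_g=0.01):
    """Ue per gram of lipase: micromoles of lauric acid consumed
    (equal to lauryl laurate formed) per minute per gram of enzyme."""
    consumed_umol = (v0_ml - v_naoh_ml) * c_naoh_umol_per_ml
    return consumed_umol / (t_min * lipase_g)

# An acid value of 185 mg KOH/g (measured at 48 hours) against an assumed
# saponification value of 200 mg KOH/g gives the reported ~92.5%.
print(hydrolysis_degree(185, 200))        # 92.5
print(esterification_degree(200, 30))     # 85.0 (example values)
print(esterification_activity(3.0, 2.4))  # 300.0 Ue/g (example volumes)
```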
Rediscovering Deep Neural Networks in Finite-State Distributions

We propose a new way of thinking about deep neural networks, in which the linear and non-linear components of the network are naturally derived and justified in terms of principles in probability theory. In particular, the models constructed in our framework assign probabilities to uncertain realizations, leading to the Kullback-Leibler divergence (KLD) as the linear layer. In our model construction, we also arrive at a structure similar to ReLU activation, supported by Bayes' theorem. The non-linearities in our framework are normalization layers, with ReLU and Sigmoid as their element-wise approximations. Additionally, the pooling function is derived as a marginalization of spatial random variables according to the mechanics of the framework; as such, Max Pooling is an approximation to this marginalization process. Since our models are composed of finite-state distributions (FSDs) as variables and parameters, exact computation of information-theoretic quantities such as entropy and KLD is possible, thereby providing more objective measures with which to analyze networks. Unlike existing designs that rely on heuristics, the proposed framework restricts subjective interpretations of CNNs and sheds light on the functionality of neural networks from a completely new perspective.

Introduction
The ever-increasing complexity of Convolutional Neural Networks (CNNs) and their associated set of layers demands deeper insight into the internal mechanics of CNNs. The functionality of CNNs is often understood as a series of projections and a variety of non-linearities that increase the capacity of the model (Hinton 2009; Nair and Hinton 2010; Ramachandran, Zoph, and Le 2017; Zheng et al. 2015). Despite the fact that the prediction layer of CNNs (e.g., the Softmax layer) and the loss functions (e.g., Cross Entropy) are borrowed from the Bayesian framework, a clear connection between the functionality of the intermediate layers and probability theory remains elusive. The current understanding of CNNs leaves much to subjective designs with extensive experimental justifications. We informally argue that subjectivity is inherent to problems defined over real numbers. Accordingly, the confusion surrounding the functionality of CNNs reflects this theoretical subjectivity. Since real vector spaces are unbounded and uncountable, they require strong assumptions in the form of prior information about the underlying data distribution in a Bayesian inference framework. For example, fitting a Gaussian distribution to a set of samples requires that the prior distribution on the location parameter be non-vanishing near the samples. In this scenario, an uninformative prior needs to be close to the uniform distribution over real numbers, a paradoxical distribution. Since the real line is unbounded and uncountable, the choice of the model and its prior distribution is always highly informative (Jaynes 1968). Although the choice of the prior in univariate distributions is not a practical issue, the adverse effects of subjective priors are more evident in high dimensions. When the sample space is large and the data are comparatively sparse, either careful design of a prior or an uninformative prior is needed.
Note that in the context of CNNs, the architecture, initialization, regularization and other processes can be interpreted as imposing some form of prior on the distribution of real data. (Jaynes 1957) shows that the correct extension of entropy to real distributions does not have a finite value. By switching to finite-state distributions (FSDs), the entropy value is calculable and finite, providing the potential for an information-theoretic treatment. In contrast to distributions defined over real numbers, working with FSDs makes the problem of objective inference theoretically more tractable. In problems where the data are represented by real numbers, the values can be treated as parameters of a finite-state distribution; each sample then represents a distribution over some finite space. Discrete modeling of the sample space reduces the complexity of the input domain, and treating the inputs as distributions reduces the chance of overfitting, since every sample represents a set of realizations. In the case of natural images, this modeling of the input data is justified by the following observation: in conventional image acquisition devices, the intensity of a pixel can be interpreted as the probability of the presence of photons at a spatial position and some wavelength. Therefore, a single image can be considered the distribution of photons on the spatial plane, with finite states when the number of pixels is finite. In this paper, we present a framework for classification whose key feature is that, unlike in existing models, inference is made on finite-state spaces. Classification of FSDs is attractive in that it permits the composition of classifiers, since the output of a Bayesian classifier is itself an FSD. To construct a Bayesian FSD classifier we borrow concepts from the theory of large deviations and information geometry, introducing the Kullback-Leibler divergence (KLD) as the log-likelihood function. The composition of Bayesian classifiers serves as a multilayer classification model. The resulting structure closely resembles CNNs: modules similar to the core CNN layers are naturally derived and fit together. Specifically, we show that the popular non-linearities used in deep neural networks, e.g., ReLU and Sigmoid (Nair and Hinton 2010), are in fact element-wise approximations of a normalization mapping. Moreover, we show that the linearities amount to calculating the KLD, while max pooling is an approximation to the marginalization of spatial indices. In our framework, there exists a natural correspondence between the types of nonlinearity and pooling. In particular, Sigmoid and ReLU correspond to Average Pooling and Max Pooling, respectively, with each pair dictated by the type of KLD used. The models in our framework are statistically analyzable in all layers; there is a clear statistical interpretation for every parameter, variable and layer. The interpretability of the parameters and variables provides insights into initialization, the encoding of parameters and the optimization process. Since the distributions are over finite states, the entropy is easily calculable for both the model and the data, providing a crucial tool for both theoretical and empirical analysis. The organization of the paper is as follows. In Section 2, we review related work on FSDs and the analysis of CNNs. In Section 3, we describe the construction of the proposed framework and a single-layer model for classification and explain the connections to CNNs.
We then describe the extension of the framework to multiple layers, introduce the extension to the convolutional model, and derive a natural pooling layer by assuming stationarity of the data distribution. Furthermore, we explain the relation between vanilla CNNs and our model. Finally, we evaluate a few baseline architectures in the proposed framework as a proof of concept and provide an analysis of the entropy measurements available in our framework.

Related Work
A line of work on statistical inference in finite-state domains focuses on the problem of Binary Independent Component Analysis (BICA) and its extension over finite fields, influenced by (Barlow, Kaushal, and Mitchison 1989; Barlow 1989). The general methodology in the context of BICA is to find an invertible transformation of the input random variables that minimizes the sum of marginal entropies (Yeredor 2011; Yeredor 2007; e Silva et al. 2011; Painsky, Rosset, and Feder 2014; Painsky, Rosset, and Feder 2016). Although the input space is finite, the search space for the correct transformation is computationally intractable for high-dimensional distributions, given the combinatorial nature of the problem. Additionally, the number of equivalent solutions is large and the probability of generalization is low. In the context of CNNs, a body of research concerns the discretization of variables and parameters of neural networks (Soudry, Hubara, and Meir 2014; Courbariaux, Bengio, and David 2015). (Rastegari et al. 2016) introduced XNOR-Networks, in which the weights and the input variables take binary values. While the discretization of values is motivated by efficiency, the optimization and the learning of the data representation remain in the context of real numbers and follow dynamics similar to those of CNNs. To formalize the functionality of CNNs, a wavelet-theory perspective of CNNs was considered by (Mallat 2016), establishing a mathematical baseline for the analysis of CNNs. (Tishby, Pereira, and Bialek 2000) introduced the Information Bottleneck method (IBP) to remove irrelevant information while maintaining the mutual information between two variables. (Tishby and Zaslavsky 2015) proposed to use the IBP with the objective of minimizing the mutual information between consecutive layers while maximizing the mutual information between the prediction variables and the hidden representations. (Su, Carin, and others 2017) introduced a framework for stochastic non-linearities in which various non-linearities, including ReLU and Sigmoid, are produced by truncated Normal distributions. In the context of probabilistic networks, Sum-Product Networks (SPNs) (Poon and Domingos 2011; Gens and Domingos 2012; Gens and Pedro 2013) are of particular interest; under some conditions, they represent the joint distribution of the input random variables quite efficiently. A particularly important property of SPNs is their ability to calculate marginal probabilities and normalizing constants in linear time. The efficiency of the representation, however, is achieved at the cost of restrictions on the distributions that can be estimated using SPNs. (Patel, Nguyen, and Baraniuk 2016) constructed the Deep Rendering Mixture Model (DRMM), which generates images given some nuisance variables. They showed that, given an image generated by a DRMM, MAP inference of the class variable coincides with the operations in CNNs.

Proposed Framework
We set up our framework by modeling the input data as a set of "uncertain realizations" x^(1), ..., x^(n) over D symbols.
To be precise, we define an uncertain realization x^(k) as a probability mass function (pmf) over D states with non-zero entropy; similarly, a certain realization is a degenerate pmf over D states. To demonstrate an example of interpreting real-valued data as uncertain realizations, consider a set of m-pixel RGB image data. We can view each pixel as being generated from the set {R, G, B} and further interpret the value of each channel as the unnormalized log-probability of being in the corresponding state. If we normalize the pmf of each pixel, we can interpret the image as a factored pmf over 3^m states and each pixel as a pmf over D = 3 states. Formally, we define a transfer function Φ: R^ν → Δ^D, where Δ^D is the D-dimensional simplex and ν is the dimension of the input vector space. In the previous example, each pixel is mapped from R^3 (i.e., ν = 3) to Δ^3 (i.e., D = 3); therefore, the entire image is mapped from R^{3m} to Δ^{3^m}. In general, the choice of Φ depends on the nature of the data, and it can either be designed or estimated during the training process. Although probability assignment to a certain realization given a model is trivial, the extension to uncertain realizations requires further consideration. We consider moment projection (M-projection) and information projection (I-projection) and observe that both projections are used to obtain probabilities on distributions in two established scenarios, namely Sanov's theorem and the Dirichlet distribution. Sanov's theorem (Sanov 1958) and the probability of type classes (the method of types) (Cover and Thomas 2012; Csiszár 1998) use the KLD associated with the I-projection of the input distribution onto the underlying pmf, as in (1), to calculate the probability of observing empirical distributions. On the other hand, the Dirichlet distribution uses the KLD associated with the M-projection, as in (2), to asymptotically assign probabilities to the underlying distribution. We use the following approximations for probability assignment to a distribution x given the distribution q ∈ Δ^D:

    P_I(x | q) ≈ exp(−α D(x ‖ q)),    (1)
    P_M(x | q) ≈ exp(−α D(q ‖ x)),    (2)

where D(·‖·) is the KLD and α > 0 is a concentration parameter discussed below. Inspired by these probability assignments, we regard both types of KLD as the main tool for probability assignment on distributions in our model. We denote the KLD associated with the I-projection and the M-projection as I-KLD and M-KLD, respectively. Later, we will show that approximations to ReLU-type networks and Sigmoid-type networks are derived when employing the M-KLD and I-KLD probability assignments, respectively. We define a single-layer model for supervised classification as an example of using M-KLD; constructing the I-KLD models is similar and is described briefly later. Let the model M be a mixture of a set of probability distributions {M_v}, v = 1, ..., V, over D symbols, each representing the distribution of a class:

    M = Σ_v p_v M_v.    (3)

Following the M-KLD assignment (2), the likelihood of an input x^(k) under class v is

    P(x^(k) | M_v) ≈ exp(−α D(M_v ‖ x^(k))).    (4)

To calculate the membership probability of the input x^(k) in class v following the Bayesian framework, we have

    P(v | x^(k)) = p_v exp(−α D(M_v ‖ x^(k))) / Σ_u p_u exp(−α D(M_u ‖ x^(k))).    (5)

Note that the KLD term is linear in log(x^(k)). We can break the operation in (5) into the composition of a linear mapping Divg(·) and a non-linear mapping LNorm(·), where the i-th components of the outputs are defined as

    Divg_i(log x^(k)) = −α D(M_i ‖ x^(k)) + log p_i,
    LNorm_i(y) = y_i − log Σ_v exp(y_v).

To formally define Divg and LNorm, let us define the logarithmic simplex of dimension V, denoted by Δ̄^V, as

    Δ̄^V = { log(π) : π ∈ Δ^V }.

Setting up the domain of Divg and the parameters as

    Divg: Δ̄^D → R^V,    W ∈ R^{V×D} with w_{i,:} ∈ Δ^D,    B ∈ Δ̄^V,

where w_{i,:} is the i-th row of the matrix W, we define the function Divg as

    Divg(x) = α (W x + H(W)) + B,    (8)

where each row of W contains a distribution and H(W) calculates the entropy of each row.
The weights W and the biases B, being the parameters of the model, are randomly initialized and trained according to some loss function. Unlike in current CNNs, the familiar terms in (8), such as the linear transformation W and the bias term B, are not arbitrary. Specifically, W x is (up to sign) the cross entropy between the sample and the class distributions, while B is the logarithm of the mixing coefficients p in (3). The entropy H(W) can be thought of as a regularizer matching the maximum entropy principle (Jaynes 1957); the H(W) term biases the probability toward distributions with the highest degree of uncertainty. The non-linear function LNorm: R^V → Δ̄^V is the log-normalization function, whose v-th component is defined as

    LNorm_v(y) = y_v − log Σ_u exp(y_u).

Note that the function LNorm(·) is a multivariate operation. The behavior of LNorm in one dimension of the output and input is similar to that of ReLU. Furthermore, α in (8) reflects the certainty in the choice of the model. For example, when α = 0, equal probability is assigned to all input distributions, whereas when α is large, a slight deviation of the input from the class distributions results in a significant decrease in the membership probability. We refer to α as the concentration parameter; however, in all the models presented we fix α = 1.

Multilayer Model, Convolutional Model, and Pooling
The model described in the previous section admits a natural recursive generalization: the input and output of the model are both distributions on finite states. We extend the model simply by stacking single-layer models. The inputs of each layer lie in the logarithmic simplex; the log-normalization performed by LNorm is therefore crucial to maintain the recursion. The multilayer model FNN(x) (Finite Neural Network) is defined as the composition of Divg and LNorm layers,

    FNN(x) = (LNorm ∘ Divg^(L)) ∘ ... ∘ (LNorm ∘ Divg^(1))(x),

where the superscript l denotes the layer index and L is the total number of layers. To elaborate, after each pair of layers, the input to the next layer consists of the log-probabilities of membership in the classes of the previous layer. Therefore, one can interpret the intermediate variables as distributions on a finite set of symbols (classes). In the case where I-KLD is used as the probability assignment mechanism, the input to the layers must be in the probability domain; the nonlinearity then reduces to Softmax, which in one dimension behaves similarly to the Sigmoid function. Note that the entropy term in I-KLD is not linear with respect to the input. We focus on the M-KLD (ReLU-activated) version; however, the concepts developed herein are readily extendable to the I-KLD (Sigmoid-activated) networks.

Figure 1: The structure of the KL Convolution layer and the normalization. The filters and input/output tensors represent factorized pmfs. The composition of these layers is equivalent to a Bayesian classifier, in which the log-likelihood is calculated by the KLD and further normalized by the LNorm layer.
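As a concrete illustration of the Divg/LNorm pair and its stacking, consider the following minimal NumPy sketch (our own code with toy shapes and values, not the authors' implementation):

```python
# Minimal sketch of the Divg (eq. 8) and LNorm layers and their composition.
import numpy as np

def lnorm(y):
    """Log-normalization: y_v - log(sum_u exp(y_u)).
    Maps R^V onto the logarithmic simplex (a vector of log-probabilities)."""
    return y - np.logaddexp.reduce(y)

def divg(x_log, W, B, alpha=1.0):
    """Divg(x) = alpha * (W x + H(W)) + B, with x a log-pmf over D states,
    W a V x D matrix whose rows are the class pmfs M_v, and B = log p.
    Component i equals -alpha * D(W_i || exp(x_log)) + B_i."""
    entropies = -np.sum(W * np.log(W), axis=1)   # H(W), row-wise entropy
    return alpha * (W @ x_log + entropies) + B

# Toy single layer: D = 3 input states, V = 2 classes.
rng = np.random.default_rng(0)
W1 = rng.dirichlet(np.ones(3), size=2)           # rows are pmfs
B1 = np.log(np.full(2, 0.5))                     # uniform mixing p_v
x = np.log(np.array([0.7, 0.2, 0.1]))            # an uncertain realization

h = lnorm(divg(x, W1, B1))                       # log-posterior over classes
print(np.exp(h).sum())                           # 1.0: output is again a pmf

# Because the output is again a log-pmf, layers stack directly, as in FNN(x).
W2 = rng.dirichlet(np.ones(2), size=4)           # next layer: 2 -> 4 classes
B2 = np.log(np.full(4, 0.25))
out = lnorm(divg(h, W2, B2))
```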
Convolutional Model: One of the key properties of the distribution of image data is strict-sense stationarity, meaning that the joint distribution of pixels does not change with translation. Therefore, it is desirable that the model be shift-invariant. Inspired by CNNs, we impose shift invariance through convolutional KLD (KL-Conv) layers. In our convolutional model, a filter F of size R × S × D represents a factorized distribution with R × S factors, each factor representing a pmf over D states. The distribution Q associated with the filter F is

    Q(a) = Π_{r,s} Q_{r,s}(a_{r,s}),

where Q_{r,s} is a single factor over D states defined by the values in F_{r,s,:}, a is an R × S neighborhood of pixels and a_{r,s} denotes the state of the pixel at position (r, s). In other words, the values across the channels of the filter represent a pmf and sum to 1. In the RGB image example provided previously, the factors of the filters compatible with the input layer are over 3 states. The input x^l of layer l is log-normalized across the channels. We model the input x^l with D^l channels as a factorized distribution in which each pixel represents a factor. The distribution Q is shifted along the spatial positions, and the KLD between the filter distribution and each neighborhood of pixels is calculated. As an example, we define the KL-Conv operation associated with M-KLD as

    KLConv(x^l) = α (x^l ⊛ F_{1:V} + H(F)) + B,

where F_{1:V} represents the set of filters in the layer (each filter representing a distribution), H(F) is the vector of filter entropies, ⊛ is the convolution operator used in conventional CNNs and α ∈ R+ is the concentration parameter. The non-linearity is applied across the channels in the same manner as in the multilayer model, i.e.,

    x^{l+1}_{r̂,ŝ,:} = LNorm(x^l_{r̂,ŝ,:}).

The overall operation of the KL-Conv and LNorm layers is illustrated in Figure 1.

Pooling: We define the pooling function as a marginalization of indices in a random vector. In the case of the tensors arising in FNNs, the indices correspond to relative spatial positions. In other words, the distributions at the spatial positions are mixed together through the pooling function. Assume x^l is the input to the pooling layer, where x^l_{r,s,:} ∈ Δ̄^V. The input is in the logarithm domain; therefore, to calculate the marginalized distribution, the input needs to be transferred to the probability domain. After marginalization over the spatial index, the output is transferred back to the logarithm domain. We define the logarithmic pooling function x^{l+1} = LPool(x^l; p) as

    LPool(x^l; p)_v = log Σ_{(r,s) ∈ supp(p)} p_{r,s} exp(x^l_{r,s,v}),    (15)

where p_{r,s} is a probability distribution over the relative spatial positions and supp(·) denotes its support. In the usual setting of pooling functions, and in our model, p is assumed to be a uniform distribution whose support represents the pooling window. Note that the log(Σ exp(·)) term in (15) is approximately equivalent to the Max function as the variables in the exponent deviate from one another. Therefore, we hypothesize that Max Pooling in conventional CNNs approximates (15). Evidently, the output of the pooling function is already normalized and is passed to the next layer. In the case that I-KLD is used, the input is in the probability domain and the pooling function is identical to average pooling.

Input Layer: The model presented so far takes finite-state probability distributions as input to the layers. In the case of natural images, we chose to normalize all pixel values to the interval (0, 1). Each pixel value was interpreted as the expectation of a binary random variable with range {0, 1}. As a result, each filter with m variables in total is a probability distribution over a space of 2^m states. Note that our model is not restricted by the choice of the type of input distribution. Depending on the nature of the input, the user can modify the distribution represented by the filters, e.g., to distributions on real spaces.
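The following sketch illustrates KL-Conv (M-KLD form) and the logarithmic pooling of (15) on a toy tensor (our illustrative code; plain loops are used instead of an optimized convolution):

```python
# Toy implementation of KL-Conv and LPool as described above.
import numpy as np

def lnorm(y, axis=-1):
    return y - np.logaddexp.reduce(y, axis=axis, keepdims=True)

def kl_conv(x_log, F, B, alpha=1.0):
    """x_log: H x W x D tensor of per-pixel log-pmfs (normalized on axis -1).
    F: V x R x S x D filters, where each F[v, r, s, :] is a pmf.
    Returns alpha * (x_log (conv) F + H(F)) + B over valid positions."""
    V, R, S, D = F.shape
    Hh, Ww, _ = x_log.shape
    Hf = -np.sum(F * np.log(F), axis=(1, 2, 3))      # filter entropies H(F)
    out = np.empty((Hh - R + 1, Ww - S + 1, V))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x_log[i:i + R, j:j + S, :]
            cross = np.tensordot(F, patch, axes=([1, 2, 3], [0, 1, 2]))
            out[i, j] = alpha * (cross + Hf) + B
    return out

def lpool(x_log, k):
    """Eq. (15) with a uniform p over non-overlapping k x k windows:
    log( mean_{r,s in window} exp(x) ), computed per channel."""
    Hh, Ww, V = x_log.shape
    x = x_log[:Hh // k * k, :Ww // k * k].reshape(Hh // k, k, Ww // k, k, V)
    return np.logaddexp.reduce(x, axis=(1, 3)) - np.log(k * k)

rng = np.random.default_rng(1)
x = lnorm(rng.normal(size=(8, 8, 3)))            # 8x8 input, D = 3 states
F = rng.dirichlet(np.ones(3), size=(4, 3, 3))    # V=4 filters of size 3x3
B = np.log(np.full(4, 0.25))
y = lnorm(kl_conv(x, F, B))                      # 6x6x4 log-pmfs
pooled = lpool(y, 2)                             # 3x3x4, still normalized
```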
Parameterization
As explained, the parameters of the model are the parameters of distributions, constrained to some simplex. To eliminate these constraints, we use a "link function" ψ: R^D → Δ^D, mapping "seed" parameters to the acceptable domain of parameters, i.e., the logarithmic/probability simplex. The link function impacts the optimization process and partially reflects the prior distribution over the parameters. While the seed parameters are updated in R^D uniformly, the mapped parameters change according to the link function. The filters in our model are factorized distributions, and each factor is a categorical distribution. Additionally, the biases are categorical distributions; we therefore use a similar parameterization for the biases and the filter factors. In general, the filters of the model are obtained by F_{r,s,:} = ψ(θ_{r,s,:}), where θ_{r,s,:} are the seed parameters of the filter at spatial position (r, s) across all the channels and F_{r,s,:} ∈ Δ^D represents the channels of the filter at position (r, s); similarly, β ∈ R^V is the seed parameter of the bias and B ∈ Δ̄^V is the bias vector. Since the filters and biases are composed of categorical distributions, we avoid complicating the notation by limiting the discussion to the parameterization of a categorical distribution. We suggest two forms of parameterization of a categorical distribution π ∈ Δ^D, namely log-simplex and spherical parameterizations.

Log-Simplex Parameterization
We define the link function with respect to the natural parameterization of a categorical distribution, where the seed parameters are interpreted as the logarithms of unnormalized probabilities. Therefore, the link function is defined as the Softmax function

    ψ(θ)_d = exp(θ_d) / Σ_j exp(θ_j),    (18)

where θ is the seed parameter vector and the subscript denotes the index of the vector components. Writing down the Jacobian of (18), ∂π_d/∂θ_j = π_d (δ_{dj} − π_j), we can observe that the Jacobian depends only on π and not on the denominator in (18), and that the link function is invariant to translation of θ along the vector (1, 1, ..., 1). The log-simplex parameterization thus completely removes the effect of the additional degree of freedom.

Initialization: We initialize each factor of the filters by sampling from a Dirichlet distribution with all parameters equal to 1; the distributions are therefore generated uniformly on the simplex. We speculate that the initialization of the model should aim to maximize the mixing entropy, or Jensen-Shannon divergence (JSD), of the filters in a given layer,

    ΔH = H( Σ_{v=1}^{V} p_v π^(v) ) − Σ_{v=1}^{V} p_v H(π^(v)),    (20)

where V is the total number of filters, π^(v) is the v-th filter and p_v is the corresponding mixture proportion. There is a parallel between the orthogonal initialization of filters in conventional CNNs and maximizing ΔH in M-KLD networks. In the extreme case where the filters are degenerate distributions on unique states and together cover all possible states, ΔH is at its global maximum and the M-KLD operation is invertible. Similarly, the orthogonal initialization of conventional CNNs is motivated by having invertible transformations to help the information flow through the layers. Since it is hard to obtain a global maximizer of the JSD, we minimize the entropy of the individual filters (the second term in (20)) by scaling the log-probabilities by a factor γ > 1. We set γ = log(#filters) as a rule of thumb. Finally, the bias seed components are initialized with zeros, indicating equal proportions of the mixture components.
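A small sketch of the log-simplex link and the entropy-scaled initialization may help (our code; the γ rule follows the text):

```python
# Log-simplex link (eq. 18) and Dirichlet initialization with entropy scaling.
import numpy as np

def softmax_link(theta):
    """psi(theta)_d = exp(theta_d) / sum_j exp(theta_j).
    Subtracting the max exploits the translation invariance noted above."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

def init_filter_seeds(num_filters, D, rng, gamma=None):
    """Sample factors from Dirichlet(1, ..., 1) (uniform on the simplex),
    then scale the log-probabilities by gamma > 1 to lower their entropy."""
    if gamma is None:
        gamma = np.log(num_filters)              # rule of thumb from the text
    pi = rng.dirichlet(np.ones(D), size=num_filters)
    return gamma * np.log(pi)                    # seed parameters theta

rng = np.random.default_rng(2)
theta = init_filter_seeds(num_filters=8, D=5, rng=rng)
filters = np.array([softmax_link(t) for t in theta])
entropy = -np.sum(filters * np.log(filters), axis=1)
print(entropy.mean())    # lower than for unscaled Dirichlet(1) samples
```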
Spherical Parameterization
Here, we present an alternative parameterization method that attempts to eliminate the learning-rate hyper-parameter. Assume that we parameterize the categorical distribution π by the link function

    ψ(θ)_d = θ_d² / ‖θ‖².    (21)

The expression in (21) projects θ onto the unit sphere S^{D−1} ⊂ R^D, where the squares of the components are the probabilities. The mapping defined in (21) ensures that the value of the loss function and the predictions are invariant to scaling of θ. The Jacobian of (21) is

    ∂π_d/∂θ_j = (2/‖θ‖²) (θ_d δ_{dj} − π_d θ_j).    (22)

It is evident from (22) that the norm of the gradient is inversely related to ‖θ‖. Scaling θ is equivalent to changing the step size, since the direction of the gradients does not depend on ‖θ‖. Additionally, the objective function does not depend on ‖θ‖; therefore, the gradient vector obtained from the loss function is orthogonal to the vector θ. Given this orthogonality property, updating θ along the gradients always increases the norm of the parameter vector. As a consequence, the learning rate decreases at each iteration, independent of the network structure.

Initialization: The seed parameters are initialized uniformly on S^{D−1}. The standard way of generating samples uniformly on S^{D−1} is to sample each component from a Normal distribution N(0, 1) and then normalize.
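The spherical link and its step-size behavior can be checked numerically with the following sketch (our illustration):

```python
# Spherical link (eq. 21) and its Jacobian (eq. 22).
import numpy as np

def spherical_link(theta):
    """pi_d = theta_d**2 / ||theta||**2."""
    return theta**2 / np.dot(theta, theta)

def link_jacobian(theta):
    """J[d, j] = (2 / ||theta||^2) * (theta_d * delta_dj - pi_d * theta_j)."""
    n2 = np.dot(theta, theta)
    pi = theta**2 / n2
    return (2.0 / n2) * (np.diag(theta) - np.outer(pi, theta))

rng = np.random.default_rng(3)
theta = rng.normal(size=5)
theta /= np.linalg.norm(theta)        # uniform initialization on the sphere

for scale in (1.0, 2.0, 4.0):
    J = link_jacobian(scale * theta)
    # The Jacobian norm shrinks like 1/||theta||, so gradient steps anneal.
    # J @ theta is ~0: gradients pulled back through J are orthogonal to theta.
    print(scale, np.linalg.norm(J), np.abs(J @ theta).max())
```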
Experimental Evaluations

We experimented with our model on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky and Hinton 2009). We first compared the performance of the original networks with their corresponding transformed architectures in finite states. We excluded certain layers from our transformation, e.g., Dropout and batch normalization (BatchNorm) (Ioffe and Szegedy 2015), since we do not yet have a clear justification for their roles in our model. We did not use weight decay (Krizhevsky, Sutskever, and Hinton 2012) or regularization, and the learning rate was fixed to 1 in the FCNNs. The FCNNs were parameterized with the log-simplex and spherical schemes for comparison. Experiments with I-KLD were excluded, since it achieved lower accuracy compared to M-KLD. We justify this observation by considering two facts about I-KLD: 1) since the input is in the probability domain, the nonlinearity behaves similarly to a Sigmoid, so the gradient vanishing problem exists in I-KLD; 2) as opposed to LNorm, exp(LNorm(.)) is not convex and interferes with the optimization process.

Table 1 demonstrates the performance achieved by the baselines and their FCNN analogues. For all the conventional CNN networks, the data were centered at the origin and ZCA whitening was employed. Additionally, the originally optimized learning rates were used to train the CNNs. The weights in all the models were regularized with the $\ell_2$ norm, where in the case of NIN and VGG the regularization coefficient is defined per layer. VGG was unable to learn without being equipped with BatchNorm and Dropout layers. In the case of NIN, we could not train the network without Dropout and BatchNorm; we therefore rely on the results reported in (Lin, Chen, and Yan 2013) for vanilla NIN (without Dropout and BatchNorm) trained on CIFAR-10 for 200 epochs. Figure 3 in (Lin, Chen, and Yan 2013) reports the test error of vanilla NIN on CIFAR-10 as roughly 19%, which is similar to the results obtained by the finite counterpart. The final test error reduces to 14.51% over a number of epochs that is unknown to us. Vanilla NIN results on CIFAR-100 are not available in the original paper. FCNNs achieved lower performance than the VGG and NIN architectures that are equipped with Dropout and BatchNorm. Note that the FCNNs perform without regularization, data preprocessing, hyper-parameter optimization, or changes of learning rate.

The results show that the finite state models' performance is on the same scale as that of CNNs, considering the simplicity of FCNNs. Spherical parameterization performs better than log-simplex in the NIN-Finite and Quick-CIFAR-Finite networks, whereas in VGG-Finite log-simplex is superior. We do not have a definite explanation for the difference in performance of the parameterizations in different architecture settings. However, the results show that neither is objectively superior as it stands.

Entropy of Filters and Biases

To analyze the behavior of the networks, we performed a qualitative analysis of the trends of the bias entropies and the filter entropies. In our experiments, M-KLD was used as the linearity. Since the input is represented by log-probability in the cross-entropy term of M-KLD, the filter distributions naturally tend towards low-entropy distributions. However, in Figure 2, we observe that the average entropy of some layers starts to increase after some iterations. This trend is visible in the early layers of the networks. Since high-entropy filters are more prone to producing high divergences when the input distribution is low-entropy (a property of M-KLD), the network learns to approach the local optimum from low-entropy distributions. The entropy of the input tensors of late layers is larger compared to that of the early layers, and starts decreasing during the learning process. Therefore, the entropy of the filters decreases as the entropy of their input decreases. The entropy of the bias distributions contains information about the architecture of the networks. Note that the bias component is the logarithm of the mixing coefficients. Degeneracy in the bias distribution results in removing the effect of the corresponding filters from the prediction. An increase in the entropy of the biases could also indicate the complexity of the input, in the sense that the input distribution cannot be estimated with a mixture of factorized distributions given the current number of mixture components.

Conclusion

Our work was motivated by the theoretical complications of objective inference in infinite state spaces. We argued that in finite states objective inference is theoretically feasible, while finite spaces are complex enough to express the data in high dimensions. The stepping stones for inference in high-dimensional finite spaces were provided in the context of Bayesian classification. The recursive application of Bayesian classifiers resulted in FNNs, a structure remarkably similar to Neural Networks in the sense of the activations (ReLU/Sigmoid) and the linearity. Consequently, by introducing the shift-invariance property (Strict-Sense Stationarity assumption) using the convolution tool, FCNNs were produced as the finite state analogue of CNNs. The pooling function in FCNNs was derived by marginalizing the spatial position variables. The Max Pool function was explained as an approximation to the marginalization of spatial variables in the log domain. In our work, it is evident that there exists a correspondence between M-KLD, ReLU and Max Pool, and similarly between I-KLD, Sigmoid and Average Pool. In the context of classic CNNs, diverse interpretations of the layers and of the values of the feature maps exist, whereas in FNNs the roles of the layers and the nature of every variable are clear. Additionally, the variables and parameters represent distributions, making the model ready for a variety of statistical tools, stochastic forward passes and stochastic optimization.
The initialization and parameterization of the model point clearly and directly to the objective inference literature (Jeffreys 1946; Jaynes 1968), which could potentially reveal further directions on how to encode functionality objectively.

Open Questions: The pillar of our framework is assigning probabilities to uncertain events. We directed the reader to the literature that justifies the usage of both KLD forms in asymptotic cases. I-KLD is used to assign probabilities to empirical distributions, while M-KLD assigns a probability to the true distribution given some empirical distribution. The concentration parameter roughly represents the number of empirical data points in both probability assignments. The following questions are the subject of future investigations.

1. The experiments show that using M-KLD as opposed to I-KLD results in higher performance. How could one theoretically justify this performance gap?
2. Could both schemes of probability assignment be incorporated in the learning process?
3. The normalizing factors in the nonlinearities represent the probability of the observation given the mixture distribution of the filters. Can they be included in the objective to train without supervision?
2018-09-26T15:46:53.000Z
2018-09-26T00:00:00.000
{ "year": 2018, "sha1": "1b2892a344793216497845338430f5328acc744b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1b2892a344793216497845338430f5328acc744b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
56153001
pes2o/s2orc
v3-fos-license
Relationship of photosynthesis and related traits to seed yield in oilseed Brassicas

The physiological basis of yield in oilseed Brassicas needs to be investigated, and the contribution of these traits to its yield is difficult to decipher. Eight cultivars of Brassica belonging to 3 species, viz. B. juncea, B. napus and B. carinata, selected on the basis of significant differences in yield, were tested over two years. Net photosynthesis, transpiration, stomatal conductance and water use efficiency were investigated on the 3rd and 4th fully expanded leaves on the main stem and related to yield. Average photosynthetic efficiency (µmol m⁻² s⁻¹) was higher in the RLC1 (36.1), GSC6 (36.3) and PC5 (33.8) cultivars. The impact of the environment was inconspicuous; however, G×Y interactions were significant for the studied photosynthetic traits except Pn. Lower transpiration rates were associated with higher water use efficiency in RLC1 (5.69), GSL1 (5.44) and GSC6 (5.40). A positive correlation between SY and Pn (0.385) was recorded for the first time in Brassicas, although the magnitude of the association was low. The quality mustard cultivar RLC1 (B. juncea) and, amongst B. napus, GSC6 (canola) and Hyola PAC401 (hybrid, canola) were higher yielders due to relatively high Pn, more efficient utilization of water, and chlorophyll content. The indeterminate growth habit of the cultivars indicated the highest contribution to Pn by leaves during flowering as compared with early siliquae formation. Environment had a profound impact on the yielding ability and the photosynthetic traits.

INTRODUCTION

Rapeseed and mustard (Brassica spp.) is the second most important oilseed crop of the country after soybean and plays a significant role in the Indian oil economy by contributing about 27% to the total oilseed production. A major breeding objective for oilseeds is yield improvement. An increased understanding of the physiological basis of seed yield could enhance the utilization of physiological traits as selection criteria for yield improvement (Chongo and McVetty, 2001). Photosynthesis, a major determinant of total dry matter production in a crop species, has often been related to the seed yield of crop plants with a view to selecting plants with high net photosynthesis (Pn) to improve yield. For high yield, a significant portion of the dry matter produced should be partitioned into the harvestable component. Cultivars could be improved if selection were directed towards genotypes with high yield potentials and high Pn rates, but with constant Tr. However, correlations between yield and leaf Pn rate are rare, even though photosynthesis is the source of total dry matter production (Lawlor, 1995). In soybean (Glycine max L.) cultivars, high yields were associated with high leaf Pn, but no genetic differences were found in Pn in wheat (Triticum aestivum L.) or its relatives, while the relationship between Pn per unit leaf area and seed yield was poor in barley (Hordeum vulgare L.), pea (Pisum sativum L.)
and Brassica napus (Chongo and McVetty, 2001). The lack of correlation between photosynthesis and yield has been attributed to measuring photosynthetic rates on single leaves for a short period of time, which does not adequately represent seasonal canopy photosynthesis or the total sink and photosynthetic capacity per unit leaf area (Richards, 2000; Kumar and Chopra, 2014). Leaves are the source of photosynthesis in Brassicas, though they senesce rapidly during siliquae development. Leaves establish the sink potential via structures such as the number of siliquae/plant or the number of seeds/siliqua and the remobilization of photosynthates during their senescence, but eventually stems and siliquae become important sources of photosynthesis (Uddin et al., 2012). Photosynthesis partly depends on water and chlorophyll during assimilation, which is important for seed yield. Pn was associated with other physiological traits as they relate to yield, and this holds importance for yield improvement in oilseed crops. The objective of this study was to measure net photosynthetic rates (Pn), transpiration (Tr), stomatal conductance (Cs) and water use efficiency in eight popular and recommended oilseed varieties and relate them to seed yield.

MATERIALS AND METHODS

Eight cultivars of B. juncea, B. napus and B. carinata were used in the study. Sowing was done on 11th November 2011 and 1st November 2012. Seed at 1.5 kg per acre was used for rapeseed-mustard, and seeding was done with a drill at 4-5 cm depth. Each variety consisted of 5 rows of 3 m row length. Row-to-row and plant-to-plant spacing was 30 × 10 cm for B. juncea and B. carinata, and 45 × 10 cm for B. napus. Thinning was done three weeks after sowing to maintain the plant-to-plant distance as per requirement. All the recommended agronomic and protection practices were followed to raise a healthy crop. Three plants per replication were randomly tagged to measure photosynthetic rates. Seasonal weather data were recorded (Table 1): rainfall of 65.2 mm fell in 6 rainy days during 2011-12 and 155.6 mm in 13 rainy days during 2012-13, in comparison with 101.6 mm of normal rainfall during the same period at PAU; rainfall was therefore above average during the 2nd crop season.

Gas exchange measurements: Gas exchange measurements were made using a portable photosynthesis system with an infrared gas analyzer in a closed system with a 1-L chamber (Model LI-6200, LI-COR Inc., Lincoln, NE). The measurements were conducted between 11 AM and 2 PM on the 3rd and 4th fully expanded leaves on the main shoot during the reproductive phase at 100 days after sowing (Harper and Berkenkamp, 1975). The leaves had to be dry, without moisture or dew on them, and all the selected leaves were fully sunlit prior to the photosynthetic rate measurements. The photosynthetically active radiation was between 1400 and 1800 µmol m⁻² s⁻¹. Water use efficiency was calculated as the ratio of photosynthesis per unit leaf area to transpiration.
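As a small worked illustration of this definition, the following Python sketch computes WUE from the two gas-exchange readings; the example values are only indicative of the ranges reported in this paper, not a specific cultivar's record.

```python
def water_use_efficiency(pn_umol_co2, tr_mmol_h2o):
    """WUE (umol CO2 / mmol H2O): net photosynthesis per unit leaf area
    divided by transpiration, both taken from the gas-exchange readings."""
    return pn_umol_co2 / tr_mmol_h2o

# e.g., Pn = 36.1 umol m-2 s-1 and Tr = 6.5 mmol m-2 s-1 give WUE ~ 5.55
print(round(water_use_efficiency(36.1, 6.5), 2))
```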
Experimental

The aboveground plant material in each plot was harvested by hand using a sickle, placed in sacks and allowed to air-dry in the field. The dried samples were weighed to determine biological yield, and the samples were then threshed, from which clean seeds were obtained and weighed for seed yield. Data analyses on photosynthetic characters were performed on means, which were averaged from the three measurements conducted on each leaf per plant per replicate. The character means for each replication were subjected to analysis of variance (ANOVA) for the factorial randomized complete block design. Means were compared using least significant differences at the 5% level. The correlation coefficients among the different characters were also computed. All analyses were performed using SAS (SAS Institute, Cary, NC).

RESULTS AND DISCUSSION

Significant differences (p<0.05) in Pn rates existed among the Brassica cultivars in the present investigation. The range of Pn was 31.7-37.7 µmol m⁻² s⁻¹ during the 1st crop season and 30.5-37.5 µmol m⁻² s⁻¹ during the 2nd crop season. During 2011-12, PBR210 (B. juncea) possessed 35.2 µmol m⁻² s⁻¹, GSC6 (B. napus) 37.7 µmol m⁻² s⁻¹ and PC5 31.7 µmol m⁻² s⁻¹. During 2012-13, RLC1 had a Pn of 37.5 µmol m⁻² s⁻¹, GSC6 of 35.0 µmol m⁻² s⁻¹ and PC5 of 34.8 µmol m⁻² s⁻¹, although non-significant differences were found in Pn over the years of study (Table 2). The average of the two years indicated a Pn of 36.1 µmol m⁻² s⁻¹ in RLC1, 36.3 µmol m⁻² s⁻¹ in GSC6 and 33.3 µmol m⁻² s⁻¹ in PC5. Mean Pn rates were 1.5% higher during the 1st year. The G×Y interaction for Pn was non-significant. RLC1 (B. juncea) and GSC6 (canola, B. napus) possessed the highest Pn, while cv. PBR210 possessed comparable Pn rates over the years. The observations at 100 days after sowing, i.e., flowering and siliquae formation, are consistent with the findings of other studies in which leaves were reported to be important sources of Pn up to flowering, when stems and siliquae become more significant exporters of Pn (Uddin et al., 2012). The increase in Pn was due to increased chlorophyll content (Liu et al., 2012; Sharma et al., 2014). Further, genes associated with cell proliferation, photosynthesis and oil synthesis were upregulated, which revealed that photosynthesis contributed to increased seed weight and oil content. Cultivars differed significantly in Cs, which was lower during the 2nd crop season except in GSC6 (0.808). Cs was relatively higher in GSL1 and declined drastically over the years (Table 2). Average Cs was highest in PBR91 (0.682), GSC6 (0.767) and PC5 (0.707). Mean Cs was 14.1% higher during the 1st crop season. Re-evaluation of published data and of genotypes with contrasting stomatal behavior (Tomimatsu and Tang, 2012) has questioned the assumption that the effects of single factors are multiplicative and uniform across species (Damour et al., 2010). Internal CO₂ concentration was higher in PBR91, GSC6 and PC5 during 2011-12, while during the 2nd crop season again PBR91 in B. juncea and GSC6 in B.
napus registered higher Ci. The genotypic averages indicated higher Ci values of 211.5 µmol CO₂ mol⁻¹ in PBR91, 214.6 µmol CO₂ mol⁻¹ in GSC6 and 218.4 µmol CO₂ mol⁻¹ in PC5. The mean Ci of the cultivars was 0.63% higher in 2012-13. Transpiration rates (Tr) were higher in PBR91 (7.7 mmol m⁻² s⁻¹), GSC5 (8.3 mmol m⁻² s⁻¹) and PC5 (7.2 mmol m⁻² s⁻¹) during 2011-12, while PBR210 had a Tr of 6.6 mmol m⁻² s⁻¹ and GSC6 of 7.4 mmol m⁻² s⁻¹ during 2012-13. Average rates of Tr were comparable in PBR210 and RLC1 and higher in PBR91 amongst the B. juncea cultivars. Similarly, comparable Tr was recorded in GSC6 and GSC5, and also in GSL1 and Hyola PAC401, amongst the B. napus cultivars. However, the mean of the years indicated a Tr higher by 12.1% in the 2nd crop season; this could be ascribed to the wet year and erratic rainfall. Amongst the cultivars, Tr was comparable in PBR210 and GSC6 over the years.

Vapour pressure deficit (Vpdl) was higher during the 2nd crop season except in PBR210, GSC5 and Hyola PAC401 (Table 3). Average Vpdl was 1.46 kPa in PBR210, 1.58 kPa in Hyola and least (1.32 kPa) in PC5. Genotypes and environment did not register significant differences for this trait; however, the G×Y interaction was significant. Mean Vpdl was 6.2% higher due to higher rainfall. Elevated Vpdl lowers gs to a variable extent, which might decrease Ci, affecting both carboxylation rates and Rubisco activation under fluctuating irradiance (Kaiser et al., 2014). Leaf temperature varied significantly among the cultivars. Only a 3.5% higher mean temperature was recorded during 2011-12 than during 2012-13. Average leaf temperature was comparable in PBR210 and RLC1, and in GSC6, Hyola PAC401 and PC5. Leaf temperature and CO₂ affect the rates of dynamic photosynthesis more strongly than Vpdl (Sharma et al., 2012; Kaiser et al., 2014).

The differences in Tr and WUE at the leaf level were significant among the different cultivars of Brassica spp. A range of 4.8-5.8 µmol CO₂/mmol H₂O in WUE (2012-13) was recorded in the present investigation. The mean WUE of the cultivars was 5.6% lower than in the 2nd crop season. Lower Tr and more water retention, and therefore higher WUE, were recorded in the different cultivars during 2012-13, and the impact of the environment was also significant (Table 4). Among the B. juncea cultivars, lower average transpiration rates were related to higher WUE in PBR210 and RLC1. GSL1, a non-canola cultivar, had the lowest Tr of 6.5 mmol m⁻² s⁻¹ and a high WUE of 5.44 µmol CO₂/mmol H₂O amongst the B. napus cultivars. WUE was higher in GSC6 than in GSC5, though both cultivars showed comparable Tr. Hyola PAC401, a hybrid canola, possessed an average Tr of 6.6 mmol m⁻² s⁻¹ and a WUE of 5.21 µmol CO₂/mmol H₂O. These results indicated that water was more effectively utilized for assimilate production during the reproductive phase, and in this sense the differences in seed yield amongst the cultivars were related to Tr and WUE. Linear regression between the different components of net photosynthesis indicated differential associations among them (Figs. 1 and 2).
Conclusion

A correlation between seed yield and leaf photosynthetic rates has been observed for the first time in Brassicas. High-yielding cultivars displayed high net photosynthetic rates, utilized water more efficiently at the flowering and early siliquae formation stage, and produced relatively higher seed yields, suggesting the importance of the leaves/source, which is not limiting during this phase. The cultivars in the present study exhibited indeterminate growth habits, and the measurements were conducted only up to early siliquae formation, which precluded any assessment of siliqua photosynthesis in accounting for the potential differences in the traits studied. Therefore, incorporation of photosynthesis by developing siliquae could improve the assessment of physiological traits in oilseeds in the future.

Fig. 1. Relationship between different components of photosynthesis.

Table 5. Correlation coefficients for photosynthetic and related traits with seed yield in Brassica cultivars at 100 DAS.

Highly significant correlations existed between Cs and Pn (0.548**), Ci and Cs (0.632**), Tr and Pn (0.506), and Tr and Ci (0.585**). Vpdl showed a highly negative correlation with Cs (-0.921**) but a positive association with Ci (0.687**). WUE was negatively correlated with Ci (-0.758**), Tr (-0.526*) and Ct (-0.543*). Seed yield was positively correlated with Pn (0.378), Cs (0.202) and WUE (0.213), though the magnitude of the association was low, indicating that the variations were due to genetic differences and not environmentally affected, as the G×Y interaction was significant only for water use efficiency in the present investigation. However, a correlation between seed yield and single-leaf photosynthesis was not observed by Chongo and McVetty (2001) in B. napus.
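To make the correlation analysis behind Table 5 concrete, a minimal Python sketch is given below; the arrays are placeholder values for illustration only, not the study data, and the printed coefficient will therefore differ from the reported r = 0.378 for SY versus Pn.

```python
import numpy as np

# Placeholder trait means for eight cultivars (NOT the study data):
pn = np.array([35.2, 36.1, 33.5, 36.3, 34.0, 32.8, 35.0, 33.3])  # umol m-2 s-1
sy = np.array([2.1, 2.4, 1.9, 2.3, 2.0, 1.8, 2.2, 1.9])          # seed yield

# Pearson correlation of seed yield with net photosynthesis, as in Table 5:
r = np.corrcoef(sy, pn)[0, 1]
print(round(r, 3))
```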
2018-12-06T20:16:38.528Z
2015-12-01T00:00:00.000
{ "year": 2015, "sha1": "364d2b7db7ba2a3ffe2990bf2e7e814b72337b1b", "oa_license": "CCBYNC", "oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/695/651", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "364d2b7db7ba2a3ffe2990bf2e7e814b72337b1b", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
267317924
pes2o/s2orc
v3-fos-license
Updated systematic review and network meta-analysis of first-line treatments for metastatic renal cell carcinoma with extended follow-up data

Immune checkpoint inhibitor (ICI)-based combination therapies are the recommended first-line treatment for metastatic renal cell carcinoma (mRCC). However, no head-to-head phase-3 randomized controlled trials (RCTs) have compared the efficacy of different ICI-based combination therapies. Here, we compared the efficacy of various first-line ICI-based combination therapies in patients with mRCC using updated survival data from phase-3 RCTs. Three databases were searched in June 2023 for RCTs that analyzed oncologic outcomes in mRCC patients treated with ICI-based combination therapies as first-line treatment. A network meta-analysis compared outcomes including overall survival (OS), progression-free survival (PFS), objective response rate (ORR), and complete response (CR) rate. Subgroup analyses were based on the International mRCC Database Consortium risk classification. The treatment ranking analysis of the entire cohort showed that nivolumab + cabozantinib (81%) had the highest likelihood of improving OS, followed by nivolumab + ipilimumab (75%); pembrolizumab + lenvatinib had the highest likelihood of improving PFS (99%), ORR (97%), and CR (86%). These results remained valid even when the analysis was limited to patients with intermediate/poor risk, except that nivolumab + ipilimumab had the highest likelihood of achieving CR (100%). Further, the OS benefits of ICI doublets were not inferior to those of ICI + tyrosine kinase inhibitor combinations. Recommendation of combination therapies with ICIs and/or tyrosine kinase inhibitors based on survival benefits and patient pretreatment risk classification will help advance personalized medicine for mRCC.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00262-023-03621-1.

Introduction

The treatment of metastatic renal cell carcinoma (mRCC) has changed considerably with the development of immune checkpoint inhibitors (ICIs) [1,2]. To date, five different ICI-based systemic combination therapies, including ICI + ICI or ICI + tyrosine kinase inhibitor (TKI), have been recommended as first-line treatment options for mRCC based on the International mRCC Database Consortium (IMDC) risk classification [1]. However, no head-to-head phase 3 randomized controlled trials (RCTs) have compared the efficacy of different ICI-based combination therapies, making optimal treatment selection difficult. Several network meta-analyses (NMAs) have investigated the efficacy and safety profiles of these combination therapies, suggesting that pembrolizumab + lenvatinib provides the greatest overall survival (OS) benefit [3][4][5]. However, heterogeneity in patient populations (i.e., different proportions of patients in the IMDC risk categories) and insufficient follow-up have made OS comparisons unreliable. Recently, the survival data of some of these RCTs were updated with additional follow-up [6][7][8][9]. Therefore, this study presents an updated NMA using these survival data to compare the efficacy of first-line ICI-based combination therapies in patients with mRCC, stratified by IMDC risk classification.
Methods

The protocol of this study has been registered in the International Prospective Register of Systematic Reviews database (PROSPERO: CRD42023440048).

Search strategy

This systematic review and NMA was conducted based on the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and PRISMA for NMA (Supplementary Table 1) [10,11]. The PubMed®, Web of Science™, and Scopus® databases were searched in June 2023 to identify studies investigating oncologic outcomes in mRCC patients treated with ICI-based combination therapies as a first-line treatment. The detailed search terms are listed in Supplementary Fig. 7 and Supplementary Appendix 1. Subsequently, we reviewed abstracts from recent major conferences, such as the American Society of Clinical Oncology and the European Society for Medical Oncology, to include trial updates. The outcome measures of interest were OS, progression-free survival (PFS), objective response rates (ORRs), complete response (CR) rates, and treatment-related adverse events (TRAEs). The titles and abstracts were independently screened by two investigators. Potentially relevant studies were subjected to full-text review. Disagreements were resolved by establishing consensus among the co-authors.

Inclusion and exclusion criteria

Studies were included if they investigated patients with mRCC (Participants) and compared the efficacy of guideline-recommended ICI-based combination therapies (Interventions) with the efficacy of the standard of care at the time of study enrollment (Comparisons) to assess their differential effects on OS, PFS, ORRs, CR rates, and/or TRAEs (Outcomes) in RCTs (Study design). Studies lacking original patient data, reviews, letters, editorial comments, replies from authors, case reports, and articles not written in English were excluded. Relevant references of eligible studies were scanned for additional studies of interest.

Data extraction

Two authors independently extracted the relevant data as follows: study and first author's name; publication year; inclusion criteria; agents, dosage, and control arms; median age; number of patients stratified by IMDC risk classification; follow-up periods; TRAEs; ORRs; CR rates; and duration of response rates. Hazard ratios (HRs) and 95% confidence intervals (CIs) from Cox regression models for OS and PFS were extracted. All discrepancies were resolved by establishing consensus among the co-authors of this study. As the CLEAR trial failed to show the superiority of everolimus + lenvatinib over sunitinib alone, only data on pembrolizumab + lenvatinib versus sunitinib were extracted [12].

Risk of bias assessment

We evaluated the quality and risk of bias of the eligible RCTs according to the Cochrane Handbook for Systematic Reviews of Interventions risk-of-bias tool (RoB version 2) (Supplementary Fig. 1) [13]. The risk-of-bias assessment of each study was independently performed by two authors.
Statistical analyses

All eligible RCTs reported the oncologic and safety outcomes in the overall population as well as in patients stratified by IMDC risk classification (favorable and intermediate/poor risk). We conducted an NMA using random-effects models for direct and indirect treatment comparisons across outcomes [14,15]. Contrast-based analyses were applied, with the estimated differences in the log HR and the standard error calculated from the HRs and CIs [16]; a short numerical sketch of this conversion is given after the results below. The relative effects were presented as HRs or odds ratios (ORs) with 95% CIs [14]. The different regimens were ranked in terms of OS, PFS, ORRs, CR rates, and TRAE rates using the surface under the cumulative ranking (SUCRA) [14]. Additionally, we performed subgroup analyses for each outcome separately in patients with favorable or intermediate/poor risk. Network plots were created to illustrate the connectivity of the treatment networks. All statistical analyses were performed using R version 4.2.2 (R Foundation for Statistical Computing, Vienna, Austria).

Study selection and characteristics

The PRISMA flow chart detailing our study selection process is shown in Supplementary Fig. 7. An initial literature search identified 8,548 records. After removing duplicates, 6,425 records remained for title and abstract screening. After screening, we performed a full-text review of 47 articles, leading to the final identification of 5 RCTs, including 7 updates, comprising 4,206 mRCC patients treated with ICI-based combination therapy [6-9, 12, 17-23]. The study and patient demographics of the eligible RCTs are described in Table 1. All five RCTs provided data on differential OS, PFS, ORRs, and CR rates stratified by IMDC risk classification. Sunitinib alone, nivolumab + cabozantinib, nivolumab + ipilimumab, pembrolizumab + lenvatinib, pembrolizumab + axitinib, and avelumab + axitinib were included in this NMA. After updating the follow-up, the median follow-up duration ranged from 33.6 to 67.7 months.

Risk of bias assessment

All included phase 3 RCTs had a low risk of bias or some concerns (Supplementary Fig. 1). The quality assessment was conducted using the AMSTAR2 checklist; the overall confidence in the results of this NMA was "High" (Supplementary Appendix 2) [24].

Network meta-analysis of oncologic outcomes

Network plots for all oncologic outcomes are depicted in Supplementary Fig. 2. The results of the treatment rankings are summarized in Table 2.

TRAEs

Compared to sunitinib alone, only ipilimumab + nivolumab was associated with significantly more favorable TRAEs (Supplementary Fig. 4). Treatment rankings revealed that ipilimumab + nivolumab had the highest likelihood of providing the most favorable TRAE profile.
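As flagged in the statistical-analysis paragraph above, contrast-based NMA operates on log-HRs and their standard errors rather than on the reported HR/CI pairs. A minimal Python sketch of this standard conversion follows; the HR and CI values are hypothetical, not taken from the included trials.

```python
import math

def log_hr_and_se(hr, ci_low, ci_high, z=1.96):
    """Convert a reported HR with 95% CI into the log-HR and its standard
    error, the quantities a contrast-based NMA model operates on:
    SE = (log(upper) - log(lower)) / (2 * 1.96)."""
    return math.log(hr), (math.log(ci_high) - math.log(ci_low)) / (2 * z)

# e.g., a hypothetical HR of 0.70 (95% CI 0.55-0.89) versus sunitinib:
log_hr, se = log_hr_and_se(0.70, 0.55, 0.89)
print(f"log HR = {log_hr:.3f}, SE = {se:.3f}")  # log HR = -0.357, SE = 0.123
```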
Discussion

At present, ICI-based combination therapies (ICI + ICI or ICI + TKI) are the major first-line treatment for mRCC [1,2]. However, the survival data, particularly the OS data, available for ICI + TKI are insufficient, rendering comparisons between the survival benefits of ICI + ICI and ICI + TKI difficult [3][4][5]. Therefore, in our NMA, we compared these combination therapies based on recently reported long-term follow-up data and demonstrated several important findings. First, nivolumab + cabozantinib was associated with favorable OS outcomes during long-term follow-up. Second, pembrolizumab + lenvatinib had inferior OS benefits compared to nivolumab + cabozantinib or nivolumab + ipilimumab, despite being associated with extremely favorable PFS, ORR, and CR outcomes. Third, avelumab + axitinib was associated with superior OS and ORR, thus representing the best treatment option for patients at favorable risk. Fourth, nivolumab + ipilimumab was associated with the best CR rates and favorable OS outcomes among patients at intermediate/poor risk, despite having inferior ORR outcomes. Fifth, of the TRAEs evaluated for all regimens, both any-grade and severe TRAEs were shown to be the most favorable with ipilimumab + nivolumab. Based on recently reported long-term follow-up data, the Kaplan-Meier survival curves were reported to become increasingly less separated between pembrolizumab + axitinib or lenvatinib and the control treatments after approximately 3 years of follow-up, but remained distinct between nivolumab + cabozantinib and the control treatments. Therefore, nivolumab + cabozantinib is likely favorable over pembrolizumab-based therapies. ICI + TKI combinations have emerged as key treatment strategies for enhancing tumor responses and improving survival outcomes. TKIs can enhance the effectiveness of ICIs by affecting the tumor microenvironment via their antiangiogenic effects, thereby increasing cytotoxic T-cell activity and infiltration [25]. ICIs are also believed to reciprocally enhance the benefits of TKIs [26]. Additionally, RCC is immunogenic and proangiogenic, and the immune system is believed to play a major role in promoting tumor resistance to TKIs in RCC [5,27,28].

In the context of TKI resistance, cabozantinib needs to be considered in combination with nivolumab-based therapy and has been associated with long-term efficacy in RCC. Unlike conventional TKIs, cabozantinib is a multi-TKI exhibiting broad-spectrum activity against VEGFR, GAS6, MET, AXL, MER, and TYRO3 [29,30]. Notably, MET and AXL (both known to be involved in the survival, proliferation, infiltration, and metastasis of tumor cells, as well as in the mechanisms of tumor resistance to molecularly targeted agents) have been reported to be overexpressed in RCC. In addition, HGF, a MET ligand secreted mainly by mesenchymal cells in tumor tissues, exerts a wide array of physiological effects, including promoting tumor cell proliferation and inhibiting tumor cell apoptosis [31,32]. GAS6, an AXL ligand expressed under serum-starvation conditions resulting in tumor cell growth arrest, is involved in tumor metastasis and infiltration [33,34]. Therefore, activation of the HGF-MET and GAS6-AXL pathways promotes tumor survival, proliferation, infiltration, and metastasis [35][36][37], and blocking VEGF can lead to MET and AXL activation.
Several reports have suggested that cabozantinib promotes a tumor microenvironment conducive to robust immune responses and is thus synergistic with ICIs. Cabozantinib inhibits HGF-induced PD-L1 expression in renal cancer cell-injected mouse models [38], indicating that it can prevent tumor cell immune escape through HGF/c-MET signaling. Moreover, the GAS6/AXL pathway is involved in the immunoinhibitory effects mediated by regulatory T cells (Tregs) and natural killer (NK) cells [39,40], and the VEGFR pathway is involved in immunosuppression by inhibiting T-cell migration, inhibiting dendritic cell maturation, and promoting Treg and myeloid-derived suppressor cell (MDSC) maturation. These findings suggest that inhibition of AXL and VEGFR promotes antitumor immunity [41]. Notably, treatment with cabozantinib increases the expression of major histocompatibility complex (MHC) class I antigens in MC38-CEA mouse tumor cells and the number of peripheral CD8+ T-cells while decreasing the numbers of Tregs and MDSCs in the MC38-CEA mouse colon cancer model [42]. Cabozantinib + ICI combination therapy has been shown to have synergistic antitumor effects, resulting in reduced numbers of MDSCs alongside an increase in CD8+ T-cells and in the ratio of CD8+ T-cells/Tregs in a mouse model of metastatic castration-resistant prostate cancer (mCRPC) [43]. Furthermore, a phase II trial in patients with metastatic, triple-negative breast cancer showed that cabozantinib continuously increased the number of circulating CD3+ T-lymphocytes while continuously decreasing CD14+ monocytes, suggesting that cabozantinib treatment led to bolstered antitumor immunity [44]. In summary, MET signaling is assumed to inhibit tumor immune responses by increasing PD-L1 expression, promoting the differentiation of T-cells into Tregs, increasing the activity of the immunoinhibitory enzyme IDO-1, and promoting the production of the immunosuppressive cytokine TGF-β [31,32]. AXL signaling is assumed to inhibit the antitumor activity of activated macrophages, dendritic cells, and NK cells [33,34]. Therefore, cabozantinib therapy targets the tumor vasculature and tumor cells, inducing potent immunomodulatory effects that render it suitable for use in ICI + TKI combination therapies [30].
However, these findings should be interpreted cautiously, particularly those on OS, as different TKI regimens and/or anti-PD-L1 antibodies were used. Moreover, the study populations varied among the studies, and subsequent treatment rates may have greatly affected the results. In interpreting the results reported herein, caution should be exercised to take into account factors that may have worked in favor of nivolumab + cabozantinib as well as in disfavor of pembrolizumab + lenvatinib, which, in turn, may account in part for the discordance between the OS and PFS/ORR outcomes with these regimens. Additionally, of note, patients treated with anti-PD-1/PD-L1 antibodies accounted for a greater proportion of the study populations in the KeyNote-426 (55.9%) and KeyNote-581 (54.6%) trials than in the CheckMate-9ER trial (31%). This may have positively affected those treated with sunitinib and decreased the difference in OS between those treated with pembrolizumab + lenvatinib or axitinib combinations and those treated with sunitinib alone. Furthermore, patients with favorable IMDC risk accounted for approximately 22% of the study population in the CheckMate-9ER trial, but > 30% of the study populations in the KeyNote-426 and -581 trials, which may have affected the OS findings. Those with poor IMDC risk accounted for approximately 20% of the study population in the CheckMate-9ER trial but only 10% in the KeyNote-426 and -581 trials. In the KeyNote-581 trial, the HR for OS slightly favored sunitinib alone (HR, 0.85) over ICI + TKI combination therapy in a subgroup analysis of patients with an intermediate IMDC risk. Therefore, it is speculated that, of all patients with intermediate IMDC risk, more patients with a relatively favorable prognosis (who benefited more from sunitinib alone) were enrolled in the KeyNote-581 trial. Notably, the KeyNote-581 trial had more censored cases at 36 months, which coincides with the point at which the difference in the Kaplan-Meier survival curves began to diminish. In addition, among the patients treated with nivolumab + cabozantinib, approximately 7% and 8% discontinued treatment due to AEs associated with cabozantinib and nivolumab, respectively, indicating good overall tolerance [45]. In contrast, approximately 26% and 29% of patients discontinued treatment due to adverse events associated with lenvatinib and pembrolizumab, respectively [45]. The study results may also have been affected by whether patients with RCC complied with their long-term treatments, as initial treatment with TKI + ICI may be effective.
Our risk-stratified analysis enabled us to characterize the efficacy of each treatment regimen and generate additional insights. We demonstrated that avelumab + axitinib was the best treatment option for patients with favorable IMDC risk, leading to good OS and ORR outcomes. Meanwhile, nivolumab + ipilimumab produced the best CR rates among those at intermediate IMDC risk. Although many factors may have contributed to these results, the presence of angiogenic and immunogenic molecular subsets among patients with RCC is of special interest. The angiogenic and immunogenic subsets account for the majority and minority, respectively, of those with favorable IMDC risk. In contrast, the immunogenic subset accounts for a greater proportion of those with poor IMDC risk than the angiogenic subset [46], suggesting that the best treatment option for RCC may vary depending on the patient's pretreatment risk. However, the paucity of study data available for analysis only allowed patients with intermediate/poor IMDC risk to be assessed in this study. This led to a heterogeneous population requiring separate analysis as two distinct risk groups, and caution is therefore needed when interpreting our results. Additionally, avelumab + axitinib has not been recommended as a preferred regimen in major guidelines, given its failure to meet the primary endpoint in the JAVELIN Renal 101 study. Indeed, a comparison of the OS Kaplan-Meier curves for favorable-risk patients in the four RCTs evaluated in this review shows that the OS curves begin to separate between the control (sunitinib) and treatment (ICI + TKI) groups only 2 years after study initiation, even in the JAVELIN Renal 101 study, in which the treatment appeared to fare marginally better than the control. Again, the duration of ICI therapy was not restricted in the JAVELIN Renal 101 study but was limited to 2 years in the other three RCTs, suggesting that 2 years of ICI therapy may not be adequate and that a longer duration of ICI therapy may be required in favorable-risk patients with a favorable prognosis. In other words, the results of the analysis of favorable-risk patients in this review may have primarily reflected differences in the duration of ICI therapy among the RCTs compared. Thus, this limitation needs to be taken into account when interpreting the results of the present analysis, and the results for favorable-risk patients should be deemed inconclusive and referred to only as a guide pending the availability of the final OS analysis from the JAVELIN Renal 101 study.
Despite its comprehensive nature, this study had several limitations. First, this NMA depended on the reporting quality and reliability of the reviewed trials, which may have suffered from bias, thus limiting the validity of its findings. Second, although the study used indirect treatment comparisons of RCT outcomes, it was not intended to replace head-to-head comparisons in clinical trials. Furthermore, given that the present analysis found it difficult to adequately adjust for the differences in patient characteristics among the RCTs evaluated, it should be noted that this may account in part for the discordance between the OS and PFS/ORR outcomes in its analysis of oncological outcomes. Third, CR rates vary largely depending on a prior history of nephrectomy; those not having undergone nephrectomy had larger tumor volumes, which likely contributed to decreased CR rates, and vice versa. Fourth, considering that some of the updated data included in this analysis remain to be published, this meta-analysis may have suffered from missing data. Fifth, while a brief analysis of TRAEs was performed for the treatment options evaluated, no detailed analysis of AEs was performed in this review, which focused primarily on efficacy profiles. The caveat is therefore that, in choosing among the ICI-based treatments, full consideration needs to be given not only to their respective oncological efficacy but also to their respective safety profiles and potential AEs. Sixth, the COSMIC-313 trial was excluded from the present analysis because of the lack of OS data, despite its favorable PFS and improved progressive disease rates/ORRs [47]. Moreover, the COSMIC-313 trial has also been associated with an increased incidence of AEs and low CR rates, raising concerns about whether the PFS outcomes will actually translate into improved OS. Therefore, long-term follow-up is required to obtain robust OS data for this RCT. Finally, considering that the RCTs evaluated in this study offered a limited range of effective options as second- or later-line treatment, and that ICI rechallenge may not be an option (in light of the negative results from the CONTACT trial), the selection of an appropriate first-line treatment is critical [48,49].

Conclusions

The present analysis, based on updated follow-up data, revealed the varying efficacy of ICI combination therapies. Our updated NMAs revealed that the OS benefits of nivolumab + ipilimumab were not inferior to those of the ICI + TKI regimens. The outcomes of this regimen in patients with intermediate/poor IMDC risk were comparable to those in the overall study population. These findings may provide guidance for patients and clinicians in treatment decisions while also addressing other aspects of personalized medicine. Further studies on the oncologic outcomes of ICI-based combination therapies based on IMDC risk would help enrich our findings.

Fig. 1 Forest plots showing the results of the NMA among the overall population for OS, PFS, ORR, and CR in mRCC patients treated with first-line ICI-based combination therapy.

Fig. 2 Forest plots showing the results of the NMAs for OS, PFS, ORR, and CR in mRCC patients with intermediate/poor risk treated with first-line ICI-based combination therapy.

Table 1 Study demographics and oncologic outcomes of the included RCTs of first-line ICI-based combination therapy for mRCC.
2024-01-31T06:17:08.132Z
2024-01-30T00:00:00.000
{ "year": 2024, "sha1": "8edb5eabc40535149e87ef2997880a4b7afa6976", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00262-023-03621-1.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e6b2cd8717c9bd9683d65f84c3e45908df183822", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252612986
pes2o/s2orc
v3-fos-license
Structural Potting of Large Aeronautic Honeycomb Panels: End-Effector Design and Test for Automated Manufacturing

Structural potting is used to prepare honeycomb panels for fixing metallic elements, as is typical in aircraft doors. In this paper, a full procedure for structural potting using robotic arms is presented for the first time. Automating this procedure requires the integration of, first, machining operations to remove the skin layers and prepare the potting points and, then, resin injection into the honeycomb cells. The paper describes the design, prototyping, and testing of specific end-effectors. Different end-effectors were explored to ensure efficient injection. The results obtained with the prototypes show that the potting quality is adequate to accomplish the required process checks for industrial manufacturing. The injection process time can be reduced by a factor greater than 3.5, together with the extra assets associated with the automation of complex tasks. Therefore, structural potting automation is demonstrated to be feasible with the end-effectors proposed for milling and injection, which are ready for use with conventional robotic arms in manufacturing lines.

Introduction

Parts made of carbon fiber-reinforced polymers (CFRPs) are a must in the present aeronautical industry, as a result of the need to limit energy consumption to reduce both the operational cost (essentially fuel consumption, directly related to structural weight) and the carbon footprint of the entire aircraft life-cycle. Design optimization studies try to take a systematic approach to overall aircraft design based on CFRP options [1]. This scenario obliges one to reconsider the whole cost chain in the manufacturing process of components, and studies have addressed large fuselage parts [2]. In recent years, the traditionally high-cost composite parts have been revealed to be cost-effective in different industry fields [3,4], mostly driven by the reduction of the manufacturing cycle time achieved by automating the processes, despite the costs of both the associated equipment and the raw material. Using CFRP parts offers the opportunity, and creates the need, to reduce the number of joints in aircraft assemblies and to design larger parts, the best example being the fuselage. This option, by rapidly increasing the footprint of the part, results in complex manufacturing operations and demanding stock and assembly planning. There is also a recurring problem in the aeronautical sector related to production times, and larger parts can help to reduce lead times. However, manufacturing CFRP parts is basically a manual, highly skilled, and labour-intensive process, resulting in a high cost-rate balance. It is widely accepted that the only way to achieve the required production rates at limited costs is to automate some of the complex manufacturing processes for CFRP parts [5,6]. A well-known case is that of automated tape-laying machines. These are large and complex machines tailored to manufacture specific shapes; they are also very expensive and not capable of adapting to the production of many different parts. However, the success of their deployment is based both on the productivity increment, which reduces the payback time, and on making the manufacturing process time more stable, helping in production planning. Another sound reason for automation is the inherent repeatability of the process, making the failure rate more predictable and even lowering it drastically.
Therefore, automation can directly reduce production costs. The combination of the automation of complex manufacturing and large parts brings specific challenges to production processes and requires complex planning, which has been presented by some authors as a 'process orchestration system' [7]. These solutions usually include robotic arms, either fixed or mobile, to provide an extra-long stroke axis [8]. The suitability of using a heavy-duty robotic arm dedicated to machining CFRPs was studied for trimming [9] and milling [10], and quantified in terms of surface damage, delamination, and form deviations, depending on feed rate and cutting velocity. The general accuracy of robotic milling has been analysed by different authors [11,12]. Process planning including robotic arms is already integrated in software control packages for ISO 14649 and ISO 10303-238 [13]. The specific static and dynamic characteristics of robotic arms require dedicated planning of machining operations to improve accuracy, by using a multi-axis compensation mechanism [14], force-dependent position control [15], kinematic analysis of the arm [16,17], or the effective stiffness of the arm [18]. Alternatively, a dedicated robotic arm design for machining can improve the stiffness and dynamic properties, as well as the costs [19], and specific robotic designs [20,21] have been analysed to better understand their dynamic responses.

Automating Structural Potting

Aircraft manufacturing includes some parts that require several production stages before assembly, which must be completed with very specialised procedures, making them high-value parts. Aircraft doors and hatches are made of thick CFRP-honeycomb sandwich panels and require filling some regions with epoxy resins to create a monolithic point for fixing metallic elements, a process known as structural potting. The potting operation is typically carried out by hand, which requires trained personnel and creates a bottleneck in both the production chain and the productivity rate. As explained in more detail in Section 2, structural potting basically consists of injecting epoxy resin into honeycomb panels, an activity that is very demanding in terms of manual procedures, is time-consuming, and involves manipulating chemicals. Indeed, the resulting quality is highly dependent on the staff's skills. Automating the potting processes would immediately reduce the process time, the wasted material, and the quality-defect rate. At the same time, it would increase repeatability and manipulation safety. Therefore, automation would help the overall process efficiency. Automating this procedure requires the integration of, first, machining operations to remove the CFRP skin and prepare the potting points, and then, resin injection into the honeycomb cells. The procedure includes aggressive operations (removal of skin layers) and complex filling operations, because the thick cores have different cell shapes, sizes, and depths. The structural quality depends critically on avoiding delamination and damage to the skin layers, and on the proper integrity of the cells and the solid filling of the cell channels with no voids, which would compromise the structural strength. Moreover, the structural quality and integrity of the whole part depend on the individual resin-filled points created, of which there can be many in the same part. Dealing with the complex steps involved in the potting process requires a skilled, handcrafted procedure.
Indeed, to ensure the quality of the potting, a post-process quality check based on a non-destructive (ND) inspection method is applied before accepting the part. To the knowledge of the authors, there is only one commercial option offering solutions for robotic potting processes, from the company Airborne (Airborne Development BV, Den Haag, The Netherlands). Based on a robotic cell for resin dispensing, it is applied only to flat panels with reduced thickness [22,23]. The dispenser nozzle (ViscoTec Pumpen Dosiertechnik GmbH, Töging am Inn, Germany) controls the volume flow of the resin system, which was specifically developed to help the flow by adding hollow glass micro-spheres to reduce the density (Von Roll Holding AG, Breitenbach, Switzerland). This situation drastically reduces the range of possible applications, because the structural demands that can be met are limited by the materials employed. In this work, the automation of structural potting is demonstrated by using end-effectors developed for skin removal and resin injection. In Section 2, the study case selected in the paper is presented, using a landing-gear door of the Airbus model A320. In Section 3, the different end-effectors are described: one prototype was developed for milling, while three designs are discussed for injection, of which two prototypes were tested for single-cell and vacuum-assisted multi-cell injection. In Section 4, the results of the tests conducted with the different tools are presented and discussed. For injection, both options were successful, with the process time being clearly favourable to the vacuum-assisted multi-cell injection option. The conclusions highlight that the end-effectors developed constitute a real, full solution for the automation of the structural potting process.

Analysis Case: Potting of an Aircraft Door

Fuselage parts are a typical object of CFRP manufacturing, as shown in the early papers from the 1980s [24], when very few aircraft parts were considered for the new materials. Ref. [25] presented an Airbus A320 cargo door built with pre-pregs with the goal of reducing the manufacturing process cost. Large panels with curvatures adapted to the airplane's envelope are typically manufactured as sandwich panels. The application of sandwich structures using fibre-reinforced polymers was already widespread in the 1990s in many fields; however, the manufacturing techniques were badly developed and poorly documented [26]. Passenger and cargo doors, as well as hatches, include sandwich panels with a thick honeycomb core. However, the unavoidable drawback is the need to insert fixing points for latches, hinges, guides, etc., which must be firmly attached to the panels. The skins are too thin to offer enough pull-out strength. The core, made of thin-walled cells, does not offer appropriate bulk material to firmly grab the inserted parts. The solution comes from locally converting the honeycomb into a monolithic part at the fixing points. This is achieved by filling the selected honeycomb volume with a structural resin material in an operation called 'structural potting'. The resin must be injected into the honeycomb so that the panel cell walls become the internal frame of the new structural monolithic volume. Note that the idea behind this is to create a solid insertion that is firmly connected to the whole panel and ready to be used as an anchoring volume to fix, e.g., metallic elements, including new drilling or threading operations in the newly potted local region.
From the early work [27] on 'fully potted' inserts in 1998, tests and numerical models (namely finite element models) have been used to study the pull-out strength of metal insertions in thick CFRP laminates [28] or resin-filled cores in sandwich panels [29][30][31][32][33][34][35]. Based on the same type of analysis, some authors [36,37] developed parametric analyses for sizing the insert to the panel to help in the design stage. The case considered in this study corresponds to the structural potting of the main landing-gear door of the Airbus model A320. The door envelope is about 1750 mm × 1680 mm, 60 mm thick, and weighs about 6 kg; see Figure 1. The honeycomb core is made of meta-aramid polymer impregnated with phenolic resin and cured (NOMEX ABS5676 G600, DuPont de Nemours Inc., Wilmington, DE, USA), with a panel thickness of 60 mm ± 0.25 mm and a density of 48 kg/m³. The door combines two types of cells with different sizes and shapes (the so-called types OX and EX HRH-10). The NOMEX honeycomb composite is a high-performance structural component of sandwich panels, preferred in applications for aircraft, aerospace, automobiles, etc. [38]. The sandwich part is subjected to multiple operations with the aim of locating the metallic parts on it. After machining, different spots of different shapes, depths, and sizes are potted to complete the hatch. Besides other actions, there are 107 pocket milling operations of 39 mm diameter each: 33 operations on the internal side and 74 on the external side; see Figure 1. The CFRP skin is removed by milling and the honeycomb cells are exposed; after removing the burrs (see Figure 2-left), the panel is ready for potting at the 107 locations, as well as in other regions. The spots are potted by injecting epoxy resin into the cells, filling the honeycomb; see Figure 2-right. The procedure typically requires two operators and careful handling to ensure that the cell walls are not damaged and voids are not created in the channels. Excess resin must be scraped and sanded down after curing. The required structural quality demands using only accepted resins (according to the required composite qualification AIMS/IPS 11-01-005, EADS Airbus GmbH, Hamburg, Germany) and following appropriate technical practice, including adequate curing procedures (according to the instructions for the manufacture of monolithic parts with thermoset prepreg materials, AIPI 03-02-019, EADS Airbus GmbH, Hamburg, Germany). Additionally, the CFRP skins are also checked to evaluate whether any delamination appears just after milling, or whether the honeycomb core cells have been damaged. All burrs must be removed before resin filling. The filling integrity can only be guaranteed by a dedicated inspection, as discussed in Section 4.

End-Effector Design and Prototyping

To deal with realistic door structures with the important thicknesses involved (60 mm after curing the skin), curved envelopes, and many potting points (cf. Figure 1), the automation of the potting procedure requires solving two stages: skin removal and cell filling. The tools (end-effectors) developed for skin removal and resin injection in a processing line are presented in the next subsections. These end-effectors are ready to mount on typical robotic arms capable of reaching any spot of interest on the door; see Figure 3. The end-effector includes sensors to provide reference values to help fine-tune the operation.
This capability offers the necessary feedback information to complete the pre-defined arm trajectory adapted to the nominal part shape.

End-Effector Design for CFRP Skin Removal

The door under consideration has a CFRP skin made of one layer of polyester-fabric peel-ply (100 g/m²) and one layer of biaxial (twill fabric) carbon pre-preg 12K (400 g/m²). The CFRP skin can be removed by a conventional milling operation in a separate process line; see Figure 4-left. The total skin thickness is about 0.6 mm. Removing the thin skin layers using a robotic arm requires fine control of the end-effector positioning. Besides the position calibration of the arm's trajectories, a dedicated gauge to mark the surface level must be included in the end-effector for an effective process. The reason for this is that the nominal position in the projection plane is easy to achieve within a tolerance of 0.05 mm, provided by the trajectory control of the robotic arm; however, due to the curved surface and manufacturing tolerances, the depth needed to remove the skin efficiently is not well defined, since the milling tool has to penetrate the surface normally to avoid delamination effects, as well as to avoid cutting into the honeycomb cells. The end-effector includes a compact spindle as well as a displacement gauge. It is located by the arm on top of the surface at the nominal position. The gauge (Potentiometric Displacement Sensor model 8712-10, Burster praezisionsmesstechnik gmbh & Co. kg, Gernsbach, Germany) touches and marks the surface reference to define the tool depth trajectory. The gauge provides a value with 0.01 mm resolution in typical ranges below 7 mm. The arm then relocates the milling tool according to the predefined path, using the updated reference mark for depth. The design of the end-effector is based on a 90° alignment of the spindle and gauge. The spindle is set perpendicular to the robotic wrist link flange to provide maximum rigidity to the end-effector assembly while milling; see Figure 5-left. After locating the gauge and measuring the skin surface position, the end-effector is relocated with the milling tool facing the skin by a straight change of relative trajectory in the arm with the conventional control software. This end-effector mounted on the robotic arm provides limited stiffness compared to a conventional milling centre. Furthermore, the milling parameters, mostly the torque-speed range of the tool, are much more limited. Indeed, the cutting tool must be a single-type choice to avoid tool-changing operations. The end-effector must therefore be focused on the specific skin, hole size, and depth of the case considered. This is the situation in the case study, with 107 equal operations of 39 mm diameter pocket milling. The cutting was carried out by circular interpolation to the specific diameter. The end-effector assembly design and prototype are shown in Figure 5. A single hollow cask, 230 × 145 × 145 mm³, made of aluminium 6061-T6, is directly fastened to the arm flange to provide a rigid frame to hold both the milling head and the gauge, with a total weight of 27 kg. The end-effector requires the gauge power, signal, and pneumatic lines, and includes a vacuum nozzle to remove fibre debris, which is very damaging to all the equipment.
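As a sketch of how the gauge reading could be folded into the milling depth set-point, consider the following; the 0.6 mm skin thickness comes from the text, while the variable names, the 0.05 mm margin, and the offset logic are our illustrative assumptions, not the actual controller code.

```python
def milling_depth_setpoint(nominal_z, gauge_reading_mm, skin_thickness_mm=0.6,
                           margin_mm=0.05):
    """Correct the programmed tool depth using the touch-gauge reading.

    nominal_z        : surface height assumed by the offline arm trajectory (mm)
    gauge_reading_mm : measured deviation of the real skin surface (mm)
    Returns a z set-point that removes the full skin without cutting much
    deeper than skin + margin into the honeycomb.
    """
    actual_surface = nominal_z + gauge_reading_mm
    return actual_surface - (skin_thickness_mm + margin_mm)

# e.g., the real surface sits 0.32 mm above nominal:
print(milling_depth_setpoint(nominal_z=0.0, gauge_reading_mm=0.32))  # -> -0.33
```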
End-Effector Design for Single-Cell Injection

The possibility of automating processes such as epoxy filling or gluing requires specific resin systems that provide an effective solution, such as the recent solid system for gluing referred to in [39]. In this work, the same resin system used for the manual process was selected, to allow a direct comparison.

Considering the need to fill the cells with resin completely, preventing any voids that would reduce the structural strength, the natural option is direct injection into each individual cell. An end-effector dedicated to this task requires a thin, long (straw-like) nozzle as an injector, which is first introduced inside each cell to a certain practical depth; the cell is then filled with resin while the injector is pulled out. The injector then moves to the next cell, repeating as necessary. Panels may also have cells of different sizes (Figure 4-right), requiring nozzles adapted to each specific size, as well as changes in the injection-displacement velocity.

The potting is based on a dual-component epoxy resin system. This requires combining the two components in the right proportion and mixing a homogeneous product ready to be used within the limited time during which the viscosity is well suited to the injection stage. For the prototyping stage, the option was to develop an end-effector with a complete injector system. The end-effector includes a set of two cartridges for the epoxy components (model 7702971, 1:1 ratio, 2 × 300 mL, Nordson EFD, East Providence, RI, USA) and the mixing stage. The mixing is based on a static in-line double-entry channel pipe: the resin components flow continuously through its internal spiral elements. The mixing quality therefore depends on the input control (for the right mixing proportions) and the pipe length (to ensure homogeneous conditions). To reduce the disposal of plastic consumables, a stainless-steel spiral tube mixer (model 7700126, Nordson EFD, East Providence, RI, USA) was selected.

The use of cartridges requires mounting them and using an actuator to push out the product with pistons. The end-effector was equipped with a linear stage (model UA15-CNC, 300 mm stroke, SUHNER Schweiz AG, Bremgarten, Switzerland) and a servomotor (model 1FK7042-2AF71-1RB0, Siemens Aktiengesellschaft, Munich, Germany). The set can provide up to 3.5 cm³/s of resin at low speed (12 rpm), thus filling a hexagonal cell sized as in Figure 2 and 60 mm long within 0.2 s. The actual rate is, however, imposed by the dispensing tip selected and the maximum dispensing pressure. The tip must be long enough to go through the thickness of the cell. A compliant material would help to preserve the integrity of the thin cell walls during insertion; however, this kind of flexible material is not recommended for epoxy resins. Therefore, a precision stainless-steel tip 38 mm long with an internal diameter of 1.54 mm was selected (model 7018035, Nordson EFD, East Providence, RI, USA), demanding correct alignment of the tip and cell to avoid any cell-wall damage. The injection rate limit is imposed by the resin viscosity and the maximum available injection pressure, resulting in a value of about 0.14 cm³/s and some 9 s per cell. The design of the end-effector thus includes one linear actuator, two cartridges, one mixing pipe, and the injection pipe.
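The tip-limited rate can be cross-checked with a simple laminar-flow estimate. The sketch below applies the Hagen-Poiseuille relation to the stated tip geometry; the viscosity, pressure, and 4.8 mm cell size are assumed values (not given in the text), chosen only to show that the order of magnitude is consistent with the reported ~0.14 cm³/s and ~9 s per cell. Hagen-Poiseuille is our modelling choice here, not a relation the authors state.

import math

# Dispensing-tip geometry from the text (model 7018035): 38 mm long, 1.54 mm ID.
tip_length_m = 0.038
tip_radius_m = 1.54e-3 / 2

# Assumed values (not given in the text): resin viscosity and max pressure.
viscosity_pa_s = 15.0        # epoxy potting resins are typically tens of Pa*s
delta_p_pa = 6.9e5           # ~100 psi, a common cartridge pressure limit

# Laminar (Hagen-Poiseuille) volumetric flow through the tip.
q_m3_s = math.pi * delta_p_pa * tip_radius_m**4 / (8 * viscosity_pa_s * tip_length_m)
print(f"tip-limited flow: {q_m3_s * 1e6:.2f} cm^3/s")   # ~0.17, near the reported 0.14

# Pull-out speed that matches dispensed volume to cell volume: v = Q / A_cell.
cell_across_flats_m = 4.8e-3                            # assumed HRH-10 cell size
a_cell_m2 = (math.sqrt(3) / 2) * cell_across_flats_m**2 # hexagon area, across-flats d
v_mm_s = (0.14e-6 / a_cell_m2) * 1e3                    # use the reported 0.14 cm^3/s
print(f"extraction speed: {v_mm_s:.1f} mm/s; "
      f"fill time for 60 mm: {60 / v_mm_s:.1f} s")      # ~7 mm/s, ~8.6 s

With these assumed values the estimate reproduces both the reported dispense rate and the ~9 s per cell, which supports the authors' point that viscosity and available pressure, not the pump, set the process speed.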
To help the resin flow, the natural solution keeps an in-line layout, mounted with the long nozzle perpendicular to the flange of the robotic wrist. The resulting prototype is shown in Figure 6. The end-effector is mounted in a frame directly fastened to the arm flange. A plate supports the cartridges, the guide, and the actuator, weighing about 37 kg. For industrial deployment, a more effective solution would use a continuous supply of either the resin components or the mixed resin, requiring only the dispensing tip, thereby simplifying the end-effector and avoiding the replacement of limited-size cartridges.

The operation of this single-cell injector requires moving from one cell to another; this is fast because the movement finishes as soon as the tip is just out of the cell. For the prototyping stage and tests, an additional optical camera (model CV-035M, Keyence Corp. America, Itasca, IL, USA) was used to monitor the process; see Figure 6. The critical parameter, however, was correctly tuning the velocity at which the pipe is pulled out of the cell, which is related to the resin injection speed. The goal is that the dispensed resin volume corresponds to the cell volume and no voids are created during pipe extraction, while maximizing the operation speed.

The operation time for single-cell injection is dictated by the in-cell injection time, which should be minimised while preserving the filling quality. The overall time is defined by the number of cells to be filled and is dictated by the injection time, the cell-to-cell displacement being negligible. In these tests, the cells were filled in about 9 s, and each 39 mm diameter spot, with about 30 cells, required 4.5 min. Disregarding the transitions between spots, this is about 8 h per door with 107 spots to fill.

End-Effector Design for Multi-Cell Injection

The natural upgrade of the end-effector is a multi-cell injector. The time involved is directly reduced by the number of simultaneously filled cells. The nozzle must now be a multiple-tip nozzle, distributed in the honeycomb pattern and, ideally, compliant to help with insertion. Since the resin injection piston is unique, the only requirement is that the cells all have equal internal volume, which is the case for the honeycomb panels in this case study. However, different regions may require a different nozzle; see Figure 4-right.

The new injector with multiple tips requires an increased resin volume ready for injection at an equal flow distribution. The mixing-stage option used for the single cell is not suitable. For the multi-cell end-effector, it is better to use a single resin chamber driven by a piston to serve all the pipes at the same time, with the mixed resin prepared and filled into the chamber beforehand. Therefore, the end-effector only holds the injection stage, using a more compact piston actuator. The end-effector uses a simple support to hold the piston set. The design now sets the piston axis parallel to the arm flange; see Figure 7.

The multi-pipe injector nozzle was designed and virtually tested. The design is completely customised, based on a set of calibrated pipes bundled together to form the nozzle. The main drawback was that the cell pattern distribution under the skin may change from location to location, disturbing the insertion and compromising the integrity of the thin cell walls.
Moreover, some cells in the periphery could remain unpiped because of the mismatch between the different cell patterns and a single nozzle; even worse, some empty cells could also occur in the internal circle, creating a faulty part. This virtual analysis showed that this option for multi-cell injection did not fulfil our needs and posed major drawbacks.

Figure 7. Design of the multi-cell injection end-effector prototype. A single chamber for the resin, actuated by a piston, is filled with the mixed epoxy resin. The injector nozzle is made of a number of pipes fastened together in a circle. The pipe pattern must match that of the honeycomb below the skin at the potting spot.

Vacuum-Assisted Multi-Cell Injection

The multi-cell injection concept is the key to an efficient cell filling process. Using multiple pipes, as mentioned above, would require fine-tuning the insertion process, adapting from position to position in the honeycomb core, possibly with the help of image analysis. Alternatively, the resin can be injected directly at a single surface spot, with the nozzle covering the whole pattern of cells. The injection end-effector is then largely simplified, reduced to only the resin chamber with the driver piston and the nozzle. Moreover, the position definition of the end-effector no longer limits the process, provided the nozzle diameter is (slightly) larger than the skin hole: all the cells beneath the nozzle are filled, and the skin itself defines the nozzle injection border.

The injector prototype was mounted and tested using a linear actuator (model CDQ2B32TF-75DZ magnetic piston, 15 bar, SMC, Chiyoda City, Japan) to actuate the piston of the resin chamber (with a rod and plunger system made of hardened steel, DIN St 37-2). The chamber-nozzle set was simply a hollow cylinder filled (manually) with resin. The actuator-chamber set was mounted with an ad hoc frame, using three vacuum grips to hold it firmly against the skin surface while injecting the resin; see Figure 8. The nozzle included a simple interface made of a flat foam ring mounted as a gasket to prevent the resin spilling out around the head.

The drawback of this method is that the cell filling now depends only on the forced flow from one side. The nozzle remains static and does not help in the fluid distribution, in contrast to the effect of using pipes for the injection. The overall effect is that voids can be created more easily, caused by the resin viscosity and the 'sticky' effect at the cell walls. To improve the resin distribution, a vacuum nozzle is paired at the opposite side of the honeycomb panel. The resin is thus under the double effect of the injection pressure from one side and under-pressure from the opposite side of each cell channel. Voids are rarely formed if the injection flux is not too fast. The nozzle gasket is also key to guaranteeing that the skin-nozzle interface forms a tight volume so that the vacuum is applied effectively to fill the cell group. This injection option, tested with the prototype shown in Figure 8, resulted in no empty cells in the many trials conducted in the workshop once the injection parameters were tuned correctly.

The vacuum-assisted multi-cell injection system was then mounted on a robotic arm and tested in a real setup; see Figure 9. The injector was mounted with the same linear stage system as the single-cell injector, and with a camera vision system to actuate the piston.
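A quick sizing check for the chamber-piston set can be sketched as follows. The 32 mm bore and 75 mm stroke are inferred from the actuator designation and, together with the assumed 4.8 mm cell size, are used here only for illustration; the ~30 cells per 39 mm spot is the count reported above.

import math

# Assumed geometry: ~4.8 mm cells, 60 mm core; bore and stroke taken from the
# actuator designation (CDQ2B32...-75DZ), treated here as assumptions.
cell_across_flats_mm = 4.8
core_depth_mm = 60.0
cells_per_spot = 30            # reported count for a 39 mm spot
bore_mm = 32.0

cell_volume_mm3 = (math.sqrt(3) / 2) * cell_across_flats_mm**2 * core_depth_mm
spot_volume_mm3 = cells_per_spot * cell_volume_mm3
piston_area_mm2 = math.pi * (bore_mm / 2) ** 2

# Piston stroke needed to dispense one spot; extra travel covers overflow
# and gasket losses.
stroke_mm = spot_volume_mm3 / piston_area_mm2
print(f"cell volume: {cell_volume_mm3 / 1e3:.2f} cm^3, "
      f"spot volume: {spot_volume_mm3 / 1e3:.1f} cm^3")
print(f"stroke per spot: {stroke_mm:.1f} mm of the 75 mm available")

Under these assumptions one spot consumes roughly 36 cm³ of resin and about 45 mm of piston travel, i.e. slightly less than one stroke per spot, which is why the continuous re-fill supply described next matters for a full door.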
The procedure requires acting on both sides simultaneously, injecting the resin from one side and assisting with vacuum on the opposite side. This could require removing the laminate skin on both sides, if present, and using robotic arms on both sides. In this study, the potted spots had CFRP skin on only one side, with the core open on the opposite side, which is a convenient situation. The end-effector was mounted perpendicular to the flange of the robotic wrist. A continuous resin supply was included to re-fill the chamber; see details in Appendix A. During the tests, the end-effector also included an optical camera. The image was used for control: first, to position the nozzle correctly on top of the skin opening and then, prior to the procedure, to check the cleanliness (burrs) of the honeycomb cells. Finally, after filling, the image served as a post-check. The time involved in the tests was about 1.3 min per spot; this is 3.55 times faster than single-cell injection.

Experimental Results

To verify the correct functioning and compare the performance of the prototypes, several tests were conducted, repeating and adjusting the process as many times as necessary to obtain reliable test results to use as references for the comparisons. The prototype end-effectors developed for the automation were mounted on the robotic arm, and the panels were subjected to milling and injection operations to test the process and compare the options. On the one hand, the CFRP-skin removal was tested to verify the integrity of the sandwich panel; on the other, the resin filling result was compared across the different options developed.

CFRP-Skin Removal

The process for machining CFRPs is well documented and is common in the aeronautical industry. This case refers to the removal of a thin skin (0.6 mm thick, two layers) glued to the honeycomb panel. The process is carried out by a conventional milling machine (see Figure 4-left) with a high rate of success. The milling of CFRPs is known to relate the cutting method and tools to the forces and the surface quality [40][41][42]. Different strategies and methods have been studied to improve milling performance, typically with respect to the delamination issue [43,44]. Model analysis has also been used to include tool wear and milling forces in order to better master the process parameters [45][46][47]. The milling process parameters can also be specifically defined for a robotic-arm implementation: in [48], the authors found that spindle speed dictates the surface roughness, while milling depth dictates the vibration. The problem of milling parts with low rigidity is a common topic in the aircraft industry, requiring vibration-controlled approaches [49].

The main concern with this operation is potential damage to the layers, namely delamination. It is also possible to damage the honeycomb core if the cell walls are pulled by the cutting tool and the cell-net continuity is broken. Furthermore, any burrs must be removed. The specifics of cutting honeycomb panels are fine-tuned using results on tearing, rubbing, and cell-wall integrity [50], and specific solutions using cooled machining have been proposed recently [51]. A proper milling operation just removes the CFRP skin, with only a tiny fraction of the honeycomb cells removed, avoiding burr formation (beyond the maximum (tiny) allowed size) as well as any cell degradation.
This is possible if the mill depth is carefully fitted to the skin thickness, as the new end-effector allows with the position gauge. The result of the skin removal operation is shown in Figure 10. The cutting was conducted by circle interpolation, and it took about 27.5 min to complete the skin removal of the door, some 15.4 s per spot (39 mm). The post-operation check was completed by visual inspection. All tests conducted with the end-effector showed that the milling result fully meets the quality requirements. No delamination was observed in the many tests carried out with the end-effector, using sharpened tools and the selected spindle. The measurements taken with the gauge allowed appreciable burrs to be avoided and the cell integrity to be preserved. In a processing line, an optical camera would allow control of both skin delamination and core condition (burrs and integrity). Alternatively, image control can be applied in the next step, prior to potting, when cameras are used to verify the injection stage. The time involved in the milling operation is comparable between the end-effector mounted on the robotic arm and a conventional milling machine. The main difference is that working in a process line avoids the logistic chain (stockpiling and delivering) of a separate process. This is an important part of the time and cost savings made possible by the automation solution.

Resin Injection: X-ray Radiographic Non-Destructive Inspection

The quality of the filling for an acceptance test is defined not by load tests to probe, e.g., the fixation pull-out strength, but by practical manufacturing aspects, namely verification of the characteristics of the resin used, control of the curing process, and the correct filling of the cells. Quality assessment therefore emphasises proper manufacturing procedures, with the appropriate functionality justified by independent structural analysis and tests; see Section 2. For this study, the critical check is correct cell filling, since the resin and curing are those specified by the client and manufacturer, respectively, according to the specific regulations of the aeronautic industry.

The potted spots are typically positions where fixation points for some kind of fastener will be drilled. The structural integrity of such fixing points therefore requires that there are no empty cells close to the central drilling bore, nor cracks or voids in that volume. In the Figure 11 upper-left panel, a faulty potting spot shows empty cells in the central part, as well as partially filled cells with void space around the walls. The resin must overflow to avoid such partial cell filling (Figure 11 lower-left panel). An optical camera picture can be used for this kind of check; however, visual inspection does not help to control the internal volume. In the Figure 11 upper-right panel, some voids are visible in the inner volume, as observed after cutting the panel. This kind of test can help tune the manufacturing (injection) parameters. Since the panels must be examined and qualified individually, non-destructive (ND) methods must be applied to preserve the integrity of the whole part. Different ND methods are available and capable of spotting different issues on CFRP parts for aircraft [52][53][54]. X-ray radiographic methods are a well-proven ND tool.
It is possible to obtain 2D images (projections) and 3D images (tomography) based on the radiation absorption properties of the different materials, and thus analyse CFRP parts at different levels of detail [55,56]. Indeed, these methods are proven valid for adequately controlling volumetric defects [57]. The brightness scale observed in a radiographic 2D image is related to the radio-density of the materials involved, as well as their thicknesses. In this case, it is closely related to the thickness of the material traversed by the X-ray beam, since all the materials involved (skin, honeycomb, and filling resin) are mostly carbon-based, with similar radio-densities. As the cell-wall and CFRP-skin 2D projections are very homogeneous, the brightness (grayscale) level is a direct index of the total traversed matter. Therefore, if voids are present in the cells, the amount of air changes the grayscale value of the projected image, owing to the different radio-density with respect to a fully filled cell.

To test the potting filling quality, several samples were prepared with the vacuum-assisted injection method and analysed with X-rays. To finish the potting process, the spots must be ground flush (Figure 11 lower-right panel). This is also important for homogenising the X-ray analysis, which directly depends on the panel thickness. The samples were analysed with a 160 kV X-ray tube, and the images were recorded with a flat panel of 1024 × 1024 pixels, with pixel size 1 × 1 µm². In Figure 12, the results for two different faulty potting spots are shown. They correspond to two different core thicknesses; thus, the grayscale differs for each case. However, both were calibrated with respect to the corresponding fully-filled-cell reference case. The quantitative grayscale analysis allows the deviations in brightness value to be defined. Some cells show deviations in the range from 4% to 15.4%. The deviations can affect a region of the cell (cases 1, 2, 3, 5 in the upper panel) or the overall projection of the cell (case 4). An independent test with reference samples allows a numerical limit to be set on the accepted deviation, correlated with the actual size of the void, depending on the ratio of the radio-densities of resin and air and on the void size. In the lower panel, the density deviations are marked (cases 4 and 5), as well as the presence of spots (cases 1, 2, 3) generated for the size calibration, similar to the image quality indicators of ISO 19232-1:2013 based on metal wires, which serve as size reference gauges.

The multiple tests developed in this ND program demonstrate that it is feasible to achieve a repeatable, fully successful injection procedure by adjusting the injection flow rate. For single-cell injection, the final quality depends on carefully tuning the injection flow rate and pipe extraction speed. For multiple-cell injection, it depends mostly on the bubble-free quality of the resin (see Appendix A), because the injection itself continues until overflow. As seen from the measurements, voids are rarely left in the internal cell volume if the injection speed is kept steady and adapted to the viscosity of the resin (which dictates the maximum speed of the process), and the resin mixing is bubble-free. Since the vacuum-assisted multiple-cell injection proved capable of correct cell filling and is up to 3.55 times faster, it is the automation solution selected to implement structural potting.
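The grayscale comparison described above can be sketched as a simple per-cell statistic. The array and function names below are illustrative, not the inspection software actually used, and the accept/reject threshold would come from the independent reference-sample calibration as described.

import numpy as np

def cell_fill_deviation(image: np.ndarray, cell_masks: list,
                        reference_level: float) -> list:
    """Per-cell grayscale deviation (%) from the fully-filled reference level.

    `image` is the calibrated 2D radiograph; each boolean mask selects the
    projected area of one cell. Voids leave more air in the beam path, so
    the mean grey level of a faulty cell deviates from the reference.
    """
    return [100.0 * abs(image[m].mean() - reference_level) / reference_level
            for m in cell_masks]

# Toy example: a reference level of 120 and one cell whose mean is 105
# shows a 12.5% deviation, inside the 4-15.4% band reported for faulty
# cells; the pass/fail limit comes from reference samples with known voids.
img = np.full((64, 64), 120.0)
img[20:30, 20:30] = 105.0
mask = np.zeros_like(img, dtype=bool)
mask[20:30, 20:30] = True
print(cell_fill_deviation(img, [mask], reference_level=120.0))   # [12.5]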
The success rate observed is promising, although a dedicated sampling analysis is required to define the qualifying procedure. Indeed, (almost) the only possible source of issues would be a failure in the resin supply system due to a non-steady flux, the viscosity of the resin, or bubble content.

Conclusions

The demanded increase in production rate at limited cost across the aeronautical industry requires the automation of the manufacturing processes related to large CFRP parts. In this work, we developed prototypes of tools capable of automating one complex task: the structural potting of honeycomb panels. The specific end-effectors designed and tested here allow, first, removing the CFRP skin of the panels and, then, filling the panel cells with resin to create a monolithic volume where necessary. The designed end-effectors are ready to be mounted on conventional robotic arms in a production line, as tested.

• The end-effector dedicated to CFRP skin removal proved capable of proper operation, with the rigidity of the robotic arm high enough to guarantee the milling quality. The tests performed showed no CFRP delamination defects, and the core (cell) integrity was preserved.

• The resin injection operation required considering different options to fill either a single cell or multiple cells. Specific prototypes were tested. X-ray radiographic inspections performed on the samples showed that the right filling quality can be obtained with both types of end-effectors after properly tuning the injection rate. The vacuum-assisted multi-cell injection end-effector, with a simplified design, provided the correct filling quality and reduced the operation time by a factor of more than 3.5.

The deployment of this end-effector system for potting in a processing line requires a first stage with one robotic arm to remove the CFRP skin on each panel side affected by this operation. The second stage includes a robotic arm with the injection end-effector. The paired vacuum nozzle on the opposite panel side requires a second arm; an alternative option would be to use co-bots for this position. The operator would assist the operation and, at the same time, conduct the inspection control of the correct resin dispensing cycle. The feasibility of automating the structural potting of large (thick) aeronautic honeycomb panels is proven with this innovative end-effector system.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Resin Cartridge Filling

The mixing of the bi-component resin system is necessary before injection. For the multi-cell injection option, the mixing must be performed separately so that enough resin volume can be provided to re-fill the resin chamber prior to injection. The mixing requires (i) the right component combination, (ii) the complete mixing of the components, and (iii) the elimination of any bubbles in the mixed product. A dedicated mixing bench was set up to provide the resin; see Figure A1. A chamber with two opposite inlets admits each resin component, supplied in individual cartridges, which can be adapted to any commercial size. The closed chamber is evacuated, thereby emptying the cartridges. The resin system mixes equal volumes, simplifying the operation. A simple paddle mixer, actuated by an external engine, shakes and mixes the resin. The continuous vacuum easily removes the bubbles from the resin volume. A separate outlet is used to pump the mixed resin into the end-effector nozzle.
Figure A1. Drawing of the bench dedicated to mixing the bi-component resin system. The chamber is evacuated, causing the cartridges to discharge into the chamber. The paddle mixer and continuous vacuum guarantee correct, bubble-free mixing.
Exhausting treadmill running causes dephosphorylation of sMLC2 and reduced level of myofilament MLCK2 in slow twitch rat soleus muscle

Myosin light chain 2 (MLC2) is a small protein in the myosin complex, regulating muscle contractile function by modulating Ca2+ sensitivity of myofilaments. MLC2 can be modified by phosphorylation and O-GlcNAcylation, two reversible and dynamic posttranslational modifications. The slow isoform of MLC2 (sMLC2) is dephosphorylated in soleus muscle during in situ loaded shortening contractions, which correlates with a reduction in shortening capacity. Here, we hypothesize that exhausting in vivo treadmill running induces dephosphorylation of MLC2 in slow twitch soleus, but not in fast twitch EDL muscle, and that there are reciprocal changes in MLC2 O-GlcNAcylation. At rest, both phosphorylation and O-GlcNAcylation of MLC2 were lower in slow than in fast twitch muscles. One bout of exhausting treadmill running induced dephosphorylation of sMLC2 in soleus, paralleled by reduced levels of the kinase MLCK2 associated with myofilaments, suggesting that the acute reduction in phosphorylation is mediated by dissociation of MLCK2 from myofilaments. O-GlcNAcylation of MLC2 did not change significantly, and seems of limited importance in the regulation of MLC2 phosphorylation during in vivo running. After 6 weeks of treadmill running, the dephosphorylation of sMLC2 persisted in soleus, along with a reduction in MLCK2 in both the myofilament and total protein fractions. In EDL, on the contrary, phosphorylation of MLC2 was not altered after one exercise bout or after 6 weeks of treadmill running. Thus, in contrast to fast twitch muscle, MLC2 dephosphorylation occurs in slow twitch muscle during in vivo exercise and may be linked to reduced myofilament-associated MLCK2 and reduced shortening capacity.

Introduction

Repeated muscle activity leads to a decline of muscle function known as fatigue. Fatigue typically develops during daily activities like walking or running, and involves a decline in muscle force development, shortening, and relaxation. However, the mechanisms that mediate fatigue are complex and not fully understood (for review see Allen et al. (2008)). Posttranslational modifications (PTMs) like phosphorylation and O-GlcNAcylation of myofilament proteins can alter protein function and affect fatigue development in working muscles (Fitts 2008; Cieniewski-Bernard et al. 2009). One of these myofilament proteins is the regulatory myosin light chain 2 (MLC2), which, together with the essential myosin light chain 1, wraps around the neck of the myosin heavy chain, providing mechanical support (Lowey and Trybus 2010). MLC2 can be modified by phosphorylation at Ser15 by the skeletal muscle myosin light chain kinase (MLCK2) (reviewed by Stull et al. (2011)), and phosphorylation is thought to promote the movement of the myosin head toward actin, increasing the Ca2+ sensitivity of the contractile apparatus (Persechini et al. 1985; Stull et al. 2011). We have previously reported dephosphorylation of the slow isoform of MLC2 (sMLC2) in slow twitch soleus muscle during in situ loaded shortening contractions (Munkvik et al. 2009; Hortemo et al. 2013). This was well correlated with a decline in muscle shortening (fatigue), suggesting that sMLC2 phosphorylation participates in the regulation of shortening capacity in slow twitch muscle by modulating the Ca2+ sensitivity of myofilaments.
In fast twitch skeletal muscle, tetanic isometric stimulation is associated with increased MLC2 phosphorylation and posttetanic twitch potentiation, while no such potentiation is seen in slow twitch skeletal muscle (Vandenboom et al. 2013). This suggests that MLC2 is regulated differently in fast and slow twitch muscle.

O-GlcNAcylation of skeletal muscle proteins has recently been suggested to be a regulator of skeletal muscle function (reviewed by Cieniewski-Bernard et al. (2014b)). Several contractile proteins have been described to be O-GlcNAcylated (Cieniewski-Bernard et al. 2004; Hedou et al. 2007; Ramirez-Correa et al. 2008), including MLC2, and it is believed that phosphorylation and O-GlcNAcylation could interplay (so-called phospho-GlcNAc modulation) in tuning the functional properties of MLC2. However, to our knowledge, the effects of exercise on phospho-GlcNAc modulation of MLC2 in skeletal muscle have not been investigated. Phosphorylation and O-GlcNAcylation are O-linked, reversible, and dynamic PTMs at serine and threonine residues; O-GlcNAcylation is hence different from irreversible N-linked glycosylation in the endoplasmic reticulum-Golgi (for review, see Hart et al. (2011)). The specific site for O-GlcNAcylation on skeletal muscle MLC2 has not been determined, but in rat cardiac MLC2 the O-GlcNAcylation site is the same as the phosphorylation site (Ser15) (Ramirez-Correa et al. 2008), corresponding to the phosphorylation site in rat skeletal muscle MLC2.

The enzymes responsible for phosphorylation-dephosphorylation of MLC2 in skeletal muscle are MLCK2 and myosin light chain phosphatase (MLCP), respectively (reviewed by Stull et al. (2011)). MLCP is composed of the catalytic subunit of protein phosphatase 1 beta (PP1B), the myosin phosphatase targeting protein (MYPT2), and the small subunit M20 of unknown function. Modulation of protein O-GlcNAcylation is achieved by two evolutionarily conserved enzymes, O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA). OGT and OGA antagonistically add and remove the O-linked GlcNAc at serine or threonine residues, analogous to protein phosphorylation by kinases and phosphatases. Interestingly, MLCK2, MYPT2, PP1, OGT, and OGA were recently shown to exist in a multienzymatic complex at the sarcomere (Cieniewski-Bernard et al. 2014a).

In this study, we hypothesized that the phospho-GlcNAc pattern is different in slow versus fast twitch muscle at rest, and that the effects of in vivo treadmill running on MLC2 phosphorylation differ between the two muscle types. Specifically, an important aim of our study was to determine whether there is dephosphorylation of sMLC2 in slow twitch muscle during in vivo running and whether there are reciprocal changes in MLC2 O-GlcNAcylation. Finally, we measured MLCK2, MLCP, OGT, and OGA in the muscle homogenate and in the myofilament protein subfraction, since the amounts of these enzymes might explain the degree of MLC2 phosphorylation and O-GlcNAcylation.

Ethical approval

All experiments were performed in accordance with the Norwegian Animal Welfare Act. Protocols were reviewed and approved by the Norwegian Animal Research Authority (ID 3383 and 3301) and conformed to the NIH Guide for the Care and Use of Laboratory Animals. Male Wistar and Sprague Dawley rats (Taconic, Skensved, Denmark) were housed in a controlled environment (temperature 22 ± 2°C, humidity 55 ± 5%, 12/12 h daylight/night cycle) for 1 week after arrival before inclusion in the study.
Rats were fed standard rat chow (B & K Universal, Oslo, Norway) and water ad libitum.

Treadmill running - one exercise bout

Male Wistar rats (~300 g, n = 18) were acclimatized to the treadmill for 15 min on each of the last 2 days prior to the experiment (5 min at 8 m·min⁻¹, 10 min at 12 m·min⁻¹). On the day of the experiment, rats were randomly assigned to three different groups: the run (RUN) group performed one exercise bout to fatigue on the treadmill; the recovery (REC) group performed one exercise bout and was subsequently allowed 24 h rest; and the control (CTR) group remained sedentary. The exercise was performed at 12.5° inclination with incremental running speed, starting at 8 m·min⁻¹ and increasing every second minute toward the maximum running speed of the individual rat. The exercise protocol was continued until exhaustion, defined as when the rat was unable to continue running at the maximum running speed. Rats in the RUN group were, at the end of exercise, immediately anaesthetized in a chamber with 4% isoflurane (Forene®) and sacrificed by neck dislocation; within 1 min after termination of the exercise protocol, the soleus and extensor digitorum longus (EDL) muscles were harvested, snap-frozen in liquid nitrogen, and stored at −80°C until analysis. Rats in the REC group were, at the end of exercise, allowed rest, food, and water ad libitum for 24 h before the muscles were harvested.

In situ exercise protocol

The in situ exercise protocol was performed essentially as described previously (Munkvik et al. 2009). In short, male Wistar rats (~300 g, n = 9) were anaesthetized, intubated, and placed on a respirator, and the soleus muscle was prepared in situ keeping the blood supply intact. The distal tendon of soleus was fastened to a combined force and length transducer, and the muscle was intermittently electrically stimulated to perform fatiguing shortening contractions against a preset load (1/3 of maximal tetanic force). The temperature was kept at 37°C by preheated 0.9% NaCl running over the epimysium of the muscle. At the end of the experiment, the soleus muscle from the stimulated (i.e., exercised; EX) leg was harvested and snap-frozen in liquid nitrogen within 10 s after termination of contractions, and subsequently the soleus muscle from the resting control (CTR) leg was harvested and snap-frozen within 1 min before the animals were killed by neck dislocation while still anaesthetized.

Treadmill running - six weeks

Male Sprague Dawley rats (~280 g, n = 14) were randomly assigned to perform a 6-week interval training program (RUN) on the treadmill (Columbus Instruments, Columbus, OH) or to remain sedentary (CTR). One week of acclimatization was performed with a running velocity of 6 m·min⁻¹ for 30, 45, 60, 75, 90, and 120 min, respectively, and 1 day of rest. Interval training was then performed 6 days a week at 25° inclination: 10 min warm-up (10 m·min⁻¹) followed by 12 × 8 min intervals separated by 2 min resting periods (6 m·min⁻¹). The running speed during intervals was 15 m·min⁻¹ the first week, then increased by 2 m·min⁻¹ each week. Rats were given 0.1-0.2 g chocolate (Kvikk Lunsj, Freia, Oslo, Norway) and free access to water after accomplishing each training session. Rats unable to fulfill the exercise protocol were withdrawn from the study.
After the last training session (after 6 weeks), rats were allowed to rest for 24 h before the muscles were harvested; rats were anesthetized in a chamber and subsequently mask-ventilated with 3% isoflurane and 97% O2, and within 3 min after onset of anesthesia the soleus and EDL muscles were dissected and snap-frozen in liquid nitrogen before the animals were sacrificed by cardiac excision while still anaesthetized.

Myofibrillar protein extracts were made by pulverizing muscles in a mortar with liquid nitrogen. Ice-cold 6.35 mmol·L⁻¹ EDTA solution with protease inhibitors, phosphatase inhibitors, and 40 mmol·L⁻¹ glucosamine was added, and the muscle samples were homogenized with a Polytron® 1200, stored on ice for 30 min, and centrifuged at 18,000 g for 10 min at 4°C. Pellets were washed with 50 mmol·L⁻¹ KCl containing protease inhibitors, phosphatase inhibitors, and glucosamine, and centrifuged for another 10 min. The final pellets were resuspended in 50 mmol·L⁻¹ KCl containing protease inhibitors, phosphatase inhibitors, and glucosamine, and stored at −80°C.

Immunoblotting

Protein concentrations in lysates were determined using the Micro BCA Protein Assay Kit (Pierce/Thermo Scientific, Oslo, Norway), and 20-90 µg of protein was loaded on 1.0 mm 4-15% or 15% Tris-HCl gels (Criterion, BIO-RAD, Oslo, Norway). SDS-PAGE and Western blotting were performed essentially as described in the Criterion BIO-RAD protocol, using PVDF Hybond membranes (Amersham/GE Healthcare, Oslo, Norway). Blots were blocked in either 5% nonfat dry milk or 5% BSA for 1 h at room temperature, and incubated with primary and secondary antibodies overnight at 4°C and for 1 h at room temperature, respectively. Primary antibodies included anti-O-GlcNAc CTD110.6 (MMS-248R; Covance, Oslo, Norway) and anti-MLC2 pSer15; blots were then incubated with the appropriate HRP-conjugated secondary antibodies from Southern Biotechnology (Birmingham, AL), developed using the ECL Plus Western Blotting Detection System (Amersham/GE Healthcare), and visualized in the Las-4000 mini (Fujifilm, Stockholm, Sweden). Blots were reprobed after stripping using the Restore Western Blot Stripping Buffer (21059; Thermo Scientific). Quantification of protein band intensity and processing of immunoblots were performed using ImageQuant (GE Healthcare) and Adobe Photoshop CS5.

Calculation of MLC2 phosphorylation and O-GlcNAcylation

The O-GlcNAcylation level was detected with the global O-GlcNAc antibody CTD110.6, with subsequent stripping and overlay with MLC2. The phosphorylation level was detected using a site-specific phospho-antibody recognizing MLC2 pSer15. The sequence of probing was: O-GlcNAc, stripping, MLC2 pSer15, stripping, MLC2. The efficiency of stripping and the specificity of the antibodies were confirmed in a control experiment (Fig. 1). Further, parallel control blots with the anti-O-GlcNAc antibody RL2 (MA1-072; Thermo Scientific) revealed essentially the same pattern as with the anti-O-GlcNAc antibody CTD110.6 (data not shown), but the sensitivity of RL2 in recognizing O-GlcNAc-modified proteins was somewhat inferior to CTD110.6, and we therefore used CTD110.6 in all our analyses. By using the pan-MLC2 antibody (F109.3E1), the different isoforms sMLC2 and fMLC2 could easily be distinguished by their different molecular weights (Fig. 2A, lower panel).
Staining intensity of sMLC2 and fMLC2 in EDL was calculated relative to the staining intensity of these proteins in soleus to compare the levels in fast versus slow twitch muscle, and equal protein loading was ensured by Coomassie staining (not shown).

Statistics

Data are expressed as means ± SEM relative to control, unless otherwise specified. For all tests, P < 0.05 was considered significant. Differences between two groups were tested using Student's paired or unpaired t-test. The statistical analyses were performed by means of SigmaPlot (Systat Software Inc, version 12.5, Erkrath, Germany) or Microsoft Excel 2010 (Microsoft, Oslo, Norway).

Results

Different phospho-GlcNAc pattern of MLC2 in soleus versus EDL

MLC2 isoform distribution, MLC2 phosphorylation, and MLC2 O-GlcNAcylation were measured by immunoblotting in soleus and EDL (Fig. 2A). As expected, the expression of sMLC2 was highest in soleus, while fMLC2 was most abundant in EDL (Fig. 2B). Interestingly, in resting muscle both sMLC2 and fMLC2 phosphorylation (Fig. 2C) and O-GlcNAcylation (Fig. 2D) were significantly higher in fast twitch EDL compared to slow twitch soleus. To assess the levels of enzymes regulating MLC2 phosphorylation and O-GlcNAcylation, MLCK2 (90 kDa), MYPT2 (110 kDa), PP1B (36 kDa), OGT (110 kDa), and OGA (130 kDa) were measured by immunoblotting in total protein lysate from resting soleus and EDL muscles (Fig. 2E). The level of MLCK2 was more than two times higher in EDL compared to soleus (Fig. 2F), while MYPT2 was barely detectable in EDL but abundant in soleus. Further, the expression of both OGT and OGA was significantly lower in EDL compared to soleus. In the myofilament protein fraction from soleus and EDL (Fig. 2G), all the enzymes analyzed in the total protein extract were detected, suggesting that each enzyme can be found in conjunction with the contractile apparatus. The level of MLCK2 in the myofilament fraction (i.e., myofilament MLCK2) was more than three times higher in EDL than in soleus, and MYPT2 was abundant in soleus but barely detectable in EDL (Fig. 2H), well compatible with the higher MLC2 phosphorylation in EDL compared to soleus (Fig. 2C). Further, the level of OGA on myofilaments was lower in EDL compared to soleus, which fits the higher O-GlcNAc level of MLC2 in EDL (Fig. 2D). Successful fractionation of myofilament proteins was confirmed by Sypro Ruby gel staining and immunoblotting with marker proteins of different subcellular compartments (Fig. 3).

One bout of treadmill running causes dephosphorylation of sMLC2 in soleus

The maximum running speed of the animals that performed one exhausting exercise bout (n = 10) was on average 20 ± 1 m·min⁻¹, and the time to exhaustion was 26 ± 1 min. In soleus, phosphorylation of sMLC2 was significantly decreased after one exhausting bout of treadmill running (RUN), and was restored to control values after 24 h recovery (REC) (Fig. 4A and B). O-GlcNAcylation of sMLC2 was nominally, but not significantly, increased after one exercise bout (P = 0.07), and was not different from control after 24 h recovery (Fig. 4C).

sMLC2 phosphorylation is strongly correlated with myofilament MLCK2

One short exercise bout is not expected to alter total protein expression, and accordingly the enzyme expression in the total protein lysate did not change after one exhausting exercise bout (Fig. 4D and E).
However, in the myofilament fraction, the level of MLCK2 was significantly reduced after one exercise bout (RUN), and was restored to the control level after 24 h recovery (REC) (Fig. 4F and G). There were no concomitant changes in MYPT2, PP1B, OGT, or OGA. Thus, the variation in sMLC2 phosphorylation after exercise and recovery covaried with the presence of MLCK2 associated with myofilaments, shown by a positive correlation (P < 0.0001) (Fig. 4H).

The reduction in myofilament MLCK2 is rapid

To further investigate the association between sMLC2 phosphorylation and myofilament MLCK2, we included experiments using the in situ exercise protocol. Already after 100 s of in situ exercise (EX), there was a parallel reduction in muscle shortening (Fig. 4I) and sMLC2 phosphorylation (Fig. 4J), and in accordance with the results from the in vivo treadmill exercise, the reduction in sMLC2 phosphorylation was accompanied by reduced levels of myofilament MLCK2 (Fig. 4K). Thus, reduction in myofilament MLCK2 is linked to reduced sMLC2 phosphorylation also in the in situ model, indicating a rapidly responding mechanism in shortening muscle that performs work.

Six weeks of treadmill running induces persistent dephosphorylation of sMLC2 in soleus

Rats that accomplished the 6-week interval training significantly increased their running speed during intervals from 15 m·min⁻¹ to 25 m·min⁻¹, and had lower body weight compared to sedentary controls (339 ± 9 g, n = 6 vs. 444 ± 9 g, n = 6; P < 0.05). Expression of CS, a mitochondrial enzyme catalyzing the first reaction in the citric acid cycle and a marker of muscle oxidative capacity, was increased in soleus by 63 ± 4% (n = 4 + 4; P < 0.001) in the exercise group compared to sedentary controls, indicating increased oxidative capacity of the trained muscles. Muscles were harvested 24 h after the last day of the 6-week exercise program. Hence, acute effects of exercise were not investigated in this cohort; the focus was on long-term effects of exercise training. In soleus, sMLC2 phosphorylation was significantly and persistently decreased after 6 weeks of exercise training (RUN) compared to sedentary controls (CTR) (Fig. 5A and B). The sMLC2 O-GlcNAcylation was not altered in RUN compared to CTR (Fig. 5C). The persistently reduced sMLC2 phosphorylation after 6 weeks of exercise training was paralleled by a sustained reduction of MLCK2, both in the total protein lysate (Fig. 5D and E) and in the myofilament fraction (Fig. 5F and G). OGA was reduced in the total protein lysate (Fig. 5D and E). The reduced expression of MLCK2 and OGA in the total protein lysate indicates enzyme regulation at the transcriptional level after 6 weeks of exercise training, different from after one exercise bout.

Treadmill running does not affect the phospho-GlcNAc pattern of MLC2 in fast twitch EDL

Remarkably, in contrast to soleus, no differences were found in fMLC2 phosphorylation (Fig. 6A and B) or fMLC2 O-GlcNAcylation (Fig. 6C) in EDL after one bout of treadmill running or after recovery. In accordance with this, none of the enzymes analyzed (MLCK2, MYPT2, PP1B, OGT, OGA) was altered after one exercise bout in EDL, in either the total protein lysate or the myofilament fraction (data not shown). Nor was there any significant difference in the phospho-GlcNAc pattern of MLC2 or enzyme expression in EDL after 6 weeks of exercise training (data not shown).
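As an illustration of the quantification and statistics described in the Methods, the following minimal Python sketch normalizes band intensities to the control mean, tests the group difference, and computes a correlation of the kind shown in Fig. 4H. All numbers are made-up placeholders, not the study's data, and the variable names are illustrative.

import numpy as np
from scipy import stats

# Placeholder band intensities (arbitrary densitometry units), not real data.
ctr_pmlc2 = np.array([1.00, 1.10, 0.95, 1.05, 0.90, 1.00])   # pSer15 signal, CTR
run_pmlc2 = np.array([0.55, 0.60, 0.50, 0.65, 0.45, 0.58])   # pSer15 signal, RUN
mlck2_myof = np.array([1.02, 1.12, 0.93, 1.01, 0.88, 0.99,   # myofilament MLCK2,
                       0.51, 0.63, 0.47, 0.60, 0.49, 0.55])  # CTR then RUN

# Express each sample relative to the control mean, as in the paper.
rel_run = run_pmlc2 / ctr_pmlc2.mean()

# Unpaired Student's t-test between groups (the paired two-leg in situ
# design would use stats.ttest_rel instead).
t, p = stats.ttest_ind(ctr_pmlc2, run_pmlc2)
print(f"RUN vs CTR: {rel_run.mean():.2f} of control, p = {p:.2g}")

# Pearson correlation of sMLC2 phosphorylation with myofilament MLCK2
# across all samples (cf. the positive correlation in Fig. 4H).
r, p_r = stats.pearsonr(np.concatenate([ctr_pmlc2, run_pmlc2]), mlck2_myof)
print(f"Pearson r = {r:.2f}, p = {p_r:.2g}")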
Discussion

In this study we show dephosphorylation of the regulatory protein sMLC2 in slow twitch soleus muscle after exhausting in vivo treadmill running. The phosphorylation level was strongly correlated with the level of the kinase MLCK2 associated with myofilaments, indicating a rapid mechanism for regulating contractile function. O-GlcNAcylation of MLC2 did not change significantly, and seems less important in the regulation of MLC2 phosphorylation during in vivo exercise. The pattern of MLC2 phosphorylation in slow twitch muscle is different from the pattern in fast twitch muscle, and our data support that dephosphorylation of sMLC2 in slow twitch muscle may be linked to reduced shortening capacity of the muscle.

Basal phospho-GlcNAc pattern is significantly different in soleus vs. EDL

The comparison between soleus and EDL revealed manyfold higher levels of MLC2 phosphorylation and O-GlcNAcylation in EDL compared to soleus, corresponding with profound differences in the expression of regulating enzymes. This suggests that the functional role of phosphorylation and O-GlcNAcylation may be different in the two muscle types. In EDL, the high expression of MLCK2 and the barely detectable level of MYPT2 favor a high phosphorylation level of MLC2. In soleus, on the contrary, the low levels of MLCK2 combined with abundant expression of MYPT provide a plausible explanation for the lower phosphorylation level of MLC2 in soleus. Consistent differences in enzyme activity (Moore and Stull 1984) and expression (Ryder et al. 2007) of MLCK2 and MLCP/MYPT2 between EDL and soleus have been reported previously, supporting a fiber type-specific regulation of MLC2 phosphorylation. Also with regard to O-GlcNAcylation we find differences in enzyme levels, especially lower OGA in the myofilament fraction in EDL compared to soleus, well compatible with the higher O-GlcNAcylation level of MLC2 in EDL.

The higher levels of both phosphorylation and O-GlcNAcylation of MLC2 in fast compared to slow twitch muscle are interesting in light of a recent study suggesting that no phosphorylated form of sMLC2 is at the same time O-GlcNAcylated, and vice versa (Cieniewski-Bernard et al. 2014a). If the two PTMs are mutually exclusive, our results suggest that the proportion of unmodified MLC2 is high in soleus, while on the contrary a large part of the total MLC2 in EDL is modified by either phosphorylation or O-GlcNAcylation. The phosphorylation site on skeletal muscle MLC2 is Ser15, while the specific site for O-GlcNAcylation has not been identified. However, in cardiac muscle the only known O-GlcNAcylation site is the same as the phosphorylation site (Ser15) (Ramirez-Correa et al. 2008), and it remains to be determined whether this is the case also for skeletal muscle MLC2.

Reversible dephosphorylation of sMLC2 in soleus after one bout of treadmill running

An important finding of the present study was the reduced phosphorylation of sMLC2 in slow twitch soleus muscle after one bout of exhausting in vivo treadmill running, fully reversible after 24 h rest. This finding supports the results from our previous in situ studies (Munkvik et al. 2009; Hortemo et al. 2013). In these studies, repetitive in situ loaded shortening contractions of soleus induced a reversible reduction in muscle shortening correlated with reversible dephosphorylation of sMLC2, suggesting a role of sMLC2 in regulating shortening contractions in slow twitch muscle.
Interestingly, a role of MLC2 in regulating loaded shortening contractions has also been reported in cardiac muscle (Sanbe et al. 1999; Scruggs and Solaro 2011; Toepfer et al. 2013). Most previous experiments conducted on slow twitch muscle with regard to sMLC2 phosphorylation have comprised in vitro or in situ isometric stimulation of short duration, in unfatigued muscle. We have recently shown that loaded shortening contractions (concentric contractions, i.e., work) associated with high metabolic stress (a drastic fall in muscle CrP and ATP and an increase in lactate) seem necessary to induce changes in sMLC2 phosphorylation in slow twitch muscle (Hortemo et al. 2013). On the contrary, there was little or no alteration of sMLC2 phosphorylation in slow twitch muscle when the muscle performed solely isometric contractions or when shortening was almost unloaded (Danieli-Betto et al. 2000; Hortemo et al. 2013), in both of which situations the metabolic stress is low. This implies that shortening contractions trigger dephosphorylation of MLC2 in slow twitch muscle only when the muscle performs work that causes metabolic stress, like the exhausting treadmill running performed in the present study.

We demonstrate in the present study that the variation in sMLC2 phosphorylation after one in vivo exercise bout correlates strongly with the level of myofilament-associated MLCK2 (Fig. 4H). The observation is strengthened by the additional in situ experiments, where we find parallel reductions in muscle shortening, sMLC2 phosphorylation, and myofilament MLCK2 already after 100 s (Fig. 4I-K). This reveals a rapidly responding system, and we suggest that the reduction in myofilament MLCK2 represents dissociation of the enzyme from the myofilaments, reducing the amount of available kinase in the proximity of sMLC2. MLCK2 was recently shown to exist in a multienzymatic complex at the sarcomere together with MLC2, OGT, OGA, MYPT2, and PP1 (Cieniewski-Bernard et al. 2014a), supporting our finding of MLCK2 in the myofilament protein fraction. However, the specific binding site for MLCK2 on myofilaments remains to be identified.

The regulation of MLCK2 in slow twitch skeletal muscle is also poorly understood. In fast twitch muscle, the same Ca2+ signal that initiates force development also regulates MLCK2 activity: when calmodulin is saturated with four Ca2+, the Ca2+/calmodulin complex binds to MLCK2 and the regulatory segment on MLCK2 is displaced, allowing interaction with MLC2 (Gao et al. 1995; Ryder et al. 2007; Stull et al. 2011). Exercise could elevate Ca2+/calmodulin and activate MLCK2, but this does not fit the reduced phosphorylation of MLC2 that we observed in slow twitch soleus muscle after in situ stimulation (Munkvik et al. 2009; Hortemo et al. 2013) and after in vivo exercise in the present study. In cardiac muscle, there are ambiguous results as to whether the cardiac MLCK2 is Ca2+/calmodulin-dependent or not (reviewed by Scruggs and Solaro (2011)). Our data suggest that Ca2+ activation of MLCK2 is not a major regulator of the activity-dependent phosphorylation level of sMLC2 in slow twitch muscle.

A dynamic interplay between phosphorylation and O-GlcNAcylation of MLC2 has been suggested to participate in the regulation of skeletal and cardiac muscle contractile function (Hedou et al. 2007; Ramirez-Correa et al. 2008; Cieniewski-Bernard et al. 2009, 2014a; Lunde et al. 2012). In our model of in vivo treadmill running, we did not detect significant changes in O-GlcNAcylation of MLC2.
Thus, MLC2 O-GlcNAcylation does not seem to be an important regulator of MLC2 phosphorylation during treadmill running, although there was a trend toward increased MLC2 O-GlcNAcylation after one bout of treadmill running (P = 0.07). We cannot exclude that this is a type II error, since the stoichiometry of O-GlcNAcylation is low and the immunoblot signal of O-GlcNAcylated MLC2 in soleus is weak. Hindlimb unloading was recently reported to induce a 400% increase in sMLC2 phosphorylation in soleus, but only a 50% reduction in O-GlcNAcylation (Cieniewski-Bernard et al. 2014a), suggesting that variations in MLC2 O-GlcNAcylation are smaller than variations in phosphorylation. More sensitive analysis methods and the development of site-specific anti-O-GlcNAc antibodies are warranted to detect subtle variations in protein O-GlcNAcylation.

Persistent dephosphorylation of sMLC2 in soleus after six weeks of treadmill running

In contrast to the full recovery of sMLC2 phosphorylation observed in soleus muscle 24 h after one single exercise bout on the treadmill, sMLC2 was still dephosphorylated 24 h after the last training session following 6 weeks of treadmill running. Also different from after one single exercise bout, there was a reduction in MLCK2 not only on myofilaments but also in the total protein homogenate, indicating regulation at the transcriptional level after 6 weeks of treadmill running and providing a plausible explanation for the persistent dephosphorylation of MLC2. The animals increased their running speed significantly after 6 weeks of exercise, and the increased expression of CS confirmed the training response biochemically. The persistent dephosphorylation of sMLC2 could hence be a component of an advantageous physiological adaptation to exercise. We speculate that the persistent dephosphorylation provides a beneficial restraint during long-lasting exercise, postponing the development of fatigue by limiting the initial work performed and hence energy consumption. Abbate et al. (2001) showed that the contraction economy (muscle force/muscle energetic cost) was reduced when MLC2 was phosphorylated compared to nonphosphorylated in fast twitch muscles from wild-type and MLCK2 knockout mice. This may suggest that during prolonged activity, increased phosphorylation could cause adverse metabolic changes, and that low levels of MLC2 phosphorylation in slow twitch soleus contribute to the fatigue resistance of this muscle type.

Phospho-GlcNAc pattern of fast twitch EDL is not altered by treadmill running

In fast twitch EDL muscle, in contrast to slow twitch soleus, the phospho-GlcNAc level of MLC2 was not modulated by treadmill running, and there was no change in enzyme expression. This strongly indicates that the phospho-GlcNAc regulation of MLC2 in fast twitch EDL is different from that in slow twitch soleus. The profound differences in enzyme levels between soleus and EDL (Fig. 2H) could largely explain the dissimilar modulation of MLC2 during exercise, and are likely important for the muscles' functional properties. Our results highlight the importance of exploring slow twitch muscle, not only fast twitch muscle as many investigators do, because the response to exercise and the mechanisms of fatigue appear to be fundamentally different in the two muscles. Moreover, studying loaded shortening (concentric) contractions at physiological temperature (37°C) is essential to understand the fatigue development observed during daily life activities.
Conclusion

In conclusion, we report dephosphorylation of sMLC2 in rat slow twitch muscle after exhausting in vivo treadmill running, both after a single exercise bout and after 6 weeks of training. The reduction in sMLC2 phosphorylation is strongly correlated with a reduced level of myofilament MLCK2, suggesting a novel mechanism for regulating contractile function. O-GlcNAcylation of MLC2 did not change significantly and seems of less importance in regulating MLC2 phosphorylation during treadmill running. In fast twitch EDL, the levels of phosphorylation and O-GlcNAcylation of MLC2 are higher than in slow twitch soleus at rest, but were not altered by treadmill running. Thus, in contrast to fast twitch muscle, sMLC2 dephosphorylation occurs in slow twitch muscle during in vivo exercise and may be linked to a reduced level of myofilament-associated MLCK2 and reduced shortening capacity. This provides an exciting basis for the discovery of mechanisms underlying fatigue in vivo during loaded shortening contractions like walking and running.
2016-05-04T20:20:58.661Z
2015-02-01T00:00:00.000
{ "year": 2015, "sha1": "5a9010090c25a671ea6b76fe0f2a8266e981ab51", "oa_license": "CCBY", "oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.14814/phy2.12285", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5e3e4c7ff89c47b7d925fc29a4ce4b25754ab84a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
199535032
pes2o/s2orc
v3-fos-license
Designing resilient creative communities through biomimetic service design Creative communities are grassroots, bottom-up initiatives of people who through their diffuse design capacity propose new, desirable service futures that address the problems of everyday life. These creative communities exist within a transition from modernity towards sustainment, their adversarial character embodying alternative values such as conviviality, solidarity, and openness and shifting the focus from growth to flourishing. The sociotechnical system that is a creative community creating social innovation faces constant threats due to the collapse of traditional support structures and their disruptive, adversarial character, and so, identifying strategies to increase its resilience is necessary. We turn to nature for inspiration and mentoring. Biomimicry is a framework that designs solutions inspired by biological systems. We argue that permaculture provides an interesting direction for development and research in the context of social innovation. INTRODUCTION This paper aims to support the idea that increasing the resilience of creative communities by fostering the emergence of greater diffuse capacity on a local level can act as a successful exit strategy for service design. To achieve this goal, we turn to Biomimicry, the study of biological systems, to translate their principles into sociotechnical ones. Applying these through design can provide a way to reconstitute the domains of everyday life (Kossoff, 2015) and allows us to gain insight into the tacit knowledge that is part of the type of distributed design process adopted by creative communities (Bofylatos, 2018). This kind of knowledge tends to be a blind spot for design research, as does finding ways to scale up or duplicate the innovative knowledge created in the field. In a wider methodological context this study is categorised as participatory action research. Given the ontological and methodological shifts needed to transition towards sustainability, the rejection of traditional scientific knowledge structures and the application of new methods has the capacity to lead to new ways of designing that address the structural issues of modernist thought. SOCIAL INNOVATION In the last couple of decades service design has emerged as a field with promising potential to minimise material flows and increase the overall sustainability of human activities on the planet. One of the central reasons for this is the adoption of 'service dominant' logic as opposed to the 'goods dominant' logic of traditional economies. Service dominant logic sees value as dynamic and co-created when a service provider and a client interact (Meroni and Sangiorgi, 2011), whereas goods dominant logic asserts that value is static and embodied in material products. The main shift is that of perspective; products are systems that embody 'value in exchange', whereas services are systems that produce 'value in use' (Edvardsson et al., 2005; Grönroos, 2008). These perspectives shift the focus towards the process of value creation, due to the shift from considering value as embedded in tangible goods to conceiving value as co-created amongst various economic and social actors (Vargo and Lusch, 2008; Meroni and Sangiorgi, 2011).
The transition from atoms to bits is one that is conducive to sustainability, as it reduces material flows, increases resource efficiency, and reduces consumerism. The interactivist view of value in the context of service systems points to the adoption of a relational ontology (Escobar, 2018) that better supports the emerging scientific paradigm associated with sustainability. Social innovation builds on the tools and methods of service design to increase sociality through participation, to strengthen local economies and to create enabling ecosystems for the transition towards sustainment. According to Manzini (2017), social forms are made possible, durable and, where appropriate, relocatable and scalable through participation in a social ecosystem that hopes to make itself more desirable. Creative communities that use the diffuse design capacity to co-create collaborative services sit at the centre of social innovation and transition studies. This difference between collaborative services and standard services (Cipolla and Manzini, 2009) exemplifies the dichotomy between reducing unsustainability and enabling the emergence of sustainability. Collaborative services utilise a higher level of cooperation than that of simply building the service itself. Today, social innovation and social entrepreneurship have garnered a lot of attention from professional designers and funding agencies (Telalbasic, 2015). Due to the externalities of this approach, the social enterprises and creative communities that are structured are not emergent from the bottom up but exist for external reasons. In addition, the designers associated with the project are not embedded in the community, and there is rarely special attention given to increasing the diffuse design capacity in the community. In a sense the community becomes dependent on external resources; as such, when these are removed, the community withers because it lacks the capacity to exist without the supporting apparatus. This lack of an 'exit strategy' has been identified as an issue that is becoming more and more important in service design discourse. The theoretical model presented and the supporting case study aspire to provide the initial structure of a framework for designing creative communities in a way that increases the odds in favour of a self-sustaining community creating social innovation. However, this process has several shortcomings in relation to spatial and temporal sustainability. If the expert designers engaged in the projects are removed, the diffuse design capacity has not reached a level of maturity that allows the community to continue evolving and flourishing. This field is described as designing 'exit strategies' for the design team. This project aims to present the initial thought process of such an exit strategy, developed in an action format informed by Participatory Action Research methodologies (Fassi et al., 2013). Within this context the main research proposition emerges. RP1: "We create a viable exit strategy for designers engaged in social innovation by fostering the diffuse design capacity of the creative community." We posit that by applying resilience thinking and biomimetic design methods in the context of a systemic perspective, these ecosystems of creative communities can be enabled and strengthened and can better achieve their goals. Increasing resilience by fostering the diffuse design capacity can be a viable exit strategy in any service development within a community.
RESILIENCE Resilience is defined as the capacity of a system to retain its organisational closure while absorbing external perturbations. The sociotechnical system that is a creative community creating social innovation faces constant threats due to the collapse of traditional support structures and its disruptive, adversarial character. Identifying strategies to increase the capacity of any system to resist external forces is necessary not only to ensure survival in a time of unprecedented environmental and social pressures, but also in the context of the wider transition towards sustainment and the necessary reconstitution of the domains of everyday life. "Three aspects can help us to achieve resilience: Persistence to withstand shocks or unexpected events, transformability, to move from crisis to innovation, adaptability, or able to understand change" (Rockstrom, 2009). Meadows (2008) explains that once we see the relationship between structure and behaviour, we can begin to understand how systems work and how to shift them into better behaviour patterns. Systems thinking, she adds, can help us to manage, adapt and see the wide range of choices we have before us, to identify root causes of problems and to see new opportunities. Systems thinking, then, is a matter of recognising behavioural patterns, and learning to use them along with design can result in resilient strategies that forecast the effects of a design. Another tool is the notion of the Panarchy, which was developed by Gunderson and Holling. This tool attempts to understand the source and role of the changes that transform and take place in adaptive systems (Gunderson and Holling, 2001). Based on the study of ecosystems, the researchers describe how nature proceeds through recurring cycles that contain four basic phases: 1) rapid growth (r); 2) conservation (K); 3) release (omega); and 4) reorganisation (alpha); a minimal code sketch of this cycle follows at the end of this section. In panarchy, adaptive cycles take place along different scales (global and local) of time and space (gradual and episodic, rapidly and slowly unfolding). Panarchy is explained as the antithesis of hierarchy. In its original meaning it is defined as a set of sacred rules or as a framework of nature's rules. This term is now widely used to visualise systems theory and complexity. The theory of panarchy "rationalizes the interplay between change and persistence, between the predictable and unpredictable and how panarchies represent structures that sustain experiments, test the results, and allow adaptive evolution" (Resilience Alliance, 2015). The tools above represent a contemporary notion of resilience thinking, which looks at the rhythms of creating, conserving, revolting and finally declining within a continuous cycle. Although it requires deeper study, the idea offers a principle that designers can incorporate into their philosophy of making ecological and social systems (Ruano, 2016). The types of resilience of social systems might be different, but the strategies for increasing resilience remain the same. Creative communities go through similar cycles of panarchy. In addition, the interactivist and relational character of service design further supports a distributed system with multiple redundancies, a high degree of interconnectedness, and diversity, similarly to biological systems with high resilience. This further informs our theoretical understanding and leads to RP2: "The increase of the diffuse design capacity of a creative community leads to increased resilience of the systems."
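For readers who think in code, the adaptive cycle described above can be read as a simple cyclic state machine. The sketch below is purely illustrative and is not part of panarchy theory: the phase names follow Gunderson and Holling, while the function name and the idea of counting transitions are our own assumptions.

```python
# Illustrative sketch: the panarchy adaptive cycle as a cyclic state machine.
# Phase names follow Gunderson and Holling; everything else is assumed.
from itertools import cycle

ADAPTIVE_CYCLE = [
    "rapid growth (r)",
    "conservation (K)",
    "release (omega)",
    "reorganisation (alpha)",
]

def run_cycle(n_transitions: int) -> list:
    """Return the sequence of phases a system passes through."""
    phases = cycle(ADAPTIVE_CYCLE)
    return [next(phases) for _ in range(n_transitions)]

# A system observed over six transitions simply keeps moving between phases:
print(run_cycle(6))
```

The only point the sketch makes is that the cycle has no terminal state, which is the property the conclusions of this paper lean on when arguing that the design process has no beginning or end, only adaptive cycles.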
One approach to creating such nested, resilient systems within a Systemic Design ethos is the adaptation of permaculture in the context of social systems. In the next section we discuss the notion of Permaculture within a systemic view. PERMACULTURE We argue that permaculture, an agroecological systemic design tradition (Cassel, 2015), provides an interesting direction for development and research in the context of social innovation. In contrast to monoculture, where only one type of value is the goal of the system, permaculture provides a systemic view that is focused on fostering virtuous cycles and cooperation between different symbiotic systems. Permaculture was introduced by Bill Mollison, a biogeographer, together with his student David Holmgren, as a response to the 1972 book The Limits to Growth. The permaculture concept is not simply an environmental protection measure or an organic farming principle, but rather a form of ecosystem design. As Mollison writes: "This book is about designing sustainable human settlements and preserving and extending natural systems. It covers aspects of designing and maintaining a cultivated ecology in any climate" (1978). The 12 permaculture design principles are based on creating and developing patterns: 1) observe and interact; 2) catch and store energy; 3) obtain a yield; 4) apply self-regulation and accept feedback; 5) use and value renewable resources and services; 6) produce no waste; 7) design from patterns to details; 8) integrate rather than segregate; 9) use small and slow solutions; 10) use and value diversity; 11) use edges and value the marginal; 12) creatively use and respond to change. All processes within a system are regulated by these principles, which Mollison refers to as "axioms". There is overlap between these axioms and the six principles of biomimicry. Looking at creative communities as an interconnected ecosystem rather than as discrete systems provides a different avenue for increasing their resilience and capability for flourishing by creating positive feedback within a wider ecosystem of bottom-up initiatives on both a local and a global level. All of the basic principles of permaculture are part of the design for social innovation. In order to draw inspiration, the prairie was selected as the basis for designing an ecosystem of creative communities entangled in virtuous circles, with the aim of increasing the resilience of a bottom-up organisation while increasing the overall flourishing in an insular environment. The example of prairies shows that multiple, different 'crops' are required for resilience to succeed. This metaphor leads us to the idea that a social-ecological system requires many and diverse creative communities interacting with each other and applying the principles of community resilience. Due to the high level of complexity of polyculture systems in general and the prairies specifically, they are diverse by their nature and they interact by transferring knowledge and leaving constant feedback, thus ensuring the maximisation of community resilience through the constant creation of new varieties and the emergence of redundancies. The prairie metaphor suggests a polycentric model (Benyus, 1995) as well. Classic studies on the sustainable governance of social-ecological systems highlight the importance of so-called 'nested institutions'.
These are institutions connected through a set of rules that interact across hierarchies and structures so that problems can be addressed swiftly by the right people at the right time. Nested institutions enable the creation of social engagement rules and collective action that can 'fit' the problem they are meant to address. In contrast to more monocentric strategies, polycentric governance is considered to enhance the resilience of ecosystem services in six ways, which coincide elegantly with other principles aiming to increase the resilience of local creative communities and ecosystems: it provides opportunities for learning and experimentation; it enables broader levels of participation; it improves connectivity; it creates modularity; it improves the potential for response diversity; and it builds redundancy that can minimise and correct errors in governance. Another reason why polycentric governance is better suited to the governance of social-ecological systems and ecosystem services is that it gives traditional and local knowledge a much better chance of being considered. This, in turn, improves the sharing of knowledge and learning across cultures and scales. This acts in tandem with the Systemic Design approach that permeates this work, as it provides the principles for dealing with the complexity of a sociotechnical system (Jones, 2014; Ryan, 2014) and the production of knowledge within a design project (Sevaldson, 2010). This understanding of how polyculture creates more resilient systems leads to RP3: "Permaculture can be utilised to increase systemic resilience when designing social innovation with creative communities." This is particularly evident in local and regional water governance, as in watershed management groups in South Africa or the management of large-scale irrigation systems in the Philippines, where polycentric approaches have facilitated participation by a broad range of actors and the incorporation of local, traditional and scientific knowledge (Simonsen et al., 2014). In order to further incorporate nature's teachings in the design process, we turn to Biomimicry, a trans-disciplinary approach to problem solving which has emerged through the integration of design with other disciplines, such as biology and engineering, and which attempts to translate biological mechanisms into components of socio-technical systems. Biomimicry offers the tools for identifying patterns and mechanisms in the natural world that have evolved to increase the resilience of a system. Translating and adapting these solutions to a social project is the next part of the process. BIOMIMICRY In order to create the necessary strategies, we turn to nature for inspiration and mentoring. Biomimicry is a framework that designs solutions inspired by biological systems. It opens up possibilities of seeing the way nature works, teaches, and informs arts and sciences (Ruano, 2016). It encourages deeper studies in order to arrive at technologies and strategies that may be achieved through interdisciplinary dialogues. Ecosystems display differing degrees of resilience. Understanding the strategies developed by nature to increase the resilience of ecosystems is a first step. Identifying and reframing these solutions can foster the resilience necessary for creative communities to flourish.
The emerging field of biomimetic design of services can support the evolution of service design methods (Ivanova, 2014) in the context of social innovation and shift the underlying assumptions behind the decisions made. Biomimicry has proven a robust methodology for the development of solutions in the fields of material engineering and product design; applying lessons from nature is a frontier for service design and the creation of resilient organisations. We can explore the relationship between ecology and social innovation through the lens of a biomimetic idea generation tool for service design as proposed by Ivanova. This process takes into consideration the ecology metaphor, the fact that service design and ecology share the same level of organisation and, lastly, the relation contained in their definitions: both terms study interactions of organisms with their environment, in ecology with the natural inhabitants, and in service design with resources, people, organisations, nature and technology (ibid., 2014). Biomimicry is a tool that can help us find options and can sometimes force the researcher to find answers (Benyus, 1997). Use of a natural pattern does not guarantee that the biomimetic artifact or system will work; for this reason, a prototype (digital or physical mock-up) is required. As the prototype is developed, it will acquire features that can be evaluated and modified, if necessary. "How does nature do it...?" is a key question to ask in the process of implementing biomimetic thinking in design. It suggests new ways of inquiry when designing infrastructure, messages or artefacts using keywords related to natural forms, functions, processes and systems found in nature. The difficulty occurs when the learner must structure this information or validate its accuracy (Ruano, 2016). This action format is highly compatible with the massively co-designed approach used in social innovation. The service itself is in a continuous iterative redesign process of evolving and growing. Ivanova proposes "a conceptual proposition of what biomimetic service design might 'look like'", a tool inspired by the TRIZ methodology and the Lotus Blossom tool of Namahn and Design Flanders. It follows these steps: definition of the design challenge and definition of eight design requirements; abstraction of a design principle which needs to define each design requirement in more general terms; searching for a biological analogue to each abstraction; and extraction of the principles behind each biological example (a minimal sketch of this workflow as a data structure follows below). So the last research proposition is shaped as RP4: "Biomimicry can be used to inform the design of creative communities that aim to be more resilient." In these sections a collection of design-related approaches has been presented as elements of an approach that can enable the more robust design of relational services by creative communities. The main conclusion in our view lies in the increased adaptability present in polyculture systems due to their virtuous cycles and the contingencies that can be applied in bottom-up social systems such as creative communities. The four research propositions put forward outline an emerging systemic design approach informed by biology and service design that is spatially aware and responsible and has the capacity to transform into autopoietic systems (Battistoni and Barbero, 2018).
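The following is a minimal sketch, under our own naming assumptions, of the four steps of Ivanova's tool rendered as a workshop data structure in Python. It only organises the artefacts each step produces; it is not an implementation of the tool itself, and the example content is invented.

```python
# Minimal sketch of the four steps of Ivanova's biomimetic service design
# tool as a data structure. All names and the example content are assumed.
from dataclasses import dataclass, field

@dataclass
class DesignRequirement:
    statement: str                 # step 1: one of the eight design requirements
    abstraction: str = ""          # step 2: the requirement in more general terms
    biological_analogue: str = ""  # step 3: a natural system matching the abstraction
    extracted_principle: str = ""  # step 4: the principle behind the biological example

@dataclass
class BiomimeticChallenge:
    challenge: str                                    # step 1: the design challenge
    requirements: list = field(default_factory=list)  # up to eight requirements

challenge = BiomimeticChallenge("Increase the resilience of a creative community")
requirement = DesignRequirement("Survive the loss of any single member group")
requirement.abstraction = "Tolerate the loss of individual components"
requirement.biological_analogue = "Functional redundancy in prairie polycultures"
requirement.extracted_principle = "Overlapping functions spread across diverse components"
challenge.requirements.append(requirement)
```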
In addition, the different levels of everyday life proposed by Kossoff (2015) point to another interesting notion: creative communities exist with a focused and specialised goal at their centre; however, they are connected by the adoption of a specific subset of values such as conviviality, participatory decision making, and others (Bofylatos and Telalbasic, 2018). By looking at the creative communities of a place as an ecosystem and focusing on their interdependencies and virtuous cycles, a polyculture of social innovation can emerge in a specific space. Increasing the diffuse design capacity within this given territory can be a valid exit strategy for professional designers. THE 'APANO MERIA' SOCIAL COOPERATIVE AS A RESILIENT SERVICE POLYCULTURE In order to elaborate on the recognised strategies, the 'Apano Meria' social enterprise will be analysed with respect to the relationships between different focus groups and how these can increase the overall resilience of the system. The object of this case study is a collection of different creative communities with various interests but connected by a common theme: enabling the flourishing of the island of Syros, Cyclades. The Cyclades is a collection of islands in the middle of the Aegean Sea in Greece. They were one of the first places in Europe to be settled in early Prehistory. The insular character creates a unique context on each island, with human activities, microclimate, flora, fauna and traditions varying greatly from island to island. In order to achieve this goal three main themes have been adopted: the environment, culture and people. Each of these themes is made up of different special interest groups that are interconnected both within the theme and within the wider scope of the community. What brings everything together is conservation, the need to keep the essence of place safe from the homogenising unsustainability of modernity. The breadth of the whole enterprise is visualised in the system map below. The 'Apano Meria' social enterprise is a bottom-up initiative of the people of Syros who are interested in working to preserve the essence of place on the island. This creative community aims to act as a hub for existing or emerging active citizens' groups focused on different issues but brought together by an understanding of the need to preserve the character of the island for future generations. A place is made of tangible and intangible parts interconnected in an unending dance. The vernacular crafts and farming methods are deeply embedded in the local context and climate and are an integral part of this territory. In addition to conservation, waste management is a modern problem that needs to be translated into a local context to be addressed on a local level. Finally, the unique geological features of Syros create the setting for a specific, self-aware visitor who requires an authentic experience when visiting a place. In order to co-create the meta-community that 'Apano Meria' strives to become, we adopted the Participatory Action Research framework for Social Innovation developed at Politecnico di Milano (Fassi et al., 2013). In order to build on the idea of biomimicry, we used Ivanova's biomimetic idea generation tool. A different issue is associated with the conservation of traditional farming methods and varieties. This touches on environmental aspects (conservation of native species) as well as heritage management and environmental stewardship.
The Cyclades region is very arid, and as such, vernacular methods for cultivation with minimal water use were developed. In order to aid local farmers in transitioning towards such labour-intensive methods, a Passive Humidity Condenser was developed in collaboration with the local university. These types of interconnected relations increase the resilience of the systems and point to the leverage points for further increasing the diffuse design capacity. All of these teams are in an open dialogue amongst themselves, and the goal is to foster the evolution of the diffuse design capacity in a way that creates design redundancies throughout the system. Understanding the flows of information, juxtaposing people in different roles, as well as increasing the overall diffuse design capacity of the participants in the social enterprise, forms the first step in creating a resilient organisation. Identifying relevant biological models that create virtuous cycles and translating these into design strategies will increase variety, resilience and the contingencies relating different people and communities. Functional redundancy, or the presence of multiple components that can perform the same function, can provide insurance within a system by allowing some components to compensate for the loss or failure of others. Redundancy is even more valuable if the components providing it also react differently to change and disturbance. This response diversity (differences in the size or scale of the components performing a particular function) gives them different strengths and weaknesses, so that a particular disturbance is unlikely to present the same risk to all components at once. Within a governance system, a variety of organisational forms such as government departments, NGOs and community groups can overlap in function and provide a diversity of responses, because organisations with different sizes, cultures, funding mechanisms and internal structures are likely to respond differently to economic and political changes. Diverse groups of actors with different roles are critical to the resilience of social-ecological systems, as they provide overlapping functions with different strengths. In a well-connected community, where functions overlap and redundancy is present, creativity and adaptability can flourish. In the next section the five lessons from the biological models analysed are presented. Permaculture was central in the selection of biological metaphors, but it was not the only factor. Social insects and hermit crabs were also used as inspiration for the extraction of design principles for the project. The first of these lessons is mutualism, in which different organisms benefit one another through social-ecological interactions. However, in a community of creative entities, the metaphor of mutualism can be related to people's exchange of knowledge and services which, at the same time, 'feed' the whole community with trust and multiple options for responding to change and dealing with uncertainty, thus helping to increase self-reliance. All these 'nutrients' contribute to the resilience of the creative community. Self-reliance requires connectivity. Connectivity refers to the structure and strength with which resources, species or actors disperse, migrate or interact across patches, habitats or social domains in a social-ecological system. Consider, for example, the epiphytic plants connected in bromeliads: the bromeliad is the system; the epiphytes are parts of the system. How they are linked together determines how easy it is for an organism to move from one module to another.
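A minimal sketch of this reading of connectivity and redundancy, under our own naming assumptions, treats a community as an undirected graph and crudely proxies resilience by whether the network stays connected when any single node is removed. The group names in the example are invented for illustration.

```python
# Illustrative sketch: connectivity and functional redundancy as graph
# properties. Names and data are assumptions for illustration only.
from collections import deque

def is_connected(nodes, edges):
    """Breadth-first search over an undirected adjacency map."""
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for neighbour in edges.get(queue.popleft(), set()) & nodes:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen == nodes

def survives_any_single_failure(nodes, edges):
    """True if removing any one node leaves the rest connected."""
    return all(is_connected(nodes - {n}, edges) for n in nodes)

# A ring of four groups has a redundant path around any single failure;
# a star with one indispensable hub does not.
ring = {"farmers": {"crafts", "waste"}, "crafts": {"farmers", "tourism"},
        "tourism": {"crafts", "waste"}, "waste": {"tourism", "farmers"}}
print(survives_any_single_failure(set(ring), ring))   # True

star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
print(survives_any_single_failure(set(star), star))   # False
```

In these terms, the design goal of "redundancies throughout the system" corresponds to keeping the community graph connected without any single indispensable hub.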
In every system, connectivity refers to the nature and strength of the interactions between the various components. From a social network perspective, people are individual actors within a system embedded in a web of connections. Connectivity can influence the resilience of ecosystem services in a range of ways. It may safeguard ecosystem services against a disturbance, either by facilitating recovery or by preventing a disturbance from spreading. The effect on recovery is demonstrated in riparian habitats. Closely situated plant communities with no physical barriers enhance the recolonisation of species that may have been lost after disturbances such as floods. The basic mechanism is connection to areas that serve as refuges, which can accelerate the restoration of disturbed areas, thus ensuring the maintenance of functions needed to sustain the habitat and their associated ecosystem services. Perhaps the most positive effect of epiphytic connectivity is that it can contribute to the maintenance of biodiversity. This is because among well-connected habitat modules local species extinctions may be compensated by the inflow of species from surrounding areas. Local resources form the fourth principle of how nature maintains its high levels of resilience. Looking at desert ecosystems, a process of 'local facilitation' among plants enables the usage of local resources, which enables the whole ecosystem to exist. As Manzini posits, "Within the next few years, we will have to learn to live (and to live better, in the case of most [...]". Local resources and diversity work efficiently through localities which consist of networked people working together with high levels of self-reliance. The fifth and last principle which arose during the creation of this idea generation tool is the necessity of feedback between the actors. In the study of social insects, we understand that most individuals would be trapped and probably die without the feedback each one leaves through its pheromones. In discussing resilience we refer to fast and slow variables, whereby drivers (external to the system, or from higher scales) cause change in 'slow' (controlling) variables; as slow variables approach threshold levels, the fast-moving variables in the system fluctuate more in response to environmental and other shocks; and these shocks or directional change in the drivers can push the system across a threshold into an alternative stability regime. Therefore, feedback plays an essential role in complex systems. CONCLUSIONS The possibility of an exit during the post-design process in services is probably the greatest concern of any service designer today. The example of panarchy shows an unending process of creating and maintaining adaptive capability. However, what could happen if this diffuse design capacity of each creative entity could be translated into expertise through the biomimetic model for resilience? Unfortunately, applying the principles of biomimetic creative communities is a process with a timeframe that has not yet allowed it to yield concrete data. The very design process adopted in the context of the 'Apano Meria' social enterprise is similar to panarchy, and as such no beginning or end exists, only adaptive cycles between the different phases. However, combining the early research findings extracted over the last two years, we can assume that the model of biomimetic creative communities creates an optimistic scope for further research.
Addressing the community one domain higher provides the opportunity to maximise the diffuse design capacity of the members. The main research proposition is that creating contingent design capacity in a specific territory through the evolution of diffuse design capacity can increase the resilience of a social innovation ecosystem. This means that the designers embedded in this process need to understand and move between the different organisational layers of the local society. Our mapping of the creative communities of Syros showed that these bottom-up initiatives are amorphous and change depending on trends, mitigations, emergencies, etc. One important takeaway from this process has been that designing for the internal gratification of the goals of a creative community can be problematic. If the environment in which this creative community exists does not inform the design process, a high volume of wealth and connections is lost. It is those connections that increase the resilience of the system. Resilience and permaculture also mark an interesting continuation of the drift of design from the personal to the collective. Co-design, participatory design, and open design are but milestones in the democratisation of design. This shift towards the emergence of a collective diffuse design capacity informed by the local context and global movements in the vein of 'cosmopolitan localism' has the capacity to act as a tool for conviviality which creates value offerings capable of accelerating the transition towards sustainable ways of life tailor-made for every geographical territory. In the context of degrowth such changes are necessary to avoid the collapse of the socio-natural ecosystem. Although not evidently biomimetic, creative communities exhibit biomimetic elements. During the process of creating the bio-inspired idea generation tool we kept confirming that all the characteristics of a social-ecological system that resilience thinking considers essential to a creative community exist in nature. We explored case studies from nature which demonstrate that their resilience is based on these characteristics. Two main takeaways in relation to biomimicry have to do with the epistemology of design and with ethics. Firstly, the holistic view associated with approaches that bring nature to the forefront of design exists within an alternative scientific paradigm; the scientific operationalism of modernity is incompatible with the variety of sociotechnical and natural systems. New approaches are needed to navigate these muddy waters and to deal with complexity. The second conclusion on the application of biomimicry in the context of service design is associated with ethics. Naturalistic ethics apply to living systems, but humans have, for better or worse, gone beyond this 'might is right' ethical worldview. Using natural systems with questionable ethical systems as inspiration provides a very interesting field of open dialogue and introspection on values and worldviews. One final conclusion is associated with research through design and social innovation. When undertaking action research or some other embedded epistemic approach, the perspective can be skewed, as the biases of the practitioner coexist with those of the researcher. In addition, this embeddedness in the collective creates an imbalance in decision making.
The direction in which a collective wants to move can be uninteresting to research and vice versa. Balancing the different perspectives and allowing the movement of a creative community is associated both with the idea of an exit strategy and with the necessary increase in the requisite variety of the system in order to increase its resilience.
2019-08-11T17:22:38.772Z
2020-10-29T00:00:00.000
{ "year": 2020, "sha1": "eae4e7cc1824fcad69d76a6b238e9b2f06c65468", "oa_license": "CCBY", "oa_url": "http://revistas.unisinos.br/index.php/sdrj/article/download/sdrj.2020.132.09/60748084", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "04619ff9ba787cff69f45d521b9606edca429bad", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Sociology" ] }
248333273
pes2o/s2orc
v3-fos-license
Genetic risk factors in melanoma etiopathogenesis and the role of genetic counseling: A concise review Melanoma is a highly aggressive cancer originating from melanocytes. Its etiopathogenesis is strongly related to genetic, epigenetic, and environmental factors. Melanomas encountered in clinical practice are predominantly sporadic, whereas hereditary melanomas account for approximately 10% of the cases. Hereditary melanomas mainly develop due to mutations in the cyclin-dependent kinase 2A (CDKN2A) gene, which encodes two tumor suppressor proteins involved in the cell cycle regulation. CDKN2A, along with CDK4, TERT, and POT1 genes, are high-risk genes for melanoma. Among the genes that carry a moderate risk are MC1R and MITF, whose protein products are involved in melanin synthesis. The environment also contributes to the development of melanoma. Patients at risk of melanoma should be offered genetic counseling to discuss genetic testing options and the importance of skin UV protection, avoidance of sun exposure, and regular preventive dermatological examinations. Although cancer screening cannot prevent the development of the disease, it allows for early diagnosis when the survival rate is the highest. INTRODUCTION Cutaneous melanoma is a malignant skin tumor that develops from melanocytes that produce melanin. Hippocrates first described melanoma in the 5th century B.C. as a black tumor (Greek, melas = black, oma = tumor); preserved medical texts from the late 16th century also mention incurable black tumors [1]. There are four main histological subtypes of melanomas: Superficial spreading melanoma (70%), nodular melanoma (15-30%), lentigo maligna melanoma (4-10%), and acral lentiginous melanoma (<5%) [2]. In addition to the skin, melanomas may also develop in the eye, upper respiratory, gastrointestinal, and genitourinary systems. Although it accounts for only 5% of all skin cancers, it has the highest mortality rate if not diagnosed early. Its incidence increases annually by 3-7%, and the number of newly diagnosed patients doubles every 10 years, making melanoma the most rapidly increasing cancer diagnosis in the white population [3]. The occurrence of melanoma highly depends on the geographic area, that is, its incidence is the highest in countries with the greatest number of sunny days, such as New Zealand and Australia [4,5]. Therefore, these countries have intensified the primary prevention measures, including education about melanoma and raising awareness about the risk of overexposure to the sun, which has helped reduce the incidence rate [6]. Melanoma risk factors may be classified into three groups: genetic, epigenetic, and environmental [7]. Genetic factors include family history, Fitzpatrick skin Types 1 or 2 (pale skin that easily burns and never tans, and red hair), and defects in DNA repair mechanisms [8]. These risk factors are the main topic of this article, especially genes associated with high or moderate risk of melanoma, hereditary syndromes, and the current genetic counseling approach in at-risk populations. GENES THAT INCREASE THE RISK OF MELANOMA Most malignant tumors in the human body have multifactorial causes, that is, they result from the complex interactions between genes and environment, or in other words, the interplay between genetics and epigenetics [9]. Such tumors are sporadic [10]. They arise from the cells that have accumulated mutations throughout life, eventually leading to their malignant transformation.
A small fraction (approximately 10%) of all malignant tumors is hereditary. Unlike sporadic tumors, hereditary tumors occur in persons born with a mutated gene [11]. This phenomenon is called germline mutation or malignant variation. It is either inherited from one parent or occurs during gametogenesis, and consequently, a mutated gene is present in every cell of the body [12,13]. However, not everyone who inherits such a mutation will develop melanoma because this also depends on gene penetrance, expressed as a proportion of mutation carriers who develop a disease. For example, if gene penetrance is 100%, all carriers of gene mutation develop the disease; if gene penetrance is 50%, then 50% of mutation carriers develop the disease [14]. Whether or not a gene will have a phenotypical expression depends on other factors that increase or decrease the risk. In the case of melanoma, the other factors include the number of moles and sun exposure [15]. Cyclin-dependent kinase 2A (CDKN2A) gene It is estimated that ~10% of all melanoma cases diagnosed in 2002 were hereditary, with 40-60% of them occurring due to the mutation of the gene coding for CDKN2A [16,17]. William Norris first observed the potential heredity of melanoma in 1820. However, his observation went unnoticed until 1968, when Lynch and Krush first reported on the relationship between pancreatic cancer, multiple moles, and melanoma. Ten years later, Clark described dysplastic nevi in several members of one family and called it the "B-K mole syndrome" [18]. Henry T. Lynch suggested "familial atypical multiple mole melanoma (FAMMM)" instead of "B-K mole syndrome." The first mutation of the CDKN2A gene in FAMMM was reported in 1992 [19,20]. FAMMM is inherited as an autosomal dominant trait and is characterized by multiple melanocytic moles (>50 nevi) and positive family history. It is associated with germline mutations of CDKN2A. Some mutation carriers may be prone to pancreatic cancer or other malignancies [17]. The CDKN2A gene is located at the short arm of chromosome 9 at the 9p21.3 locus. The locus encodes two proteins interacting with two tumor suppressors: Retinoblastoma protein (Rb) and p53 protein (cellular tumor antigen p53 or tumor suppressor p53) [21]. The gene contains two promoters. When activated, each promoter leads to a different primary transcript, either alpha (α) or beta (β). Each transcript contains a specific exon 1, 1α and 1β, respectively, whereas they share the exons 2 and 3. The promoter leading to the β transcript is located upstream of the promoter leading to the α transcript. Exon 1 variants being spliced to the shared exon 2 cause the formation of an open reading frame. Thus, exon 2 is read differently due to different starting points in the two transcripts, and the process of translation results in two utterly different proteins. The protein encoded by the α transcript consists of 156 amino acids and is called p16INK4a, while the translation of the β transcript results in the protein called p14ARF, which contains 132 amino acids (Figure 1, Table 1) [22]. Although different, they both influence the progression of the cell cycle. The most critical control point in the mammalian cell cycle is the G1 phase because it precedes the DNA replication in the S-phase. Thus, the replication of damaged DNA has to be prevented to avoid mutations [23]. The proteins involved in cell cycle regulation belong to two main groups: those that stimulate the cell cycle and those that stop it. The progression of the cell cycle is helped significantly by a group of kinases called CDK, which exert their function by binding to another protein, cyclin. After the CDK-cyclin heterodimer is formed, the kinase may phosphorylate the target proteins and stimulate the cell cycle [24]. The proteins that stop the cell cycle are called antiproliferative proteins; they are the products of tumor suppressor gene activity. Two well-known tumor suppressor genes are RB1 and TP53 [25]. The Rb protein's function is to halt the cell cycle in the G1 phase, which is accomplished by binding the Rb protein to the E2F transcription factor that stimulates the transcription of many genes responsible for the DNA replication process. In its active state, Rb is unphosphorylated or hypophosphorylated, binds to E2F and stops the cell progression at the restriction point (R-point) in the G1 phase. However, when it is phosphorylated by CDK bound to cyclin, the Rb protein conformation is altered, leading to a release of bound E2F, which triggers the transcription of many genes whose protein products stimulate DNA replication [26]. The p53 protein is also known as "the guardian angel" of the human genome because its expression is increased in cells that suffer DNA damage. It acts as a transcription factor that stimulates p21 gene transcription [27]. The p21 gene encodes the protein that binds to the CDK-cyclin complex, thus preventing it from phosphorylating the Rb protein and halting the cell cycle. Thus, the damaged DNA should not replicate and create a mutation. At that moment, the cell should wait for repair before it continues the cycle. Moreover, if the repair does not occur, p21 may stimulate the apoptosis of the cell and prevent the occurrence of mutation [28]. p53 in the cell is bound to another protein called mouse double minute 2 homolog (MDM2), which targets it for degradation; p53 becomes active only after being released from the complex. The protein products of the CDKN2A gene exert their activity at the checkpoint of the cell progression from the G1 to S phase. p14ARF inhibits the MDM2 protein and its ubiquitin ligase activity, releasing p53 and making it free to stop the cell cycle through p21 (Figure 1) [29,30]. On the other hand, p16INK4a inhibits the cyclin D-CDK4/6 complex, preventing it from phosphorylating the Rb protein, which then remains active and does not allow E2F protein to transcribe the genes needed for the cell to enter the S-phase, consequently keeping the cell in the G1 phase and not allowing the progression of the cell cycle toward DNA replication (Figure 1) [29,31]. Thus, the two protein products of the CDKN2A gene bring the cell cycle to a halt in the same G1 phase by acting through two different mechanisms. The CDKN2A gene mutations have different effects on the synthesis of p16INK4a and p14ARF proteins, as these are formed by transcription resulting from two different reading frames. There are four types of mutations: deletions, insertions, duplications, and substitutions [32]. According to their effects on protein synthesis, they may also be divided into missense, nonsense, and frameshift mutations. The mutation affecting p16INK4a is most frequently located at exon 1α, which corresponds to intron 1, which also harbors ~1/3 of the mutations regarding p14ARF. These mutations can result in an incompletely synthesized protein because the intron mutations may cause the incorrect processing of the primary transcript. Sometimes, the protein is not synthesized because the primary transcript cannot reach the cytoplasm through nuclear pores [33]. Notably, these mutations may increase the efficacy of immune checkpoint inhibitors, such as ipilimumab (anti-CTLA-4 monoclonal antibody) and the anti-PD-1 (programmed cell death-1) antibodies pembrolizumab and nivolumab, possibly due to increased mutation load in CDKN2A mutated tumors [34]. CDK4 CDK4 is a serine/threonine kinase responsible for the progression of the cell cycle from the G1 to S phase [35]. It exerts its intracellular function only after binding to cyclin D and phosphorylating the retinoblastoma protein at a single point [36]. The result of such monophosphorylation is the release of transcription factor E2F, which triggers the transcription of the cyclin E gene CCNE1 and its binding to CDK2. The new cyclin E-CDK2 complex additionally hyperphosphorylates the Rb protein at other serine and threonine phosphorylation sites and facilitates the progression of the cell cycle (Table 1) [37,38]. Based on the GenoMEL centers' study that involved 2137 cutaneous melanoma patients originating from 466 families with at least 3 cutaneous melanoma cases per family, the frequency of the CDK4 mutations is 2-3% [16]. The CDK4 gene is located at the long arm of chromosome 12 (12q14). It consists of 8 exons and is mutated in about 4% of melanoma cases [39]. The missense mutation at codon 24 of the second exon triggers the change in the activity of the protein product of this gene from a protooncogene to a dominant oncogene. This change results from histidine (R24H) or cysteine (R24C) being incorporated instead of arginine in codon 24, thus preventing p16 from binding to the CDK4 protein and regulating its activity [40]. The median age of melanoma diagnosis in families with this mutation is 39 years, with an estimated lifetime penetrance of 74% [41]. Sporadic missense and silent mutations of this gene have also been reported in other cancers, such as endometrial cancer. Its expression is altered in ~2% of all cancers, including lung adenocarcinoma, liposarcoma, and glioblastoma. It is also the reason why this mutated form of the CDK4 gene is a well-chosen target for innovative drugs, such as palbociclib, ribociclib, and abemaciclib (CDK4/6 inhibitors) [42]. Notably, palbociclib has been approved to treat estrogen-positive breast cancer with a high proliferation index (measured by Ki-67), and clinical trials investigating its effectiveness in CDK4 mutated melanomas are underway [43,44]. Telomerase reverse transcriptase (TERT) The TERT gene encoding the protein part of telomerase reverse transcriptase is located at the short arm of chromosome 5, locus 5p15.33. Telomerase is a ribonucleoprotein that acts as a reverse transcriptase, a function performed by TERT (Table 1). The other, ribonucleic part comprises a long non-coding RNA, telomerase RNA (TR or TER) [45,46]. If the cells did not contain telomerase, the chromatids would become ever shorter with every DNA replication because DNA polymerase catalyzes the addition of a new deoxyribonucleoside triphosphate only in the 5'-3' direction. In other words, replication would be complete only for one newly synthesized DNA chain, whereas the other would become shorter and shorter with each replication; this is where telomerase comes into play and prevents such shortening of the chromatids [46].
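To make the arithmetic of this end-replication problem concrete, here is a minimal sketch in Python. The numbers (initial length, loss per division, critical threshold) are illustrative assumptions, not measured values.

```python
# Minimal sketch of the end-replication problem: without telomerase, a fixed
# number of terminal base pairs is lost at each division; telomerase
# re-extends the 3' end, so there is no net shortening. Numbers are invented.
from typing import Optional

def divisions_until_critical(telomere_bp: int, loss_per_division: int,
                             critical_bp: int, telomerase_active: bool) -> Optional[int]:
    """Count divisions until the telomere reaches a critical length."""
    if telomerase_active:
        return None  # length is maintained, so no limit is reached
    divisions = 0
    while telomere_bp > critical_bp:
        telomere_bp -= loss_per_division
        divisions += 1
    return divisions

# e.g. 10,000 bp of telomere, 100 bp lost per division, limit at 4,000 bp:
print(divisions_until_critical(10_000, 100, 4_000, telomerase_active=False))  # 60
print(divisions_until_critical(10_000, 100, 4_000, telomerase_active=True))   # None
```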
However, in most cells in the body, telomeres actually do shorten, and the activity of telomerase is needed only in cells such as germ cells, lymphocytes, keratinocytes, endometrial cells, hematopoietic stem cells, and the epithelial cells of the intestines, esophagus, and cervix [47]. Maintaining the same length of telomeres is a characteristic of many cancers, including melanoma. Mutations in the TERT gene are characteristic of both sporadic and hereditary melanomas. Specific mutations in the promoter of this gene generate a binding site for the family of ETS (E-twenty-six-specific sequence or E26 transforming sequence) transcription factors, leading to increased TERT gene transcription [48,49]. The two most common mutations in the gene promoter result from the transition of cytosine to thymine. They are located within 100 bp of the transcription starting site and are called C228T and C250T (chromosome 5, 1,295,228 C>T and 1,295,250 C>T, respectively) [50]. Since these mutations were detected in 77% of intermediate melanocytic tumors and melanoma in situ cases, they mark the beginning of malignant transformation [51]. The described mutations in the TERT promoter region indicate poorer prognosis and can be used as a marker of shorter survival of these patients [52,53]. Inhibition of the activity of this gene and the related protein is a potential therapeutic target for melanoma patients [54]. Protection of telomeres protein 1 (POT1) The POT1 gene is located at the long arm of chromosome 7 (7q31.33). Its protein product, POT1, is part of the protective protein complex, shelterin or telosome, involved in the regulation of telomere length, maintenance of chromosomal stability, prevention of aberrant chromosome separation, and protection from unnecessary recombination repair (Table 1) [55]. Shelterin is a heterohexamer built of telomeric repeat factor (TRF) 1, TRF2, repressor activator protein 1, TERF1-interacting nuclear factor 2, tripeptidyl-peptidase 1 (TPP1), and POT1 subunits [56]. The POT1 protein consists of 634 amino acids; it is the only part of the complex that can bind directly to a DNA sequence, using the oligonucleotide/oligosaccharide-binding (OB) fold domains OB1 and OB2 at the N-terminal. POT1 blocks the function of the ataxia telangiectasia and Rad3-related (ATR) protein responsible for initiating the DNA break repair. By forming a heterodimer with the TPP1 protein, POT1 recruits telomerase to elongate the ends of the chromosome. The same protein may also have the opposite action, that is, prevent the elongation of telomeres by competitive inhibition with telomerase at the 3' end of a single-stranded DNA molecule. The mutations leading to POT1 inhibition, or mutations resulting in the loss of a binding site for TPP1, increase telomerase activity and are related to various malignancies, including melanoma [57]. According to one study, POT1 seems to be one of the most commonly mutated genes in hereditary melanoma, along with CDKN2A [58]. Moreover, other studies indicated that these mutations were more common than TERT mutations [59,60]. Melanocortin 1 receptor (MC1R) MC1R is located at the long arm of chromosome 16 (16q24.3). It encodes the MC1R protein, which belongs to the family of G protein-coupled receptors (GPCR). The extracellular GPCR domain binds ligands, whereas the intracellular GPCR domain activates adenylyl cyclase and cAMP synthesis through the G protein [61].
One of the most critical roles of the MC1R is in melanin biosynthesis, which occurs in the melanocyte organelles called melanosomes and results from the binding of α-melanocyte-stimulating hormone (αMSH) and agouti signaling protein (ASIP) (Table 1). The binding of αMSH to the MSH receptor (MSH-R) activates adenylyl cyclase, catalyzing cAMP production. The result is the synthesis of eumelanin from tyrosine. ASIP competes for the same receptor, that is, it acts antagonistically by blocking the expression of microphthalmia-associated transcription factor (MITF). MITF is the main factor in melanin synthesis because it regulates the activity of tyrosinase-related protein 1 (TRP1) and tyrosinase [62]. The binding of ASIP to MSH-R inhibits eumelanin synthesis and stimulates the production of pheomelanin [63]. Interestingly, MC1R variants may substantially increase the penetrance of CDKN2A mutations and the risk of melanoma in affected families, particularly multiple MC1R variants and red hair color variants [64]. Identifying polymorphisms and mutations of the MC1R gene would enable a better understanding of melanoma susceptibility and potential treatments [65-67]. Melanocyte-inducing transcription factor or microphthalmia-associated transcription factor (MITF) The melanocyte-inducing transcription factor (MITF) gene, or microphthalmia-associated transcription factor gene, is located at the short arm of chromosome 3 (3p13). It encodes a transcription factor with a basic helix-loop-helix leucine zipper domain, which it uses to bind to DNA [68]. This domain recognizes specific sequences in the promoters of target genes, such as tyrosinase (TYR). It is essential for regulating the expression of TYR and TYR-related proteins, such as TYR-related protein 1, and, therefore, plays a central role in regulating melanin synthesis in melanocytes (Table 1) [69]. Among several isoforms of the MITF gene, only MITF-M is specific for melanocytes [70]. Wnt, TGF-beta, and RTK are only some of the signaling pathways related to the expression of this gene [71]. MELANOMA-ASSOCIATED SYNDROMES Melanomas are part of several hereditary syndromes. Two of them, FAMMM and BRCA1-associated protein-1 (BAP1) tumor predisposition syndrome, a malignant tumor syndrome (including melanoma) associated with mutations of the BAP1 gene, are described in the following paragraphs. FAMMM syndrome The first record of FAMMM syndrome dates from 1820, when Norris described the development of a tumor from a brownish mole that recurred after removal in a patient with about 40 more similar skin lesions and enlarged lymph nodes [72]. The disease was so extensive that the only option was palliative care. The autopsy found that the tumor had spread throughout the body, including the heart and lungs. The family history revealed that the patient's father had died of the same disease, and the siblings also had numerous nevi. Norris concluded that it was a hereditary disorder [73]. In 1968, Lynch and Krush described four families with multiple melanomas, including a family where the proband (the first affected family member who seeks medical attention and whose findings raise the suspicion of a hereditary disease) developed the disease at age 26. In 1980, the hereditary nature of this syndrome was confirmed: it showed an autosomal dominant pattern of inheritance. In the 1990s, Lynch reported that the syndrome was associated with other cancers, especially pancreatic carcinoma [74].
The association between this syndrome and pancreatic cancer was explicitly observed in patients carrying the p16-Leiden mutation in the CDKN2A gene (deletion of 19 base pairs in exon 2 of the gene CDKN2A; NM_000077.4: c.225_243del19 (p.p75fs)) [75,76]. The mutation carriers were also prone to esophageal cancer [77,78]. Many other tumors are related to this syndrome, including lung, breast, liver, and brain tumors [79]. The loss of CDKN2A heterozygosity is considered the first step in developing melanoma in patients with FAMMM syndrome [80]. Diagnostic criteria for FAMMM syndrome are as follows (a minimal checklist rendering of these criteria appears at the end of this section): 1) melanoma in one or more first- or second-degree relatives; 2) a total body nevi count >50, including atypical nevi (asymmetric, raised above the skin, varying in color and size); and 3) nevi showing specific histological features, including asymmetry, subepidermal fibroplasia, lentiginous melanocytic hyperplasia (spindle or epithelioid melanocytes forming nests of different sizes, merging with adjacent rete ridges, and creating bridges), and dermal lymphocyte infiltrates [72]. These patients are referred to genetic counseling, genetic testing, and follow-up. Examination intervals depend on the number of close relatives with the disease and the nevi count. The usual follow-up interval is 6 months. Dermoscopy is the method of choice, but the importance of self-examination should not be underestimated. The use of smartphone applications for examination, which may become available soon, also holds potential [73]. BAP1 tumor predisposition syndrome This syndrome, caused by mutations in the BAP1 gene, is characterized by uveal melanoma, mesothelioma, and (less often) skin melanoma. Other malignancies may also develop, including kidney, bladder, brain, and soft-tissue tumors [81]. The tumor suppressor gene BAP1 codes for the ubiquitin carboxy-terminal hydrolase BAP1. It removes ubiquitin from other proteins, making them more resistant to degradation. It also interferes with their interaction with other proteins. BAP1 is involved in various cellular processes, regulating the cell cycle, transcription, chromatin organization, DNA repair, and apoptosis (Table 1) [82]. This protein, made of 729 amino acids, consists of three main domains: the N-terminal domain that removes ubiquitin, the middle domain that binds the nuclear transcription co-factor called host cell factor 1, and the C-terminal domain that interacts with other proteins [83]. The disease is inherited in an autosomal dominant pattern, and mutations that affect the nuclear localization signal (the sequence of amino acids that directs the protein into the nucleus) or the catalytic domain for ubiquitin removal are believed to cause the most severe clinical presentations [84]. The BAP1 gene is often mutated in cases of uveal melanoma, which accounts for 3-5% of all diagnosed melanomas [85]. The carriers of BAP1 gene mutations are also prone to developing clear cell renal cell carcinoma or mesothelioma [86,87]. More recent studies suggest that BAP1 mutations may indicate a poorer prognosis for these patients [88]. Mutation of the BAP1 gene usually manifests as the growth of melanocytic BAP1-associated intradermal tumors (MBAITs). These tumors are raised above the skin surface, are about 5 mm in diameter, and are pigmented or skin-colored. They were previously called atypical Spitz tumors; however, it was later shown that they differ histologically and morphologically from typical and atypical Spitz tumors. They usually occur in the second decade of life [81]. The number of lesions increases with time but varies from patient to patient [72]. If this gene mutation is suspected, the patient should be referred to genetic counseling, with testing and follow-up measures arranged for the patient and the entire family [89].
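Returning to the FAMMM criteria listed earlier in this section, they can be written down as a screening checklist. The sketch below is a minimal Python rendering with hypothetical field names; it only mirrors the published criteria and is not a validated clinical tool.

```python
# Minimal sketch of the three FAMMM criteria as a checklist. Field names are
# hypothetical; real diagnosis is clinical and histopathological.
from dataclasses import dataclass, field

REQUIRED_HISTOLOGY = {
    "asymmetry",
    "subepidermal fibroplasia",
    "lentiginous melanocytic hyperplasia",
    "dermal lymphocyte infiltrates",
}

@dataclass
class ScreeningRecord:
    melanoma_in_first_or_second_degree_relative: bool
    total_nevi_count: int
    atypical_nevi_present: bool
    histology_features: set = field(default_factory=set)

def meets_famm_criteria(r: ScreeningRecord) -> bool:
    """True only if all three published criteria are satisfied."""
    criterion_1 = r.melanoma_in_first_or_second_degree_relative
    criterion_2 = r.total_nevi_count > 50 and r.atypical_nevi_present
    criterion_3 = REQUIRED_HISTOLOGY <= r.histology_features  # subset test
    return criterion_1 and criterion_2 and criterion_3
```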
They usually occur in the second decade of life [81]. The number of lesions increases with time but varies from patient to patient [72]. If this gene mutation is suspected, the patient should be referred for genetic counseling, with testing and follow-up measures arranged for the patient and the entire family [89].

GENETIC COUNSELING
The National Society of Genetic Counselors probably gave the best definition of genetic counseling in 2006: "Genetic counseling is the process of helping people understand and adapt to the medical, psychological, and familial implications of genetic contributions to disease. This process integrates the interpretation to assess the chance of disease, education, and counseling" [90]. The advantages of genetic counseling include a better understanding of basic concepts related to genetics, such as mutations, germline or somatic mutations, early tumor markers, targeted therapy, and molecular analysis [91], and a reduction of the anxiety related to a possible positive test result and its long-term effect on the patient's quality of life [92]. The counseling process includes the assessment of disease probability in patients and other members of their families. For that purpose, a consensus on the testing protocol is essential. For example, to test a person for a pathogenic variant of the CDKN2A gene, there should be at least three first- or second-degree relatives on the same side of the family with the disease plus a positive prediction test, such as GenoMELPREDICT [93], or evidence of a pathogenic variant of this gene in a family member [94]. When making recommendations, the following factors should be considered: the number of family members with a confirmed diagnosis of skin or ocular melanoma; melanoma before the age of 40; and the presence of pancreatic cancer or some other malignancy [95]. These evaluations are best supported by research in families with known mutations, such as CDKN2A c.256G > A (Ala86Thr), that is, replacement of guanine by adenine at position 256, resulting in the incorporation of threonine instead of alanine [96]. Genetic counseling is essential not only for the possibility of testing but also for risk calculation and the modification of risk behavior, which play an equally important role in the etiology of the disease, since the information received during counseling may positively change the behavior of the person [92]. Specifically, in hereditary melanoma, avoiding prolonged exposure to UV light is of utmost importance [97]. Children are usually not tested for adult-onset hereditary tumors; however, in the case of hereditary melanoma, there are indications that genetic testing in children could be justified for CDKN2A gene mutations [98]. It was reported that high-quality genetic counseling contributed to a decreased number of hours of UV light exposure in hereditary mutation carriers and non-carriers alike [99]. The person who comes for genetic counseling should have the advantages and disadvantages of genetic testing explained to them to help them decide whether to accept it. The advantage of early identification of mutation carriers is the possibility of thorough lifelong monitoring of the carrier and their family members by digital dermoscopy and photography, thus detecting melanoma in its earliest stage. In the case of a CDKN2A gene mutation, other malignancies should also be considered, especially pancreatic cancer [17].
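As an aside on the HGVS-style variant names used in this and the preceding paragraphs, both follow from simple coordinate arithmetic over the coding sequence. The sketch below is purely illustrative (it is not taken from the cited studies, and the helper function is hypothetical); it shows why c.256G > A maps to Ala86Thr and why the 19-bp p16-Leiden deletion produces a frameshift.

```python
# Didactic sketch (not from the cited studies): the coordinate arithmetic
# behind the HGVS names mentioned in the text, using 1-based CDS positions.

def codon_of(cds_pos: int) -> tuple[int, int]:
    """Return (codon number, base position within that codon) for a CDS position."""
    return (cds_pos - 1) // 3 + 1, (cds_pos - 1) % 3 + 1

# c.256G>A: position 256 is the first base of codon 86 ...
codon, offset = codon_of(256)
print(codon, offset)  # -> 86 1

# ... and mutating the leading G of any alanine codon (GCA/GCC/GCG/GCT)
# to A yields a threonine codon (ACA/ACC/ACG/ACT): hence p.Ala86Thr.
for third in "ACGT":
    print(f"GC{third} (Ala) -> AC{third} (Thr)")

# p16-Leiden, c.225_243del19: a 19-bp deletion; 19 is not a multiple of 3,
# so the reading frame shifts downstream -- consistent with the "fs"
# (frameshift) suffix in the protein-level name.
deleted = 243 - 225 + 1
print(deleted, deleted % 3 != 0)  # -> 19 True
```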
The disadvantage of genetic testing is possible anxiety over the increased risk of melanoma if the test results are positive. It is advisable to refer the mutation carrier to psychological counseling in such cases. A negative test result (absence of mutation), on the other hand, may give a false sense of security, which is also a disadvantage. Therefore, it is essential to highlight that familial malignant melanoma accounts for only 10% of all melanoma cases, whereas the remaining 90% are sporadic. Psychoeducation can help these patients understand the importance of sunscreen use, self-examination, and regular preventive dermatological check-ups.

CONCLUSIONS
Melanoma is an aggressive malignancy with high metastatic potential. Sporadic melanoma is prevalent in clinical practice, whereas familial malignant melanoma accounts for approximately 10% of cases. The highest proportion of familial malignant melanoma cases arises from mutations in the CDKN2A gene, which codes for two tumor-suppressor proteins, p14ARF and p16INK4a. Mutations of CDKN2A carry a high risk of melanoma, together with CDK4, TERT, and POT1 gene mutations. These genes encode proteins responsible for cell cycle regulation (CDKN2A and CDK4) or telomere length control (TERT and POT1). The genes that carry a moderate melanoma risk include MC1R and MITF, whose protein products are involved in melanin synthesis. Since environmental influence plays a role in melanoma development, at-risk patient groups should be offered genetic counseling. During counseling, along with the option of genetic testing, patients should be advised to protect their skin from UV light, avoid sun exposure, and keep regular preventive check-ups with their dermatologist. Regular examinations may not prevent the development of the disease, but they increase the probability of early diagnosis, when the survival rate is the highest. As none of the above-mentioned genes can be individually held responsible for causing melanoma, the most significant advantage of genetic counseling is psychoeducation.
Using an 'open approach' to create a new, innovative higher education model

Navigating learning, formal or informal, can be overwhelming, confusing, and impersonal. With more options than ever, the process of deciding what, where, and when can be overwhelming to a learner. The concept of Open College at Kaplan University (OC@KU) was to bring organization, purpose, and personalization to learning amid vast resources and numerous options. Focused on organizing, supporting, and providing a personalized education experience using open courses, an innovative higher education model was conceived and created. As the concept of developing a college that adapted its resources and services to the needs of the learner emerged, so did the idea of integrating open courses into a formal higher education model to award college credit for open courses as well as to include them in a degree program.

Introduction
Navigating learning, formal or informal, can be overwhelming, confusing, and impersonal. With more options than ever, the process of deciding what, where, and when can be overwhelming to learners. Confusion occurs when products and services are plentiful and are dispersed without instructions or directions to help the learner navigate the maze. In addition, learning that is dictated rather than selected by the learner contributes to a lack of personalization.

Today's learners present new challenges unlike any we have seen before, both in behavior and in expectations. Technology has affected both learners' behavior and their expectations; multitasking, always-on communication, and engagement with multimedia have become the norm (Hartman, Moskal & Dziuban, 2005). Access to technology and information 24/7 has created the desire to integrate technology into education, to control one's own learning, and to have personalized learning (Hartman et al., 2005). Basically, the expectation for an educational experience doesn't differ much from the Amazon experience! Can students have it all? Our response is yes. Yes, it is possible to have it all. For the learner, having it all means consuming what you need when you need it. The University of Central Florida (UCF) is a perfect example, as Morrison (2012) suggested in his article. UCF provides choices or options that put students in charge of "when, where and how they want to learn". Students do not present themselves to an institution as similar; each is different.

OC@KU was an opportunity to make a distinctive footprint in higher education by using open resources and technology to address these new behaviors and expectations while also addressing the significant barriers to education through a new approach of supporting and promoting personalized learning. The approach is ultimately flexible, aligning specific learning points to the unique needs of the learner. OC@KU represents the power of technology and open resources to increase the personalization and quality of both informal and formal higher education while reducing cost.
OC@KU, the concept
Focused on supporting and providing a personalized education experience, several years ago a small group of people at Kaplan University began to meet around the concept of creating a college for the future. As the concept was developed, we found ourselves imagining a college that adapted its resources and services to the needs of the learner, whether those needs included a degree, informal learning, assessments, or some combination of all three. A college was envisioned that met a learner's needs without requiring admission as the price of participation. Rather than just co-opting the language, the vision was a college that was truly learning- and learner-centric.

At the same time, a trifecta of circumstances and events was occurring that was affecting learning in higher education: technology, supply and demand, and the rising cost of education (Huggins & Smith, 2013).

Technology improvements made it possible to make learning opportunities available anytime, anywhere. And as the emergence of Massive Open Online Courses (MOOCs) amply demonstrates, terrific content is readily available at low cost or for free. Place-based education, the campus, becomes an option, not a necessity. An important part of this change is its financial consequence. Technology creates a whole new level of access.

Supply and demand of skilled workers is a driver to find new ways to validate knowledge and skills. In a world of accelerating change in the workplace, there is a growing gap between the number of people with the skills needed for entry into the workforce and the number of jobs requiring those skills. Employers are unable to hire the people they need because colleges are not graduating them in sufficient numbers with the skills needed.

The rising cost of education adds to the pressure for colleges to think creatively and to act entrepreneurially to create more effective educational models.

The building process
OC@KU was launched in the fall of 2014 with an initial offering of thirteen open courses. The first group of courses was developed specifically to be delivered as free and self-paced, with no formal enrollment. The competencies and outcomes of the open courses aligned with their sister Kaplan University courses, and the courses were developed internally by credentialed faculty with subject-area teaching experience.

Using some existing best practices, such as Walden University's (2006) Concierge Service, Capella University's (2015) Flex-Path, and Thomas Edison State College's For Adults with Higher Expectations (2015), OC@KU strategically selected and developed open courses, placed them in an openly accessible environment with live, personal support, and designed a formal degree program around open courses.

Within months, hundreds of learners were using the open courses. Keeping the original goal of organization, purpose, and personalization in mind, the next logical step was to develop the purpose and personalization: course assessments by which learners could earn and apply college credit for the open courses toward a degree.
Since the launch of the first OpenCourseWare project in 2002 (MIT, 2015), tens of thousands of open courses have been developed and offered in multiple formats by many vendors and schools. Collectively, the Open Education Consortium (www.oeconsortium.org) and MIT list almost 40,000 courses. At the end of 2014, MIT OpenCourseWare had published over 2,250 courses with over 1 billion page views (MIT, 2015). While MOOCs are still referred to as a trend or phenomenon in education, open courses have met diverse learner needs in ways that were unanticipated. Just-in-time, affordable, and self-managed are the core, primary characteristics explaining why open educational resources have grown in popularity. Combined, these characteristics represent a new asset for lifelong learners, putting the learner in control of their learning, formal and informal. When they are coupled with a curation of services, technologies, and assessments, open educational resources are the springboard to organized, high-quality learning that meets the personal needs of the individual learner.

As such, OC@KU is an example of a wholly new method of access. Beyond the actual availability of thousands of courses and services (physical access, if you will), OC@KU offers access to learner-owned, personalized learning. It is a learning concierge service that puts higher education in an entirely new dimension, removing the barriers between career advancement and education. OC@KU encourages learners to chart their own paths, using assessments to recognize learning, however acquired, for academic and career value.

Multi-faceted offerings of OC@KU
In addition to offering free, open courses developed in-house, other resources are also available for free or at a low cost. Realizing that one size does not fit all and that learners have varying needs, OC@KU provides a suite of services that learners can use to customize their learning. "Equally important, as part of this technological and web-driven disruption, learners' capacity to develop and store evidence of learning in electronic portfolios-carefully organized around career, academic, or personal interests-has also been transformed", as stated by the founding president of OC@KU, Dr. Peter Smith, in a recent Educause article (2014). Learners can use the open courses to learn for the sake of learning or, if a more formal goal is desired, apply their learning toward a degree.
To date, the following suite of products is available as part of the personalized learning concierge service:
• Open Portfolio. The Open Portfolio is a tool that allows learners to track and manage their open courses. Built with an integrated API for Open Education Consortium courses, the Open Portfolio is a free tool for users to develop informal learning plans around personal or professional interests, which can be shared or kept private. Acting as a "learning journal," the Open Portfolio becomes a powerful tool, not simply a passive repository. www.openlearningportfolio.com
• LearningAdvisor. LearningAdvisor is a free, comprehensive search tool that connects learners with thousands of open courses. This rigorous tool allows users to search for courses by subject, institution, or interest. Created for integration with the American Association of Retired Persons' (AARP) Life Reimagined program, LearningAdvisor is focused on life transitions, be they from job to job, career to retirement, or any other combination of events. www.learningadvisor.com/courses
• StraighterLine. StraighterLine provides first- and second-year general education courses that have been evaluated and recommended for college credit by the American Council on Education. As an OC@KU course partner, any of the courses from StraighterLine can be applied toward a degree at OC@KU. www.straighterline.com
• CareerJourney. Learners have free access to CareerJourney, a self-paced course that was developed in partnership with LinkedIn, using its rich database for career planning. In a game-like environment, CareerJourney provides practical strategies to identify strengths, explore career opportunities, network with like professionals, and create a professional development plan. CareerJourney also includes the ability to match skill gaps to courses. www.Careerjourney.com
• CLA+. CLA+ is a low-cost skills assessment tool. For a minimal fee, learners can take an assessment that evaluates real-life, cross-cutting intellectual skills. The skills that CLA+ tests for include analysis and problem solving, scientific and quantitative reasoning, critical reading and evaluation, and critiquing an argument. These are the skills which employers have overwhelmingly stated matter more to them than a particular major or GPA. Our objective in partnering with CLA+ is to provide evidence for learners and employers that the individual is ready to work effectively. www.takeclaplus.com
• Learning Recognition Course (LRC100). As the first open course developed by OC@KU, the LRC100 guides individuals through the process of documenting their training and experience in a portfolio, which is evaluated by university faculty for college-level credit. The LRC100 is free and self-paced, with personal support provided by assessment specialists who have many years of experience with adult learners. As a sense-making offering, the portfolio is a curation of the individual's prior learning, experience, and informal learning. https://opencollege.kaplan.com/events/LRC100

The unique degree program model
The Bachelor of Science in Professional Studies (BSPR) is designed with self-motivated students in mind and provides the opportunity to create a customized degree plan to meet professional goals. The open degree format provides the flexibility of learning through open courses from anywhere.
The degree program is focused on professional knowledge and skills, problem solving, and strategic planning, and it culminates in a capstone class with a portfolio project. Built on a proprietary platform, the technology brings together open courses, assessments, and other learning resources to provide learners with an Individualized Learning Plan (ILP), a customized learning path allowing the learner, with the guidance of a faculty mentor, to develop a personalized degree path.

As the name indicates, the ILP is unique to the learner and includes:
• a career goal statement, which enables the student and the faculty advisor to identify potential course assessments and learning options to fulfill the degree requirements that relate to a career,
• a review of previously earned college credit,
• an analysis of previous experience that can be evaluated for college credit, so that the learner does not duplicate learning that has already been acquired,
• potential open courses to meet degree requirements, and
• course assessments (credit by exam) to meet the degree program learning outcomes.

Looking at traditional education through the new lens of an open approach, learners have the opportunity of an individualized, affordable education that integrates technology, open resources, and personalized services to help them meet their career, academic, and personal goals. While this may sound like business as usual, it is not. While most of the language is not new, OC@KU is actually a unique development within the realm of higher education. As the result of creating and organizing a one-stop shop for all things 'open,' learners are in charge of their learning, formal or informal. In an era in which curricula are easily accessible, OC@KU provides a manner in which learners can create, organize, and make sense of their learning.

The uniqueness of the open degree includes multiple features; the most outstanding is the fact that learners can use open courses, taken anywhere, to complete their degree requirements. To start the task of identifying open courses, the faculty mentor works with the learner to help them map open course outcomes to their college credit counterparts. Because searching for open courses can be overwhelming, OC@KU strategically developed partnerships with select open course providers to develop curated course pages that help learners navigate open courses by subject. For example, our faculty worked with Udemy to develop a customized, co-branded curated page that organized courses according to subject. Not only did it provide learners with a list of Udemy courses organized by subject, it also gave the faculty an opportunity to select courses that best aligned with credit-bearing courses.

Another unique feature is the integration of multiple types of assessments, starting with the assessment of prior learning. The open, self-paced Learning Recognition Course (LRC) guides learners through the development of a rigorous learning portfolio, which is assessed by faculty subject-matter experts. In addition, the LRC was reviewed by the American Council on Education (ACE), enabling learners to earn college credit for the course.

In addition to prior learning, new learning acquired from the open courses is assessed for college credit in multiple ways: through portfolio assessment, standardized assessments, and challenge exams. The learning acquired from the open courses is assessed against the outcomes of their college-level counterparts. OC@KU has developed course assessments and partnered with Kaplan University as well as external, nationally recognized challenge exam providers to be able to offer a wide range of examinations. Successful completion of the challenge exams results in credit awarded by the University toward the BSPR.

Lastly, since the degree is based on services rather than courses provided, it is priced on a monthly subscription model. The monthly subscription includes access to faculty mentors, assessment advisors, an assessment of prior learning, and open resources curated specifically for BSPR students.
Individualized doesn't have to be lonely
For many years, the instructor was the center of the classroom, responsible for creating and maintaining the classroom community through projects that encouraged communication and collaboration and through open classroom discussions. Instructors were basically responsible for the overall classroom community.

In recent years, the traditional instructor-centered classroom has been disrupted by the Internet and wireless capabilities and has become a mobile community. Two recent reports support the fact that various devices now play a significant role in the classroom, not replacing the instructor, but creating a new type of community. Over 82% of mobile device owners claim that they have used a tablet for academic purposes (Chen & Denoyelles, 2013). A Pew Research Center report states that two-thirds of Americans own a smartphone, 67% use their phone to share pictures, videos, or commentary about events happening in their community, and 35% do so frequently (Smith, 2015). The community is no longer instructor-centered, but is now mobile.

Embracing the new community, OC@KU focused on supporting adult learners through multiple mobile-ready tools:
• Live Seminars using Google+ and YouTube. The Live Seminar is a unique tool that combines video, Google+, and YouTube to stream a live, real-time interaction with the instructor. Students can see and hear the live lecture and chat with the instructor at a set time. Students can access a recorded, archived Live Seminar if they miss the live version.
• Social Media. Some of the open courses contain social media activities. Classroom responses, communication, and feedback are delivered via social media.
• Live support and feedback in the courses. Even though the courses are open and self-paced, live course support is available by phone or email.
• Forums and discussion boards. Faculty are assigned to the open courses on a rotating schedule. They are visible and accessible, and they engage the learners in communication.

Today's learners are savvy consumers. They demand consumer elements such as self-management, real-time access, device availability, and socializing in their educational journey. Although the learning environment is open, proven instructional strategies that provide a positive experience are part of the design and delivery of OC@KU.

To date, the evidence is positive
Since the launch of OC@KU in the fall of 2014, the outcome has been beyond expectation. With little marketing outside word-of-mouth, there is tremendous interest in our unique model.
• Approximately 4,000 unique learners have accessed the OC@KU open course web site and have registered for 4,800 courses.
• 104 have developed and submitted prior learning portfolios for assessment.
• Learners have received college credit for 711 courses as a result of the prior learning portfolio assessment, which translates to over $1m in tuition savings at Kaplan University.
• The students currently enrolled in the BSPR are on target to complete their degrees for under $10,000.
• Faculty are using the open courses as supplemental material in their college credit classrooms.

Conclusion: Looking Forward
OC@KU offers a myriad of ways that learning can be supported in the era of abundant information. It is ultimately flexible, aligning specific learning points to the unique needs of the learner. At the same time, its free and low-cost sense-making services provide non-judgmental diagnostics and information that assist the learner in personalizing their learning to meet their needs.
Finally and importantly, OC@KU represents a new method, an approach to thinking about learning environments and learning support in the 21st century. As such, it can be used in multiple delivery environments, beyond the exclusively online, self-paced model we have started with. Whether you want to support a weekend or a different low-residency model, adapt it to groups of learners using the BSPR or other degree programs and individual assessments, use it in a business environment, or put it in a blended environment, the OC@KU method will work effectively. It represents the power of technology and open resources to increase the personalization and quality of both informal and formal higher education while reducing educational cost.

As a sense-making venture, OC@KU combines self-paced, open-access, and relevant courses with the opportunity to earn college credit in a one-stop environment: a technologically supported and andragogically appropriate learning hub. As an extraordinary driver of change, the broadening of the Internet has shaped how content, information, and learning are accessed and delivered; today, access to resources and information is unlike any other time in history. A college was developed that embraced and utilized the benefits of the new information environment.

When it comes to harnessing the infinite power of the Internet to meet educational and career needs, the process can be as overwhelming and confusing as it is difficult. We likened the unharnessed information on the Internet to a blizzard, and the unsupported user trying to organize it to a skier without goggles in that blizzard. Hence, we developed the concept of creating and offering free sense-making services. These services address organizing and understanding prior learning, career investigation, and the curation of informal learning as the preliminary steps to identifying a learning path forward that could be followed informally, with individual assessments, or in a degree program. To that we have added low-cost diagnostics as well.
Hybrid Reasoning and Coordination Methods on Multi-Agent Systems

This paper briefly introduces a summary of the special session on Hybrid Reasoning and Coordination Methods on Multi-Agent Systems, held in conjunction with the 4th International Conference on Hybrid Artificial Intelligence Systems 2009 (HAIS'09). The research papers of this session have been revised and extended, and the final results are published in this journal special issue.

Stella Heras, Martí Navarro and Vicente Julián are with Universidad Politécnica de Valencia. S. Heras: sheras@dsic.upv.es; M. Navarro: mnavarro@dsic.upv.es; V. Julián: vinglada@dsic.upv.es.

I. INTRODUCTION
Over the last years, Multi-Agent Systems (MAS) have been successfully applied to manage complex distributed processes in a wide range of application domains. In these systems, agents must be able to reason autonomously and coordinate their activities with other agents of the system to fulfil their objectives. Since the consolidation of the MAS paradigm in the 1990s, much research has been done to provide social agents with better reasoning and coordination mechanisms that enhance their intelligence and improve their performance. Recently, the research community in MAS has focused its efforts on adapting MAS to open environments where heterogeneous agents interact. In these systems, agents can enter (or leave) the system, form societies and communicate with other agents. Common assumptions about the agents of most MAS, such as honesty, cooperativeness and trustworthiness, can no longer be taken as valid hypotheses in open MAS. Therefore, the high dynamism of these open MAS gives rise to a greater need for complex reasoning and coordination mechanisms to control the access to the system and the deliberative processes of the agents.

The methods applied are very varied, and the synergies with different areas of Artificial Intelligence and other sciences have given rise to excellent results. Hybrid Artificial Intelligence Systems (HAIS) combine symbolic and sub-symbolic techniques to provide hybrid problem-solving models. Their capabilities in handling many real-world complex problems involving imprecision, uncertainty and high dimensionality make them very suitable to cope with reasoning and coordination problems in complex open MAS. The special session on Hybrid Multi-Agent Systems, Reasoning and Coordination Methods was aimed at discussing research on HAIS to develop reasoning and coordination methods on MAS. The session was conceived as a forum to present theoretical advances and real-world applications in this multidisciplinary research field.
II. SPECIAL ISSUE ON HYBRID MULTI-AGENT SYSTEMS, REASONING AND COORDINATION METHODS
This volume presents a revised version of the best papers presented in the special session on Hybrid Multi-Agent Systems, Reasoning and Coordination Methods, held in conjunction with the 4th International Conference on Hybrid Artificial Intelligence Systems 2009 (HAIS'09). The session encouraged topics such as practical reasoning methods, multi-agent case-based reasoning, artificial social systems, trust and reputation, social and organizational structure, teamwork, coalition formation, distributed problem solving, electronic markets and institutions, cooperative and non-cooperative game theory methods, social choice theory, voting protocols, auction and mechanism design, argumentation, negotiation and bargaining, agent commitments, and semantic alignment and ontologies; these topics were tackled by the session papers, advancing research in the area of hybrid artificial intelligence systems. Each paper was reviewed by two independent experts in this area.

In the first paper, Muñoz and Botía propose an Argumentation System Based on Ontologies (ASBO) to cope with conflicts based on inconsistent knowledge, which arise when agents exchange information. Their formal model follows an engineering-oriented approach to develop a software architecture that allows working with argumentation in MAS.

In the second paper, Pinzón et al. present the core component of a solution based on agent technology that allows the identification of denial-of-service (DoS) attacks introduced in the XML of SOAP messages. The paper presents an advanced classification mechanism designed in two phases that is incorporated within a CBR-BDI agent type. In addition, the method involves the use of decision trees, fuzzy logic rules and neural networks for filtering attacks. As a result of this work, a prototype was developed, and the conclusions obtained are presented in the paper.

In the third paper, Búrdalo et al. propose a general tracing system that could be used by agents in the system to trace other agents' activity. This provides agents with an alternative way of perceiving their environment. The paper presents preliminary results of the authors' work, consisting of the requirements which should be taken into account when designing such a tracing system.

In the fourth paper, Castillo et al. show their work on developing a Multi-Agent Recommendation System (RecMAS) able to coordinate the interactions between a user agent and a set of commercial agents. The system provides a useful service for monitoring changes in the user agent's beliefs and decisions based on two parameters: (i) the strength of its own beliefs and (ii) the strength of the commercial agents' suggestions. The system was used to test a prototype that copes with several commercial activities in a real shopping centre by using wireless devices (PDA, mobile phone, etc.). Using a theoretical model and simulation experiments, commercial strategies in relation to the socio-dynamics of the system were obtained.

In the fifth paper, Heras et al. propose a new dialogue game protocol for modelling the interactions produced between the agents of an open MAS that must reach an agreement on the use of norms. The protocol is formally specified, and the decision-making process of the agents is also developed. In addition, the authors provide an application example showing both the performance of the protocol and its usefulness as a mechanism for managing the solving process of a coordination problem through norms.
Finally, in the last paper, Navarro et al. cope with the problem of merging intelligent deliberative techniques with real-time reactive actions in the special context of Real-Time Multi-Agent Systems (RTMAS). In these systems, the temporal restrictions of the Real-Time Agents require their deliberation process to be temporally bounded. The paper proposes a solution based on a temporally bounded Case-Based Reasoning mechanism. Thus, it also presents a guide to adapting the Case-Based Reasoning cycle to be used as a deliberative mechanism for Real-Time Agents.

SPECIAL ISSUE ACKNOWLEDGEMENTS
We would like to thank the authors for their valuable work and their interest in the special session on Hybrid Multi-Agent Systems, Reasoning and Coordination Methods. Their contributions show high-quality research in the area, and their presentations gave rise to interesting discussions. This special session could not have been held without the support of the organising committee of HAIS 2009, to whom we are very grateful for their invaluable help. We also want to thank the experts of the program committee for their independent reviews and comments to the authors.

Finally, the Guest Editors wish to specially thank Professors Miguel Cazorla and Vicente Matellan (Editors-in-Chief of the Journal of Physical Agents) for the publication of this special issue and their support during the publishing process.
Babi Yar and the Nazi Genocide of Roma: Memory Narratives and Memory Practices in Ukraine

Abstract
Thousands of Roma were killed on the spot in Ukraine by the Nazis and auxiliary police. There are more than 50,000 Roma in today's Ukraine, represented by second- and third-generation descendants of the genocide survivors. The discussion on Roma identity cannot be isolated from the memory of the genocide, which makes the struggle over the past a reflexive landmark that mobilizes the Roma movement. About twenty Roma genocide memorials have been erected in Ukraine during the last decade, and in 2016 the national memorial of the Roma genocide was opened at Babi Yar. However, scholars do not have a clear picture of the memory narratives and memory practices of the Roma genocide in Ukraine. A comprehensive analysis of the contemporary situation is not possible without an examination of the history and memory of the Roma genocide before 1991.

Introduction
About 250,000-500,000 Romani were persecuted by the Nazis and their allies and collaborators in Europe during World War II. The estimated number of Roma genocide victims within the borders of today's Ukraine varies from 20,000 to 72,000 individuals. All figures are tentative, for they are based solely upon the few available records (Kruglov 2009; Kotljarchuk 2016b). The Roma genocide, which is recognized as such by international law, has a strictly defined legal meaning. The key notion for the legal evaluation of its genocidal nature is intent. Legal theory treats dolus generalis and dolus specialis differently in cases of mass crimes against humanity. It means that a genocide did not occur when the mass murder of individual members of an ethnic group (dolus generalis) did not have the specific intent (dolus specialis) of exterminating the community as such (Schabas 2000, 213-225). The Nazi annihilation of the Roma and Jewish peoples is, in a legal sense, genocide, while the mass killings of the Slavic population by the Nazis were crimes against humanity. Babi Yar (Babyn Yar in Ukrainian, "Old Woman's Ravine") is a chain of seven deep ravines in the north-western part of Kyiv. The site is considered to be the single largest Nazi extermination site in the former Soviet Union. There, between September 1941 and November 1943, the Nazis murdered about 77,000 individuals, mostly Jews, but also Romani, POWs, mental patients, and members of the anti-Nazi resistance (Berkhoff 2012; Berkhoff 2018, 87-92).

Theory and method
Pierre Nora and Lawrence Kritzman proposed a concept of two types of collective memory: minor memory and major memory. They noted that the inclusion of a minor memory of a certain group into a national historical narrative (major memory) takes place through sites of memory (Nora and Kritzman 1997). Every modern state is involved in the forming of a national memory narrative, which always has a strongly engaged political meaning. The project of a new monument initiated by memory actors may be sanctioned by the authorities, edited, or rejected. Memorials are one of the most powerful physical sites of memory, connecting a remembrance date of a historical event with agents of memory and memory practices. Memorials bring sacral meaning to the mass graves from the world wars that occupy a central place in the cultural landscape of modern Europe (Baer 2000; Pickford 2005). The approved design of a public monument often illustrates major memory narratives.
Through memorials, a minor memory of a certain group can be included in or excluded from a major memory narrative. Another theoretical and methodological approach for this study is the path-dependent analysis proposed by Jeffrey Olick in the study of historical anniversaries in post-war Germany (Olick 1999). Olick formulates a process-relational approach that recognizes memory as an ongoing process that links the past and the present in dialogically contingent ways. He developed the concept of the cultural mechanisms of commemoration's path dependency. According to Olick, path dependence is a tool through which one can track and analyse changes in the construction of historical narratives. At the same time, as Evgenii Dobrenko noted in his study of late Stalinism, the Soviet narrative of World War II changed all the time, depending on the political context, and had no monolithic character (Dobrenko 2020). This article examines the development of memory narratives and memory practices of the Roma genocide in Ukraine as an interplay between the process of documentation of the Nazi crimes, the changing political agenda and the activities of various memory actors. The methodology of the study is based on a micro-historical approach. Micro-history does not mean ignoring a macro-historical perspective. On the contrary, through the site of Babi Yar in Kyiv it is possible to trace principal changes in the memory narratives and memory practices of the Roma genocide in Ukraine. The article examines certain effects generated by memory actors which led to changes in the major narrative of World War II in Ukraine and to the creation of new memory practices: How does the process of inclusion of the Roma genocide into the major narrative of World War II depend on the political context and the state of knowledge production? How did changes in the major narrative of World War II affect the memory practices of the Roma community? The memory and memory practices of the Roma genocide are studied in a comparative perspective in relation to the memory of the Jewish genocide in Ukraine. A comparative analysis of the memory narratives and practices of the Roma genocide in Soviet and post-Soviet Ukraine in relation to other countries is beyond the scope of this article. Chronologically, the article focuses on three time periods:
- World War II and the post-war years of the rule of Stalin
- Liberalisation in Soviet Ukraine during Khrushchev's thaw and Gorbachev's perestroika
- Post-1991 independent Ukraine

Background
The Roma people in Ukraine are not a homogeneous ethnic group. Ukrainian Romani are divided into several sub-cultural and religious groups, the largest being the Ukrainian Servy, followed by the Russian Tsigane, Romanian Kelderari, Hungarian Lovari, Crimean Chingené and others. The first census of 1926 introduced ethnicity as a basic Soviet statistical category (Blum and Mespoulet 2003). The census counted 61,234 Romani in the Soviet Union, of whom only 13,578 were in Ukraine (the 1926 All-Soviet census). The local government of Ukraine believed that many Romani remained invisible to census takers due to their vagrant way of life. On behalf of the republican government, Professor Oleksei Baranikov began a massive investigation of Romani in Ukraine in 1928. He counted 691 Romani in the Kyiv region, many of them vagabonds (Baranikov 1931, 15).
The vagrant lifestyle of the Roma was a great concern for the communist regime, which considered it a major obstacle to their socialist transformation and control (Kilin 2005). In the Soviet imagination, vagabondage and poverty made Romani "a most backward minority" (O'Keeffe 2010). Therefore, the main goal of nationalities policy towards the Roma was to settle them down in order to overcome their nomadic way of life and their "backwardness," and to develop them in the short term to a higher level, like other national minorities. Altogether, 52 Roma kolkhozes were established in Soviet Ukraine prior to World War II (Belikov 2008, 35-36). However, many Romani abandoned the kolkhozes due to maladjustment to farming. Therefore, a special programme was launched, and Roma who worked with crafts were invited to build up craft cooperatives in the cities (Belikov 2008, 36). In 1937, a group of itinerant Kelderari Roma was settled in the vicinity of Babi Yar. There were 27 families of metal workers who founded a cooperative named Trudnatsmen (National minority's workers) near the village of Kurenivka. A collective petition sent by the Roma to the Kyiv authorities mirrors the discourses of Soviet nationalities politics:

We are writing to You with a great earnest request to help us in our grief. We are in a difficult situation, living in a camp in the open. Previously, we were nomads, a dark illiterate people - that is our nation. But now, thanks to the Soviet power, we began to work on the manufacturing and treatment of metal goods.

Since the nineteenth century, the territory around Babi Yar had been home to social outcasts, prisoners and mental patients, as well as a burial place. The Kyiv psychiatric hospital, with about 1,300 mental patients, and Lukyanivska Prison, with about 25,000 inmates, neighboured the Romani village. The village was surrounded by Christian Orthodox, military and Jewish cemeteries, and the Lukyanivska goods station. It was not by accident that the Nazis chose Babi Yar as a site of mass extermination in September 1941. After June 22, 1941, Kyiv was under martial rule, and local Romani had few chances to leave the town. In 1941, Nazi Germany occupied central Ukraine. The largest area of the republic, including Kyiv, became part of the civil zone called the Reichskommissariat Ukraine. On 29-30 September 1941, nine days after the German occupation of Kyiv, more than 33,000 Jewish civilians were exterminated by the Nazis at Babi Yar in two days of mass killings. At the same time, the German physician Gustav Schuppe visited the Kyiv psychiatric hospital. His team of about ten physicians and SS soldiers dressed as medics used lethal injections to murder mental patients, including those of Roma origin (Rhodes 2002, 178-179). The mass killings of the Roma population by the Nazis began in 1941 and continued in Kyiv until the liberation of the city (Kruglov 2002, 78). Academic publications about the mass killings of Roma at Babi Yar are based on testimonies collected by Soviet authorities after 1943. According to them, the first group of Romani was killed at Babi Yar in September 1941 (Levitas 1993; Berkhoff 2008, 41, 60, 221; Berkhoff 2012). The Roma genocide survivor Volodymyr Nabaranchuk, a native of Kyiv, stated that, in addition to the population of the Romani village near Kurenivka, the Nazis killed Romani in other suburbs of Kyiv with compact Roma populations (Nakhmanovich 2016, 96-97).
However, no exact information about the progress of the killing actions, the victims and the perpetrators has been found in Soviet, Ukrainian or German archives (Kruglov 2011).

Babi Yar and the Roma: the first Soviet Reaction
Even during the war, official media made some efforts to raise public awareness about the mass extermination of Roma by the Nazis in the occupied territories of the country. A few articles published in 1942-1945 by central newspapers were found, describing the extermination of Roma in the occupied territories of the Soviet Union as a whole (Kotljarchuk 2016c). Concerning Babi Yar, recent research shows that in 1943-45 the Soviet media reported frequently on the extermination of Jews at Babi Yar (Berkhoff 2018, 85-87). However, less is known about the Roma victims of Babi Yar. After examining the articles published in 1943-45 by the central press about Babi Yar and Nazi crimes in Kyiv, we can conclude that none of them mentioned the mass killings of Romani. The ethnic origin of the victims was defined by official prints as Jewish, Ukrainian, and Russian (Kriger 1943; "Doroga na Berlin," Krasnaya Zvezda, February 15, 1945; "Babi Yar," Krasnaya Zvezda, November 20, 1943; Dubina 1945, 5-7). After the liberation of Soviet Ukraine, three renowned writers and army correspondents of Jewish origin, Vasily Grossman, Ilya Ehrenburg and Lev Ozerov, began to collect testimonies and records about the Nazi extermination of Jews. Some of these testimonies were published by them in the media. The entire manuscript with records and oral testimonies was completed in 1945, but it was banned by censorship and only published in Russian in 1993 (Grossman and Ehrenburg 1993). The book, titled Chernaya kniga (Black Book), had a chapter about Babi Yar edited by Lev Ozerov, a native of Kyiv. The mass killing of Roma was not mentioned by Ozerov (Grossman and Ehrenburg 1993, 17-24). Unlike the Jewish victims at Babi Yar and other places, the Roma were not mentioned in the widely disseminated international note on Nazi atrocities announced in January 1942 by Foreign Minister Vyacheslav Molotov (Nota 1954). Therefore, the public link between the site of Babi Yar and the mass killing of Roma was missing from the very beginning. As Karel C. Berkhoff pointed out, the absence of significant foreign factors was a main reason for the lack of political interest of the Soviet media in the mass shootings of Roma by the Nazis. He has noted that "in the eyes of the Kremlin, Gypsies, who actually were subject to the same mass extermination as Jews [Stalin did know about that] had no political value" (Berkhoff 2010, 116). In 1944, the central party newspaper Pravda published a report of the Extraordinary State Commission for the Investigation of Crimes Committed by the German-Fascist Invaders and their Accomplices in Kyiv (hereafter the ChGK). The commission was led by Nikita Khrushchev, the leader of the Ukrainian SSR (see figure 1). The report ignored the ethnicity of the civilian victims of Babi Yar and presented them as "ordinary Soviet citizens, women, children and old folk" ("Soobshchenie Chrezvychainoi Gosudarstvennoi Komissii po ustanovleniu i rassledovaniu zlodeianii nemetsko-fashistskikh zakhvatchikov i ikh soobshchnikov o razrusheniyakh i zverstvakh sovershennykh nemetsko-fashistskim zakhvatchikami v gorode Kieve," Izvestiia, February 29, 1944). This definition was made despite the fact that the ChGK investigation collected many sources on the mass extermination of the Jewish and Roma populations.
For example, Ivan N. Zhitov, a professor at the Kyiv Institute of Forestry, stated that the Germans had started to shoot Romani at Babi Yar three months after the massacre of the Jews, meaning at the end of December 1941. Ludmila I. Zavorotnaya stated that during the occupation of the town she saw many gypsy wagons with people, guarded by the Nazis, drive past her house towards Babi Yar. N. Tkachenko claimed to have seen a lot of traditional gypsy clothes left by the perpetrators at Babi Yar (Kotljarchuk 2014, 27-28). In 1945, the ChGK records were classified and the commission was dissolved. Publishing information about the ethnicity of victims was not officially prohibited. The list of unacceptable information, edited by the General Directorate for the Protection of State Secrets in the Press, only prohibited mentioning in the mass media the number of victims among the civilian population (Perechen' 1949, 20-22). Post-war public silence about the ethnicity of the Roma victims of Babi Yar can be interpreted against the background of so-called internal censorship, in which memory actors had to follow the lines of the major narrative of World War II. As Ivan Katchanovsky pointed out, "Soviet academic and public discourse concerning the war was heavily politicized and censored, and some historical facts and data were falsified to reflect the party line and official ideology" (Katchanovsky 2014, 217). The official concept of peaceful Soviet citizens affected knowledge production and the development of memory narratives and memory practices of the Roma genocide in Ukraine. The war against Nazi Germany was named "The Great Patriotic War of the Soviet nation," a term that appeared for the first time in Pravda on June 23, 1941, the day after the Nazi invasion (Yaroslavsky, "Velikaya otechestvennaya voina sovetskogo naroda," Pravda, June 23, 1941). The major narrative of the Nazi occupation at that time had a strong focus on heroes (partisans and underground fighters) but not on civilian victims. Hundreds of memorials devoted to dead soldiers, partisans and underground fighters were erected in Ukraine after 1945. Few of them were dedicated to civilian victims, and as a rule they did not name the victims' ethnicity. However, the memorialisation of the Jewish and the Roma genocides during the first decade after the war differed. The struggle of the Jewish intelligentsia for recognition of Nazi crimes led to some compromises with the state. On many Jewish mass graves, monuments were erected on the initiative of survivors and military veterans of Jewish origin. Most of these had a politically correct inscription about peaceful victims of fascism, which, however, was often doubled by an inscription in Yiddish. The letters left no doubt about the ethnic origin of the victims (Altshuler 2002). Holocaust monuments appeared at many Jewish cemeteries in the Soviet Union, emphasizing the ethnicity of the dead (Zeltser 2018). At the same time, most of the Roma genocide mass graves remained unmarked (Kotljarchuk 2016c). Alaina Lemon has pointed out that because victory over Nazism was seen as achieved by all nationalities of the Soviet Union, the Nazi extermination of the civilian population was depicted as a tragedy for the entire Soviet nation, without specifying the victims of the Roma genocide (Lemon 2000, 148).
In 1945, Pravda informed its readers about the decision of the communist party to build a Memorial and Museum at Babi Yar "to the memory of tens of thousands of peaceful citizens of Kyiv" ("Pamiatnik pogibshim v Bab'em Yaru," Pravda, April 3, 1945) (see figure 2). The fact that most of the victims of Babi Yar were Jews and Romani was ignored. However, no memorial was built until 1976, and the site remained unmarked after the war. Having avoided a public discussion of the genocidal nature of the massacres at Babi Yar, the authorities stopped the previous plans for memorialization of the site and dampened the historical narrative of the tragedy. The largest Nazi extermination site was not recognized by the state and was deprived of legal protection (Burakovskiy 2011). In 1961, as a result of an accident at the brick factory that had been built at Babi Yar after 1945, the dam securing large volumes of pulp collapsed and destroyed most of the ravines and mass graves. In the post-war Soviet Union, there was little recognition of the Roma as an ethnic group specifically and systematically targeted for persecution by the Nazis. The state preferred to treat the Roma as part of the entire group of civilian victims called peaceful Soviet citizens, who suffered during the temporary occupation of the country. The Romani of Kyiv who survived the genocide visited Babi Yar after 1945. The commemoration ceremony was held on Provody Day (a Ukrainian Orthodox religious holiday for the commemoration of the dead). According to Roma tradition, a funeral wreath of flowers was left at Babi Yar and a commemoration dinner was arranged. Due to the public silence and the absence of a memorial, the tragedy was remembered only on a family level, inside local Romani circles (interview with Raisa Nabaranchuk 2012). The Roma had very little possibility of carrying out their memory practices in public. The authorities did not regard Babi Yar as a site of remembrance and forced all involved actors to follow this decision. Without a public space, the memory of the genocide existed only in the private family circles of the Roma community. According to Michael Stewart, this situation of "remembering without commemoration" was typical at that time for Roma genocide survivors across Europe (Stewart 2004). Unlike the Jews, the Roma of Ukraine lacked a rich cultural landscape. The Jewish Holocaust could be commemorated not only at mass graves but also through deserted synagogues, former ghettos and cemeteries; the Roma, who mainly led a vagrant way of life prior to the war, had none of these. The remaining mass graves constitute the only physical space of remembrance of the Roma genocide.

Politics of Liberalisation and New Trends in Memory Narratives and Practices
During the Khrushchev thaw, new interpretations regarding the significance of the civilian victims of the Nazi occupation developed in the Soviet Union. In 1960, the Piskarevo Memorial Complex, devoted to "The Victims of Siege during the Great Patriotic War," was opened in Leningrad. In 1965, a large memorial "To the Victims of Fascism" was opened in Ukrainian Donetsk. In 1969, the Memorial Complex Khatyn' was completed in Belarus, on the site of a former village where the Slavic population had been fully exterminated by the Nazis (Rudling 2012; Kotljarchuk 2013). A focus on civilian victims created opportunities for including the memory of the Roma and Jewish genocides in the major memory narrative of the Nazi occupation.
On the 20th anniversary of the tragedy, in September 1961, Yevgeny Yevtushenko published an epic poem titled "Babi Yar" in Literaturnaya gazeta, the leading newspaper of the Union of Soviet Writers. The poem, whose first line is "over Babi Yar there are no monuments," led to strong public support for recognizing Babi Yar as a site of the Jewish genocide (Gitelman 1997, 20). Nikita Khrushchev, the party leader of the Soviet Union and former head of the ChGK commission in Kyiv, had to meet the writers in order to explain the "errors" of Yevtushenko's epic. For the first time, the political leader recognized in the public sphere that both Roma and Jews were a primary target of Hitler's extermination war, but he denied the exceptional nature of the Roma and Jewish genocides: Let's take the case of Babi Yar. When I worked in Ukraine, I visited Babi Yar. Many people were murdered there. But comrades, Comrade Yevtushenko, you have to know that not only Jews died there, there were many others. Hitler exterminated Jews, exterminated Gypsies, but his next plan was to exterminate the Slavic peoples, we know that he also exterminated many Slavs. If we now calculate arithmetically, how many exterminated peoples were Jews and how many Slavs, those who state that it was anti-Semitic [war] would see that there were more Slavs exterminated than Jews. It's correct. So why should we put special attention to this question and contribute to hatred between peoples? What aims have those who raise such question? Why? I think this is completely wrong. (Khrushchev 2009, 2, 547) Pravda published a report from the meeting which stressed that: According to Comrade Khrushchev, the author of the poem [Yevtushenko] showed an ignorance of historical facts, he believes that the victims of Nazi atrocities were only the Jews, in fact there [in Babi Yar] were murdered many Russians, Ukrainians and other Soviet people of various nationalities. ("Rech' tovarishcha N. S. Khrushcheva," Pravda, March 10, 1963) As we can see, the text of the speech at the meeting was censored afterwards. The Roma were excluded from the list of victims and the genocide of the Jews was played down. According to the official narrative, the victims of genocide suffered under the Nazi occupation just like other people of various nationalities in the Soviet Union. In 1956, Khrushchev and the Soviet government took the initiative to introduce criminal prosecution of vagabond Romani. According to the edict On Engagement in Work of Nomadic Gypsies, the police had an obligation to stop all travelling Roma and to compel them to settle down. The local authorities had to provide the Roma with temporary housing and work, and most itinerant Romani in Ukraine settled down. As a result, many genocide mass graves lost the personal and emotional link between relatives and victims. The representatives of local Roma communities that settled down near many of the genocide sites knew nothing about the murdered Roma. The situation was different for Jewish communities, since most of the genocide mass graves were commemorated by local genocide survivors (Kotljarchuk 2016c). The site of Babi Yar was an exception due to the existence of a local community of settled Roma, represented by the first and second generations of genocide survivors, who continuously visited Babi Yar after 1956 (interview with Raisa Nabaranchuk; interview with Tatiana Demina). In 1966, Anatoly Kuznetsov published a documentary novel about the Babi Yar massacre.
Kuznetsov grew up in Kurenivka in the vicinity of Babi Yar and survived the occupation in Kyiv. The novel describes his personal experience of the Nazi occupation, focusing on the massacre of the Jews, but it also mentions the mass killings of Roma. The novel was first printed as a journal publication in 2 million copies (Kuznetsov 1966). In 1967, the novel was published as a book in 150,000 copies by the Komsomol printing house (Kuznetsov 1967b). The censors cut the manuscript down by a quarter of its original text and removed all mention of local collaboration with the Nazis, as well as of the anti-Semitic attitudes of Slavic neighbours (Blium 1996, 133-34). It should be noted that a fragment concerning the mass killings of Roma remained in the Soviet edition. The book was translated into English in 1967 and published in uncensored form in the USA (Kuznetsov 1967a). For the first time, Soviet and international readers learnt about the mass killings of Roma at Babi Yar: The fascists hunted Gypsies as if they were game. I have never come across anything official concerning this, yet in the Ukraine the Gypsies were subject to the same immediate extermination as the Jews … Whole tribes of Gypsies were taken to Babi Yar, and they did not seem to know what was happening to them until the last minute. (Kuznetsov 1967a, 100) Despite having been published by the Komsomol printing house, the novel was heavily criticized by Izvestia, the official newspaper of the Soviet government (Troitskii, "Po stranitsam zhurnalov," Izvestiia, January 20, 1967). Kuznetsov defected from the Soviet Union in 1968, and the book was confiscated from the libraries. Nevertheless, the novel became the first public testimony on the mass killing of Roma at Babi Yar. In 1968, an international journal for Romani studies published a review of Kuznetsov's novel, written by Angus Fraser, with special attention to the fate of the Romani at Babi Yar (Fraser 1968). In 1968, Grattan Puxon, a British Traveller-Gypsy activist, and Dr. Donald Kenrick, a prominent linguist, completed the first-ever research project on the Nazi genocide of the Roma, supported by the Institute of Contemporary History at the Wiener Holocaust Library in London. The authors referred to Kuznetsov and noted that "an unknown number of Gypsies were murdered with the Jews at Babi Yar" (Kenrick and Puxon 1968, 149-152). This information was repeated in their ground-breaking book on the Roma genocide, The Destiny of Europe's Gypsies (1972). On September 29, 1966, on the 25th anniversary of the tragedy, an unauthorized rally was held for the first time at Babi Yar. The participants demanded the recognition of the Jewish genocide and the construction of a monument (Nakhmanovich 2006). The rally was attended by hundreds of people, among them Holocaust survivors, Jewish activists, writers, film makers, and dissidents of Jewish, Russian, and Ukrainian origin. The extermination of the Roma at Babi Yar was not mentioned by any of the speakers ("Babi Yar-1966: kak eto bylo," Maidan, September 28, 2006). The same year, the feature film Those Who'll Return Shall Love to the End was shot in Kyiv by Leonid Osyka. The central scene of the film is the mass killing of a Gypsy caravan by the Nazis (Kotljarchuk 2016a). At the end of 1966, a simple foundation stone was put up in Babi Yar with the inscription, "A monument will be erected here to honour the Soviet people - victims of fascist crimes in the period of the temporary occupation of Kiev in 1941."
Finally, in 1976 the Soviet memorial was erected at Babi Yar (see figure 3). The initiators discouraged placing any emphasis on the ethnic aspects of the tragedy. Instead, a typical soldier monument was constructed with the inscription, "Soviet citizens, POWs, soldiers and officers of the Red Army, were shot here in Babi Yar by German Fascists" (Evstafieva 2004, 187-206). The construction of a memorial 35 years after the tragedy was presented as a great achievement of memory politics (Kotljarchuk 2014, 34-36). The civilian victims were defined by officials as "peaceful citizens of Ukrainian, Jewish, Belarusian and Polish descent" (Odinets, "Monument u Bab'ego Yara," Pravda, June 23, 1976). The Roma victims were not mentioned. An interview with the chief architect Anatoly Ignashchenko illustrates the main line of the official narrative of Babi Yar, with its strong focus on heroes and combatants: "The memorial is intended as a sculptural requiem to strong-minded people - mariners of the Dnepr River Navy, the defenders of Kyiv, underground fighters, the POWs, but also peaceful citizens, women, the elderly, children" (Tsikora, "Monument zhertvam fashizma," Izvestiia, July 2, 1976). Despite the silence concerning the Roma victims, the 1976 memorial legitimized the memory practices of local Romani. The first and second generations of genocide survivors started to visit the monument on Victory Day and on Memorial Day, the 29th of September (see figure 4). They brought flowers and photos of murdered relatives to the monument (interview with Raisa Nabarchuk). However, due to the lack of an educated stratum within the Romani community, efforts towards public recognition and memorialization of the genocide were problematic. The only known attempt was made in 1968 in Moscow. The artists of the State Theatre Romen sent a collective petition to the regional authorities in Smolensk asking for permission to erect a monument on the site of the mass extermination of the gypsy village Aleksandrovka (Holler 2009, 263-79). The answer was negative, despite the existence of the official print.

Democratisation of Soviet society during the perestroika opened the previously closed public "floodgates" of the memory of the Roma genocide. In 1985, the theatre Romen prepared a performance, Birds Need the Sky, staged in Moscow, Kyiv, and other large cities and dedicated to the victims of the Roma genocide (Kotljarchuk 2016a). The review of the performance was the first detailed account of the Nazi genocide of the Roma published in the Soviet Union after 1945 (Kishchik 1985). In 1989, a metal plaque in Hebrew was placed at the Babi Yar monument, symbolizing the conversion of Babi Yar from a typical memorial of the Great Patriotic War into a Holocaust site. The first testimonies of the Jewish tragedy had been published already during World War II. After the war, the Jewish memory actors made many public efforts for the recognition of Babi Yar as a site of genocide (Mankoff 2004; Lustiger 2008, 122-124). The first list of Jewish victims of Babi Yar, with more than 7000 names and several personal biographies, was collected and printed during the Soviet time (Zaslavskii 1991). The attempts of Roma memory actors were not as successful. In 1989, the first memorial on a site of the Roma genocide was opened at the village of Aleksandrovka near Smolensk. However, the 1989 monument presented the victims as Soviet peaceful citizens, without specifying their ethnicity. It was not until 2019 that the first Roma genocide memorial was erected in Russia, three years after the Roma genocide memorial in Kyiv.
The new memorial erected at Aleksandrovka is the first Roma genocide monument in the post-Soviet countries on which the names of the victims are written. However, the list of the victims is not comprehensive. The commemoration of the Roma genocide in post-Soviet Ukraine faced several obstacles related to the dependency on past history, such as the poor documentation and the de-personification of the victims. A key challenge for the commemoration of the Roma tragedy of Babi Yar was the lack of names and personal biographies of both genocide victims and survivors. This highlights the main difference between the ongoing commemoration of the Holocaust and that of the Nazi genocide of the Roma in Ukraine. Due to the collective protests and efforts of the Jewish intelligentsia, the Jewish trauma of Babi Yar was accepted, to some extent, by the Soviet government, and recognized in 1989-1991. The official recognition of the Roma genocide was a long-term process that took decades after the fall of the Soviet Union.

Babi Yar and the Commemoration of the Roma Genocide in Contemporary Ukraine

After 1989, the significance of the Holocaust underwent a substantial change, and the Jewish memorial Menorah was opened at Babi Yar on the 50th anniversary of the tragedy. Two new editions of the list of Jewish victims of Babi Yar were prepared and published, containing more than 14,000 names and several personal testimonies (Shlaen 1995; Levitas 2005). Only two names of Roma victims have been identified for the same period of time (The Babi Yar Public Committee Database 2019). However, even two names have a great symbolic value for the actors of memory in their efforts for public recognition of the Roma genocide. An initiative to erect a Roma genocide monument at Babi Yar was taken in 1995 by Anatoly Ignashchenko, the former chief architect of the Soviet memorial. Ignashchenko discussed the design of the Roma genocide monument with the Roma activists Mikha Kozimirenko and Volodymyr Zolotarenko (Yarmoluk, "Kvitok do Romanistana," Den', May 30, 1998; Zinchenko, "Baron i kosmos," Aratta, February 17, 2009). Kozimirenko was a genocide survivor and poet who published a poem "Babi Yar" devoted to the Roma victims, in which he protested against the unmarked mass graves and the absence of a memorial. The monument, which was sponsored by different non-governmental organizations, was completed in 1996. It represents a life-size gypsy wagon made of wrought iron with bullet holes through it. Drawing on his knowledge of Romani memory practices at Babi Yar, Ignashchenko came up with a solution to overcome the de-personification of the victims. He attached to the tent several photo frames in which family members are encouraged to insert photos of their murdered relatives (Kotljarchuk 2014, 45). The inscription was made both in Ukrainian and Romani and was devoted "To the memory of Roma exterminated by the Nazis in 1940-1945. We remember!" The idea of the monument was inspired by the film A Roma, directed by Alexander Blank and based on the novel written by Anatoly Kalinin. The film tells the story of a Red Army veteran and gypsy baron, Budulai, who travelled with a caravan through the Soviet Union searching for his family, who had been killed by the Nazis, and for their wagon, which had been lost in the war (Kotljarchuk 2016a). However, when the monument was ready to be placed in Babi Yar, the city authorities stopped its erection (interview with Tatiana Demina). What exactly prevented the erection of the proposed monument is unknown.
Ignashchenko had to donate the monument to the town of Kamyanets-Podilsky. On Memorial Day, September 29, 1999, a simple foundation stone was erected at Babi Yar at the expense of different Roma non-governmental organizations. This time the city authorities sanctioned the erection of the stone. The setting up of the monument led to new memory practices. On International Roma Day (April 8), activists and representatives of the Roma community started to arrange an annual ceremony at the foundation stone. They placed flowers and installed three flags behind the monument: the Roma national flag and the flags of Ukraine and the European Union. The actors of memory used the public ceremony to raise awareness of Roma history and collective trauma among the visitors of Babi Yar, as well as through publications about the event in the press. The 2005 resolution of the Ukrainian Rada, On the International Day of the Roma Holocaust, gave an impetus to the further memorialization of the Roma genocide. The parliament instructed local authorities "to identify mass graves and commemorate deported and executed members of the Roma national minority" ("Resolution" 2005). The address of President Viktor Yushchenko on the occasion of the International Day of the Roma Holocaust, issued on August 2, 2009, marks a new line in the major narrative of World War II in Ukraine. For the first time, the political leader of the country devoted a statement to the Roma genocide. The president argued for including the Roma genocide in the national memory narrative of World War II. He stressed the exceptional nature of the Nazi extermination of the Romani people and called for the active participation of the authorities and civil society in the commemoration of the Roma genocide (Yushchenko 2009, 11-12). In 2011, some weeks before International Roma Holocaust Memorial Day, the foundation stone at Babi Yar was totally destroyed by unknown vandals. The Roma Congress of Ukraine sent an open letter of protest to Prime Minister Mykola Azarov, who was the chair of the Committee for the 70th anniversary of Babi Yar. The Congress called for an end to the "discrimination of their memory by the state" and demanded the inclusion of Romani representatives in the Committee and a constant dialogue between the Roma and the government regarding the construction of a Roma genocide memorial at Babi Yar ("Romi vimahaut vid Azarova vshanuvaty i ikhni Holocaust," Ukrains'ka Pravda, July 13, 2011). The protest letter led to the erection of a new memorial stone at Babi Yar, this time funded by the state. A new inscription appeared: "In memory of Romani, who were shot in Babi Yar" (Kotljarchuk 2014, 46). The reaction of the Romani community to the new monument was negative. They believed that the inscription meant that the state had constructed a final memorial at Babi Yar. The Roma non-governmental organizations protested and stressed the fact that the Roma people had been waiting almost twenty-five years for a memorial at Babi Yar, while about twenty other memorials had been built (interview with Raisa Nabarchuk). In 2012, the Ukrainian government approved the concept of the National Historical Memorial Preserve in Babi Yar, which includes the building of a Roma memorial and the commemoration of the genocide. In cooperation with the Roma, the National Preserve organized a public ceremony on August 2 dedicated to Roma Holocaust Memorial Day (Babyn Yar National Historical Memorial Preserve 2019).
The public ceremony commemorating the Roma genocide at Babi Yar on August 2 marks the commencement of new memory practices. Since 2012, the public ceremony on August 2 at Babi Yar has united the Ukrainian Roma, Roma activists from other European countries, the authorities, and the general public (see figure 5). In 2016, Prime Minister Volodymyr Groysman, the head of the Organising Committee for the Preparation of the 75th Anniversary Commemoration of Babyn Yar, announced the opening of the Roma genocide memorial at Babi Yar (Organising Committee 2016). The gypsy wagon designed by Ignashchenko, which had been rejected as a memorial in Kyiv, was renovated, transferred back to the city and erected in September 2016 at Babi Yar, near the Soviet memorial (see figure 6). The opening of a memorial at Babi Yar symbolizes the inclusion of the memory of the Roma genocide in the national narrative of World War II. In 2016, information panels about the Nazi genocide of the Roma were installed at Babi Yar; however, very little of the information and photo material on these panels was dedicated to the local victims. In September 2016, President Petro Poroshenko announced the building of a new museum and central memorial at Babi Yar, to be completed in 2023. The project is financed by an International Foundation established by oligarchs of Jewish descent from Russia and Ukraine (Zisels 2017). On behalf of the government, the Institute of History at the Ukrainian Academy of Sciences developed The Concept of the Museum and Memorial Centre at Babi Yar, which was published online in 2018 (hereafter the Concept). The research team of Ukrainian historians, led by professor Hennady Boriak, describes the future memorial and museum as a reunified site of commemoration for all victims of Nazism in Ukraine (Kontseptsiya 2018). The authors avoid speaking about the Roma as victims of genocide. The term Holocaust is used in the Concept only with regard to the Jews. The authors ignore the facts that in many European countries the Roma genocide is included in the concept of the Holocaust and that the European Parliament calls August 2 the European Roma Holocaust Memorial Day (Kotljarchuk 2020). The Concept tends to glorify the Organization of Ukrainian Nationalists (OUN), presenting them as victims of Nazism like any others (Kontseptsiya 2018, 51-53). This is problematic due to the collaboration of many nationalist leaders with the Nazis (Burakovskiy 2011; Oldberg 2011; Rossolinski-Liebe 2012; Rudling 2016). The authors of the Concept are critical of the idea of dedicating a central memorial to the Jewish Holocaust and call this point of view "an incorrect vision, which is popular in Jewish, Western, Liberal-Russian, and other post-Soviet circles" (Kontseptsiya 2018, 5). The state-run concept of a new memorial and museum at Babi Yar and the use of the memorial for the utilitarian political goals of the Ukrainian leadership have sparked a very wary reaction from the Jewish diaspora (Briman 2020). An alternative project for the future memorial and museum is presented by the non-governmental Holocaust Memorial Centre, which is an academic section of the International Foundation Babyn Yar. Its authors see the future memorial and museum as, first of all, a site of the Jewish Holocaust. This vision is supported by an international Advisory Scientific Council composed of international and Ukrainian researchers within Holocaust studies (Scientific Council 2019). In November 2018, the Scientific Council, led by Karel C.
Berkhoff, presented The Basic Historical Narrative of the Holocaust Memorial Centre Babyn Yar (hereafter The Basic Narrative; see Berkhoff 2018). The Basic Narrative has a section about the mass extermination of Roma in Kyiv (Berkhoff 2018, 224-230). However, the authors argued for the exclusion of the Roma genocide from the term Holocaust (Berkhoff 2018, 12-13, 225). It is known that many international researchers of the Jewish Holocaust have rejected the claim that what happened to the Roma during World War II could be termed Holocaust (Gaunt 2016). A large part of the content of the Basic Narrative is devoted to the collaboration of the local auxiliary police and many nationalists with the Nazis and their participation in the Holocaust (Berkhoff 2018, 58-95). Accordingly, the authors argued for the exclusion of the Ukrainian nationalists murdered by the Nazis in Kyiv from the memory narrative of the victims of Babi Yar. The existence of two alternative academic projects led to public debates. On one side, Oleksandr Kruglov, a member of the Scientific Council and a renowned researcher of the Nazi occupation of Ukraine, criticized the Concept of the Institute of History and argued that Babi Yar is a symbol of the Jewish catastrophe (Kruglov 2019). On the other side, Vitaly Nakhmanovich, a renowned historian of Babi Yar, argued against the Basic Narrative, describing it as a private project sponsored by oligarchs with a focus on the Jewish Holocaust only. He believes that the national memorial and museum should be designed and constructed by a state-run agency, not by private actors (Nakhmanovich 2020). In an open letter, the authors of the Concept warn of plans for the "privatisation of memory" by the International Foundation and call the Basic Narrative a "wrong attempt to link the site of Babi Yar with the history of Holocaust only, ignoring other victims and other dramatic moments of our past" (List-zasterezhennia 2017). The Basic Narrative has also been criticized by the Institute of National Remembrance, a central executive body operating under the government of Ukraine. Using conspiracy rhetoric, Volodymyr Viatrovych, then Director of the Institute, blamed the Holocaust Memorial Centre for having a "Russian connection" and accused the authors of the Basic Narrative of denying the symbolic value of Babi Yar as an all-national pantheon remembering the Nazi occupation (Viatrovych 2019). The ongoing public debates over the future memorial and museum at Babi Yar could be interpreted in terms of academic wars of memory. Neither the Concept nor the Basic Narrative is against the memory of the Roma genocide. However, the Basic Narrative tends to see the Romani as victims of a second-rate genocide, and the Concept mentions the Romani within a long list of different groups of victims, together with the Ukrainian nationalists. The process of democratization in Ukraine, which started at the end of the 1980s and accelerated after the 2014 Revolution of Dignity, led to a consensus between various non-governmental and governmental actors to commemorate the Roma genocide. Ukraine is an exception in its intensity of memorialisation of the Roma genocide: about twenty Roma genocide memorials were erected in Ukraine during the last decade. By comparison, in other post-Soviet states three Roma genocide memorials were erected in neighbouring Belarus, one in Russia, one in Estonia, one in Latvia, and two in Lithuania. The Memorial at Babi Yar became a central site for the public commemoration of the Roma genocide in Ukraine.
The memorial is visited regularly by various official and non-governmental delegations (My nikogda ne zabudem 2019). New memorials constructed in the countryside are often motivated by the existence of the Roma memorials in the capital (Official News Portal of Sumy Region 2016). New genocide memorials have been erected in both the western and eastern parts of Ukraine, and memory work on the Roma genocide is supported in Ukraine by most of the parliamentary parties (Kotljarchuk 2016c). Previous studies show significant regional and political differences concerning the memory narratives of World War II in contemporary Ukraine (Jilge 2006, 2007; Katchanovsky 2014; Plokhy 2017). The Nazi genocide of the Roma is an exception here, due to the absence of essential regional and political differences in the ongoing process of commemoration. For the Ukrainian political left (the Communist Party of Ukraine, which was banned in 2015, the Party of Regions, and Opposition Platform - For Life), the Roma genocide is an example of the cruelty inflicted by the Nazi regime that was defeated by the Red Army. For ultranationalists, represented by the members of the All-Ukrainian Union Svoboda and other political organisations, it is a tool to downplay the Jewish Holocaust and to include the leaders of the OUN in a national gallery of the victims of Nazism. As Yulia Yurchuk points out, "at the level of national memory, the legacy of the OUN and UPA will surely continue to present grounds for disputes and discontent" (Yurchuk 2017, 131). Another factor behind the inclusion of the memory of the Roma genocide in the major narrative of World War II is the integration of Ukraine into the European Union. The European Union regards the commemoration of the Roma genocide as a tool for integrating the Romani minority into the majority society (Baar 2011). The European Commission against Racism and Intolerance (ECRI) continuously monitors the implementation of the 2005 resolution on the memorialisation of the Roma genocide (Kotljarchuk 2016c). A principal actor in remembering the Roma genocide in Ukraine is the non-governmental Centre for Holocaust Studies in Kyiv. Academic and educational programmes on the Roma genocide are arranged by the Centre and are led by Mikhail Tyaglyy. There is no such institution in neighbouring Russia or Belarus. The Centre for Holocaust Studies organizes seminars and training for schoolteachers and others on the history and memory of the Roma genocide and cooperates with Roma activists and genocide survivors (Tyaglyy 2013). In October 2019, at the National Museum of the History of Ukraine in World War II, the Centre opened one of the first exhibitions on the Roma genocide, called The Neglected Genocide ("Vistavka Znevazhenii genocide" 2019). The cooperation of Romani representatives and academic scholars is another factor behind the inclusion of the memory of the Roma genocide in the major memory narrative. As David Gaunt pointed out: Bringing together Romani representatives and genocide scholars had been possible through two intellectual trajectories. One approach emerged from the growing insight among historians that memory, previously shunned, could enrich and deepen historical narrative based on archival sources… . Another, completely different, trend grew out of the Roma side, reacting to the fact that scholars who were not Roma dominated Romani studies, with an increasing demand to participate in research on all levels.
The slogan 'Nothing about us without us', long expressed only informally, has now been formalized by leading Roma human rights activists." (Gaunt 2016, 38)

Conclusion

In her study of the commemoration of the Roma and Jewish genocides in Germany, Nadine Blumer analysed the debates about the representation of the past at a Holocaust memorial in Berlin. Her research examined how the very idea of the Roma and Sinti genocide memorial arose as a response to the proposal to build a central, reunified memorial devoted to the Holocaust, and how this idea fell out of favour due to the competing interests of various memory actors (Blumer 2013). The case of Babi Yar is similar. The idea of building a new central memorial devoted to all groups of victims appears to be problematic due to competition between various memory actors with different visions of the past and of the content of future memorials and museums. Slawomir Kapralski argues that the issue of the memory of the Roma genocide depends on the changing dynamics of perceptions and memory practices, which are influenced by the development of the Roma genocide discourse and the transformation of the past into a symbolic value of modern Romani identity (Kapralsky 2013). As he has noted, "through commemoration of the genocide, the Roma people focus on their common past in order to create a better future" (Kapralsky 2012). Our study confirms this thesis. The revision of the Soviet narratives of World War II in Ukraine opened possibilities for the inclusion of the collective memory of the Roma people in the major narratives of World War II. As Andrii Portnov pointed out, the general memory narrative of World War II in Ukraine switched from the memory of heroes to the memory of the suffering of ordinary people (Portnov 2007). The political consensus between the Romani activists, Ukrainian genocide scholars, and the authorities created possibilities for the inclusion of the Roma minor memory of the genocide into the major memory narrative of the Nazi occupation of Ukraine. The main obstacle for the commemoration and memory practices of the Roma genocide in Ukraine is poor documentation. The lack of historical knowledge is a great challenge for memory actors who are trying to develop memory practices without a large number of oral testimonies and personal biographies of victims and survivors. An examination of the memory practices of the Roma community in Kyiv shows that they have always commemorated their relatives who were murdered by the Nazis. However, the content and day of commemoration have changed depending on the political context and the development of the major narrative. After the war, memory practices relating to the genocide victims were limited to family ceremonies on religious holidays inside the Romani community. The politics of liberalisation in the Soviet Union allowed the Roma to legitimize their memory practices and add new content. After 1976, the Roma visited the memorial at Babi Yar on Victory Day (May 9), a principal Soviet holiday for the commemoration of the Great Patriotic War. After the erection of the first foundation stone in 1999, the Roma activists moved the main day of commemoration from the 9th of May to International Roma Day (April 8), in order to mobilize the national movement and to raise awareness of the Nazi persecution of the Romani people. The parliamentary resolution on the Roma genocide and the creation of a National Preserve, as well as the process of integration of Ukraine into the European Union, led to the formation of new memory practices.
For the first time, the authorities have been involved in the planning of the commemoration ceremonies. The day of commemoration has been moved again, this time to International Roma Holocaust Memorial Day (August 2). The opening of a national memorial at Babi Yar in 2016 symbolizes the final inclusion of the memory of the Roma genocide in the major narrative of World War II. The positive decisions regarding the erection of the Roma genocide memorial at Babi Yar were taken by different presidents of Ukraine, belonging to different political parties. The construction of genocide memorials in Ukraine was supported by various political organisations, from the Communist Party of Ukraine to the nationalist Cossack associations (Kotljarchuk 2016c). The major narratives of World War II in Ukraine shaped and reshaped the memory and memory practices of the Roma genocide through the exchanges and connections between different memory actors in a changing political context. The gradual entry of the memory of the Roma genocide into the major narrative of World War II can be explained by many factors: the switch of focus from the heroic soldiers of the Red Army to the suffering of ordinary Ukrainian people, the democratisation process, and the integration of the country into the European Union. Today in Ukraine, the memory of the Nazi genocide is a key element of the Roma national movement and its memory practices. Disclosure. The author has nothing to disclose.
Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor

In a previous paper, a number of potential models for short-term water demand (STWD) prediction have been analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only of the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multi-variate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how the estimation of the predictive probability conditional on a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on average weekly data of 168 h, the probability density of the future demand is built conditional on three models' predictions, namely the autoregressive-moving average (ARMA), the feed-forward back propagation neural network (FFBP-NN) and a hybrid model (i.e., the combined forecast from ARMA and FFBP-NN). The results obtained show that MCP may be effectively used for real-time STWD prediction, since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.

Introduction

The variation of the water consumption pattern during the day and the week is due to several factors, namely climatic and geographic conditions, the commercial and social conditions of people, population growth, technical innovation, the cost of supply and the condition of the water distribution system (WDS) [1,2]. Hence, an accurate short-term water demand (STWD) forecast is required for the continuous supply of water to consumers with appropriate quality, quantity and pressure [2]. Several predictive models have been proposed to solve water utilities' operational decision problems [2-11]. It has been reported in the scientific literature that predicting with hybrid models gives the best forecast for STWD prediction [2,12-15]. However, a hybrid forecast is deficient for the kind of operational planning decisions that water utilities make when future demand is uncertain [7]. This is because it does not take into account the real uncertainty connected to the future level of demand, and this is a serious limitation [2,7,16-18]. Moreover, given that the actual objective of WDS management is not improving demand forecasts per se, but rather to more reliably guarantee short-term users' demand, the problem must be formulated in terms of decision under uncertainty [16,17]. The Bayesian decision approach is one of the best ways to solve this problem [19,20]. In it, the utility function is most often a subjective cost function expressing the propensity of the decision maker to risk, and decisions are based on its expected value. Therefore, forecasting the entire predictive density instead of the sole expected value is required, and this can guarantee more reliable and robust decisions [16].
Several authors [16-18,21-25] confirmed that it is absolutely necessary to take into account the predictive uncertainty (PU), especially when the predictive models considered are applied within the framework of water management procedures or to support decision-making [21]. According to [16,18], PU is described by the probability distribution of the future (real) value of the predictand, conditional on the knowledge available at the time of the forecast. To understand and analyse the overall level of the PU connected to an STWD forecast, model uncertainty processors (MUPs) are considered [16,26,27]. Amongst other MUPs (e.g., Bayesian Model Averaging (BMA) [26] and the Hydrological Uncertainty Processor (HUP) [27]), the Model Conditional Processor (MCP) proposed by [16] is used in this paper. MCP is an uncertainty post-processor that allows the combination of one or more forecasting models to produce a predictive density instead of a single-valued forecast [16,17]. The motivation for the selection of MCP is based on recent applications that have proven its validity and robustness [16-18]. Furthermore, decisions such as releasing a sufficient amount of water to users may be linked to losses, such as a loss of credibility with users or, more concretely, contractual penalties if the objective is not met [16,24]. At the same time, releasing too much water may lead to waste, and thus to potential economic losses. This is why, by means of a predictive density of future demand, one should compromise between the cost of the water injected into the water distribution network (WDN) to meet future demand, including the loss of future opportunities, and the expected value of the losses if the demand objectives are not met. The point is that, once the decision is made on how much to inject into the WDN, what is injected is a real physical quantity which has a real cost, also in terms of lost opportunities, while the economic losses depending on the future actual demand are still uncertain. This is why, to compromise between the real costs of wasting water and the expected losses for not meeting demand, one needs first of all to assess the probability of future demand, conditional on all the available knowledge, and use this information to estimate the future expected losses by integrating the loss function, which depends on demand, times the probability of demand, over the entire domain of possible demands [18,24,28]. Models are the tools that allow us to correctly assess such an uncertainty, but they are not the final goal. Many researchers in the field of hydrology have looked into the problem of assessing the PU connected to a real-time flood forecasting system using the MCP [16,17]. However, to our knowledge, only Alvisi et al. [18] have used MCP to assess the PU within the framework of water demand forecasting, on the basis of the forecasts generated by two deterministic models, namely a cyclicity- and persistence-based model (Patt-for) and a feed-forward back propagation neural network (FFBP-NN). Based on the above, the main contribution of this paper is to apply the MCP approach to demonstrate how a number of comparatively good (or well performing) deterministic models, namely the autoregressive-moving average (ARMA), FFBP-NN and a hybrid model (the combined forecast from ARMA and FFBP-NN), may improve our knowledge, thus estimating the predictive uncertainty when forecasting into the unknown future.
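To make the expected-loss computation concrete, the following minimal sketch (our illustration, not part of the original study) numerically integrates a hypothetical asymmetric loss function against a Gaussian predictive density of demand; all function names, cost coefficients and numerical values are illustrative assumptions.

```python
import numpy as np

# Hypothetical asymmetric loss: wasting water (injecting more than demanded)
# is assumed cheaper per unit than failing to meet demand (penalties).
def loss(injected, demand, c_waste=1.0, c_shortage=4.0):
    return np.where(demand <= injected,
                    c_waste * (injected - demand),
                    c_shortage * (demand - injected))

def expected_loss(injected, mu, sigma, n_grid=2001):
    """Integrate loss(d) times p(d) over the domain of possible demands,
    taking p as the Gaussian predictive density N(mu, sigma^2)."""
    d = np.linspace(mu - 6 * sigma, mu + 6 * sigma, n_grid)
    pdf = np.exp(-0.5 * ((d - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.trapz(loss(injected, d) * pdf, d)

# Illustrative predictive density (units: m^3/h); values are made up.
mu, sigma = 50.0, 3.0
candidates = np.linspace(45, 60, 61)
risks = [expected_loss(q, mu, sigma) for q in candidates]
best = candidates[int(np.argmin(risks))]
print(f"Bayes-optimal injection: {best:.1f} m^3/h")
```

Because the assumed shortage cost exceeds the waste cost, the minimum-risk injection lies above the expected demand, which is exactly the kind of trade-off a single-valued forecast cannot express.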
This motivation is based on the fact that these models (e.g., the hybrid model) are the better deterministic models in the current state of the art [2], and they have not been tested before in assessing the predictive density in this perspective. To achieve this aim, the probability density of the future demand is developed based on the forecasts generated by ARMA, FFBP-NN and the hybrid model. Based on average weekly data of 168 h, the forecasting performances of ARMA, FFBP-NN, the hybrid model and MCP are assessed. Furthermore, MCP is used to estimate the PU connected to the STWD forecast. Finally, in this work, we demonstrate how to verify the correctness of the estimated predictive probability by comparing the predicted conditional density to the sampling density of the prediction errors via a graphical/statistical acceptance-rejection test. This is an essential step, not present in previous works (for instance, in [18]), to guarantee that the assessed probability density can be reliably used to estimate the expected losses in decision-making.

Model Conditional Processor (MCP)

MCP is a Bayesian method (i.e., an uncertainty post-processor) used to estimate the PU conditional on a set of historical observations and the corresponding values predicted by one or more deterministic forecasting models [16-18,28,29]. In this paper, we demonstrate how to use the models' information to improve our knowledge of the future demand, in terms of the variance of the prediction errors, by building the probability density of the future demand conditional on the predictions generated by ARMA, FFBP-NN and the hybrid model through the MCP approach. Todini [16] developed the MCP approach with the aim of estimating the predictive distribution of a given predictand conditional upon one or more model forecasts, on the basis of the following useful property of the multi-normal distribution [30]: if a real-valued random vector $x \in \Re^{j+k}$ is partitioned into two vectors $x = \begin{bmatrix} y \\ \hat{y} \end{bmatrix}$, where $y \in \Re^{j}$ and $\hat{y} \in \Re^{k}$, with mean $\mu_x = \begin{bmatrix} \mu_y \\ \mu_{\hat{y}} \end{bmatrix}$ and variance matrix $\Sigma_{xx} = \begin{bmatrix} \Sigma_{yy} & \Sigma_{y\hat{y}} \\ \Sigma_{\hat{y}y} & \Sigma_{\hat{y}\hat{y}} \end{bmatrix}$, then the two partitions $y$ and $\hat{y}$ will be normally distributed. Under this assumption, one can derive the distribution of each partition conditional on the other one. Therefore, the distribution of $y$ conditional on $\hat{y}$ is the normal distribution $N(\mu_{y|\hat{y}}, \Sigma_{yy|\hat{y}})$ with mean and variance-covariance matrix

$$\mu_{y|\hat{y}} = \mu_y + \Sigma_{y\hat{y}} \Sigma_{\hat{y}\hat{y}}^{-1} \left(\hat{y} - \mu_{\hat{y}}\right) \quad (1)$$

$$\Sigma_{yy|\hat{y}} = \Sigma_{yy} - \Sigma_{y\hat{y}} \Sigma_{\hat{y}\hat{y}}^{-1} \Sigma_{\hat{y}y} \quad (2)$$

According to [16,18], the MCP method involves the conversion of the historical observations and the corresponding predicted values into a normal space using the Normal Quantile Transform (NQT), in order to arrive analytically at an estimate of the joint distribution of the real and forecasted values, and hence at a conditional distribution of the real values given the forecasted ones. However, in this paper we are in the most favourable case, since the conversion into and back from the Gaussian space, and the problem of fitting the tails, are not necessary: the observations and the model forecasts generated by ARMA, FFBP-NN and the hybrid model are essentially Gaussian. The MCP approach was here implemented to assess the predictive density of the demand conditional on the predictions of ARMA, FFBP-NN and the hybrid model, the latter taken as a third model.
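As an illustration of Equations (1) and (2) in the Gaussian case, the sketch below estimates the conditional mean and variance of the observations given several model forecasts from calibration data. This is our own minimal implementation, not the authors' code; the synthetic data at the end are purely illustrative.

```python
import numpy as np

def fit_mcp(y, Y_hat):
    """Fit the MCP in the Gaussian case.
    y     : (n,)   observed demand over the calibration period
    Y_hat : (n, m) forecasts of the m deterministic models
    Returns (w0, w, var) such that mean{y|y_hat} = w0 + w @ y_hat."""
    mu_y, mu_h = y.mean(), Y_hat.mean(axis=0)
    C = np.cov(y, Y_hat.T)            # (m+1)x(m+1) joint covariance matrix
    S_yh = C[0, 1:]                   # covariances observations/forecasts
    S_hh = C[1:, 1:]                  # covariances among the forecasts
    w = np.linalg.solve(S_hh, S_yh)   # weights, cf. Eq. (1)
    w0 = mu_y - w @ mu_h
    var = y.var(ddof=1) - w @ S_yh    # conditional variance, cf. Eq. (2)
    return w0, w, var

def predict_mcp(w0, w, var, y_hat):
    """Predictive density N(mean, var) of demand given the model forecasts."""
    return w0 + w @ y_hat, var

# Toy usage with synthetic, correlated forecasts (illustrative only).
rng = np.random.default_rng(0)
y = 50 + 5 * np.sin(np.linspace(0, 8 * np.pi, 100)) + rng.normal(0, 1, 100)
Y_hat = np.column_stack([y + rng.normal(0, s, 100) for s in (1.5, 1.2, 0.9)])
w0, w, var = fit_mcp(y, Y_hat)
print(w0, w, var)
```

Note how the weights solve the linear system built from the forecast covariance matrix, so a forecast that is highly correlated with the observations but weakly correlated with the other forecasts receives a larger weight, as argued in the text.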
The importance of including the hybrid model lies in the fact that, in the multi-variate approach described by Equations (1) and (2), the weights and the resulting variance depend directly on $\Sigma_{y\hat{y}}$, the covariance matrix between observations and model forecasts, and inversely on $\Sigma_{\hat{y}\hat{y}}$, the covariance matrix among the model forecasts. In layman's terms, a model contributes more information when the correlation between its forecast and the observations is higher and its forecast is less correlated with the other model forecasts. As it stands, the hybrid model, a linear combination of the ARMA and FFBP-NN models, has a high correlation with the observations and, to a lesser extent, with the single ARMA and FFBP-NN models, which allows it to provide a small but important amount of additional information.

Case Study Description and Discussion of Results

The predictive performances of ARMA, FFBP-NN, the hybrid model and the MCP are assessed based on a 168 h (one week) long set of data sampled at hourly time steps, estimated by averaging eight weeks of observations (see Figure 1). These averages were the only available data from a case study site located in a hydraulic zone of the small city of Alquerias (Murcia) in south-eastern Spain, which has a population of approximately 5000 consumers and an extension of nearly 8 km². The numbers of data used for the calibration and validation sets are approximately 60% and 40% respectively, with the first 100 h used for calibration. Figures 2-4 and Table 1 are obtained based on the MCP mathematical expressions given in Equations (1) and (2), together with the ARMA, FFBP-NN and hybrid models respectively given in Equations (3)-(5) [2]. In addition, the predictive performances of ARMA, FFBP-NN, the hybrid model and MCP are evaluated by using the following forecasting statistics: root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash-Sutcliffe (NS) model efficiency, as given in Equations (6)-(8). Although optimising the single-value statistics is not the main goal of this paper, the RMSE, MAPE and NS values presented in Table 1 show that MCP generated the best forecast compared to ARMA, FFBP-NN and the hybrid model (see Figures 2 and 3). In addition to providing the expected conditional forecast, as all the models do, the MCP approach allows the correct estimation of the full conditional predictive probability distribution (see Figure 4).

$$Y_t = \mu + \sum_{k=1}^{p} \phi_k Y_{t-k} + \varepsilon_t + \sum_{k=1}^{q} \theta_k \varepsilon_{t-k} \quad (3)$$

where p and q are the model orders, φ is the autoregressive parameter, θ is the moving average parameter, µ is the mean value of the process, and ε_t is the forecast error at time t; Y_t is the observed value of demand at time t, k is the number of historical periods, and Y_{t−k} and ε_{t−k} are the observation and error at time t−k [2].

$$\hat{Y}_t = \alpha_0 + \sum_{j=1}^{p} \alpha_j f\left(\beta_{0j} + \sum_{i=1}^{h} \beta_{ij} Y_{t-i}\right) \quad (4)$$

where p is the number of hidden nodes, h is the number of input nodes, f is a sigmoid transfer function, α_j is the vector of the weights from the hidden to the output nodes, β_ij are the weights from the input to hidden nodes, and α_0 and β_0j are the weights of the arcs leaving the bias terms [2].

$$\hat{Y}_t = \beta_0 + \sum_{i} \beta_i \hat{Y}_{i,t} \quad (5)$$

where Ŷ_{i,t} is the predicted value of the time series at time t using the i-th model, β_0 is the regression intercept, and the β_i coefficients are determined by optimisation or least-squares regression to minimise the mean square error (MSE) between the hybrid forecast and the actual data [2,7].

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left(Y_t - \hat{Y}_t\right)^2} \quad (6)$$

$$\mathrm{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{Y_t - \hat{Y}_t}{Y_t} \right| \quad (7)$$

$$\mathrm{NS} = 1 - \frac{\sum_{t=1}^{n} \left(Y_t - \hat{Y}_t\right)^2}{\sum_{t=1}^{n} \left(Y_t - \mu_{Y_t}\right)^2} \quad (8)$$

where Y_t is the real observation, Ŷ_t is the forecast value at time t, µ_{Y_t} is the mean of the real observations, and n is the number of data points [2].
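The following sketch (our illustration, not the paper's code) shows how the hybrid weights of Equation (5) can be obtained by least squares and how the scores of Equations (6)-(8) can be computed; the function and variable names are our own.

```python
import numpy as np

def fit_hybrid(y, y_arma, y_nn):
    """Least-squares estimate of beta_0, beta_1, beta_2 in Eq. (5)."""
    X = np.column_stack([np.ones_like(y), y_arma, y_nn])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [beta_0, beta_1, beta_2]

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))            # Eq. (6)

def mape(y, y_hat):
    return 100 * np.mean(np.abs((y - y_hat) / y))        # Eq. (7)

def nash_sutcliffe(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # Eq. (8)
```

Fitting the intercept together with the two model weights mirrors the MSE-minimising regression described for the hybrid model.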
In Figure 4, the expected conditional value and a 95% probability band are compared to the observations, showing that, as expected, most of the observations fall within the uncertainty band. The outcomes show that MCP provides more information in terms of a correct estimate of the full predictive probability density, which allows estimating the "expected utility function" within a Bayesian Decision scheme. In addition, the probability plots obtained in Figure 5 are generated based on [32] and show that the hypothesis that our predictive probability distribution is correctly estimated cannot be rejected at the 95% probability level. These tests (see Figure 5) demonstrate that MCP correctly estimates the predictive probability density, and such an outcome is useful within the framework of water management procedures or to support decision-making [16-18,21]. Unlike the forecasts generated by ARMA, FFBP-NN and the hybrid model, MCP provides a probabilistic forecast, such that Equations (1) and (2) result in the mean and variance given below, where ŷ_1, ŷ_2 and ŷ_3 are the model forecasts of ARMA, FFBP-NN and the hybrid model respectively, and y is the historical observation. According to Equations (1) and (2), which correspond to multiple regression in the normal space, the predictive probability density of the predictand (the future demand) conditional on the three model predictions is the normal probability density with

$$\operatorname{mean}\{y|\hat{y}_1,\hat{y}_2,\hat{y}_3\} = \mu_y + \sum_{i=1}^{3} \omega_i \left(\hat{y}_i - \mu_{\hat{y}_i}\right) \quad (9)$$

$$\operatorname{var}\{y|\hat{y}_1,\hat{y}_2,\hat{y}_3\} = \sigma_y^2 - \sum_{i=1}^{3} \omega_i \gamma_{y\hat{y}_i} \quad (10)$$

where µ_y and σ²_y are the mean and variance of the predictand, while µ_ŷ1, µ_ŷ2 and µ_ŷ3 are the means of the three model forecasts. The values of the weights ω_1, ω_2 and ω_3 can be estimated from the observations y (the predictand) and the model forecasts ŷ_1, ŷ_2 and ŷ_3 (the predictors) as the solution of the linear system

$$\sum_{j=1}^{3} \gamma_{\hat{y}_i \hat{y}_j}\, \omega_j = \gamma_{y\hat{y}_i} \qquad \forall i = 1, \ldots, 3 \quad (11)$$

where γ_yŷi, ∀i = 1,...,3, are the covariances between the observations y and the model forecasts ŷ_i, while γ_ŷiŷj, ∀i,j = 1,...,3, are the variances (∀i = j) and the covariances (∀i ≠ j) between the model forecasts. By setting

$$\omega_0 = \mu_y - \sum_{i=1}^{3} \omega_i\, \mu_{\hat{y}_i} \quad (12)$$

Equation (9) becomes $\operatorname{mean}\{y|\hat{y}_1,\hat{y}_2,\hat{y}_3\} = \omega_0 + \sum_{i=1}^{3} \omega_i\, \hat{y}_i$. Accordingly, the estimated weights and variance become ω_0 = −2.285; ω_1 = 0.406; ω_2 = 0.349; ω_3 = 0.370; var{y|ŷ_1,ŷ_2,ŷ_3} = 0.867, leading to:

$$\begin{cases} \operatorname{mean}\{y|\hat{y}_1,\hat{y}_2,\hat{y}_3\} = -2.285 + 0.406\,\hat{y}_1 + 0.349\,\hat{y}_2 + 0.370\,\hat{y}_3 \\ \operatorname{var}\{y|\hat{y}_1,\hat{y}_2,\hat{y}_3\} = 0.867 \end{cases} \quad (13)$$

In this work, following [32], we introduce the probability plot as a tool to assess the acceptance of the estimated probability. The probability plot is a plot of the estimated probabilities versus their empirical cumulative distribution function. For a perfect match, the resulting curve should be a 45-degree line corresponding to the cumulative uniform probability distribution; in other words, the curve should approach the bisector of the diagram. Kolmogorov confidence bands can be represented on the same graph as two straight lines, parallel to the bisector and at a distance dependent upon the chosen significance level of the test. For a 0.05 probability level, corresponding to a 95% band, the distance from the bisector line is $1.358/\sqrt{n}$, with n the number of observations used in the test. Figure 5 shows the probability plots relevant to the calibration and validation datasets. It is clear that the acceptability test for both datasets is passed at the 5% acceptance level, which implies that the developed predictive densities can be reliably used to estimate the expected utility values to be maximised in the Bayesian Decision scheme. Nonetheless, a better outcome could be obtained if a longer record of data were used.
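A minimal sketch of the probability-plot acceptance test described above, assuming a Gaussian predictive density: it computes the predictive probabilities of the observations, compares their empirical CDF with the uniform CDF, and applies the 95% Kolmogorov band of half-width 1.358/sqrt(n). This is our own illustration of the procedure, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def probability_plot_test(y_obs, mean_pred, std_pred):
    """Acceptance test of the predictive density at the 5% level.
    Sorted predictive probabilities of the observations should be
    uniformly distributed if the predictive density is correct."""
    z = np.sort(norm.cdf(y_obs, loc=mean_pred, scale=std_pred))
    n = z.size
    i = np.arange(1, n + 1)
    # Two-sided Kolmogorov-Smirnov statistic against the uniform CDF
    D = max(np.max(i / n - z), np.max(z - (i - 1) / n))
    band = 1.358 / np.sqrt(n)   # 95% Kolmogorov band half-width
    return z, i / n, band, D <= band
```

Plotting z against i/n, together with the bisector and the two parallel band lines at distance 1.358/sqrt(n), reproduces the kind of diagram shown in Figure 5.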
Based on this study, the results presented in Figures 2-5 and Table 1 show that MCP may be effectively used for real-time STWD forecasting.

Conclusions

This paper applies the MCP approach to demonstrate how a number of potential predictive models may improve our knowledge and, in turn, estimate the predictive uncertainty (PU) when forecasting into the unknown future. To achieve this aim, a comparative assessment of the forecasts generated using ARMA, FFBP-NN, the hybrid model (i.e., the combination of ARMA and FFBP-NN) and MCP conditional on the ARMA, FFBP-NN and hybrid models is first conducted. Afterwards, the probability density of the future demand is built based on the forecasts generated by ARMA, FFBP-NN and the hybrid model. In addition, in this work, we demonstrate how to verify the correctness of the estimated predictive probability, which is an essential step towards correct expected-loss estimates in view of decision-making. The PU connected to the STWD forecast is estimated and validated based on a 5% probability acceptability test. The results obtained show that the forecast generated by MCP marginally outperforms those of ARMA, FFBP-NN and the hybrid model, and it also allows assessment of the full predictive density to be used in the estimation of the expected losses in decision-making. Finally, the probability acceptance/rejection tests on both the calibration and verification periods showed that the developed predictive densities are acceptable at the 5% probability level. In conclusion, the outcomes of this study indicate that MCP may be efficiently used for real-time STWD prediction, since it brings out the PU connected to the forecast obtained, and with such information water utilities could estimate the risk connected to a decision.
Hardening treatment of friction surfaces of ball journal bearings

The article presents the technology of finishing plasma hardening by the application of a multi-layer nanocoating of the Si-O-C-N system to harden the friction surfaces of ball journal bearings. The authors have studied the tribological characteristics of the applied wear-resistant anti-friction coating, which determine the increase in wear resistance of the ball journal bearings.

Introduction

One of the new methods of surface hardening, which ensures the application of wear-resistant thin-film coatings, is the process of finishing plasma hardening (FPH), based on the application of a plasma jet flowing at atmospheric pressure. The efficiency of this process derives from the compact and economical equipment, which allows for applying the hardening nanocoatings. This method belongs to the additive technologies [1]. The fundamental principle of the FPH technology of applying a thin-film wear-resistant coating based on the Si-O-C-N system is the diffusion of the vapours produced by liquid organoelemental agents, which are introduced into the plasma chemical reactor of the arc plasma gun, followed by plasma chemical reactions and the formation of a coating on the work pieces. Argon is used as the plasma-supporting gas, as it provides increased durability and reliability of the plasma gun components in the course of the long-term process. The vapours of volatile liquid reagents are used as the coating-forming materials. They are supplied into the reactor by a special feed-control device. The power for the plasma gun is supplied by a DC inverter with special current-voltage characteristics. The stable cooling of the reactor and the plasma gun is ensured by a cooler made from a refrigerating unit. The monitoring system of the process ensures monitoring and control of the treatment parameters and determines the thickness of the applied coating in the course of its deposition. The main technologies of applying wear-resistant nanocoatings, which are commonly used abroad, are chemical vapour deposition (CVD) and physical vapour deposition (PVD). In the aerated FPH technology, the coating is applied in 3...30 nm layers at typical plasma jet movement speeds of 10...100 mm/s. Unlike the coatings condensed in vacuum in PVD and CVD processes, in this method the coating is formed locally, where the plasma jet contacts the base coat, and only as a multilayer coating, which is an important feature of the FPH technology. The cyclic relative movement of the plasma jet and the hardened surface in FPH results in a layered structure of the coating and allows reducing the thermal effect of the plasma on the base coat to a minimum, thus completely eliminating softening due to tempering for all steels. The integral temperature of the hardened work pieces during the application of the coating is usually not more than 150 °C. The hardening coating is formed as a transparent film. On a polished surface it appears as an interference pattern with rainbow hues from purple-blue to green-red, depending on the thickness of the coating.
The choice of the material for the coating applied by the FPH method is determined based on knowledge of the wear mechanisms experienced by different products, as well as the analysis of available experience of using various compounds as coatings. If we consider fundamentally any tribosystem operating under adhesive, fatigue, oxidation and abrasive wear, the most promising thin-film coatings would be non-metallic solids: carbides, nitrides, borides, silicides, oxides, composite and nanocomposite materials based on them, as well as cermets and diamond [2]. In this case, the coating must have maximum adhesion and a thermal expansion coefficient close to that of the hardened work piece, and its surface properties should comply with the characteristics that increase product durability, i.e. high hardness, chemical inertness, thermal stability, low thermal conductivity, a minimal friction coefficient, etc. In recent decades, the processes of coating by chemical deposition have widely employed element-organic (organometallic) compounds; their use in coatings provides an improved level of safety (considering they are non-toxic) and zero explosion risk (considering they are used in the liquid state). It is significant that the element-organic compounds can contain all the elements necessary for producing the coatings in a single substance, which improves the effectiveness of the monitoring of the process and the reproducibility of the coating properties. X-ray diffraction analysis confirms that after the FPH the coating is formed in the amorphous state, with no dislocation activity, and the coating exhibits high values of resistance to plastic deformation and elastic recovery. The Si-O-C-N system coating applied with the FPH technology is characterized by high hardness at a low elasticity modulus and by close values of the elasticity moduli of the coating and the base coat material, which should objectively result in the improved wear resistance of the surface layer.

Findings and discussions

The objects of the tribological characteristics study were the journal ball bearings ШС30 (GOST 3635-78, ISO 6125-82). The hardening was performed on the exterior ball surfaces of the inner rings of the ШС30 journal ball bearings, which are shown in Figure 1. The wear-resisting properties of the modified ball journal bearings and of those manufactured by the factory technology were tested on a specially designed and produced installation; using an automated research system, the installation allowed determining the tribological performance of the friction surfaces of the ball journal bearings. The bearings test plan is presented in Table 1, which shows the numbers of the tested bearings, the method of finishing applied to the friction ball surfaces of the inner ring of the bearing, the lubricant used for the preconditioning of the test surfaces and the main lubricant.

Table 1. Bearings test plan

The bearings were tested in the following conditions: relative sliding velocity of the ball surfaces υ = 0.84 m/s (with a ball surface diameter d = 40 mm and a rotating velocity n = 400 min^-1); normal force loading N = 2000 N (corresponding to Hertz-calculated pressures of about 11 MPa); type of lubrication: boundary; predominant wear mode: fatigue; lubricant: in accordance with the test plan (see Table 1); total time of every bearing test: 6 hours. During the tests, the sensor system continuously and synchronously recorded the test time, load, friction coefficient and linear wear.
The recorded values were displayed on the PC monitor. Strain gauges were used to measure the friction torque and the load. To ensure continuous measurement of wear in the course of the test, we developed a special scheme using an inductive sensor, which allowed obtaining measurement results free from the radial motion variation and thermal deformation of the tested specimen.

The analysis of the recorded parameters established the following indicators of tribological properties:
• running-in time t0 (hours), defined as the time from the start of the test until the wear curve reaches the region of normal wear;
• running-in wear h0 (micrometers), defined as the wear value at the end of the running-in time t0;
• friction coefficient value at the end of the test, f;
• f0/f, the ratio of the maximum value of the friction coefficient during running-in, f0, to its value at the end of the test, f;
• the average value of the wear rate during normal wear, I_h = (h − h0) / (L − L0), where h (micrometers) is the total wear of the sample during the test, L (micrometers) is the friction path covered by the sample surface during the test, and L0 = 3.6·10^9·t0·υ (micrometers) is the friction path covered by the sample surface during running-in;
• the value of the wear rate over the total test time, I_hΣ = h / L.

The results of the tribology tests of the bearings are presented in Table 2.

It should be noted that the applied lubricant never leaves the friction area on the working surfaces of the journal ball bearings for the duration of the test, which is ensured by its properties. Despite the significant wear of the hardened layer during the test of the journal ball bearings (initial running-in and normal wear), one should bear in mind that the period of running-in and the initial period of normal wear is the time when the basic patterns of further friction and wear of the workpiece in continuous service are established, which primarily affects its wear resistance and durability, along with the other relevant factors. Also, the FPH coating wear products never leave the friction area and act as an additional solid lubricant and as a means of healing micro-defects (microcracks, microchips, microcutting effects (abrasive scratches), etc.) at the friction surface. We established that the wear products of the Si-O-C-N coating affect the tribological characteristics of the friction pairs: they help to prevent direct transfer of the coating material to the counterbody, fill micro-voids and become anchored in the micro-roughness of the contacting surfaces, which reduces specific pressures and increases the wear resistance of the friction pair. The micrograph of the wear track obtained in tests on a Tribometer (CSM, Switzerland) under dry friction of the Si-O-C-N coatings confirmed the formation of wear products that are not carried away but remain at the bottom of the track, providing 'healing' of the wear areas (Figure 2). Figure 3 shows the parameters of a strip of the worn Si-O-C-N coating, with measurements of its width. The anti-wear and anti-friction action of the Si-O-C-N coating persists for the entire test period (the friction coefficient curve decreases) and is not associated only with the running-in process.
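The wear-rate indicators defined above reduce to simple arithmetic once the test readings are available. The sketch below (plain Python; the numeric inputs are made-up placeholders, not values from Table 2) shows how I_h and I_hΣ would be computed from a test record:

```python
def wear_rates(h_um, h0_um, L_um, t0_h, v_mps):
    """Compute the normal-wear rate I_h and the total-test wear rate I_h_sum.

    h_um  : total wear during the test, micrometers
    h0_um : running-in wear, micrometers
    L_um  : total friction path, micrometers
    t0_h  : running-in time, hours
    v_mps : sliding velocity, m/s
    """
    # Friction path covered during running-in, in micrometers:
    # t0 hours -> 3600*t0 seconds, times v m/s, times 1e6 um/m.
    L0_um = 3.6e9 * t0_h * v_mps
    I_h = (h_um - h0_um) / (L_um - L0_um)   # average wear rate during normal wear
    I_h_sum = h_um / L_um                   # wear rate over the total test time
    return I_h, I_h_sum

# Illustrative (made-up) readings for a 6-hour test at v = 0.84 m/s,
# where the total friction path is 3.6e9 * 6 * 0.84 = 1.8144e10 um:
I_h, I_h_sum = wear_rates(h_um=8.0, h0_um=3.0, L_um=1.8144e10, t0_h=1.0, v_mps=0.84)
print(f"I_h = {I_h:.3e}, I_h_sum = {I_h_sum:.3e}")
```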
When compared to the journal ball bearings manufactured by the factory technology, the wear resistance of the FPH-treated journal ball bearings increased by 5...14 times (in terms of the wear rate during normal wear) and by 6...9 times (in terms of the wear rate over the total test time).

Conclusion
The analysis of the comparative durability test results shows that the best tribological parameters are exhibited by the friction surfaces of the journal ball bearings formed by the technology of their modification with finishing plasma hardening. The FPH technology can be applied at engineering plants as a highly efficient method of ensuring and improving the operational performance of machine parts at the stage of their manufacturing, in particular, in the manufacture of journal ball bearings.
2019-04-29T13:09:55.523Z
2016-04-01T00:00:00.000
{ "year": 2016, "sha1": "6a6b0add4005bfd017c18246c3ef071d295493a8", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/124/1/012154", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a88c5d369d0811a24696b8552ded62633d3fabb0", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
236774545
pes2o/s2orc
v3-fos-license
Establishment of an ATLL cell line (YG-PLL) dependent on IL-2 and IL-4, which are replaced by OX40-ligand+ HK with poly-L-histidine and dermatan sulfate
We established an IL-2- and IL-4 (IL2/4)-dependent adult T-cell leukemia/lymphoma (ATLL) cell line (YG-PLL) by adding poly-L-lysine (PLL) to the culture medium. YG-PLL originates from lymphoma cells and contains a defective HTLV-I proviral genome. Although YG-PLL cannot survive without IL-2/4, the follicular dendritic cell (FDC)-like cell line HK expressing the OX40-ligand gene (OX40L+HK) inhibited their death in the presence of soluble neutral polymers. After the prevention of cell death, YG-PLL proliferated on OX40L+HK without IL2/4 in the presence of two kinds of positively or negatively charged polymers. In particular, dermatan sulfate and poly-L-histidine supported growth for more than 4 months. Therefore, the original lymphoma cells proliferated transiently in the presence of IL2/4, and their growth arrest was inhibited by the addition of PLL. Furthermore, YG-PLL lost IL2/4 dependency by the following 3-step procedure: preculture with IL2/4 and neutral polymers, 3-day culture with a neutral polymer on OX40L+HK to inhibit cell death, and co-culture with OX40L+HK in the presence of the positively and negatively charged polymers. The extracellular environment made by soluble polymers plays a role in the growth of ATLL in vitro.

INTRODUCTION
Adult T-cell leukemia/lymphoma (ATLL) is a clonal disorder caused by human T-cell leukemia virus I (HTLV-I) infection. 1,2 Although HTLV-I induces the immortalization of lymphocytes in vitro, there is no evidence that HTLV-I is involved in the tumorigenesis of ATLL. Currently, ATLL is thought to be caused by additional gene abnormalities that accumulate after HTLV-I infection. 3 ATLL cells exhibit the phenotype of activated T-cells, 4 which express the IL-2 receptor. However, ATLL cells proliferate in vitro only for a short period and die through apoptosis, which is inhibited by IL-2. 5 Stimulation by IL-2 or IL-4 rarely induces the proliferation of tumor cells, and reports of an in vitro growth system are rare. 6,7 To proliferate ATLL cells in vitro, we established a cell line (Hu-ATTACK) in the presence of IL-2 by co-culturing with human umbilical vein endothelial cells (HUVECs), which express OX40 ligand (OX40L). 8 In normal CD4 and CD8 T-cells, T-cell receptor stimulation with the help of OX40L expands activated cells. 9 Stimulation of ATLL cells with OX40L inhibited Fas-induced apoptosis. 10 Growth of Hu-ATTACK required IL2 and OX40L, which suggested that OX40L blocked apoptosis induced during cell growth by IL-2. Furthermore, among ATLL cases unresponsive to IL2/IL4 and OX40L, two cell lines (HKOX1 and HKOX2) were established by the co-culture of follicular dendritic cells (FDCs), such as the cell line HK expressing OX40L (OX40L+HK), with IL2/IL4. 11 Therefore, IL2/IL4, OX40L, and the feeder effects of HK are necessary for the long-term growth of some ATLL cells. However, additional factors along with IL2/IL4 are necessary for ATLL cells that are unresponsive to IL2/4 and OX40L+HK. Compared with the extracellular environment in vitro, the extracellular space in vivo is narrow, and cells are packed with negatively charged adjacent cells.
Negatively charged, soluble high-molecular-weight molecules can be added to the in vitro culture system to make the extracellular environment similar to the intercellular space in vivo. An ATLL case in which leukemic cells that were unresponsive to IL2/4 and OX40L+HK became responsive upon the addition of negatively charged, soluble polymers was previously reported. These cells developed into a cell line dependent on negatively charged polymers (HKOX3). 12 This suggested that the interaction with specific negatively charged polymers is related to the growth of ATLL with growth factors. We report that, in the presence of a positively charged soluble polymer, an IL2/IL4-dependent cell line was established from lymph node cells that demonstrated IL2/IL4-dependent transient growth with or without negatively charged polymers, and that this cell line proliferated on OX40L+HK without IL2/IL4 upon the addition of other soluble polymers.

Cell culture
The FDC-like cell line HK 13 was kindly supplied by Dr. Choi (Laboratory of Cellular Immunology, Alton Ochsner Medical Foundation, New Orleans, LA, USA). OX40L+HK cells were established by introducing human OX-40 ligand cDNA into HK, as described in a previous study, 14 and Iscove's modified Dulbecco's medium (IMDM) + 20% FCS was used for the maintenance of OX40L+HK. Frozen primary ATLL cells and their cell lines were cultured in IMDM containing 10 U/ml of heparin, 20% human plasma, 10 ng/ml of human IL-2 (Peprotech), and 10 ng/ml of human IL-4 (Peprotech). The detection of mycoplasma infection was not examined throughout the cell culture. The original lymphoma cells and YG-PLL were cultured in 24-well or 96-well cluster dishes. While subculturing the growing cells, the volume of the cultured cell suspension was adjusted to 1000 μl or 200 μl in 24-well or 96-well plates, respectively, and a precisely fractioned cell suspension was transferred to the adjacent well. Except for the primary culture, viable cell numbers were counted using 20 μl of the cell suspension, which was mixed with 20 μl of 0.4 w/v% Trypan Blue Solution [Wako Pure Chemical Industry]. The culture of YG-PLL on OX40L+HK was performed in 24-well culture plates. Upon transfer to a new dish containing OX40L+HK, the mononuclear cells in the cell suspension were separated from dead cells using Ficoll® Paque Plus. The cell growth rate was measured by counting viable cells using the same method described above.

High-molecular-weight polymers and their concentration in the culture medium

Analysis of human T-cell leukemia virus-I (HTLV-I) proviral integration into the cell lines was performed by an inspection agency (Special Reference Laboratory, Japan).

Phenotype and genotype analyses of cell lines
The origin of the cell lines was determined by fragment length comparison of the V-N-J rearrangement portion of the T-cell receptor γ-chain gene. 15 The high-molecular-weight DNA was extracted and amplified by PCR using three sets of primer mixtures labeled with fluorescent dyes. The primers used in this analysis were as follows:

Establishment of cell lines
A 66-year-old, HTLV-I-positive male developed generalized lymphadenopathy after chemotherapy for erythroderma due to ATLL. He did not present with leukemia, and a left inguinal lymph-node biopsy revealed ATLL. When frozen lymphoma cells were cultured in the presence of IL2/4, they proliferated vigorously for approximately two months and then stopped abruptly on repeated trials.
When lymphoma cells were cultured in the same IL2/4-containing medium with or without negatively charged CSC or HRL, they grew for 54 days with HRL and 72 days with CSC, compared with 54 days in the control culture. In the next step, to examine the effects of a positively charged polymer, culture in the presence of 5 μg/ml or 50 μg/ml of PLL, with or without CSC, was conducted. The two cultures with 5 μg/ml or 50 μg/ml of PLL alone continued to proliferate for more than 3 months; however, those growing in the two other culture conditions, i.e., PLL + CSC, stopped growing at day 66 (Figure 1). Therefore, the lymphoma cells proliferated for long periods with IL2/4 and 5 μg/ml or 50 μg/ml of PLL, and these cell lines were named YG-PLL. YG-PLL exhibited defective and monoclonal integration of HTLV-I on Southern blot analysis (Figure 2A). Flow cytometry of YG-PLL revealed CD2, CD3, CD4, CD5, and CD25 to be positive, and CD7, CD8, CD10, CD19, and CD20 to be negative, typical of ATLL cells (Figure 2B). The rearrangement pattern of the T-cell receptor γ-chain gene was identical to that of the original lymphoma cells (Figure 2C). This suggested that YG-PLL originated from the lymphoma cells of ATLL.

Induction of IL2/4-free culture
Three months after the start of culture, YG-PLL continued to proliferate with a doubling time of approximately 35 hours (Figure 3A). Then, comparing the IL2/4-dependent growth rate with and without PLL, YG-PLL proliferated at the same rate under both conditions for 100 days (Figure 3A). Therefore, the phenotype of PLL dependency changed after long-term exposure to PLL. When YG-PLL was cultured without IL2/4, most of the cells died, and the remaining cells survived without proliferation. Growth promotion by different types of polymers was examined by adding them to the culture medium; however, the 11 polymers (PLL, PDL, PLO, PLH, PGA, CSC, HRL, PVA, PEG, DEX, and DS) had no effect on proliferation, apart from transient growth in CSC (Figure 3B).

Inhibition of cell death by neutral polymers on OX40L+HK
Previously, ATLL-derived cell lines were established by co-culture with HUVECs or OX40L+HK, the transfectant of human OX40 ligand cDNA. First, YG-PLL was co-cultured with HUVECs or HK, with or without polymers, in the absence of IL2/4. However, neither inhibition of death nor promotion of growth of YG-PLL was observed (data not shown). Then, the activity of OX40L+HK was examined by co-culture in the presence of different soluble polymers without IL2/4. In the presence of the four negatively charged polymers, YG-PLL died within 7 days (Figure 4A). Thus, the negatively charged polymers had no activity regarding the inhibition of cell death. In the case of positively charged polymers, YG-PLL died within 7 days with PLL, PDL, or PLO, demonstrating no activity regarding the inhibition of cell death, and only PLH supported the survival of some cells for approximately two weeks (Figure 4B). In the case of neutral polymers, all four polymers partially inhibited cell death at day 20, and PVA and PEG maintained survival for more than a month (Figure 4C). Therefore, OX40L+HK supported the survival of YG-PLL in the presence of PVA or PEG.

OX40L+HK with DS and PLH maintained proliferation after the inhibition of cell death by PVA and PEG
Although OX40L+HK with PVA or PEG inhibited the cell death caused by the deprivation of IL2/4, no growth of YG-PLL was observed.
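Growth in these experiments is summarized by doubling times (approximately 35 hours for YG-PLL under IL2/4). As a side note, such a figure follows from two viable-cell counts under an exponential-growth assumption; the minimal sketch below (plain Python, with invented counts) illustrates the calculation:

```python
import math

def doubling_time(n0, n1, hours_elapsed):
    """Doubling time under exponential growth, given viable-cell
    counts n0 and n1 taken hours_elapsed apart."""
    growth_rate = math.log(n1 / n0) / hours_elapsed  # per hour
    return math.log(2) / growth_rate

# Invented example: 2.0e5 viable cells/ml growing to 1.3e6 in 4 days.
td = doubling_time(2.0e5, 1.3e6, 96)
print(f"doubling time = {td:.0f} hours")  # ~36 hours
```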
There is also the possibility that charged polymers, instead of neutral polymers, can sustain growth after the inhibition of death by OX40L+HK with PVA or PEG. Next, after culturing with IL2/4 and 0.5 mg/ml of PVA and PEG (PVA/PEG) for two weeks, and a subsequent three-day co-culture with OX40L+HK with PVA/PEG in the absence of IL2/4, YG-PLL were transferred to OX40L+HK with a single charged polymer. In the presence of the eight kinds of single polymers, YG-PLL exhibited no growth except for transient proliferation in CSC (Figure 5A). Then, the synergistic effects of positively and negatively charged polymers were examined. A few combinations induced proliferation for more than two months, but it eventually stopped (Table 1). However, DS and PLH (DS/PLH) promoted cell growth for over four months (Figure 5B). Thus, in YG-PLL, OX40L+HK with PVA/PEG inhibits the cell death caused by cytokine depletion, and OX40L+HK with DS/PLH promotes cell growth for a long period.

DISCUSSION
An ATLL-derived, IL2/4-dependent cell line, YG-PLL, was established by the addition of PLL to the culture medium. From the start of the culture of lymphoma cells, the growth rate of lymphoma cells with IL2/4 was steady with or without CSC or PLL. This suggested that the steady proliferation driven by IL2/4 progressively induced growth inhibition, and PLL may counteract the inhibitory mechanism. PLL is a high-molecular-weight, positively charged polymer, and this molecule may interact with the outer cell membranes of cells, which are negatively charged. Three months after the start of culture, YG-PLL grew without PLL for more than three months. This phenomenon suggested that PLL was not directly involved in cell proliferation but conferred resistance to growth arrest. These mechanisms remain to be clarified. In general, IL2-dependent ATLL cell lines undergo growth arrest and apoptosis upon IL2 deprivation. Cell death of YG-PLL was also induced by the deprivation of IL2/4. OX40L+HK alone has no ability to inhibit the death of YG-PLL, as previously reported, because the OX40L+HK- and IL2/4-dependent cell lines HKOX1, HKOX2, and HKOX3 were unable to survive on OX40L+HK without IL2/4. 11,12 By unknown mechanisms, neutral polymers altered the properties of YG-PLL, which facilitated the inhibition of cell death by OX40L+HK. After pre-treatment with neutral polymers and OX40L+HK, YG-PLL continued dependent proliferation on OX40L+HK and DS/PLH without IL2/4, and relatively long-term proliferation was observed in PGA + PLH, CSC + PDL, and CSC + PLH. This suggests that OX40L+HK-dependent growth requires two supportive actions: growth facilitation and blocking of growth inhibition. DS/PLH possesses sufficient power for both of these actions, and the other three combinations of polymers are less effective. In a previous report, the HK cell line supported the growth of a follicular lymphoma cell line 16 or Burkitt cell lines. 17 In addition, OX40L+HK directly induced the growth of ATLL, which suggested that ATLL cells can proliferate in vivo near FDCs in the presence of OX40L and molecules similar to DS/PLH. OX40L may be induced on many cell types, such as dendritic cells, endothelial cells, and blood cells, 9 which may exist near ATLL cells. The action of DS/PLH on YG-PLL remains to be elucidated. Both DS and PLH are high-molecular-weight molecules, and their interaction is limited to the outer cell membrane.
In vivo, outer membranes are exposed to adjacent cells through the extracellular space, for which specific soluble polymers may substitute in vitro. All polymers used in this study were high-molecular-weight polysaccharides, polypeptides, and polymers of hydrophilic small molecules, whose molecular structures vary. Therefore, the mechanism of action may be related to physical interactions. If similar growth mechanisms exist in vivo, their analysis is difficult because of the lack of technology. Further examination of the proliferation mechanism of YG-PLL may clarify the growth mechanism in vivo.
2021-08-03T06:23:32.835Z
2021-07-31T00:00:00.000
{ "year": 2021, "sha1": "32616bbee77cf1d2971d95ff0a33682daf0ed8e1", "oa_license": "CCBYNCSA", "oa_url": "https://www.jstage.jst.go.jp/article/jslrt/advpub/0/advpub_20058/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bc00bf5de2cea53d3084b3ccc1447cd05ea4e3b8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203830353
pes2o/s2orc
v3-fos-license
A case of primary clear cell hepatocellular carcinoma comprised mostly of clear cells
Clear cell hepatocellular carcinoma (CHCC) is defined as a tumor which contains more than 50% clear cells. However, CHCCs with more than 90% clear cells are extremely rare. We report the case of a 65-year-old woman who was found to have a solitary mass, which was histologically diagnosed as clear cell hepatocellular carcinoma composed of 90% or more clear cells. The tumor presented rim arterial phase hyperenhancement on computed tomography, magnetic resonance imaging, and computed tomography during hepatic arteriography, and was classified as LR-M category according to the Liver Imaging Reporting and Data System version 2018 (LI-RADS v2018). This tumor may mimic other tumors with similar radiographic features, such as intrahepatic cholangiocellular carcinoma and metastatic tumor.

Introduction
Clear cell hepatocellular carcinoma (CHCC) is a type of HCC in which clear cells, which have a clearer cytoplasm than normal HCC cells, comprise 50% or more of the tumor [1]. CHCC is a rare lesion, which has been reported to account for 7.3%-12.5% of all liver cancers. In particular, CHCC in which more than 90% of the tumor comprises clear cells is extremely rare [2]. Here we describe the radiological findings of a patient whose lesion was initially detected with ultrasound. Magnetic resonance imaging was performed, and the lesion was suspected to be a hemangioma; therefore, the physician decided to perform follow-up of the patient. However, the size of the lesion was found to have increased after 6 months, and additional analysis was performed. Blood analyses showed an abnormal platelet count (98,000/μL) and total protein (8.3 g/dL). The following tumor markers were increased: AFP (26.0 ng/mL), AFP-L3 (40.7%), and PIVKA II (65 mAU/mL). Abdominal ultrasound displayed an isoechoic mass with a hypoechoic rim in segment 7, with a microlobulated surface. The lesion did not have a lateral shadow and had slight posterior echo enhancement. Color flow Doppler images displayed internal vascularity, and these findings indicated that the lesion was hypervascular (Fig. 1). Abdominal computed tomography (CT) displayed a low-density mass, of which the largest diameter was 30 mm, in segment 7. Rim arterial phase hyperenhancement (rim APHE) was observed, and a capsule-like structure was observed in the equilibrium phase. Many small dot enhancements were displayed in the center of the lesion, and the center of the lesion was slightly stained (Fig. 2). Abdominal magnetic resonance imaging displayed the tumor as a low-intensity region in both the in-phase and opposed-phase T1-weighted images. The decrease in signal intensity on chemical shift imaging was 16% [3]. T2-weighted imaging displayed the tumor as a clearly high-intensity lesion. Diffusion-weighted imaging showed that the lesion also had clearly high signal intensity, and the apparent diffusion coefficient (ADC) map showed low intensity; these findings indicated restricted diffusion (Fig. 3). A total of 0.025 mmol/kg of gadoxetic acid was injected via the antecubital vein at 2 mL/s, followed by 40 mL of physiological saline. The dynamic study included the arterial phase, portal phase, transitional phase, and hepatobiliary phase after injecting the contrast material. The lesion showed rim APHE and hypointensity in the hepatobiliary phase. CT during hepatic arteriography displayed rim APHE, and neovascularity was observed in the center of the lesion.
Corona enhancement was also observed around the tumor (Fig. 4). Common hepatic arteriography displayed rim enhancement (Fig. 4). Partial hepatectomy was performed based on the diagnosis of malignant tumor. The fibrous capsule structure was confirmed pathologically. More than 90% of the tumor comprised clear cells, and the clear cells formed an alveolar structure that was surrounded by vascular stroma. The cytoplasm of the clear cells contained a large amount of stored glycogen and few fat vacuoles. Immunohistochemical analyses showed the following characteristics: glypican 3 (+), Hepatocyte Paraffin 1 (HepPar-1) (−), and epithelial membrane antigen (−). These pathological findings and the absence of a primary lesion elsewhere on radiological analysis supported the diagnosis of primary CHCC (Fig. 5). The background liver tissue corresponded to F2 of the New Inuyama classification.

Discussion
Cases of CHCC in which clear cells comprise more than 90% of the tumor are extremely rare. CHCC is more frequent in females than classical HCC, and 90% of patients with CHCC are reported to have cirrhosis. Almost all of the patients had hepatitis B or hepatitis C, and the CHCC was pathologically classified as the moderately differentiated type in most patients [2]. Clayton et al. reported that CHCC in which clear cells comprise more than 90% of the tumor was associated with the absence of cirrhosis [4]. Our present case was similar to this previously reported case. CHCC has been reported to generally show findings similar to classical HCC [5][6][7], such as hypervascularity in the arterial phase, washout in the portal phase, and a pseudocapsule. The present patient also showed a pseudocapsule, as well as atypical findings, such as rim APHE in the arterial phase. Diffusion-weighted imaging showed hyperintensity and the ADC map showed hypointensity. We assumed that the restricted diffusion in this case was owing to the high proportion of clear cells and the small amount of interstitial space. CT during arteriography showed corona enhancement [8]. Corona enhancement is an imaging feature described in the Liver Imaging Reporting and Data System version 2018 (LI-RADS v2018) and is an ancillary feature characteristic of malignancy in general, but not of HCC in particular. Corona enhancement is a periobservational enhancement occurring in the late arterial phase or early portal phase, which is attributable to venous drainage from a tumor. It does not refer to periobservational enhancement attributable to arterioportal shunting, which indicates that the drainage vein is the portal vein. The drainage vessel of almost all liver tumors, except for HCC, is the hepatic vein. In our present patient, we were able to diagnose the lesion as HCC from the pseudocapsule and corona enhancement, although some characteristics of the LR-M category, such as rim APHE, were observed. This radiographic feature mimics intrahepatic cholangiocellular carcinoma and metastatic tumor. Our present patient reminded us of the importance of ultrasound, as the ultrasound images displayed typical HCC findings [9]. Additional information from contrast-enhanced ultrasonography would enable easier qualitative diagnosis [10]. CHCC is usually difficult to distinguish from metastatic clear cell carcinoma from the kidney, ovary, and adrenal gland. Immunohistochemical staining of molecules, such as glypican-3 and HepPar-1, is useful for confirming clear cell carcinoma of liver origin [2,11].
The tumor of the present patient was glypican 3 (+), HepPar-1 (−), and epithelial membrane antigen (−), and these findings were atypical for CHCC. Most cases of CHCC are positive for HepPar-1 [11]. We assumed that HepPar-1 was negative in our patient owing to the small number of cytoplasmic organelles in the tumor cells. Sakhuja et al. reported a case similar to ours, and they discussed that HepPar-1 was probably negative owing to the low number of cytoplasmic organelles [12]. Furthermore, we radiologically confirmed the absence of a primary lesion in the kidney, ovary, and adrenal gland of our patient. The association between the proportion of clear cells and the prognosis of CHCC is controversial [2,13,14]. A recent study reported a more favorable prognosis in patients with higher proportions of clear cells. Our present patient had a tumor with a high proportion of clear cells, and therefore, a favorable prognosis can be expected. However, the postoperative course of our patient has not been followed, and hence the clinical outcome remains unknown.

Conclusion
We reported an extremely rare case of CHCC in which clear cells comprised more than 90% of the tumor. The tumor presented rim APHE and was classified as LR-M category according to LI-RADS v2018. It may mimic other tumors with similar radiographic features, such as intrahepatic cholangiocellular carcinoma and metastatic tumor.
2019-09-19T09:04:04.140Z
2019-09-16T00:00:00.000
{ "year": 2019, "sha1": "ee0685cf6786aa32a012e6895070326644612d85", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.radcr.2019.08.021", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "524248e551ecdcff1fabe6260a20703fd3499e02", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4945964
pes2o/s2orc
v3-fos-license
On Implementing an Automatic Headline Generation for Discussion BBS Systems —Cases of Citizens' Deliberations for Communities—
SUMMARY Recently, the opportunity to discuss topics on a variety of online discussion bulletin boards has been increasing. However, it can be difficult to understand the contents of each discussion as the number of posts increases. Therefore, it is important to generate headlines that automatically summarize each post in order to understand the contents of each discussion at a glance. In this paper, we propose a method to automatically extract and generate post headlines for online discussion bulletin boards. We propose templates with multiple patterns to extract important sentences from the posts. In addition, we propose a method to generate headlines by matching the templates with the patterns. We then evaluate the effectiveness of our proposed method using questionnaires.

Introduction
In recent years, the general public has taken to exchanging opinions using various online platforms, such as online discussion bulletin board systems (BBSs), microblogs, and other promising collective intelligence platforms ([1] etc.). On web-based platforms, users can freely post their opinions without constraints of time or place. In the near future, large-scale discussion aggregations, discussions, and negotiations will become possible on supporting web-based platforms, and further developments in social crowd engineering are expected ([2]-[5]). With an increase in the amount of information and posts on online discussion platforms, it would be difficult to read and understand the contents of all posts. Therefore, platforms that automatically construct a structured visualization of discussions have been proposed [1]. In particular, an open web-based forum system called COLLAGREE ([6], [7]), which has facilitator support functions, has been developed and deployed for an internet-based town meeting in Japan. In COLLAGREE, a discussion tree is employed, and a method that generates headlines for each post is required [8]. In addition to developments in social and academic environments, linguistic resources have become increasingly rich via web developments, and the technology of natural language processing (NLP) has advanced. However, it is still difficult to obtain accurate information from large-scale discussion platforms. This is because the amount of unnecessary information has also increased with the development of linguistic resources. In addition, argumentation mining, which aims at automatically extracting structured arguments from unstructured textual documents, has become important for internet-based discussions ([9]-[11]). This topic has recently become of broad interest due to its potential to process information originating from the Web, in particular, from social media. Automatic structuring of discussions and argumentations is enabling breakthrough applications in social and economic sciences, policy-making, and crowd engineering. Argumentation models, methods, and applications have been proposed to combine representational needs with user-related cognitive models and computational models for automated reasoning. Automatic headline generation is necessary for visualizing user-related cognitive models and computational models.
Although a large number of studies have focused on automatic headline generation ([12] etc.), previous approaches have not taken into account the heuristics of discussion BBSs, such as the structure of the discussions, because their targets have been general documents (newspapers, books, etc.), not large-scale online discussion platforms. In this paper, we propose a method to automatically extract important sentences and generate headlines from posts for discussion BBSs. In our proposed method, templates with multiple patterns are used to extract important sentences from the posts, and headlines are generated by matching the templates with the patterns. To improve the quality of the generated headlines, we also propose a hybrid method of automatic headline generation. In this hybrid method, we combine the proposed method using templates with a heuristic method of selecting the first sentence as the headline. The method of selecting the first sentence as the headline is effective for extracting headlines in newspapers, where important contents tend to be placed at the beginning of the article. We then evaluate the effectiveness of our proposed method using questionnaires. The datasets for the evaluations are log data from an internet-based town meeting in Nagoya, which was a city project for an actual town meeting of the Nagoya Next Generation Total City Planning for 2014-2018 [6]. In these evaluations, we compare our proposed approach and hybrid approach with various existing methods of automatically generating headlines using a questionnaire. Considering the above backgrounds and overviews, the main contributions of this paper are summarized as follows:
• This paper proposes a heuristic approach to generate headlines for a set of articles on a BBS system. The proposed approach combines heuristic pattern-based extraction of candidate texts and then applies a multi-sentence compression method.
• This paper investigates the possibility of applying the proposed pattern-based heuristic extractions to the cases of citizens' deliberations for communities.
• This paper evaluates our proposed approach on a real dataset about the cases of citizens' deliberations for communities. In the presented case, the proposed approaches have worked well compared to other baseline approaches. Although the presented approach currently uses some language-specific patterns, once the language-specific parts of our proposed templates are adapted for non-Japanese languages, it could be applied to other languages.
These contributions give some directions for implementing a practical Knowledge Management (KM) application for complex and unstructured discussion datasets such as the citizens' deliberations for communities. The approach proposed in this paper is based on heuristics. Therefore, it allows application developers or operators of the BBS system to manage and control how the summarized headlines for the discussions are formed and generated. In addition, the analysis of the proposed approach on the real dataset of the cases of citizens' deliberations for communities gives valuable knowledge in a certain new area. The remainder of this paper is organized as follows. In Sect. 2, related studies concerning automatic headline generation methods are shown. In Sects.
3 and 4, the approach for automatic headline extraction using templates and the hybrid approach combining the template-based method with the heuristic method are proposed. In Sect. 5, we describe the results of the evaluation experiments and discussions. Finally, we present our conclusions.

Related Works
Multiple studies concerning headline generation have focused on many languages and document types ([13]-[16] etc.). According to Gattani, three primary techniques can be identified [15].

Statistics-Based Approaches
These methods apply statistical models to learn correlations between words in headlines and documents. The majority of models are used in supervised learning environments; therefore, large amounts of labeled training data are necessary. For example, a method viewing summarization as a problem analogous to statistical machine translation based on Naive Bayes has been proposed [17]. The use of statistical models for learning pruning rules for parse trees has also been studied ([18] etc.).

Summarization-Based Approaches
Headlines can be thought of as very short summaries. Therefore, traditional summarization methods can be used to generate one-line summaries ([19] etc.). The primary difficulty with these approaches is that they use techniques that were not initially devised for generating compressions of less than 10% of the original content, which directly affects the quality of the resulting summary [17]. The generated headline risks losing or changing the contextual meaning of the reused words because these approaches generate a headline by reusing words in the article [20].

Template-Based Approaches
These methods apply handcrafted linguistic rules to extract or compress important sentences in a document [21]. They are simple and lightweight; however, they fail to capture complex relationships in the text, and it is not easy to prepare templates for them. For example, Alfonseca et al. proposed an approach of extracting the syntactic patterns that a Noisy-OR model generalizes into event descriptions [22]. At inference time, the method searches the model with the patterns observed in an unseen news collection and identifies the event that best captures the gist of the collection to retrieve the most appropriate pattern for generating a headline. However, original templates for each language are necessary in this approach. Studies on automatic headline generation focusing on the Japanese language and online BBSs are few. Therefore, we employ this approach because it is lightweight and does not require training datasets to generate templates. In the work of Gupta et al. [23], after annotating the parts to be extracted from the source texts, sufficient patterns to extract the title are generated semi-automatically. If this kind of information extraction framework, or some machine-learning-based approach, were applied to this evaluation dataset, it could achieve better performance than the patterns presented in this paper. However, these tools are mainly intended for direct information extraction (i.e., extracting terms and relations among them directly), rather than for matching and ranking sentences from texts.

Automatic Headline Generation Approaches for Multiple Sentence Compression
The three approaches above, as used in the previous studies, focused on a single article or document.
However, this paper focuses on texts in online discussion BBSs; in other words, this paper considers the task of summarizing a cluster of related sentences with a short sentence, which we call multi-sentence compression, to present a simple headline. The advantage and novelty of our proposed method is that we can exploit the characteristics of the discussions and of the Japanese language. One popular approach to multi-sentence compression was described by Filippova, who presented a simple and robust word-graph-based method to generate succinct compressions that requires just a part-of-speech tagger and a list of stop words [12]. The advantage and novelty of that method is that it is syntax-lean and requires little more than a tokenizer and a tagger. However, the approach of Filippova was evaluated on English and Spanish datasets. They later proposed LSTM-based approaches for a similar purpose [24]. The proposed LSTM-based model outperforms the baseline in readability and informativeness. The approach of Filippova is a learning-based approach, which is totally different from our proposed approach.

Automatic Headline Generation Using Templates
A BBS contains several threads, each consisting of comments posted by users. A thread refers to a set of posts related to a particular topic or issue. In the COLLAGREE interface, the first post of the user who creates the thread becomes the parent post, and other users reply to it with child posts. There are also cases where some users reply to a child post with grandchild posts. Our goal is to propose a method to automatically generate a headline for any post in a thread in the discussion BBS. Figure 1 shows the flowchart of the proposed automatic headline generation using templates. The process of the proposed method consists of five parts: Pre-processing, Extraction of Headline, Generation of Headline, Complement of Headline, and Post-processing. The sentence extracted by the proposed method is used as the headline when the length of the sentence is less than twenty letters. On the other hand, when the length of the sentence is more than twenty letters, the headline is generated automatically based on the template described below. The patterns in the proposed templates were prepared by analyzing the online discussion datasets. In addition, we used the heuristics of NLP and online discussions based on our experiences. Therefore, our proposed method is generic enough to be applied to various conditions, because it generalizes to discussion datasets without over-specializing the proposed models. In addition, our proposed patterns and templates are not tuned to any specific dataset.

Pre-Processing
To arrange the text of the BBS for the discussion, the following preprocessing steps are conducted. Parentheses and URLs are removed. This unimportant information can cause the generated summary to be redundant and should therefore be deleted before summarization. It is also essential to clarify anaphors in the original post. In reference to the result of an anaphora analysis using CaboCha [25], only anaphors that refer to a named entity are retained. Furthermore, only nouns and verbs are extracted from each post (e.g., an opinion or a sentence) based on the result of a morphological analysis.
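As a rough illustration of the pre-processing step, the sketch below (plain Python with the standard re module; the regular expressions are our own simplified stand-ins, not the authors' exact rules) removes URLs and parenthesized spans before further analysis:

```python
import re

def preprocess(post: str) -> str:
    """Simplified pre-processing: strip URLs and parenthesized text."""
    post = re.sub(r"https?://\S+", "", post)          # remove URLs
    post = re.sub(r"\([^)]*\)|（[^）]*）", "", post)   # remove (...) incl. full-width
    return post.strip()

print(preprocess("I agree (see http://example.com) with this plan."))
# -> "I agree  with this plan." (whitespace normalization is left to later steps)
```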
Extraction of Headline
A sentence to be used as the headline is extracted from posts consisting of several sentences. In previous studies focusing on conference notes, headlines were extracted by focusing on expressions such as "about" [26]. However, a clear structure does not exist for online discussion bulletin boards. Therefore, we propose a novel template with several patterns of sentences that seem to express the intent of each BBS participant who wrote the post. Table 1 shows the proposed template with three patterns to extract important sentences from the discussion BBS. Table 1 shows the original form of each verb only; however, other inflected word forms are also matched. We can match the original form of a verb on the proposed templates using a morphological analysis with CaboCha.

Table 1 Template for extracting the sentences
Pattern    Priority  Keywords
Pattern 1  High      "need" or "important"
Pattern 2  Medium    "think," "believe" or "regard"
Pattern 3  Low       agreements (including positive keywords) / disagreements (including negative keywords)

The details of each pattern are as follows:
• Pattern 1: Sentence including "need" or "important". Sentences including "need" or "important" are the most essential contents for the participants. Therefore, the priority of this pattern is the highest in the template.
• Pattern 2: Sentence including "think" or "regard". Because the discussion BBS is a place for participants to post their opinions, sentences including "think," "believe," or "regard" are often seen. Therefore, their priority is set to medium in the template.
• Pattern 3: Agreements and Disagreements. Sentences including positive or negative words are important as headlines because participants express their opinions, such as agreements or disagreements, using these words when having a discussion. Therefore, a sentence with the highest number of positive and negative words (in absolute sum) is extracted as Pattern 3. The negative and positive words are judged using the Evaluative Expressions Dictionary ([27], [28]). The score for positive words is "+1" and that for negative words is "−1."

The sentence matching the pattern with the highest priority is extracted when multiple patterns are matched in Table 1. When the same pattern is matched by multiple sentences, the sentence with the higher cosine similarity to the parent post is extracted. We assume that a set of N nouns appears in the parent post and the matching sentence. In the N-dimensional space, the set of nouns in the parent post is P = {p_1, p_2, ..., p_N}, the set of nouns in the matching sentence is Q = {q_1, q_2, ..., q_N}, and each element of P and Q is the frequency of occurrence of the corresponding noun. The cosine similarity is calculated by the following expression:

cos(P, Q) = (Σ_{i=1..N} p_i · q_i) / ( √(Σ_{i=1..N} p_i^2) · √(Σ_{i=1..N} q_i^2) )

The sentence with the highest score is then extracted. The procedure for extracting the sentence when the same pattern is matched by multiple sentences is as follows:
1. The vectors of nouns appearing in the parent post and the matching sentence are calculated.
2. The cosine similarity between the parent post and the matching sentence is calculated.
3. The sentence with the highest cosine similarity to the parent post is selected.
In other words, sentences with words similar to those in the parent post are extracted. When the sentences do not match the patterns in Table 1, our method outputs "no pattern."
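A minimal sketch of this extraction step is given below (plain Python; the keyword lists, tokenizer, and function names are simplified English stand-ins for the Japanese patterns and the CaboCha-based analysis, and only the cosine tie-break for a single matched pattern is shown):

```python
import math
import re
from collections import Counter

# Simplified stand-ins for the keyword patterns, in priority order.
PATTERNS = [["need", "important"],            # Pattern 1: high priority
            ["think", "believe", "regard"]]   # Pattern 2: medium priority

def noun_vector(text):
    # Stand-in for morphological analysis: naive lowercase word counts.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(p, q):
    keys = set(p) | set(q)
    dot = sum(p[k] * q[k] for k in keys)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def extract_headline_sentence(sentences, parent_post):
    parent_vec = noun_vector(parent_post)
    for keywords in PATTERNS:  # try the highest-priority pattern first
        matched = [s for s in sentences if any(k in s.lower() for k in keywords)]
        if matched:
            # Tie-break: highest cosine similarity to the parent post.
            return max(matched, key=lambda s: cosine(noun_vector(s), parent_vec))
    return None  # corresponds to the "no pattern" output

post = ["Hello everyone.",
        "I think a park is needed near the station.",
        "We need more green space in this town."]
print(extract_headline_sentence(post, "Green space planning for the town"))
# -> "We need more green space in this town." (most similar to the parent post)
```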
Automatic Headline Generation
The sentence extracted by the method proposed in the previous section is used as the headline when the length of the sentence is less than twenty letters. The headline is generated automatically based on the following template when the length of the sentence is more than twenty letters. The template was designed based on an overall analysis of actual posts in discussion BBSs. When multiple patterns match the sentence, the pattern whose nouns have the highest average BM25 weight [29] is used. The BM25 score of a word w in a set of n posts D = {d_1, d_2, ..., d_n} is defined as follows:

BM25(w, d_i) = IDF(w) · ( f(w, d_i) · (k_1 + 1) ) / ( f(w, d_i) + k_1 · (1 − b + b · |d_i| / avgdl) )
IDF(w) = log( (N − df(w) + 0.5) / (df(w) + 0.5) )

In the above equations, f(w, d_i) is the frequency of the word w in the post d_i, and |d_i| is the number of words in d_i. avgdl is the average number of words over the post set D, and k_1 and b are previously decided parameters. In this study, k_1 = 2.0 and b = 0.75 are used because these are commonly used values. N is the number of posts and df(w) is the number of posts including the word w among all posts. In our proposed method, we use BM25 because the BM25 of characteristic words in the entire BBS tends to be higher.†

† The part of the template is written in Japanese because it uses the original Japanese grammar.

Complement of Headline
The headline generated using the automatic headline generation method in the previous section can be insufficient because there may be no clause modified by the extracted clause to explain it. Therefore, modifying clauses are added using the dependency parsing of CaboCha [25]. Concretely speaking, a clause that modifies the extracted clause is complemented when such a clause exists. In addition, the clause with the highest BM25 is complemented when there are many candidate clauses. This complementation is repeated up to a constant number of characters. In this study, the constant number of characters is twenty.

Post-Processing
This step examines the generated headline to improve it. A particle may remain at the end of a sentence when a headline is extracted by matching each clause. Therefore, unnecessary clauses, such as particles at the end of a sentence, are replaced and removed based on the following list†. Removal and matching are repeated for the word lists for the end of the sentence until the headline no longer changes.

Hybrid Method of Automatic Headline Generation
We propose a hybrid method of automatic headline generation. In this method, we combine the proposed method in the previous sections with a heuristic method of selecting the first sentence as the headline. This method of selecting the first sentence as the headline is effective for extracting headlines in newspapers, where important contents tend to be placed at the beginning of the article [30]. However, this method is not always effective for online BBSs because the lengths of posts in BBSs are shorter than those in articles. The details of the hybrid method of automatic headline generation are as follows:
1. The proposed method using the templates generates the headline automatically.
2. The heuristic method of selecting the first sentence as the headline generates the headline automatically.
3. The headlines generated in step 1 and step 2 are evaluated by summing up the BM25 or tf-idf scores of all clauses including a noun or a verb.
4. The headline with the highest score is selected from the one generated by the proposed method or the heuristic method.
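The scoring machinery can be condensed into a few lines. The sketch below (plain Python; tokenization and clause segmentation are crudely simplified, and the corpus is invented) implements the BM25 formula with k_1 = 2.0 and b = 0.75 and uses summed scores to choose between two candidate headlines, as in steps 3 and 4 above:

```python
import math

K1, B = 2.0, 0.75  # parameters stated in the paper

def bm25_stats(posts):
    """Precompute corpus statistics over tokenized posts."""
    N = len(posts)
    avgdl = sum(len(d) for d in posts) / N
    df = {}
    for d in posts:
        for w in set(d):
            df[w] = df.get(w, 0) + 1
    return N, avgdl, df

def bm25(word, doc, N, avgdl, df):
    f = doc.count(word)
    # Note: with very small corpora this IDF can be negative for common words.
    idf = math.log((N - df.get(word, 0) + 0.5) / (df.get(word, 0) + 0.5))
    return idf * f * (K1 + 1) / (f + K1 * (1 - B + B * len(doc) / avgdl))

# Invented toy corpus of tokenized posts.
posts = [["park", "need", "station"],
         ["green", "space", "town", "park"],
         ["budget", "meeting", "schedule"]]
N, avgdl, df = bm25_stats(posts)

def headline_score(tokens, doc):
    # Step 3: sum BM25 scores of the candidate headline's content words.
    return sum(bm25(w, doc, N, avgdl, df) for w in tokens)

template_headline, first_sentence = ["park", "need"], ["green", "space"]
doc = posts[0]
# Step 4: keep whichever candidate scores higher for this post.
best = max((template_headline, first_sentence),
           key=lambda h: headline_score(h, doc))
print(best)
```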
Datasets for the Experiments
We conducted comparative studies to evaluate the proposed headline extraction method using questionnaires. One dataset for the evaluations is log data from an internet-based town meeting in Nagoya, which was a city project for an actual town meeting of the Nagoya Next Generation Total City Planning for 2014-2018 [6]. Another dataset is log data from an internet-based town meeting in Aichi, which was a prefecture project for an actual town meeting of the Aichi Design League [31]. In the Nagoya Next Generation Total City Planning for 2014-2018, there were 266 participants and 9 facilitators. The topics of discussion were "human rights," "the environment," "attractive," and "disasters," and there were a total of 1151 posts. In the Aichi Design League, there were 75 participants and no facilitators. The topic of discussion was "designing the town" for Aichi Prefecture. There were a total of 355 posts. These experiments did not address the posts from the facilitators because their models are completely different from those of the participants.

Evaluations for Extracting the Headline
The correct datasets for extracting the sentence suitable for the headline were generated by questionnaires. The most effective sentence as a headline was selected from all posts in each thread by ten undergraduate and graduate students of the Tokyo University of Agriculture and Technology. If the selected optimal headline differed among the evaluators, the sentence selected by the largest number of evaluators was considered the correct answer. The comparative methods for the experiment are as follows:
• Proposed Method: The headline extraction method proposed in this paper
• First Sentence: The method of selecting the first sentence as the headline
• Random: The headline extraction method of selecting a sentence randomly
"First Sentence" is effective for extracting headlines in newspapers, where important contents tend to be placed at the beginning of the article [30]. However, this method is not always effective for online BBSs because the lengths of posts in BBSs are shorter than those in articles. "Random" is a baseline method for the comparative study.

Table 2 shows the accuracy rates of the three methods based on the correct dataset. Our proposed method has the best accuracy rate because the tendency of placing important content at the beginning of a post, as in newspapers, does not apply to discussion BBSs. Instead, the beginning of a post tends to be a simple approval or greeting (e.g., "I agree with his idea" or "Hello"). In other words, there is a tendency that the first sentence of a post does not contain the core idea of comments or questions. In fact, simple replies such as "yes" and "of course" are commonly found. Our proposed method is also better than Random (the baseline); therefore, our proposed method is effective for discussion BBSs. Table 3 shows the accuracy rates for each pattern of our proposed method. For Pattern 1, the number of applicable cases is small and the accuracy rate is high. Posts including "necessary" and "important" generally reflect the participant's intentions. Conversely, the number of applicable cases is high and the accuracy rate is low for Pattern 2. This is because "think" and "believe" are commonly used expressions and appear in multiple sentences.
For Pattern 3, the sentences including positive words are extracted as the important phrases in the post. All posts matched our three proposed patterns in this experiment; therefore, our proposed method has effective templates for extracting the important sentences.

Evaluations for Generating the Headline
We evaluated the headlines generated by our proposed method using questionnaires. The evaluation criteria for the headlines were "readability" and "completeness" because the purpose of our proposed method was to generate headlines with high readability and core content. Each evaluator graded a headline on a five-grade evaluation scale following Tables 4-7.

Table 4 Evaluations of readability
Score  Description
5      The readability of the headline is extremely good.
4      The headline is mostly readable.
3      The readability of the headline is tolerable but not great.
2      The readability of the headline is not good.
1      The readability of the headline is extremely unnatural.

Table 5 Evaluations of comprehension
Score  Description
5      The meaning of the headline can be understood easily.
4      The meaning of the headline can be understood without any difficulty.
3      The meaning of the headline can be understood somehow.
2      It is difficult to understand the meaning of the headline.
1      The meaning of the headline cannot be understood.

Table 6 Evaluations of completeness
Score  Description
5      The headline covers all of the post.
4      The headline covers most of the post.
3      The headline covers some of the post.
2      The headline covers parts of the post.
1      The headline covers little of the post.

Table 7 Evaluations of unnecessity
Score  Description
5      The headline doesn't contain unnecessary words.
4      The headline contains few unnecessary words.
3      The headline contains a small number of unnecessary words.
2      The headline contains many unnecessary words.
1      Most words in the headline are unnecessary.

The comparative methods for these experiments are as follows:
• Template Only: Our proposed headline generation method using the templates only
• First Twenty Letters: Extracting the first twenty letters of the post as the headline
• Hybrid (BM25): Our proposed hybrid headline generation method. The evaluation criterion for selecting the headline is BM25.
• Hybrid (tf-idf): Our proposed hybrid headline generation method. The evaluation criterion for selecting the headline is tf-idf.
In the proposed method, the clauses were extracted based on the template, and clauses complemented with high BM25 were generated using dependency parsing. In the First Twenty Letters method, the first 20 letters were simply extracted as the headline from the posts. Therefore, the number of letters in the headline was equal to or less than twenty. The headlines of our proposed method and the baseline method were evaluated by 11 undergraduate and graduate students at the Tokyo University of Agriculture and Technology. Table 8 shows the averages of the five-grade evaluation scores for the four evaluation criteria from the questionnaires. Table 9 shows the p-values obtained from t-tests for each pair of the automatic headline extraction methods. Overall, the average scores of each method proposed in this paper ("Template Only," "Hybrid (BM25)," and "Hybrid (tf-idf)") are higher than that of the baseline ("First Twenty Letters"), and each of our proposed methods can generate effective headlines automatically, with significant differences in some evaluation criteria. In addition, the total average scores of the two hybrid methods ("Hybrid (BM25)" and "Hybrid (tf-idf)") are the highest compared with the other methods.
"Hybrid (tfidf)" is the highest in the readability and comprehension, and "Hybrid (BM-25)" is the highest in the completeness. If a headline with better readability and comprehension is considered as the better headline, "Hybrid (tf-idf)" is the best method on these experimental results. Focusing on the differences between "Hybrid(BM25)" and "Hybrid(tf-idf)," "Hybrid(tf-idf)" can generate headline with higher readability and comprehension than "Hybrid (BM25)." However, "Hybrid(BM25)" may be effective in generating the shorter headline (less than twenty letters) because BM25 considers the number of letters in the sentence and the score of "unnecessity" for "Hybrid (BM25)" is the second best and higher than that of "Hybrid (tf-idf)." Conclusion It is important to automatically generate headlines so that readers can understand the core idea of each post at a glance and easily understand the contents of each discussion. In this paper, we proposed a method of automatically extracting and generating headlines from posts by focusing on the online discussion bulletin board systems. We proposed templates with multiple patterns to extract important sentences from posts. In addition, we proposed a method to generate a headline by matching the templates with the patterns. We evaluated the effectiveness of our proposed method using questionnaires. In the task of extracting the headline from the posts, our proposed method had a higher effect than the baseline method in total. The possible future work includes the automatic headline generation method based on machine-learning. For † The red score means the t-score is less than 5%. In other words, the scores written in the red color mean no significant differences between the methods. achieving it, larger amounts of discussion data on the electronic discussion bulletin boards are necessary. Another possible future work is to improve our templates for the multilingualization. This paper focused on Japanese texts since the proposed patterns are only applicable to Japanese texts and evaluations were only given on Japanese text dataset. However, overviews of the proposed algorithm itself can be language-neutral. Therefore, patterns and templates themselves proposed in this paper have the possibility of applying to other natural languages (e.g. English), and evaluating them under the other language datasets.
Recurrent Verruconis gallopava Infection at One Year after Excision of a Solitary Pulmonary Lesion

We herein report a case of recurrent infection caused by Verruconis gallopava, which is known to cause fatal phaeohyphomycosis. A 71-year-old man presented with a fever, and computed tomography revealed right chest wall thickening. Eleven years earlier, he had undergone autologous peripheral blood stem cell transplantation for a hematological malignancy. One year earlier, he had undergone excision of a solitary pulmonary nodule, from which V. gallopava had been detected. On this occasion, right chest wall surgery was performed to investigate the cause of the fever, which led to the diagnosis of recurrent infection. Even if a localized lesion is excised, additional antifungal therapy should be performed.

Introduction

Verruconis (previously Ochroconis) gallopava is a thermotolerant, darkly pigmented, septated fungus generally distributed in the environment (1, 2) and is known to cause phaeohyphomycosis, a potentially fatal disease. Most human infections have been reported in immunocompromised hosts (3-9). Infection of the respiratory tract is the most common, but cutaneous and disseminated diseases involving the central nervous system (CNS) may also be seen (5-7). Its optimal therapy is still unclear due to the rarity of the disease. Thus, it is vital to share the clinical characteristics of V. gallopava infection through case reports. We herein report a case of V. gallopava infection recurrence at one year after complete resection of a solitary pulmonary lesion.

Case Report

A 71-year-old Japanese man presented with a fever, malaise, and abdominal pain. The patient was a lifelong resident of Japan and had a 45-pack-year smoking history. He had no pets at home. Eleven years earlier, he had undergone autologous peripheral blood stem cell transplantation (auto-PBSCT) for diffuse large B-cell lymphoma (DLBCL). Three years earlier, recurrence of DLBCL had occurred at the second lumbar vertebra and iliopsoas muscle, and sequential chemoradiotherapy had been performed. The treatment had been successful, and he achieved complete remission from DLBCL. At that time, he was diagnosed with adrenocortical insufficiency; therefore, low-dose oral steroid treatment (hydrocortisone; 15 mg/day) was started, but he had no other immunosuppressive therapies. One year earlier, follow-up computed tomography (CT) revealed a 16-mm pure ground glass nodule (GGN) in the right upper lobe and a solitary 9-mm nodule in the right lower lobe (Fig. 1A, B). Considering the likelihood of lung cancer or infection, partial resection of the right upper and lower lobes was performed under the right fifth intercostal space by thoracotomy. A darkly pigmented filamentous fungus was detected from the nodule in the right lower lobe (Fig. 1C). Later, the fungus was confirmed as V. gallopava at Chiba University by identification of the internal transcribed spacer 1 region of the ribosomal RNA gene in a phylogenetic analysis. Contrarily, the patho-

At one year postoperatively, he presented to our hospital with a fever, malaise, and lower right abdominal pain. CT revealed thickening of the right chest wall and ileocecal intestinal wall, which had not been observed four months earlier
(Fig. 2A and B). Regarding abdominal findings, the tissues of the ileocecal region biopsied with colonoscopy indicated granuloma with lymphocytic infiltration. Immunostaining revealed the presence of T-cells positive for CD3 and B-cells positive for CD20/CD79a. Based on these results, post-transplantation lymphoproliferative disorder (PTLD) was strongly suspected. Given that the cause of the chest wall lesion remained unclear, he was admitted to our hospital for a further examination. A physical examination revealed a fever (38.4°C) and lower right abdominal pain with moderate tenderness but without rebound pain. A laboratory examination showed moderate anemia (hemoglobin of 8.7 g/dL), an elevated white blood cell count of 11,700/mm3 (total neutrophil and lymphocyte counts were 9,300 and 910/μL, respectively), and thrombocytopenia (platelet count of 34,600/μL). The C-reactive protein level was elevated at 10.4 mg/dL. Blood and urine cultures were negative. CT of the right chest wall and ileocecal intestinal wall showed worsening of the condition (Fig. 2C). Fluorodeoxyglucose positron emission tomography (FDG-PET)/CT revealed a high accumulation in the right chest wall and ileocecal intestinal wall (Fig. 3A-C). Surgery of the right chest wall was performed for the diagnosis, specifically for differentiation from PTLD, recurrence of lung cancer, and other diseases. Unexpectedly, the pathological diagnosis was chest wall abscess with inflammatory cell infiltration. Therefore, only drainage of the chest wall abscess was performed without thoracotomy. Grocott staining demonstrated a filamentous fungus that was similar to that detected in the previously excised pulmonary nodule. A microscopic examination of colonies isolated from the chest wall abscess showed stained blue septate hyphae and conidia characteristic of phaeohyphomycosis (Fig. 4A and B). Finally, the homology of the fungi detected from the previously excised pulmonary nodule and the chest wall abscess was 99.8% (415/416). The ITS1/ITS4 primers were used for the analysis. Based on these results, we diagnosed a recurrent Verruconis gallopava infection. Contrast-enhanced magnetic resonance imaging of the brain did not show any CNS involvement. While performing continuing drainage with the catheter placed in the chest wall at the time of surgery, we started treatment with multiple antifungals (posaconazole and amphotericin B) and antibiotics (ampicillin/sulbactam) immediately. Unfortunately, however, while his clinical condition improved temporarily, he developed an ileus and died 25 days postoperatively. With the consent of the family, an autopsy was performed, which revealed an ileocecal ulcer attributed to PTLD.

Discussion

This case highlights the fact that recurrent V. gallopava infection can occur one year after excision of a solitary pulmonary lesion.
V. gallopava is widely distributed in the environment (1, 2) and is known to cause phaeohyphomycosis in animals and humans, especially in immunocompromised hosts (3-9). This fungal infection is usually localized in the respiratory tract or cutaneous area, but, in a few cases, disseminated disease, including brain abscesses, may be seen because of its neurotropism (5-7). It is considered a rare disease, reportedly accounting for only 3.8% (2/52) of clinical isolates of non-Aspergillus filamentous fungi causing invasive infection (10). Thus, it is a potentially fatal disease, and a combination of surgical resection, prolonged antifungal therapy, and reduction of immunosuppression is often considered for treatment, although the optimal therapy is unclear. In particular, in a series of pulmonary V. gallopava infections, no cases were treated by surgical excision, in contrast to our case. Therefore, whether or not additional antifungal therapy before and/or after surgical excision should be performed is also unclear. Regarding Aspergillus, which is the most common fungus causing invasive fungal disease, similar to V. gallopava, Shen et al. (11) and Setianingrum et al. (12) reported that the postoperative recurrence rates of chronic pulmonary aspergillosis (CPA) were 7.1% (6/85) and 41% (25/61), respectively. They also stated that insufficient surgical resection may be a risk factor for relapse. The mean time to recurrence was 14.8 and 26 months after surgery, respectively. The recurrence rate after surgery in the first 3 years of observation was 33%, which was higher than that in the period of 3-10 years (8%) (12). In the present case, recurrence was observed at one year postoperatively, which was consistent with these previous reports of CPA. However, none of the patients with Aspergillus nodules (most of which are <3 cm) had recurrence after surgery (11, 13). The effect of antifungal therapy following surgery for CPA cases is controversial. Some studies showed no positive findings in immunocompetent patients (14, 15). In contrast, Setianingrum et al. (12) reported that antifungal therapy before surgery, or both before and after surgery, was protective against recurrence. In the present case, the patient had no presenting symptoms when CT revealed the 9-mm pulmonary nodule, and the postoperative pathological findings showed that the lesion was solitary and determined to be completely resected with sufficient margins. In addition, because no findings of relapsed DLBCL were observed, he was considered to have no severe immunodeficiency. For these reasons, the patient was followed closely without any administration of antifungals after excision of the solitary pulmonary lesion, but unfortunately, the V. gallopava infection recurred. Although the possibility that lesions other than the excised solitary pulmonary nodule had been present preoperatively could not be denied, the cause of the chest wall infection was thought to be associated with the surgical manipulation of lung resection. This is because the site of the chest wall abscess coincided with that of the thoracotomy (right fifth intercostal space) performed in the previous surgery. In addition, another possible mechanism may involve his weakened immune system, attributed to auto-PBSCT and daily low-dose oral steroid treatment.
Most cases of V. gallopava infection reported in the literature have involved multiple antifungal treatments, including amphotericin B followed by therapy with an azole (4, 7, 8). Although the optimal duration of therapy is unclear, long-term therapy seems essential. Regarding antifungal susceptibility, Seyedmousavi et al. (16) found that all azoles (including posaconazole, voriconazole, and itraconazole) had good in vitro activity, with posaconazole having the lowest minimum inhibitory concentration, as seen in other cases (8, 10). Referring to these results of past reports, we selected posaconazole and also used amphotericin B concomitantly (4, 7, 8). Drug susceptibility tests were performed after the patient's death, with the results shown in the Table. Finally, the autopsy evaluation revealed that the infection was under control. The infection was not the direct cause of his death; rather, it was attributed to an ileus due to PTLD. PTLD usually develops within the first year after hematopoietic stem cell transplantation (17). Although the pathological findings of the abdominal lesions indicated PTLD, our case was considered atypical, as 11 years had passed since the patient had undergone auto-PBSCT for DLBCL. Therefore, the involvement of disseminated fungal lesions in the abdomen could not be ruled out, and we ultimately did not perform treatment for PTLD, considering the risk of exacerbation of the infection. These results suggested that the antifungal therapy was effective, but we suspected that additional antifungal therapy after excision of the solitary pulmonary lesion might have prevented the recurrence and allowed the patient to concentrate on the treatment of PTLD. In conclusion, the isolation of V. gallopava from a patient specimen should be regarded as serious. Even if a localized lesion is completely resected, close monitoring for several years after surgery is essential, and aggressive antifungal therapy should be considered, especially in an immunocompromised host. This type of infection is rare, but it can be a potentially fatal disease. To understand the clinical characteristics of this fungal infection, the accumulation of further cases is needed.

Financial Support

This research was funded by the Japanese Red Cross Aichi Medical Center Nagoya Daiichi Hospital Research Grant NFRCH 23-0011.

Figure 1. Chest computed tomography showed a 16-mm pure ground glass nodule (GGN) in the right upper lobe (A) and a solitary 9-mm nodule in the right lower lobe (B). Lung specimen from the nodular lesion in the right lower lobe: Grocott methenamine silver staining revealed darkly pigmented branching septate hyphae (C).

Figure 2. Computed tomography images of the chest and abdomen at different time points. Normal findings were observed at four months before the symptom onset (A). Thickening of the right chest wall and ileocecal intestinal wall (arrows) was observed at the time of the symptom onset (B), which worsened after two months (arrows) (C).

Figure 4. Large colony grown on potato dextrose agar at 37°C on day 7 from a specimen obtained from the chest wall abscess (A). A microscopic examination of colonies isolated from the chest wall abscess showed stained blue septate hyphae and conidia; two-celled conidia were constricted at the septum (lactophenol cotton blue staining, ×400) (B).
Neutrino oscillations: Quantum mechanics vs. quantum field theory

A consistent description of neutrino oscillations requires either the quantum-mechanical (QM) wave packet approach or a quantum field theoretic (QFT) treatment. We compare these two approaches to neutrino oscillations and discuss the correspondence between them. In particular, we derive expressions for the QM neutrino wave packets from QFT and relate the free parameters of the QM framework, in particular the effective momentum uncertainty of the neutrino state, to the more fundamental parameters of the QFT approach. We include in our discussion the possibilities that some of the neutrino's interaction partners are not detected, that the neutrino is produced in the decay of an unstable parent particle, and that the overlap of the wave packets of the particles involved in the neutrino production (or detection) process is not maximal. Finally, we demonstrate how the properly normalized oscillation probabilities can be obtained in the QFT framework without an ad hoc normalization procedure employed in the QM approach.

Introduction

It is well known by now that neutrino oscillations can be consistently described either in the quantum-mechanical (QM) wave packet approach, or within a quantum field theoretic (QFT) framework. In the QM method [1, 3-16], neutrinos produced in weak-interaction processes are described by propagating wave packets, the spatial length of which is related to the momentum uncertainty at neutrino production, and the detected states are also described by wave packets, centered at the detection point. The transition (oscillation) amplitude is then obtained by projecting the evolved emitted neutrino state onto the detection state. In the QFT treatment [1, 9-11, 17-26], one considers neutrino production, propagation and detection as a single process, described by a tree-level Feynman diagram with the neutrino in the intermediate state (see fig. 1). Neutrinos are represented in this framework by propagators rather than by wave functions. Both approaches lead to the standard formula for the probability of neutrino oscillations in vacuum in the case when the decoherence effects related to propagation of neutrinos as well as to their production and detection can be neglected. They differ, however, in the way they account for possible decoherence effects, with the QFT approach leading to a more consistent and accurate description. The QM method treats neutrino energy and momentum uncertainties responsible for these effects in a simplified way; in addition, it involves an ad hoc normalization procedure for the transition amplitude that is not properly justified. The goal of the present paper is to compare the two approaches and establish a relationship between them, as well as to clarify some of the procedures that are employed in the QM method from the more general and consistent QFT standpoint. Some work in that direction has been done before. In [12] neutrino wave packets were derived starting from the QFT formalism (see also a discussion in [2]). In [27], a comparison of the QM and QFT approaches was presented for the special case of Mössbauer neutrinos, i.e. neutrinos produced and detected recoillessly in hypothetical Mössbauer-type experiments (see, e.g., [26, 28, 29] and references therein).
The new results obtained in the present work include a more advanced and general study of the QFT-based derivation of the neutrino wave packets (including the possibility that some of the external particles are not detected), matching of the QFT and QM expressions for the neutrino wave packets, a study of the general properties of the wave packets describing the neutrino states (including their energy uncertainties in the case when neutrinos are produced in decays of unstable particles) and a clarification of the issue of normalization of the oscillation probabilities in QM and QFT. The paper is organized as follows. To make the presentation self-contained, in Secs. 2 and 3 we review, respectively, the QM wave packet formalism and the QFT approach to neutrino oscillations. Sections 4-6 contain our main results. In Sec. 4 we discuss how the neutrino wave packets, which are a necessary ingredient of the QM approach, can be derived starting from the QFT formalism. Next, we consider some general properties of the neutrino wave packets and discuss the conditions under which they can be approximated by Gaussian wave packets. Using the case of Gaussian wave packets as an example, we then discuss how the QFT-derived wave packets can be represented in the form usually adopted in the QM treatment. We also find expressions for the effective parameters describing the QM wave packets in terms of the more fundamental input parameters of the QFT framework. Next, we discuss the neutrino energy uncertainty in the case when neutrinos are produced in decays of unstable particles. In Sec. 5 we consider the problem of normalization of the neutrino wave packets in the QM framework and show how the normalization problem is solved in a natural way in the QFT-based approach. In Sec. 6 we discuss how one can relax some assumptions usually adopted in the QM and QFT approaches. Those include the assumption that the maxima of wave packets of all particles involved in the neutrino production (or detection) process meet at one space-time point, as well as the assumption that the mean momenta of the emitted and detected neutrino wave packets coincide. We summarize our results and conclude in Sec. 7. Some technical material is included in Appendices A and B.

Review of the QM wave packet formalism

We start with some generalities that are common to QFT and QM and then move on to review the QM wave packet approach to neutrino oscillations. We shall use the natural units $\hbar = c = 1$ throughout the paper. In quantum theory, one-particle states of particles of type A can be written as

$|A\rangle = \int [dp]\; f_A(\mathbf{p}, \mathbf{P})\; |A, \mathbf{p}\rangle\,,$  (1)

where $|A, \mathbf{p}\rangle$ is the one-particle momentum eigenstate corresponding to momentum $\mathbf{p}$ and energy $E_A(\mathbf{p})$ (for free particles, $E_A(\mathbf{p}) = \sqrt{\mathbf{p}^2 + m_A^2}$, $m_A$ being the mass of the particle), $f_A(\mathbf{p}, \mathbf{P})$ is the momentum distribution function with the mean momentum $\mathbf{P}$, and we use the shorthand notation

$[dp] \equiv \frac{d^3 p}{(2\pi)^3 \sqrt{2 E_A(\mathbf{p})}}\,.$  (2)

For particles with spin, the states $|A\rangle$ and $|A, \mathbf{p}\rangle$ depend also on a spin variable, which we suppress to simplify the notation. We will also often omit the second argument of $f_A$ where this cannot cause confusion. We choose the Lorentz invariant normalization condition for the plane wave states $|A, \mathbf{p}\rangle$:

$\langle A, \mathbf{p} | A, \mathbf{p}' \rangle = 2 E_A(\mathbf{p})\, (2\pi)^3\, \delta^{(3)}(\mathbf{p} - \mathbf{p}')\,.$  (3)

The standard normalization of the states $\langle A | A \rangle = 1$ then implies

$\int \frac{d^3 p}{(2\pi)^3}\; |f_A(\mathbf{p})|^2 = 1\,.$  (4)

The quantity $\sqrt{2 E_A(\mathbf{p})}\, f_A(\mathbf{p})$ is actually the momentum-space wave function of A: $\sqrt{2 E_A(\mathbf{p})}\, f_A(\mathbf{p}) = \langle \mathbf{p} | A \rangle$. The time dependent wave function is $\sqrt{2 E_A(\mathbf{p})}\, f_A(\mathbf{p})\, e^{-i E_A(\mathbf{p}) t} = \langle \mathbf{p} | A(t) \rangle$, where $|A(t)\rangle = e^{-i H t} |A\rangle$ and $H$ is the free Hamiltonian of A.
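As a quick sanity check of the conventions (2)-(4), the following Python sketch verifies numerically that the isotropic Gaussian distribution used later in eq. (10) indeed satisfies the normalization (4); the width is an arbitrary assumed number and the 3D integral is reduced to a radial one.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of normalization (4) for a Gaussian momentum distribution;
# sigma is a free, illustrative choice (arbitrary units).
sigma = 0.1
norm = (2 * np.pi / sigma**2) ** 0.75   # prefactor that should enforce eq. (4)

# |f_A(p)|^2 for a packet centered at P = 0 (a shift does not change the norm)
f2 = lambda p: norm**2 * np.exp(-p**2 / (2 * sigma**2))

# integral d^3p/(2pi)^3 |f_A|^2, written as a radial integral
val, _ = quad(lambda p: 4 * np.pi * p**2 / (2 * np.pi) ** 3 * f2(p), 0, 20 * sigma)
print(val)   # -> 1.0 up to quadrature error
```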
The coordinate-space wave function $\Psi_A(t, \mathbf{x})$ is the Lorentz-invariant Fourier transform of $\langle \mathbf{p} | A(t) \rangle$:

$\Psi_A(t, \mathbf{x}) = \int \frac{d^3 p}{(2\pi)^3\, 2 E_A(\mathbf{p})}\; \langle \mathbf{p} | A(t) \rangle\; e^{i \mathbf{p} \mathbf{x}}\,,$  (5)

or

$\Psi_A(t, \mathbf{x}) = \int \frac{d^3 p}{(2\pi)^3 \sqrt{2 E_A(\mathbf{p})}}\; f_A(\mathbf{p})\; e^{i \mathbf{p} \mathbf{x} - i E_A(\mathbf{p}) t}\,.$  (6)

In the QFT framework, it can be written as

$\Psi_A(x) = \langle 0 |\, \hat\Psi_A(x)\, | A \rangle\,,$  (7)

where $x \equiv (t, \mathbf{x})$ and $\hat\Psi_A(x)$ is the second-quantized field operator of A. Using the standard decomposition of the field $\hat\Psi_A(x)$ in terms of production and annihilation operators, one can readily obtain (5) from (7) and (1). Note that expressions (5) and (6) can describe both bound states and propagating wave packets (in the case of bound states or particles propagating in a potential, the relation $E_A(\mathbf{p}) = \sqrt{\mathbf{p}^2 + m_A^2}$ simply has to be replaced by the proper dispersion relation). A wave packet is obtained when the momentum distribution function $f_A(\mathbf{p}, \mathbf{P})$ is sharply peaked at or close to a nonzero mean momentum $\mathbf{P}$, i.e. when the momentum dispersion $\sigma_p$ satisfies $\sigma_p \ll |\mathbf{P}|$; for the rest of this section we will assume this to be the case. The wave function (6) then describes a wave packet whose maximum of amplitude is located at $\mathbf{x} = 0$ at time $t = 0$. A wave packet that is peaked at coordinate $\mathbf{x}_0$ at time $t_0$ is obtained by acting on the state $|A\rangle$ by the space-time translation operator $e^{i \hat P x_0}$, where $\hat P^\mu$ is the 4-momentum operator. For the coordinate-space wave function this yields

$\Psi_A(t, \mathbf{x}) = \int \frac{d^3 p}{(2\pi)^3 \sqrt{2 E_A(\mathbf{p})}}\; f_A(\mathbf{p})\; e^{i \mathbf{p} (\mathbf{x} - \mathbf{x}_0) - i E_A(\mathbf{p}) (t - t_0)}$  (8)

instead of eq. (6). Eqs. (6) and (8) represent wave packets that propagate with the group velocity $\mathbf{v} \equiv \left.\frac{\partial E_A(\mathbf{p})}{\partial \mathbf{p}}\right|_{\mathbf{p} = \mathbf{P}} = \frac{\mathbf{P}}{E_A(\mathbf{P})}$ and in general spread with time both in the longitudinal direction and in the directions transverse to their mean momentum. The spreading is due to the fact that different momentum components of the wave packet have slightly different velocities $\mathbf{p}/E_A(\mathbf{p})$. Let us now consider neutrino oscillations in the framework of the QM wave packet formalism, sometimes also called the "intermediate wave packet" approach. Neutrinos produced or absorbed in charged-current weak interaction processes are considered to be flavour eigenstates $\nu_\alpha$ ($\alpha = e, \mu, \tau$), which are coherent linear superpositions of mass eigenstates $\nu_j$ ($j = 1, 2, 3$) with coefficients given by the elements of the leptonic mixing matrix $U_{\alpha j}$. The mass eigenstates are represented by the corresponding wave packets. If a neutrino of flavour $\alpha$ was produced at time $t_P$ at a source centered at $\mathbf{x}_P$, its momentum-space wave function at a time $t > t_P$ corresponds to the state

$|\nu_\alpha^P(t)\rangle = \sum_j U^*_{\alpha j} \int [dp]\; f_{jP}(\mathbf{p}, \mathbf{P})\; e^{-i E_j(\mathbf{p})(t - t_P) - i \mathbf{p} \mathbf{x}_P}\; |\nu_j, \mathbf{p}\rangle\,.$  (9)

Here the subscript P shows that the wave packet corresponds to a neutrino produced at the source. Note that the index $\alpha$ at $\nu_\alpha^P(t)$ simply indicates that the emitted neutrino was of flavour $\alpha$ at its production time $t = t_P$; it is, of course, no longer so for $t > t_P$. The shape of the wave packet of the jth mass-eigenstate neutrino is given by the momentum distribution function $f_{jP}$, which is determined by the mechanism and conditions of neutrino production. In the QM framework, however, the neutrino production and detection processes are not explicitly taken into account; therefore the functions $f_{jP}$ are postulated rather than determined, with the corresponding momentum widths estimated from the localization properties of the production process. Usually, the wave packets are taken to be of the Gaussian form

$f^G_{jP}(\mathbf{p}, \mathbf{P}) = \left(\frac{2\pi}{\sigma_{pP}^2}\right)^{3/4} \exp\left[-\frac{(\mathbf{p} - \mathbf{P})^2}{4 \sigma_{pP}^2}\right]\,,$  (10)

where $\sigma_{pP}$ characterizes the momentum uncertainty of the produced neutrino state, and similarly for the state of the detected neutrino. The advantage of Gaussian wave packets is that they allow most calculations to be done analytically (the same is also true for Lorentzian wave packets, see ref. [27]).
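Eqs. (6) and (8) describe packets that move with the group velocity and slowly spread. A minimal one-dimensional Python sketch makes this visible; the mass, mean momentum and width are toy assumptions, and the slowly varying $(2E)^{-1/2}$ factor of eq. (6) is dropped since it barely changes across a narrow packet.

```python
import numpy as np

# 1D toy of eq. (6): Gaussian packet with relativistic dispersion E(p) = sqrt(p^2 + m^2).
# All parameters are illustrative (natural units).
m, P, sigp = 1.0, 5.0, 0.05
p = np.linspace(P - 10 * sigp, P + 10 * sigp, 2048)
f = np.exp(-(p - P) ** 2 / (4 * sigp**2))
E = np.sqrt(p**2 + m**2)

def psi(t, x):
    # discretized integral dp/(2pi) f(p) exp(i p x - i E(p) t)
    phase = np.exp(1j * (p[None, :] * x[:, None] - E[None, :] * t))
    return np.trapz(f * phase, p, axis=1) / (2 * np.pi)

x = np.linspace(0.0, 120.0, 600)
for t in (0.0, 50.0, 100.0):
    rho = np.abs(psi(t, x)) ** 2
    print(t, x[np.argmax(rho)])   # the peak tracks the group velocity P/E(P) ~ 0.98
```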
The state of the detected neutrino $\nu_\beta$ is described by a wave packet peaked at the detection coordinate $\mathbf{x}_D$. In the momentum-space representation it is given by

$|\nu_\beta^D\rangle = \sum_k U^*_{\beta k} \int [dp]\; f_{kD}(\mathbf{p}, \mathbf{P}')\; e^{-i \mathbf{p} \mathbf{x}_D}\; |\nu_k, \mathbf{p}\rangle\,,$  (11)

where the subscript D stands for detection. The momentum distribution functions $f_{kD}$ are governed by the properties of the detection process; however, just as for neutrino production, in the QM approach these functions are postulated rather than determined. Note that, although the assumption $\mathbf{P} = \mathbf{P}'$ is adopted in most studies, in general there is no reason to expect the mean momenta of the produced and detected wave packets to coincide. We will discuss this point in more detail in Sec. 6. The amplitude for the transition $\nu_\alpha \to \nu_\beta$ is obtained by projecting the evolved neutrino production state onto the detection state:

$\mathcal{A}_{\alpha\beta}(T, \mathbf{L}) = \langle \nu_\beta^D | \nu_\alpha^P(t_D) \rangle\,,$  (12)

where $t_D$ is the detection time, $T \equiv t_D - t_P > 0$ and $\mathbf{L} \equiv \mathbf{x}_D - \mathbf{x}_P$. Performing the projection in momentum space, we obtain from (9) and (11)

$\mathcal{A}_{\alpha\beta}(T, \mathbf{L}) = \sum_j U^*_{\alpha j}\, U_{\beta j} \int \frac{d^3 p}{(2\pi)^3}\; f_{jP}(\mathbf{p}, \mathbf{P})\; f^*_{jD}(\mathbf{p}, \mathbf{P}')\; e^{-i E_j(\mathbf{p}) T + i \mathbf{p} \mathbf{L}}\,.$  (13)

For future reference, we shall also write this as a superposition of the amplitudes corresponding to the contributions of different neutrino mass eigenstates:

$\mathcal{A}_{\alpha\beta}(T, \mathbf{L}) = \sum_j U^*_{\alpha j}\, U_{\beta j}\; \mathcal{A}_j(T, \mathbf{L})$  (14)

with

$\mathcal{A}_j(T, \mathbf{L}) = \int \frac{d^3 p}{(2\pi)^3}\; f_{jP}(\mathbf{p}, \mathbf{P})\; f^*_{jD}(\mathbf{p}, \mathbf{P}')\; e^{-i E_j(\mathbf{p}) T + i \mathbf{p} \mathbf{L}}\,.$

The oscillation probability is given by the squared modulus of the transition amplitude:

$P(\nu_\alpha \to \nu_\beta;\, T, \mathbf{L}) = |\mathcal{A}_{\alpha\beta}(T, \mathbf{L})|^2\,.$  (15)

Since in most experiments the neutrino emission and detection times are not measured, the standard procedure is then to integrate $P(\nu_\alpha \to \nu_\beta;\, T, \mathbf{L})$ over $T$. This gives

$P_{\alpha\beta}(\mathbf{L}) \propto \int dT\; P(\nu_\alpha \to \nu_\beta;\, T, \mathbf{L})\,.$  (16)

Substituting here the transition amplitude (13) yields, up to a normalization factor, the standard probability of neutrino oscillations in vacuum, provided that all decoherence effects are negligible. The normalization factor can then be fixed by requiring that the oscillation probability satisfy the unitarity condition (see Sec. 5 for a more detailed discussion).
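For Gaussian packets, the T-integration of $|\mathcal{A}_{\alpha\beta}(T, \mathbf{L})|^2$ in eq. (16) is known to reproduce the standard vacuum formula multiplied by a damping factor $\exp[-(L/L_{\rm coh})^2]$ with coherence length $L_{\rm coh} = 4\sqrt{2}\, E^2 \sigma_x / \Delta m^2$, where $\sigma_x$ is the effective packet length (a standard wave-packet result; the factors here are quoted under that assumption). A hedged two-flavor Python sketch, with purely illustrative numbers:

```python
import numpy as np

# Two-flavor illustration of what eqs. (13)-(16) give for Gaussian packets:
# the vacuum oscillation formula times exp[-(L/L_coh)^2]. All numbers are
# illustrative assumptions (natural units, energies in eV).
m_to_inv_eV = 5.068e6                              # 1 m in eV^-1 (hbar*c = 197.3 MeV fm)
theta, dm2, E = np.radians(33.4), 7.4e-5, 3.0e6    # mixing angle, eV^2, 3 MeV
sigma_x = 1.0e-11 * m_to_inv_eV                    # assumed effective packet length

L_coh = 4.0 * np.sqrt(2.0) * E**2 * sigma_x / dm2  # coherence length, eV^-1

def P_ee(L_m):
    L = L_m * m_to_inv_eV
    osc = np.cos(dm2 * L / (2.0 * E)) * np.exp(-(L / L_coh) ** 2)
    return 1.0 - 0.5 * np.sin(2.0 * theta) ** 2 * (1.0 - osc)

for L_km in (1, 60, 1000, 100000):
    print(L_km, P_ee(L_km * 1e3))
# far beyond L_coh the oscillatory term dies out and P_ee -> 1 - 0.5*sin^2(2*theta)
```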
Neutrino oscillations in QFT

In the QFT approach (which is sometimes also called the "external wave packet" formalism), neutrino production, propagation, and detection are considered as a single process, described by the Feynman diagram shown in fig. 1, with the neutrino in the intermediate state. In our overview of the QFT formalism we will mostly follow ref. [2]. Assume that the neutrino production process involves one initial state and one final state particle (besides the neutrino). Likewise, we will assume that the detection process involves only one particle besides the neutrino in the initial state and one particle in the final state. The generalization to the case of an arbitrary number of particles involved in the neutrino production and detection processes is straightforward and would just complicate the formulas without providing further physical insight. All external particles will be assumed to be on their respective mass shells (since only one particle is assumed to be in the initial state of the production process, it must be unstable; this will be of no importance for us here because, as was already mentioned, the results are easily generalized to the case of an arbitrary number of external particles, and possible instability of the parent particle will be taken into account in Sec. 4.4).

Figure 1: Feynman diagram describing neutrino production, propagation and detection as a single process.

The states describing the particles accompanying neutrino production and detection ("external particles") can be represented in the form (1). For the initial and final states at neutrino production we can write

$|P_i\rangle = \int [dq]\; f_{P_i}(\mathbf{q}, \mathbf{Q})\; |P_i, \mathbf{q}\rangle\,, \qquad |P_f\rangle = \int [dk]\; f_{P_f}(\mathbf{k}, \mathbf{K})\; |P_f, \mathbf{k}\rangle\,,$  (17)

and similarly for the states accompanying neutrino detection:

$|D_i\rangle = \int [dq']\; f_{D_i}(\mathbf{q}', \mathbf{Q}')\; |D_i, \mathbf{q}'\rangle\,, \qquad |D_f\rangle = \int [dk']\; f_{D_f}(\mathbf{k}', \mathbf{K}')\; |D_f, \mathbf{k}'\rangle\,.$  (18)

We assume these states to fulfill the normalization condition (4). Some (or all) of the mean momenta of the external particles $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{Q}'$ and $\mathbf{K}'$ may vanish, i.e. the states in eqs. (17) and (18) can describe bound states at rest as well as wave packets. The amplitude of the neutrino production-propagation-detection process is given by the matrix element

$i\mathcal{A}_{\alpha\beta} = \langle P_f\, D_f |\; \hat T \exp\left[-i \int d^4x\; \mathcal{H}_I(x)\right] - 1\; | P_i\, D_i \rangle\,,$  (19)

where $\hat T$ is the time ordering operator and $\mathcal{H}_I(x)$ is the charged-current weak interaction Hamiltonian (we consider neutrino production and detection at energies well below the W-boson mass, so that $\mathcal{H}_I$ is the effective 4-fermion Hamiltonian of weak interactions). Note that no neutrino flavour eigenstates have to be introduced in the QFT framework; the indices $\alpha$ and $\beta$ simply refer here to the flavour of the charged leptons participating in the production and detection processes. From eq. (19) it is easy to calculate the transition amplitude in the lowest nontrivial (i.e. second) order in $\mathcal{H}_I$ using the standard QFT methods. The resulting expression corresponds to the Feynman diagram of fig. 1 and can be written as

$\mathcal{A}_{\alpha\beta} = \sum_j \int [dq]\,[dk]\,[dq']\,[dk']\; f_{P_i}(\mathbf{q}, \mathbf{Q})\, f^*_{P_f}(\mathbf{k}, \mathbf{K})\, f_{D_i}(\mathbf{q}', \mathbf{Q}')\, f^*_{D_f}(\mathbf{k}', \mathbf{K}')\; \mathcal{A}^{\rm p.w.}_j(q, k;\, q', k')\,.$  (20)

Here the sum runs over all intermediate states (i.e. different neutrino mass eigenstates), and the quantity $\mathcal{A}^{\rm p.w.}_j(q, k; q', k')$ is the plane-wave amplitude of the process with the jth neutrino mass eigenstate propagating between the source and the detector:

$\mathcal{A}^{\rm p.w.}_j(q, k; q', k') = U^*_{\alpha j} U_{\beta j} \int d^4x_1\, d^4x_2\; \tilde M_D(q', k')\, e^{i(k' - q') x_2} \left[\int \frac{d^4 p}{(2\pi)^4}\; \frac{i(\slashed p + m_j)}{p^2 - m_j^2 + i\epsilon}\; e^{-i p (x_2 - x_1)}\right] \tilde M_P(q, k)\, e^{i(k - q) x_1}\,.$  (21)

Here $x_1$ and $x_2$ are the 4-coordinates of the neutrino production and detection points, and the quantities $\tilde M_P(q, k)$ and $\tilde M_D(q', k')$ are the plane-wave amplitudes of the processes $P_i \to P_f + \nu_j$ and $D_i + \nu_j \to D_f$, respectively, with the neutrino spinors $\bar u_j(p, s)$ and $u_j(p, s)$ excluded. The choice of the 4-coordinate dependent phase factors corresponds to the assumption that the peaks of the wave packets of particles involved in the production process are all located at $\mathbf{x}_1 = \mathbf{x}_P$ at the time $t_1 = t_P$, whereas for the detection process the corresponding peaks are all situated at $\mathbf{x}_2 = \mathbf{x}_D$ at the time $t_2 = t_D$ (we will discuss in Sec. 6 how this assumption can be relaxed). The integral in the square brackets in eq. (21) gives the coordinate-space propagator of the jth neutrino mass eigenstate. It is convenient to switch to shifted 4-coordinate variables $x'_1$, $x'_2$, defined according to

$x'_1 = x_1 - x_P\,, \qquad x'_2 = x_2 - x_D\,.$

Taking into account that $\slashed p + m_j = \sum_s u_j(p, s)\, \bar u_j(p, s)$, one can then rewrite eq. (21) in the form (22), where

$M_{jP}(q, k) = \frac{\bar u_{jL}(p)}{\sqrt{2 p^0}}\; \tilde M_P(q, k)\,, \qquad M_{jD}(q', k') = \frac{1}{\sqrt{2 p^0}}\; \tilde M_D(q', k')\; u_{jL}(p)$  (23)

are the full amplitudes (with the neutrino spinors included) of the processes $P_i \to P_f + \nu_j$ and $D_i + \nu_j \to D_f$, respectively, and we have taken into account that the matrix elements $M_{jP}(q, k)$ and $M_{jD}(q', k')$ involve the left-handed chirality projection, so that only the left-handed spinors $u_{jL}(p)$ and $\bar u_{jL}(p)$ contribute to the sum over the neutrino spin variable $s$. Substituting (22) into eq. (20), we finally obtain

$\mathcal{A}_{\alpha\beta}(T, \mathbf{L}) \propto \sum_j U^*_{\alpha j} U_{\beta j} \int \frac{d^4 p}{(2\pi)^4}\; \Phi_{jP}(p^0, \mathbf{p})\; \Phi_{jD}(p^0, \mathbf{p})\; \frac{2 p^0}{p^2 - m_j^2 + i\epsilon}\; e^{-i p^0 T + i \mathbf{p} \mathbf{L}}\,.$  (24)

Here the so-called overlap functions $\Phi_{jP}(p^0, \mathbf{p})$ and $\Phi_{jD}(p^0, \mathbf{p})$ are defined as

$\Phi_{jP}(p^0, \mathbf{p}) = \int [dq][dk]\; f_{P_i}(\mathbf{q}, \mathbf{Q})\, f^*_{P_f}(\mathbf{k}, \mathbf{K})\, M_{jP}(q, k)\; (2\pi)^4\, \delta(p^0 - E_{P_i}(\mathbf{q}) + E_{P_f}(\mathbf{k}))\, \delta^{(3)}(\mathbf{p} - \mathbf{q} + \mathbf{k})\,,$

$\Phi_{jD}(p^0, \mathbf{p}) = \int [dq'][dk']\; f_{D_i}(\mathbf{q}', \mathbf{Q}')\, f^*_{D_f}(\mathbf{k}', \mathbf{K}')\, M_{jD}(q', k')\; (2\pi)^4\, \delta(p^0 + E_{D_i}(\mathbf{q}') - E_{D_f}(\mathbf{k}'))\, \delta^{(3)}(\mathbf{p} + \mathbf{q}' - \mathbf{k}')\,.$  (25)

Note that they are independent of $x_P$ and $x_D$. Expressions (24) and (25) are the main results of the QFT-based approach to neutrino oscillations [2, 12].

Comparing the QM and QFT approaches to neutrino oscillations

Let us now compare the results of the QM and QFT approaches to neutrino oscillations. Consider first the transition amplitude (24) obtained in the QFT formalism. The integration over the neutrino 4-momentum in this expression can be done in different orders. Here it will be more convenient for us to integrate first over $p^0$ and then over $\mathbf{p}$ (the opposite order will be used in Sec. 5). Since the distance L between the neutrino source and detector is macroscopic, the phase factor in the integrand of eq. (24) undergoes fast oscillations and the integral is strongly suppressed except when the intermediate neutrino is on the mass shell.
Thus, the dominant contribution to the integral is given by the residue at the pole of the neutrino propagator at

$p^0 = E_j(\mathbf{p}) \equiv \sqrt{\mathbf{p}^2 + m_j^2}\,.$  (26)

Eq. (24) can therefore be rewritten as

$\mathcal{A}_{\alpha\beta}(T, \mathbf{L}) \propto \Theta(T) \sum_j U^*_{\alpha j} U_{\beta j} \int \frac{d^3 p}{(2\pi)^3}\; \Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})\; \Phi_{jD}(E_j(\mathbf{p}), \mathbf{p})\; e^{-i E_j(\mathbf{p}) T + i \mathbf{p} \mathbf{L}}\,,$  (27)

where $\Theta(x)$ is the Heaviside step function.

Deriving neutrino wave packets in the QFT-based approach

Let us now compare eqs. (27) and (13). We see that the two equations are of the same form and actually coincide if we identify the QM wave packets as

$f_{jP}(\mathbf{p}) = \Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})\,, \qquad f^*_{jD}(\mathbf{p}) = \Phi_{jD}(E_j(\mathbf{p}), \mathbf{p})\,,$  (28)

where the functions $\Phi_{jP}$ and $\Phi_{jD}$ were defined in eq. (25). The obtained result can be easily understood. Indeed, as follows from the definition of $\Phi_{jP}(p^0, \mathbf{p})$, for $p^0 = E_j(\mathbf{p})$ (i.e. on the mass shell of $\nu_j$) this quantity is the probability amplitude of the production process in which the jth mass eigenstate neutrino is emitted with momentum $\mathbf{p}$; but this is nothing but the momentum distribution function of the produced neutrino, i.e. the momentum-space wave packet $f_{jP}(\mathbf{p})$. A similar argument applies to the neutrino detection process and $f_{jD}(\mathbf{p})$. The wave packets $f_{jP}(\mathbf{p})$ and $f_{jD}(\mathbf{p})$ in eq. (28) are not normalized according to (4), though they can be easily modified to satisfy this condition. However, as we shall see in Sec. 5, this is not necessary and actually would be misleading. An alternative method of deriving neutrino wave packets in the QFT framework, based on the S-matrix approach, was suggested in [12]; the obtained results are equivalent to those in eqs. (28) and (25). Let us now consider the wave packet describing the produced neutrino state in more detail (the state of the detected neutrino can be studied quite analogously). According to (28), the momentum distribution function $f_{jP}(\mathbf{p})$ characterizing the state of the emitted neutrino of mass $m_j$ is essentially given by the on-shell function $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$. Since the matrix element $M_{jP}(q, k)$ is a smooth function of the on-shell 4-momenta $p$ and $q$, whereas the wave packets of the external states are assumed to be sharply peaked at or near the corresponding mean momenta, one can replace $M_{jP}$ by its value at the mean momenta and pull it out of the integral. Eqs. (25) and (28) then yield

$f_{jP}(\mathbf{p}) = M_{jP}(Q, K) \int [dq][dk]\; f_{P_i}(\mathbf{q}, \mathbf{Q})\, f^*_{P_f}(\mathbf{k}, \mathbf{K}) \int dt\, d^3x\; e^{i [E_j(\mathbf{p}) - E_{P_i}(\mathbf{q}) + E_{P_f}(\mathbf{k})] t}\; e^{-i (\mathbf{p} - \mathbf{q} + \mathbf{k}) \mathbf{x}}\,,$  (29)

where the 4-momenta $Q$ and $K$ are defined as

$Q = (E_{P_i}(\mathbf{Q}), \mathbf{Q})\,, \qquad K = (E_{P_f}(\mathbf{K}), \mathbf{K})\,.$  (30)

From eq. (29) (or eqs. (25) and (28)) one can draw some important conclusions about the properties of the neutrino momentum distribution functions $f_{jP}(\mathbf{p})$ which determine the emitted neutrino wave packets:

• Since the quantities $f_{P_i}(\mathbf{q}, \mathbf{Q})$ and $f_{P_f}(\mathbf{k}, \mathbf{K})$ depend only on the properties of the external particles, and the j-dependence of the matrix elements $M_{jP}(p, q)$ comes through the on-shell neutrino spinor factors $[(2p^0)^{-1/2} u_j(p, s)]_{p^0 = E_j(\mathbf{p})}$, which depend on j only through the neutrino energy, the functions $f_{jP}(\mathbf{p})$ depend on the index j solely through the neutrino energy $E_j(\mathbf{p})$. This, in particular, means that for ultra-relativistic or quasi-degenerate in mass neutrinos the momentum distribution functions of all neutrino mass eigenstates are practically the same (provided that their energy differences $|E_j - E_k|$ are small compared to the energy uncertainty $\sigma_{eP}$).
• Because the integral over the 3-coordinate $\mathbf{x}$ in eq. (29) yields $\delta^{(3)}(\mathbf{q} - \mathbf{k} - \mathbf{p})$, and the momentum distribution functions $f_{P_i}(\mathbf{q}, \mathbf{Q})$ and $f_{P_f}(\mathbf{k}, \mathbf{K})$ are sharply peaked at or near their respective mean momenta $\mathbf{Q}$ and $\mathbf{K}$, the neutrino momentum distribution functions $f_{jP}(\mathbf{p})$ are sharply peaked at or close to the momentum $\mathbf{P} \equiv \mathbf{Q} - \mathbf{K}$, with the width of the peak $\sigma_{pP}$ dominated by the largest between the momentum uncertainties of the states of $P_i$ and $P_f$.

Taking into account eq. (5), eq. (29) can be rewritten as

$f_{jP}(\mathbf{p}) = M_{jP}(Q, K) \int dt\, d^3x\; e^{i E_j(\mathbf{p}) t - i \mathbf{p} \mathbf{x}}\; \Psi_{P_i}(t, \mathbf{x})\, \Psi^*_{P_f}(t, \mathbf{x})\,.$  (31)

Thus, the momentum distribution function that determines the wave packet of the emitted neutrino is essentially the 4-dimensional Fourier transform of the product of the coordinate-space wave functions of the external particles participating in the neutrino production process, taken under the condition that the components of the neutrino 4-momentum are on the mass shell. Eq. (31) can be readily generalized to the case when more than two external particles participate in the neutrino production process: the expression $\Psi_{P_i}(t, \mathbf{x})\, \Psi^*_{P_f}(t, \mathbf{x})$ in the integrand of (31) should simply be replaced by the product of the wave functions of all particles in the initial state of the production process and of complex conjugates of the wave functions of all particles in the final state (except the neutrino). The neutrino wave packet in coordinate space $\Psi_{jP}(x)$ is obtained from eq. (31) by performing the Fourier transformation over the 3-momentum variable $\mathbf{p}$ according to the transformation law (6), which gives

$\Psi_{jP}(t, \mathbf{x}) = \int \frac{d^3 p}{(2\pi)^3 \sqrt{2 p^0}}\; f_{jP}(\mathbf{p})\; e^{i \mathbf{p} \mathbf{x} - i E_j(\mathbf{p}) t}\,.$  (32)

Here $p \equiv |\mathbf{p}|$, and we have used eq. (23) to extract the p-dependent factor $(2p^0)^{-1/2} = 2^{-1/2}(p^2 + m_j^2)^{-1/4}$ from $M_{jP}(Q, K)$. The integral over $p$ in eq. (32) can be expressed in terms of the modified Bessel function $K_1$ [31], giving the closed form (33), with the quantities entering it defined in eq. (34). The integral in eq. (32) is greatly simplified in the limit of vanishing neutrino mass. Note, however, that in this limit all neutrino species travel with the same speed and therefore the resulting wave functions cannot describe decoherence due to the separation of wave packets. In order to take possible wave packet separation effects into account the more accurate expression (33) has to be used. Alternatively, one can employ the momentum-representation wave function (31). Expression (33) for the wave function of the produced neutrino state $\Psi_{jP}(t, \mathbf{x})$ allows a simple interpretation. Note that the kernel appearing in it is the scalar retarded propagator in the coordinate representation. Therefore the neutrino wave packet (33) is essentially the convolution of the neutrino source (the role of which is played by the neutrino production amplitude $\Psi^*_{P_f}(x)\, M_{jP}\, \Psi_{P_i}(x)$) with the retarded neutrino propagator, in full agreement with the well known result of QFT. Note that only the scalar part of the propagator contributes to $\Psi_{jP}(x)$; this is because the coordinate space and momentum space neutrino wave functions $\Psi_{jP}(x)$ and $f_{jP}(\mathbf{p})$ are scalars in our formalism. The spinor factors are included in the matrix elements $M_{jP}$ and $M_{jD}$ (note that these quantities are also scalar, whereas the amputated matrix elements $\tilde M_{jP}$ and $\tilde M_{jD}$, i.e. those with the neutrino spinors removed, have spinorial indices). In our discussion of the wave packets of the emitted neutrino states, we were assuming that the momentum distribution functions of all the external particles accompanying neutrino production are known. This implies, in particular, that all particles in the final state of the production process are "measured", either by direct detection or through their interaction with the medium in the process of neutrino production.
It is quite possible, however, that some of the particles accompanying neutrino production escape undetected; this is, e.g., the case for atmospheric or accelerator neutrinos born in the process $\pi^\pm \to \mu^\pm + \nu_\mu(\bar\nu_\mu)$, in which the final state muon is normally not detected. It is also possible that some of the particles accompanying neutrino detection are "unmeasured". How can one determine the neutrino wave packets in those cases? To answer this question, let us recall that the momentum uncertainty $\sigma_{pP}$ characterizing the emitted neutrino depends in general on the momentum uncertainties of all the external particles at neutrino production and is dominated by the largest among them (see the discussion after eqs. (29) and (30)). In particular, in the case of Gaussian wave packets, one has [2, 12]

$\sigma_{pP}^2 = \sigma_{pP_i}^2 + \sigma_{pP_f}^2\,.$  (35)

For more than two external particles at production, the sum on the right-hand side of this relation would contain the contributions of the squared momentum uncertainties of all these particles. Now, if a particle goes "unmeasured" in the neutrino production process, its momentum uncertainty cannot affect the momentum uncertainty of the emitted neutrino state and therefore can be neglected. To put it differently, undetected particles are completely delocalized, and therefore, according to Heisenberg's uncertainty relation, have vanishing momentum uncertainty. This means that undetected particles can be represented by states of definite momenta, i.e. by plane waves. If, for example, the particle $P_f$ at production is undetected, one has to replace in eq. (29) the momentum distribution function according to $f_{P_f}(\mathbf{k}, \mathbf{K}) \to (2\pi)^3\, \delta^{(3)}(\mathbf{k} - \mathbf{K})/\sqrt{V}$, where $V$ is the normalization volume, and in eq. (31) the coordinate-space wave function $\Psi_{P_f}(x)$ by $e^{-i K x}/\sqrt{2 E_{P_f}(\mathbf{K}) V}$, with eqs. (32)-(34) modified accordingly. The mean momentum of the neutrino state depends, of course, on the momentum of the undetected particle; if the latter can take values in some range, the same will be true for the mean momentum of the emitted neutrino state. In this case the flux of emitted neutrinos will be characterized by a continuous spectrum. In most of our discussion in this subsection we concentrated on the wave packets of the produced neutrino states. Our consideration, however, applies practically without changes to the detected neutrino states; the corresponding formulas can be obtained from eqs. (29) and (31) with obvious modifications.

General properties of neutrino wave packets

We have already considered some of the general properties of the neutrino wave packets in the previous subsection. In particular, we have found that the momentum distribution functions $f_{jP}(\mathbf{p})$ of mass-eigenstate neutrinos $\nu_j$ depend on the index j only through the neutrino energy $E_j(\mathbf{p})$, and that the functions $f_{jP}(\mathbf{p})$ are sharply peaked at or near the momentum $\mathbf{P} = \mathbf{Q} - \mathbf{K}$, with the width of the peak dominated by the largest between the widths of the functions $f_{P_i}$ and $f_{P_f}$.
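The statement that $f_{jP}$ peaks at $\mathbf{P} = \mathbf{Q} - \mathbf{K}$ with a width dominated by the broader external distribution can be made concrete in a one-dimensional toy model in which only the momentum δ-function of eq. (29) is kept and the energy smearing is ignored; everything below is an assumed illustration, not the full 3D calculation.

```python
import numpy as np

# 1D toy of eq. (29) with Gaussian externals: momentum conservation fixes
# k = q - p, so f_jP(p) is a convolution of the two external distributions.
# It peaks at P = Q - K with probability width sqrt(s1^2 + s2^2),
# i.e. dominated by the broader packet. Numbers are illustrative.
Q, K = 10.0, 4.0          # mean momenta of P_i and P_f (arbitrary units)
s1, s2 = 0.20, 0.05       # momentum widths of P_i and P_f

q = np.linspace(Q - 2, Q + 2, 4001)
p = np.linspace((Q - K) - 2, (Q - K) + 2, 4001)

f_Pi = np.exp(-(q - Q) ** 2 / (4 * s1**2))
fj = np.array([np.trapz(f_Pi * np.exp(-((q - pp) - K) ** 2 / (4 * s2**2)), q) for pp in p])

prob = fj**2 / np.trapz(fj**2, p)
mean = np.trapz(p * prob, p)
width = np.sqrt(np.trapz((p - mean) ** 2 * prob, p))
print(mean, width, np.sqrt(s1**2 + s2**2))   # ~6.0, ~0.206, 0.206
```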
Further insight into the general properties of the neutrino wave packets can be gained by comparing expressions (25) with their plane-wave limits. If the external particles were described by plane waves, the quantities $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$ and $\Phi_{jD}(E_j(\mathbf{p}), \mathbf{p})$, which determine the neutrino wave packets, would be just equal to the matrix elements of the neutrino production or detection processes divided by the factor $\sqrt{2EV}$ for each external particle and multiplied, correspondingly, by $(2\pi)^4\, \delta^{(4)}(q - k \mp p)$. The latter factors represent energy-momentum conservation at the production and detection vertices. As follows from (25), in the case when the external particles are described by wave packets, the quantities $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$ and $\Phi_{jD}(E_j(\mathbf{p}), \mathbf{p})$ (and therefore the momentum distribution functions of the neutrino wave packets) correspond to "smeared δ-functions", representing approximate conservation of the mean energies and mean momenta of the participating particles. How exactly this smearing occurs will depend on the form of the wave packets of the external particles, and to move ahead one has to specify this form. A particularly useful and illuminating example of a specific form of the external wave packets, and the one most often used in the literature, is the case of Gaussian wave packets. We will employ this example to illustrate the general properties of the neutrino wave packets. Let us discuss first the conditions under which an arbitrary wave packet can be accurately approximated by a Gaussian one. For simplicity, we will consider here 1-dimensional wave packets. This is a good approximation in the case when the distance between the neutrino source and detector is very large compared to their sizes, so that the neutrino momentum is practically collinear with $\mathbf{L} = \mathbf{x}_D - \mathbf{x}_P$ (the generalization to the 3-dimensional case is straightforward). Consider a wave packet described by a momentum distribution function $f(p)$, sharply peaked at some value $P_0$ of the momentum. We can write this function in the exponential form as

$f(p) = e^{-g(p)}\,.$  (36)

The Gaussian approximation corresponds to the case when the integral over $p$ of the function $f(p)$ multiplied by any function of $p$ that is smooth in the vicinity of $P_0$ can be evaluated in the saddle point approximation. Indeed, in this approach one expands the function $g(p)$ around its minimum at $p = P_0$ and keeps the terms up to and including the quadratic one:

$g(p) \simeq g(P_0) + \frac{1}{2}\, g''(P_0)\, (p - P_0)^2\,.$  (37)

This precisely means the wave packet $f(p)$ is approximated by a Gaussian one. The validity condition for this approximation is given in terms of the higher derivatives of the function $g$ at $P_0$; for a packet symmetric about $P_0$ (vanishing odd derivatives) it amounts to

$|g''''(P_0)| \ll [g''(P_0)]^2\,.$  (38)

It can be satisfied for a wide range of functions $f(p)$. However, it is easy to construct wave packets for which it is not satisfied. Consider, e.g., the one-parameter class of wave packets (39), labelled by an integer $n$, with $C_n$ a constant which can be found from the normalization condition for $f(p)$. It is easy to check that condition (38) is equivalent to $1/4n \ll 1$. Thus, the momentum distribution functions (39) can be accurately approximated by Gaussian ones only when $n \gg 1$. This condition, in particular, is not satisfied for Lorentzian wave packets.

Matching the QFT and QM neutrino wave packets

Let us now discuss how one can match the QFT and QM wave packets of neutrinos. Using the case of Gaussian wave packets as an example, we shall find out how the effective parameters describing the QM wave packets can be expressed in terms of the more fundamental parameters entering into the QFT approach. We start by introducing some notation (we mostly follow [2, 12] here). The coordinate uncertainty $\sigma_{xP_i}$ characterizing the wave function of the initial state particle $P_i$ in the neutrino production process is related to its momentum uncertainty $\sigma_{pP_i}$ by

$\sigma_{xP_i}\, \sigma_{pP_i} = \frac{1}{2}\,,$  (40)

and similarly for all other external particles. One can also introduce the effective coordinate uncertainty of the production process $\sigma_{xP}$, which is connected to the effective momentum uncertainty of this process $\sigma_{pP}$ defined in eq. (35) by a relation similar to (40), or equivalently

$\frac{1}{\sigma_{xP}^2} = \frac{1}{\sigma_{xP_i}^2} + \frac{1}{\sigma_{xP_f}^2}\,.$  (41)
This formula has a simple physical interpretation: since the neutrino production process requires an overlap of the wave functions of all the participating particles, the effective uncertainty of the coordinate of the production point is determined by the particle with the smallest coordinate uncertainty. This is in accord with the already discussed fact that the effective momentum uncertainty at production $\sigma_{pP}$, which determines the momentum uncertainty of the produced neutrino, is dominated by the largest among the momentum uncertainties of all the external particles involved in neutrino production. Next, we define the effective velocity of the neutrino source $\mathbf{v}_P$ and its effective squared velocity $\Sigma_P$ as

$\mathbf{v}_P = \frac{\sigma_{pP_i}^2\, \mathbf{v}_{P_i} + \sigma_{pP_f}^2\, \mathbf{v}_{P_f}}{\sigma_{pP_i}^2 + \sigma_{pP_f}^2}\,, \qquad \Sigma_P = \frac{\sigma_{pP_i}^2\, \mathbf{v}_{P_i}^2 + \sigma_{pP_f}^2\, \mathbf{v}_{P_f}^2}{\sigma_{pP_i}^2 + \sigma_{pP_f}^2}\,.$  (42)

If $\mathbf{v}_{P_i} \sim \mathbf{v}_{P_f}$, they are approximately equal to, respectively, the velocity and squared velocity of the particle with the smallest coordinate uncertainty. We will also need the quantity $\sigma_{eP}$ defined through

$\sigma_{eP}^2 \equiv \lambda_P\, \sigma_{pP}^2\,, \qquad \lambda_P \equiv \Sigma_P - \mathbf{v}_P^2\,.$  (43)

This quantity can be interpreted as the effective energy uncertainty at neutrino production [2]. It can also be shown that $0 \le \lambda_P \le 1$, i.e. $0 \le \sigma_{eP} \le \sigma_{pP}$. We can now discuss the results obtained in the QFT framework in the case when the external particles are represented by Gaussian wave packets. The function $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$, which coincides with the momentum distribution function of the emitted mass-eigenstate neutrino $\nu_j$, can be written as [2, 12]

$\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p}) = N_P\; e^{-g_P(E_j(\mathbf{p}),\, \mathbf{p})}\,,$  (44)

where $N_P$ is a normalization factor (given explicitly in eq. (45)) and

$g_P(E, \mathbf{p}) = \frac{(\mathbf{p} - \mathbf{P})^2}{4 \sigma_{pP}^2} + \frac{\left[E - E_P - \mathbf{v}_P (\mathbf{p} - \mathbf{P})\right]^2}{4 \sigma_{eP}^2}\,.$  (46)

Here

$E_P \equiv E_{P_i}(\mathbf{Q}) - E_{P_f}(\mathbf{K})\,, \qquad \mathbf{P} \equiv \mathbf{Q} - \mathbf{K}\,,$  (47)

and $E_j(\mathbf{p})$ was defined in eq. (26). Note that in the limit when the external particles are represented by plane waves ($\sigma_{pP_i} \to 0$, $\sigma_{pP_f} \to 0$), the first equation in (25) yields

$\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p}) \propto (2\pi)^4\; \delta^{(3)}(\mathbf{p} - \mathbf{P})\; \delta(E_j(\mathbf{p}) - E_P)\,,$  (48)

as discussed in the previous subsection. From eq. (35) and the fact that $\sigma_{eP} \le \sigma_{pP}$ it follows that in this limit the momentum uncertainty $\sigma_{pP}$ and energy uncertainty $\sigma_{eP}$ of the produced neutrino state vanish as well; thus, if the external particles are described by plane waves, then so is the produced neutrino. As follows from eq. (48), the plane wave limit corresponds to exact energy and momentum conservation at production. This can also be seen from eqs. (44) and (46): indeed, in the limit $\sigma_{eP}, \sigma_{pP} \to 0$ the right hand side of (44) is proportional to the product of the energy and momentum conserving δ-functions. For finite values of $\sigma_{pP}$ and $\sigma_{eP}$, eqs. (44) and (46) yield Gaussian-type "smeared delta functions", i.e. describe approximate conservation laws for the mean momenta and mean energies of the wave packets, for which are responsible, respectively, the first term and the second term in (46). Let us now try to cast expressions (44) and (46) into the form usually adopted in the QM wave packet approach to neutrino oscillations. We want to reduce $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$ to an expression similar to that in eq. (10). To this end, we expand the neutrino energy around the point $\mathbf{p} = \mathbf{P}$ and keep terms up to the second order:

$E_j(\mathbf{p}) \simeq E_j + \mathbf{v}_j (\mathbf{p} - \mathbf{P}) + \frac{1}{2} \sum_{k,l} \left.\frac{\partial^2 E_j(\mathbf{p})}{\partial p^k\, \partial p^l}\right|_{\mathbf{p} = \mathbf{P}} (p - P)^k\, (p - P)^l\,.$  (49)

Here

$E_j \equiv E_j(\mathbf{P})\,, \qquad \mathbf{v}_j \equiv \left.\frac{\partial E_j(\mathbf{p})}{\partial \mathbf{p}}\right|_{\mathbf{p} = \mathbf{P}} = \frac{\mathbf{P}}{E_j}\,.$  (50)

The lower index j corresponds to the neutrino mass eigenstates, while the upper indices k and l number the components of the 3-vectors $\mathbf{p}$, $\mathbf{P}$ and $\mathbf{v}_j$ (i.e. $v_j^k$ is the kth component of the group velocity of the jth neutrino mass eigenstate). The function $g_P(E_j(\mathbf{p}), \mathbf{p})$ defined in (46) can then be written in the quadratic form (51), with the coefficients defined in eqs. (52) and (53). Let us now try to represent $g_P(E_j(\mathbf{p}), \mathbf{p})$ in the form

$g_P(E_j(\mathbf{p}), \mathbf{p}) = (\mathbf{p} - \mathbf{P} - \boldsymbol\delta)^T\; \alpha\; (\mathbf{p} - \mathbf{P} - \boldsymbol\delta) + \tilde\gamma_j\,,$  (54)

where the parameters $\boldsymbol\delta$ and $\tilde\gamma_j$ are to be determined by comparing eqs. (54) and (51). They describe, respectively, a shift of the neutrino mean momentum compared to the naive expectation $\mathbf{p} = \mathbf{P}$ and a modification of the overall normalization of the neutrino wave function.
The effective momentum uncertainty characterizing the QM neutrino wave packet can be obtained by diagonalizing the matrix $\alpha$. The squared uncertainties of the different components of the neutrino momentum are given, up to the factor 1/4, by the reciprocals of the eigenvalues of the matrix $\alpha$. In general, these eigenvalues are different, i.e. the neutrino momentum uncertainty is anisotropic. This actually means that expression (10) for a 3-dimensional Gaussian wave packet is oversimplified: its exponent has to be replaced by the quadratic form $-\frac{1}{4} (\mathbf{p} - \mathbf{P})^T \sigma^{-2} (\mathbf{p} - \mathbf{P})$ with a matrix-valued $\sigma^2$. Comparing eqs. (54) and (51), one finds that the shift $\boldsymbol\delta$ of the neutrino mean momentum satisfies eq. (56), whereas the parameter $\tilde\gamma_j$ is given by eq. (57). The full solutions of eqs. (56) and (57) are given in Appendix A; here we present the results obtained in the leading order in the small parameter $(E_j - E_P)/E_j$. The diagonalization of the matrix $\alpha^{kl}$ gives in this limit

$(\sigma^x_{pP\,\rm eff})^2 = (\sigma^y_{pP\,\rm eff})^2 = \sigma_{pP}^2\,, \qquad (\sigma^z_{pP\,\rm eff})^2 = \left[\frac{1}{\sigma_{pP}^2} + \frac{(\mathbf{v}_j - \mathbf{v}_P)^2}{\sigma_{eP}^2}\right]^{-1}\,,$  (58)

where the z axis was chosen in the direction of $\mathbf{v}_j - \mathbf{v}_P$. For $\delta^k$ and $\tilde\gamma_j$ one finds the expressions given in eq. (59). From eq. (58) it follows that the neutrino momentum uncertainty in the direction of $\mathbf{v}_j - \mathbf{v}_P$ is smaller than those in the orthogonal directions. Note that for non-relativistic sources ($v_P \ll 1$) the direction of $\mathbf{v}_j - \mathbf{v}_P$ essentially coincides with that of the mean neutrino momentum, and eq. (58) means that the longitudinal uncertainty of the neutrino momentum is smaller than the transversal ones. To understand this property qualitatively, one can imagine the neutrino production region (i.e. the region where the wave packets of the particles involved in the production process have significant overlap) to be approximately spherical with radius of order $\sigma_{xP}$. Then, the transverse extent of the neutrino wave packet will also be $O(\sigma_{xP})$. On the other hand, its longitudinal spread is determined by the duration of the production process (i.e. the time interval during which the wave packets have significant overlap), which is given by $\sim \sigma_{xP}/\delta v$, where $\delta v$ is the relative velocity of the two external particles. Summarizing the results of the current subsection, we can see that the QM neutrino wave packets can match those obtained in the QFT framework if one applies the following changes to the QM results:

• The momentum uncertainties of the neutrino mass eigenstates are replaced by the effective ones, defined in eq. (58). They are in general different in different directions.
• The mean momentum $\mathbf{P}$ is shifted according to $\mathbf{P} \to \mathbf{P}_{\rm eff} = \mathbf{P} + \boldsymbol\delta$, where the components of $\boldsymbol\delta$ are given in eq. (59).
• The wave packet of each neutrino mass eigenstate gets an extra factor $N_j = \exp[-\tilde\gamma_j]$, where $\tilde\gamma_j$ is given in (59).

From the last point one can see that if the differences of the energies of different neutrino mass eigenstates are small compared to the energy uncertainty $\sigma_{eP}$, the additional factors $N_j$ are essentially the same for the wave functions of all neutrino mass eigenstates and can be included in their common normalization factor. If, on the contrary, this condition is violated, the coherence of the emission of different neutrino mass eigenstates will be lost [15].
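A short numerical sketch of the anisotropy described by eq. (58): under the simplifying assumption that $E_j(\mathbf{p})$ is linearized, the quadratic form (46) implies $\alpha = \mathbf{1}/(4\sigma_{pP}^2) + (\Delta\mathbf{v}\,\Delta\mathbf{v}^T)/(4\sigma_{eP}^2)$ with $\Delta\mathbf{v} = \mathbf{v}_j - \mathbf{v}_P$; the sketch diagonalizes this matrix and compares the longitudinal width with the closed form. All numbers are illustrative.

```python
import numpy as np

# Build the quadratic-form matrix alpha implied by eq. (46) with E_j(p)
# linearized (our simplifying assumption), then read off the effective widths
# via sigma_eff^2 = 1/(4 * eigenvalue). Parameters are illustrative.
sig_p, sig_e = 1.0e-3, 2.0e-4            # momentum / energy uncertainties (eV)
dv = np.array([0.0, 0.0, 0.98])          # v_j - v_P, source nearly at rest

alpha = np.eye(3) / (4 * sig_p**2) + np.outer(dv, dv) / (4 * sig_e**2)
sigma_eff = 1.0 / (2.0 * np.sqrt(np.linalg.eigvalsh(alpha)))
print(sorted(sigma_eff))   # two transverse widths = sig_p, one smaller longitudinal width

# compare with the closed form of eq. (58)
print((1 / sig_p**2 + dv @ dv / sig_e**2) ** -0.5)
```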
As follows from eq. (58), the effective momentum uncertainties that should be used to describe the wave packets of emitted neutrinos in the QM formalism are not just equal to the true momentum uncertainty at production $\sigma_{pP}$, as naively expected, but also depend on the energy uncertainty $\sigma_{eP}$, which is an independent parameter, as well as on the neutrino velocity $\mathbf{v}_j$ and the effective velocity of the neutrino production region $\mathbf{v}_P$. Except for $\mathbf{v}_j \simeq \mathbf{v}_P$, the momentum uncertainty along the direction of $\mathbf{v}_j - \mathbf{v}_P$ is dominated by the smaller between $\sigma_{pP}$ and $\sigma_{eP}$, which turns out to be $\sigma_{eP}$. This is related to the fact that neutrinos propagate over macroscopic distances and therefore are on their mass shell, which enforces the relation $E_j(\mathbf{p})\, \sigma_{eP} \simeq |\mathbf{p}|\, \sigma_{pP}^{\rm eff}$ (see Sec. 5.2 of ref. [15]). In the limit $\sigma_{eP}, \mathbf{v}_P \to 0$, which corresponds to the stationary neutrino source approximation [2], the effective longitudinal momentum uncertainty $\sigma^z_{pP\,\rm eff}$ vanishes, even though the true momentum uncertainty $\sigma_{pP}$ is nonzero. This implies an infinite coherence length, in accordance with the well known result for the stationary case [6, 20]. It also confirms the expectation that wave packets of Mössbauer neutrinos, which are emitted in a quasi-stationary process, have a very large spatial extent [26, 32]. We have discussed here the matching of the QFT and QM wave packets of the produced neutrino states; for the wave packets of the detected states the consideration is completely analogous.

The case of an unstable neutrino source

Let us now consider the situation when neutrinos are produced in decays of unstable particles. Once again, we will assume that the external particles are described by Gaussian wave packets. Compared to the standard formalism that leads to eqs. (44) and (46), one now has to introduce the following modifications:

1. The energy of the parent particle $P_i$ acquires an imaginary part, i.e. one has to replace $E_{P_i}(\mathbf{q}) \to E_{P_i}(\mathbf{q}) - i\Gamma/2$, where $\Gamma = [m_{P_i}/E_{P_i}(\mathbf{Q})]\, \Gamma_0$, $\Gamma_0$ being the rest-frame decay width of $P_i$. This amounts to replacing the energy difference $E_P$ defined in (47) according to $E_P \to E_P - i\Gamma/2$.

2. The integration over time in the formula for $\Phi_{jP}$ in eq. (25) now has to be performed from 0 to $\infty$ rather than from $-\infty$ to $\infty$ (assuming that $t = 0$ is the production time of the parent particle $P_i$).

As a result of these modifications, eqs. (44), (46) get replaced by eqs. (60) and (61), where the factor $I_1$ is defined by the time integral (62), with the parameters $a$, $b$ and $\tilde b$ defined in eq. (63) and $t_P \ge 0$ being the time of maximum wave packet overlap in the production region (see Sec. 3). The p-dependent Gaussian factor in eq. (61) describes, as before, an approximate conservation of mean momenta at neutrino production, whereas the factor $I_1$ is responsible for an approximate conservation of mean energies. Unlike in the case of stable particles considered in Sec. 4.3, the latter does not have a Gaussian form. Note the different time dependence of the different terms in the exponent of the integrand in (62): the terms proportional to $a$ and $b$ are multiplied by $t - t_P$, whereas the term proportional to $\Gamma$ is multiplied by $t$. This is because the former two terms reflect the fact that the peak of the wave packet of $P_i$ is located at the neutrino production point $\mathbf{x} = \mathbf{x}_P$ at the time $t = t_P$, whereas the wave function of this particle exhibits an overall exponential suppression starting from its creation time $t = 0$. Calculating the integral in (62), we find the closed-form expression (64) in terms of the error function erf(x). The limiting cases of interest can now be obtained from the relevant expansions of this function, but it is actually easier to study them starting directly with the expression for $I_1$ in eq. (62). Indeed, the integrand of $I_1$ contains exponential and oscillating factors, and therefore the integration domain that gives a significant contribution to $I_1$ is determined by condition (65), provided that the right hand side of this condition does not exceed $t_P$; if it does, for negative $t$ the domain is limited by $|t| > t_P$. Let us note that, while the parameter $b$ may vanish, $\tilde b$ cannot, as it has a non-zero imaginary part $\Gamma/2$.
Consider first the limit $\sqrt a \gg |b|$, $\sqrt a\, t_P \gg 1$, which implies

$\sigma_{eP} \gg \Gamma/2\,, \qquad \sigma_{eP}\, t_P \gg 1\,.$  (66)

In this case one can set $\tilde b \simeq b$ and also extend the lower integration limit in eq. (62) to $-\infty$, which gives $I_1 \simeq \sqrt{\pi/a}\; \exp[-b^2/4a]$. Substituting this into eq. (61) yields, up to the extra factor $e^{-(\Gamma/2) t_P}$, the old result of eqs. (44) and (46). If instead of the second condition in (66) one considers the opposite limit $\sigma_{eP}\, t_P \ll 1$, the lower integration limit in the last integral in (62) can be set equal to zero. Since the error function goes to zero for small arguments, it follows that the result in this case is just 1/2 of that in the case $\sigma_{eP}\, t_P \gg 1$. Thus, we conclude that in the limit $\sigma_{eP} \gg \Gamma/2$ the approximate conservation of mean energies is given by the same (in this case Gaussian) law as in the case of the stable neutrino source, with the same energy uncertainty $\sigma_{eP}$. This is an expected result. Consider now the limit $\sqrt a \ll |b|$, $\sqrt a\, t_P \ll 1$, i.e.

$\sigma_{eP} \ll \Gamma/2\,, \qquad \sigma_{eP}\, t_P \ll 1\,.$  (67)

In this case one can neglect the term $-a t^2$ in the exponent in the last integral in (62), which yields

$I_1 \simeq \int_0^\infty dt\; e^{-\tilde b\, t} = \frac{1}{\tilde b}\,, \qquad |I_1|^2 \propto \frac{1}{(E_j(\mathbf{p}) - E_P)^2 + \Gamma^2/4}\,.$  (68)

This is the usual Lorentzian energy distribution factor corresponding to the decay of an unstable parent state. Thus, we conclude that in the case when $\Gamma/2 \ll \sigma_{eP}$ the factor in $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$ that is responsible for the approximate conservation of mean energies in the production process is essentially the same as in the case of a stable neutrino source (Gaussian in the case we considered), whereas in the opposite limit, $\Gamma/2 \gg \sigma_{eP}$, it is given by the Lorentzian energy distribution corresponding to the natural linewidth of the source $\Gamma/2$. In the intermediate case $\Gamma \sim \sigma_{eP}$, the energy-dependent factor in $\Phi_{jP}(E_j(\mathbf{p}), \mathbf{p})$ is neither Gaussian nor Lorentzian, with the effective energy uncertainty being of the same order as $\sigma_{eP}$ and $\Gamma$.
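The Gaussian-to-Lorentzian crossover of $I_1$ can be checked by direct quadrature. The sketch below assumes the simplest identifications $a = \sigma_{eP}^2$ and $b = E - E_P$ (the precise definitions sit in eqs. (62)-(63), so these are assumptions of this toy), and all numbers are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Numerical look at I_1 of eq. (62), modelled as
#   I_1(E) = int_0^inf dt exp[-a (t - tP)^2 - i b (t - tP) - Gamma t],
# with a = sigma_e^2 and b = E - E_P assumed. For Gamma << sigma_e the profile
# |I_1(E)|^2 is Gaussian; for Gamma >> sigma_e it approaches a Lorentzian.
def I1(E, sigma_e, Gamma, tP=50.0):
    a, b = sigma_e**2, E
    env = lambda t: np.exp(-a * (t - tP) ** 2 - Gamma * t)
    re = quad(lambda t: env(t) * np.cos(b * (t - tP)), 0.0, 200.0, limit=400)[0]
    im = quad(lambda t: env(t) * np.sin(b * (t - tP)), 0.0, 200.0, limit=400)[0]
    return re - 1j * im

for sigma_e, Gamma in ((0.5, 0.01), (0.01, 0.5)):
    Es = np.linspace(-2.0, 2.0, 9)
    prof = np.array([abs(I1(E, sigma_e, Gamma)) ** 2 for E in Es])
    print(sigma_e, Gamma, prof / prof.max())
```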
Thus, the meaning of the unitarity condition (69) is that, once a neutrino is produced, the probability that it will be found at a given distance L from the source at some time between zero and infinity is equal to one, provided that all flavour states are accounted for. Similarly, if one introduces P_αβ(T) ≡ ∫d³L P_αβ(T, L), it will also satisfy a unitarity condition analogous to (69). In other words, once the neutrino is created, the probability to find it (in any flavour state) at a fixed time T after its production somewhere in space is equal to one. The unitarity conditions can only be satisfied if the oscillation probabilities are properly normalized. It is important to note, however, that in a consistent formalism unitarity must be satisfied automatically rather than being imposed by hand.

What about the un-integrated probability P_αβ(T, L) - should it satisfy a unitarity constraint similar to (69)? Obviously, for arbitrary L and T the answer to this question is negative. Indeed, unless |L − vT| ≲ σ_x, where v is the average group velocity of the neutrino wave packet and σ_x is its spatial length, the probabilities P_αβ(T, L) are vanishingly small for all α and β. Thus, unitarity cannot be used to normalize the un-integrated probability. The normalization of P_αβ(T, L) can, however, be fixed differently: in the limit T → 0, L → 0, i.e. when the produced neutrino has not yet had time to evolve, the probability must satisfy the initial condition P_αβ(0, 0) = δ_αβ. Note that the above limit should be understood in the sense that L and T are small compared to the oscillation length but large compared to, respectively, the sizes of the spatial localization regions and the time scales of the neutrino emission and absorption processes. From eqs. (14) and (15) it then immediately follows that the amplitudes A_j(T, L) corresponding to neutrino mass eigenstates must satisfy A_j(0, 0) = e^{iφ}, where the real phase φ is the same for all j. By a rephasing of the momentum distribution functions of either the emitted or the detected neutrinos, this common phase can be eliminated, and one finally gets A_j(0, 0) = 1.

Now, let us check if this condition is fulfilled in the particular case of Gaussian wave packets. For the momentum distribution functions of the produced and detected neutrino states normalized according to eq. (4), i.e. having the form (10), a straightforward calculation gives

\[ A_j(0,0) \;=\; \int\!\frac{d^3p}{(2\pi)^3}\, f_{jP}(\mathbf p)\, f_{jD}^*(\mathbf p) \;=\; \left[\frac{2\,\sigma_{pP}\,\sigma_{pD}}{\sigma_{pP}^2+\sigma_{pD}^2}\right]^{3/2} \exp\left[-\,\frac{(\mathbf P-\mathbf P')^2}{4(\sigma_{pP}^2+\sigma_{pD}^2)}\right]. \qquad (70) \]

For σ_pP ≠ σ_pD and P ≠ P′ both factors on the right-hand side of the last equality are smaller than one, and therefore the condition A_j(0, 0) = 1 is clearly violated. Moreover, the dependence of the result on the parameters of the produced and detected neutrino states does not factorize; this means that no independent normalization of these states can lead to the correct normalization of the amplitude, as pointed out above. It is actually quite easy to understand why this happens. The integral in eq. (70) is nothing but the overlap integral of the wave functions of the produced and detected neutrinos. If these wave functions are normalized to unity and the momentum (or energy) spectra of the emitted and detected states do not coincide, this overlap integral is always less than one. In reality, the spectra of the emitted and absorbed neutrino states are determined by the physical nature and experimental conditions of the neutrino production and detection processes, which are always different. This, in particular, means that a fraction of the produced neutrinos may simply not be detectable.
For instance, if the threshold in the detection process (either the physical threshold or the one imposed by energy cuts on the detected events) is higher than the maximum energy of the emitted neutrinos, no detection will be possible at all. Mathematically, the fact that the overlap integral (70) is always less than one is a consequence of the Schwarz inequality |(f, g)|² ≤ (f, f)(g, g), where the equality is only reached if f = const · g.

Even if one adopts the unrealistic assumption f_jP(p) = f_jD(p) (which for Gaussian wave packets would mean P = P′ and σ_pP = σ_pD), this will not solve all the normalization problems of the QM wave packet approach. The condition A_j(0, 0) = 1 will be satisfied in this case; however, the physically observable oscillation probability P_αβ(L) defined in eq. (16) will still not be properly normalized, and the unitarity condition (69) will not be satisfied. Indeed, from eq. (14) it follows that unitarity requires that [15]

\[ \int dT\, |A_j(T, L)|^2 = 1 \qquad (71) \]

for all j. Obviously, the fulfilment of the condition A_j(0, 0) = 1 does not enforce (71); therefore in the QM formalism condition (71) has to be imposed by hand. It is not difficult to understand why yet another normalization problem arises: the integration over the time T should actually be considered as time averaging in the QM approach, and the integral on the right-hand side of eq. (16) should be normalized by dividing it by the characteristic time ∆T, which depends on the time scales of both the neutrino production and detection processes. This follows from the fact that the amplitude A_j(T, L) is substantially different from zero only when |L − vT| ≲ σ_x, where the effective length of the wave packet σ_x is determined by both the neutrino production and detection processes, which gives ∆T ∼ σ_x/v. It is difficult to calculate the quantity ∆T precisely, and the simplest way out is just to impose the unitarity condition by hand, which yields the correct normalization of P_αβ(L). [Footnote 13: Note that once the normalization condition (71) is enforced, one can demonstrate that the resulting oscillation probability P_αβ(L) is Lorentz invariant [15]. This is not trivial in the QM approach because the QM formalism is not manifestly Lorentz covariant.]

Thus, in the QM approach there are two sources of normalization problems: the lack of overlap of the wave functions of the produced and detected neutrino states, and the necessity of integration over the neutrino propagation time T. We will now show how both of these problems are naturally solved in the QFT-based formalism.

Generalities

Let us start by recalling the operational definition of the neutrino oscillation probability. In a detection process that is sensitive to neutrinos of flavour β, the detection rate is

\[ R_\beta^{det} = \int dE\; \sigma_\beta(E)\, j_\beta(E), \qquad (72) \]

where σ_β(E) is the detection cross section and j_β(E) is the energy density (spectrum) of the ν_β flux at the detector. [Footnote 14: We omit the obvious factors of detection efficiency and energy resolution that are not relevant to our argument.] If a source at a distance L from the detector emits neutrinos of flavour α with the energy spectrum dΓ_α^prod(E)/dE, the energy density of the ν_β flux at the detector is

\[ j_\beta(E) = \frac{1}{4\pi L^2}\, \frac{d\Gamma_\alpha^{prod}(E)}{dE}\; P_{\alpha\beta}(L, E), \qquad (73) \]

where P_αβ(L, E) is the ν_α → ν_β oscillation probability for neutrinos of energy E, and we once again assumed for simplicity the neutrino emission to be spherically symmetric. Substituting this into eq. (72) yields the rate of the overall neutrino production-propagation-detection process:

\[ R_{tot} = \frac{1}{4\pi L^2} \int dE\; \frac{d\Gamma_\alpha^{prod}(E)}{dE}\; P_{\alpha\beta}(L, E)\; \sigma_\beta(E). \qquad (74) \]

The oscillation probability can now be extracted from the integrand of eq.
(74) by dividing it by the neutrino emission spectrum, the detection cross section and the geometrical factor 1/4πL²:

\[ P_{\alpha\beta}(L, E) \;=\; \frac{4\pi L^2}{[d\Gamma_\alpha^{prod}(E)/dE]\;\sigma_\beta(E)}\;\frac{dR_{tot}}{dE}. \qquad (75) \]

Note that an important ingredient of this argument is the assumption that at a fixed neutrino energy E the overall rate of the process factorizes into the production rate, the propagation (oscillation) probability and the detection cross section. Should such a factorization turn out to be impossible, the very notion of the oscillation probability would lose its sense, and one would have to deal instead with the overall rate of neutrino production, propagation and detection.

Now let us come back to the QFT-based treatment of neutrino oscillations and try to cast the rate of the overall process in the form of eq. (74). To this end, we return to eq. (24) for the amplitude of the process but, unlike in the previous sections, perform in it the integration over the 3-momentum before integrating over the energy variable p_0 ≡ E. In doing so, we will make use of a result obtained by Grimus and Stockinger [20], which states that, for a large baseline L, positive A and a sufficiently smooth function ψ(p),

\[ \int d^3p\; \frac{\psi(\mathbf p)\, e^{i\mathbf p\cdot\mathbf L}}{A - \mathbf p^2 + i\epsilon} \;\longrightarrow\; -\,\frac{2\pi^2}{L}\; \psi\big(\sqrt{A}\,\mathbf l\big)\, e^{i\sqrt{A}\,L}, \qquad \mathbf l \equiv \mathbf L/L, \qquad (76) \]

whereas for A < 0 the integral behaves as L^{−2}. This result was obtained in [20] in the limit L → ∞, but a careful examination of the derivation shows that its applicability condition is actually L ≫ p_j/σ_p², which we will assume to be satisfied. Applying (76) to eq. (24) yields the amplitude (77), in which a single integration over the energy E remains, the baseline enters through the factors e^{ip_j L}/L, and the integrand contains the functions Φ_jP(E, p_j l) and Φ_jD(E, p_j l), where

\[ p_j = p_j(E) \equiv \sqrt{E^2 - m_j^2} \qquad (78) \]

is the modulus of the momentum of the mass eigenstate ν_j of energy E. Next, we note that, just as the quantities Φ_jP(E_j, p) depend on the index j only through the neutrino energy E_j, the functions Φ_jP(E, p_j l) depend on the index j only through the neutrino momentum p_j. Therefore, to simplify the notation, we will denote Φ_jP(E, p_j l) ≡ Φ_P(E, p_j l) and Φ_jD(E, p_j l) ≡ Φ_D(E, p_j l) (eq. (79)). The overall probability of the neutrino production-propagation-detection process, P_αβ^tot(T, L), is the squared modulus of the amplitude (77) (eq. (80)). We will actually need the integral of this probability over the time T (the reasons for the integration over T will be discussed in Secs. 5.2.2 and 7):

\[ \tilde P_{\alpha\beta}^{tot}(L) = \int dT\; P_{\alpha\beta}^{tot}(T, L). \qquad (81) \]

We use the tilde here to stress that P̃_αβ^tot(L) is not a probability but rather a time-integrated probability, which has the dimension of time. Performing the integration over T removes one of the two energy integrals coming from the squared amplitude, leaving in (81) a single integral over E. Note that (81) thus contains an incoherent sum (integral) over contributions from different energy eigenstates. This means that only the amplitudes corresponding to the same neutrino energy interfere, as in the stationary case. This is related to the integration over the time T and is a reflection of the fact that the time-integrated non-stationary probability is equivalent to the energy-integrated stationary probability [2,15]. It should also be stressed that, although the integration over E in eq. (81) is formally performed over the interval (−∞, ∞) (recall that E coincides with the variable p_0 of eq. (24)), the contribution of the unphysical region of negative energies is actually negligible and can be discarded. This is a consequence of the fact that Φ_P,D are sharply peaked at positive values of energy, E_P and E_D, with the peak widths satisfying σ_eP, σ_eD ≪ E_P, E_D. To simplify the following consideration, we will once again assume that the neutrino production and detection processes are isotropic.
In our approach this means that we have to average the quantities Φ_P(E, p_j l) and Φ_D(E, p_j l) over the directions of the incoming particles P_i and D_i, which amounts to averaging over the directions of l. We can therefore define

\[ \Phi_{P,D}(E, p_j) \;\equiv\; \int \frac{d\Omega_l}{4\pi}\; \Phi_{P,D}(E, p_j\,\mathbf l) \]

and drop l from the arguments of Φ_P,D in eq. (81) and in all the subsequent expressions. Relaxing the isotropy assumption would complicate the analysis but would not change the final result for the probability of neutrino oscillations.

The next step is to calculate the neutrino production and detection probabilities. As can be seen from eq. (24), neutrinos in the intermediate state are considered in our framework as plane waves weighted with the factors Φ_P,D. We therefore describe the external particles by wave packets and the neutrinos by plane waves. Application of the standard rules of QFT then yields the production probability P_α^prod of eq. (82). The spectral density of the emitted neutrino flux, dP_α^prod(E)/dE, is obtained by removing the integration over energy on the right-hand side of the last equality in (82). For the detection probability we obtain eq. (83), where the normalization volume V comes from the plane-wave description of the incoming neutrino. Note that the expression for the production probability P_α^prod in eq. (82) does not contain the factor 1/V even though it is also calculated for plane-wave neutrinos. This is because the neutrinos are in the final state at production, and the calculation of their phase space volume involves integration over V d³p.

The case of continuous fluxes of incoming particles

A direct inspection of the expressions obtained above for the probabilities of neutrino production and detection, as well as for the probability of the overall production-propagation-detection process, shows that they are independent of the total running time of the experiment, t. This is because they were calculated for individual processes with single external wave packets of each type, and the microscopic production and detection time intervals were assumed to be centered at fixed instants of time t_P and t_D, respectively. On the other hand, in practice one is usually interested in the total probabilities for the processes to occur within a macroscopic time interval of length t, or in interaction rates. Normally, the probabilities are proportional to t, while the rates are t-independent. In the wave packet approach, this can be achieved if we take into account that in realistic situations one often has to deal with continuous fluxes of incoming particles. (We will comment on the opposite case of stationary initial states in the next subsection.)

Let us start by calculating the production rate and the detection cross section in the case of steady fluxes of the incoming particles. Consider some interval of time T_0 that is large compared to the time scales of the neutrino production and detection processes. Let the number of the projectile particles P_i entering the production region (e.g. a spherical region of radius σ_xP around the point x = x_P) during this interval be N_P. Then the number of particles P_i entering the production region during the interval dt_P is dN_{P_i} = N_P (dt_P/T_0). If the production probability in the case of the individual process with single external wave packets is P_α^prod, the probability of neutrino emission during the finite interval of time t is

\[ P_\alpha^{prod}(t) \;=\; \int_0^{t} \frac{N_P}{T_0}\, P_\alpha^{prod}\; dt_P \;=\; \frac{N_P}{T_0}\, P_\alpha^{prod}\; t. \qquad (84) \]

The integration is trivial because P_α^prod is actually independent of t_P, due to invariance with respect to time translations.
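A minimal numeric sketch of this counting argument (all numbers are made-up assumptions), comparing a direct Monte Carlo count against eq. (84):

    import numpy as np

    # Counting argument behind eq. (84): N_P projectiles arrive uniformly
    # over a long interval T0; each contributes the same single-event
    # emission probability p_single. The accumulated emission probability
    # grows linearly with t, so the rate dP/dt is t-independent.
    rng = np.random.default_rng(0)

    T0 = 1000.0          # total interval, arbitrary units (assumption)
    N_P = 50_000         # projectiles entering during T0 (assumption)
    p_single = 2.0e-4    # per-event emission probability (assumption)

    t_P = rng.uniform(0.0, T0, N_P)   # steady flux: uniform arrival times

    for t in (100.0, 200.0, 400.0):
        mc = p_single * np.count_nonzero(t_P < t)   # Monte Carlo count
        analytic = (N_P / T0) * p_single * t        # eq. (84)
        print(t, round(mc, 3), analytic)

    # production rate, cf. eq. (85): dP/dt = (N_P / T0) * p_single
    print("rate:", (N_P / T0) * p_single)

The Monte Carlo and analytic values agree up to Poisson fluctuations, and the rate is indeed constant in t.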
Note that the probability P_α^prod(t) is proportional to t. We can therefore define the production rate in the usual way:

\[ R_\alpha^{prod} \;=\; \frac{dP_\alpha^{prod}(t)}{dt} \;=\; \frac{N_P}{T_0}\; P_\alpha^{prod}. \qquad (85) \]

Let us now consider the detection cross section. Normally, a cross section is defined for a single, fixed target particle. However, the initial-state particles D_i in our treatment of the detection process are described by moving wave packets, which enter the detection region centered at a fixed point x = x_D. [Footnote 16: If the flux of the incoming particles is not steady, the number of particles entering the production region over the time t is given by ∫_0^t ρ_P(t_P) dt_P, where ρ_P(t_P) is the distribution of these particles with respect to t_P, normalized according to ∫_0^{T_0} ρ_P(t_P) dt_P = N_P. The right-hand side of the first equality in eq. (84) then has to be replaced by ∫_0^t ρ_P(t_P) P_α^prod dt_P. For a steady flux the distribution is uniform, i.e. ρ_P(t_P) = N_P/T_0 = const.] Therefore, our treatment of neutrino detection should be similar to that of the production process. For the individual detection process with single external wave packets of each type, the detection probability is given by eq. (83). Let us now assume that the number of the particles D_i entering the detection region during the interval of time T_0 is N_D. Then we obtain for the time-dependent detection probability in the case of steady incoming fluxes of D_i and neutrinos the expression (86), and for the detection rate the expression (87). To obtain the detection cross section we have to divide this rate (more precisely, the summand of the sum over k that enters into (87)) by the flux of the incoming neutrinos, j_νk = n_νk v_νk, where n_νk is the number density of the detected ν_k and v_νk is their velocity. With our normalization (one particle in the normalization volume) we have n_νk = 1/V, and from eqs. (83) and (87) we finally obtain the cross section of eq. (88).

Now we proceed to the calculation of the rate of the overall production-propagation-detection process. Since we want to calculate this quantity not for a single process with individual wave packets of the external particles but for steady fluxes of incoming P_i and D_i, we have to integrate the T-dependent probability of the single process, P_αβ^tot(T, L) given by eq. (80), over both t_P and t_D. Proceeding in the same way as before, we find for the probability of the process for steady fluxes of the incoming particles the double integral of eq. (89). Introducing the new integration variables T̄ ≡ (t_P + t_D)/2 and T = t_D − t_P, we obtain eq. (90), in which the result splits into three integrals, I_1, I_2 and I_3. It can readily be shown that in the limit of large t (much larger than the time scales of the neutrino production and detection processes) the integral I_1 coincides with the quantity P̃_αβ^tot(L) defined in eq. (81), whereas I_2 and I_3 give negligible contributions (see Appendix B). Therefore for large t eq. (90) can be rewritten as eq. (91), and the rate of the overall process is then given by eq. (92). [Footnote 17: For the reader willing to check the dimensions of our expressions, we note that from the definitions of Φ_P,D it follows that they have dimension m^{−3/2}. It is then easy to see that the amplitude (77) and the probabilities (81)-(84), (86), (89) and (91) are dimensionless, the cross section (88) has the dimension of squared length (or m^{−2}), and the rates (85), (87) and (92) have the dimension of inverse time (or m), as they should.]

The case of stationary initial states

If the initial-state particles P_i and/or D_i are in stationary states rather than being described by moving wave packets, the above consideration has to be slightly modified. Consider, for example, neutrino production in decays of unstable particles P_i bound in a solid. Let ρ_P(t_P) be the probability distribution function for the decay times of the parent particles in the source, and let N_P be defined by the condition ∫_0^{T_0} ρ_P(t_P) dt_P = N_P. If T_0 is short compared to the lifetime Γ^{−1} of P_i, one has ρ_P(t_P) = const = N_P/T_0. If, on the contrary, T_0 ≳ Γ^{−1}, ρ_P(t_P) will usually have an exponential form. [Footnote 18: An exception is the case where the P_i in the source are continuously replenished. In this case, the function ρ_P(t_P) will depend on the time dynamics of the production of these particles.]
The neutrino production probability is then again given by eq. (84), just as in the case of a continuous flux of incoming particles P_i. The situation with bound-state stationary particles D_i in the detector can be considered quite similarly. One can assume the D_i to be stable. If the source creates a steady flux of neutrinos, then for the ensemble of D_i in the detector the distribution ρ_D(t_D) of the detection times t_D is uniform and is given by N_D/T_0, where N_D is defined by the normalization condition ∫_0^{T_0} ρ_D(t_D) dt_D = N_D. The detection probability is then given by eq. (86). Thus, with these re-interpretations of N_P and N_D, the expressions for the neutrino production and detection probabilities and rates and for the detection cross section obtained in the previous subsection remain valid in the case of stationary initial states as well. We are now in a position to obtain the normalized oscillation probability.

The oscillation probability in the QFT approach

In the case when the rate of the overall production-propagation-detection process at a fixed neutrino energy factorizes, the oscillation probability should be obtainable from eq. (75). Substituting eqs. (92) and (85) (with P̃_αβ^tot and P_α^prod defined in eqs. (81) and (82)) and eq. (88) into this relation, we find the quantity "P_αβ(L, E)" of eq. (93). The quotation marks here are to remind us that we have yet to prove that this quantity can indeed be interpreted as the oscillation probability.

The alert reader has probably noticed that, while the integral in (74) is taken over the energies of different neutrinos in the neutrino flux, the integration in (81) is performed over the energy distribution within the wave packet of an individual neutrino. This, however, does not invalidate our argument leading to eq. (93). The reason for this is that the following two situations are known to be experimentally indistinguishable [1,6]: (a) a flux of neutrinos described by identical wave packets, each with an energy spread f(E), and (b) a flux of neutrinos, each with a sharp energy, with the overall energy distribution φ(E) = |f(E)|².

Let us now examine expression (93). First, we note that when the produced neutrinos are either ultra-relativistic or quasi-degenerate in mass, i.e. when |p_j − p_k| ≪ p_j, p_k, the probabilities of emission of the different neutrino mass eigenstates are characterized by essentially the same transition matrix elements and the same phase space volumes, i.e. to a very good accuracy these probabilities differ only by the factors |U_αj|². Therefore, one can replace the factors Φ_P(E, p_j) in eq. (82) by the value Φ_P(E, p) calculated at the average momentum p, and also replace the factors p_j by p, and pull them out of the sum. A similar argument applies to the detection cross section. As a result, in the denominator of (93) we can replace

\[ \sum_j |U_{\alpha j}|^2\, |\Phi_P(E, p_j)|^2 \;\to\; |\Phi_P(E, p)|^2 \sum_j |U_{\alpha j}|^2 \;=\; |\Phi_P(E, p)|^2, \qquad (94) \]

and similarly for the detection factors, where in the last equalities we have used the unitarity of the leptonic mixing matrix.
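An illustrative numeric check of this replacement: with a unitary mixing matrix the weights |U_αj|² sum to one, so as long as the profile Φ varies negligibly over the spread of the p_j, the weighted sum collapses to the profile at the mean momentum. The mixing angles, masses and the Gaussian profile below are assumptions made for the sketch, not values from the text:

    import numpy as np

    # Check of the replacement (94): sum_j |U_aj|^2 |Phi(p_j)|^2 -> |Phi(p)|^2
    t12, t23, t13 = 0.59, 0.79, 0.15     # illustrative mixing angles [rad]
    c, s = np.cos, np.sin
    R12 = np.array([[c(t12), s(t12), 0], [-s(t12), c(t12), 0], [0, 0, 1]])
    R23 = np.array([[1, 0, 0], [0, c(t23), s(t23)], [0, -s(t23), c(t23)]])
    R13 = np.array([[c(t13), 0, s(t13)], [0, 1, 0], [-s(t13), 0, c(t13)]])
    U = R23 @ R13 @ R12                  # real orthogonal (CP phase omitted)

    E = 3.0e6                            # neutrino energy [eV]
    m = np.array([0.0, 8.7e-3, 5.0e-2])  # illustrative masses [eV]
    p_j = E - m**2 / (2 * E)             # ultra-relativistic form of eq. (78)

    sigma_p = 1.0                        # momentum width [eV] (assumption)
    Phi = lambda p: np.exp(-(p - E)**2 / (4 * sigma_p**2))   # toy profile

    alpha = 0
    lhs = np.sum(np.abs(U[alpha])**2 * Phi(p_j)**2)
    rhs = Phi(p_j.mean())**2             # sum_j |U_aj|^2 = 1 pulled out
    print(lhs, rhs)                      # agree to machine precision
    print(abs(p_j[0] - p_j[2]) / sigma_p)  # |p_j - p_k| / sigma_p << 1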
Note that such a procedure cannot in general be applied to the numerator of eq. (93). Indeed, the latter contains interference terms proportional to products of the functions Φ_P taken at different momenta, p_j and p_k, where p_j was defined in (78). The momentum distribution functions Φ_P are all peaked at the same momentum P; therefore, if |p_j − p_k| exceeds the width of the peak, σ_pP, the interference terms will be strongly suppressed. The same argument applies to the detected neutrino states Φ_D(E, p_j) and Φ_D(E, p_k), which also enter into these interference terms and which are peaked at the momentum P′ and have widths σ_pD. For ultra-relativistic or quasi-degenerate neutrinos (i.e. when |p_j − p_k| ≪ p) one has |p_j − p_k| ≃ ∆m²_jk/2p; if ∆m²_jk/2p ≳ σ_p, where σ_p is the effective momentum uncertainty dominated by the smallest between σ_pP and σ_pD, the interference terms in (93) (and therefore neutrino oscillations) will be strongly suppressed. [Footnote 19: It should be stressed that the mean momentum p is defined here as an average over the different mass eigenstates of the momenta p_j = (E² − m_j²)^{1/2} taken at the same fixed value of energy E. It is therefore different from the mean momentum P of the individual wave packets, introduced earlier, for which the average was taken over the spread of momenta (or energies) within the wave packet.] Physically, this can be traced to the lack of coherence at neutrino production and/or detection. It can be shown that production or detection decoherence is equivalent to the lack of localization of, respectively, the production or detection process [1,4,15]. If, on the contrary,

\[ |p_j - p_k| \simeq \frac{\Delta m^2_{jk}}{2p} \ll \sigma_p, \qquad (95) \]

i.e. the difference |p_j − p_k| is small compared to the effective width of the neutrino momentum distribution, the production/detection coherence condition is satisfied, and the oscillations can be observed. Since the effective momentum uncertainty σ_p is actually dominated by the energy uncertainty σ_e (see Sec. 4.3), condition (95) is equivalent to the one in eq. (60).

By comparing eqs. (81) and (93), it is easy to see that in the approximation (94), when the spectral density of the production probability and the detection cross section are independent of the elements of the leptonic mixing matrix, they can be factored out of the sum over j and k in eq. (81), which leads to a factorization of the form (74). This means that the oscillation probability P_αβ(E, L) can be defined as a sensible quantity and is given by eq. (96). With the help of eq. (94), it is easy to make sure that this expression automatically satisfies the unitarity condition (69), i.e. is properly normalized. Thus, the QFT-based approach allows one to identify the conditions under which P_αβ(L, E) can be sensibly defined, and it also gives the correctly normalized expression for this probability. The condition for the existence of well-defined oscillation probabilities is that the neutrinos are either ultra-relativistic or nearly degenerate in mass. Note that the factorization of the integrand of (81) according to (74) does not by itself mean that the oscillation probability (96) is production and detection independent; for this, one would also have to require that condition (95) be fulfilled [15]. If it is satisfied, all the momenta p_j, p_k in the interference terms are sufficiently close to each other, and one can replace

\[ \Phi_P(E, p_j) \simeq \Phi_P(E, p), \qquad \Phi_D(E, p_j) \simeq \Phi_D(E, p) \qquad (97) \]

also in the numerators of eqs. (93) and (96).
As a result, the factors |Φ_P(E, p)|² and |Φ_D(E, p)|² in the numerators and denominators of these equations cancel, and one obtains

\[ P_{\alpha\beta}(L, E) \;=\; \sum_{j,k} U^*_{\alpha j}\, U_{\beta j}\, U_{\alpha k}\, U^*_{\beta k}\; e^{\,i(p_j - p_k)L}. \]

Since for ultra-relativistic or quasi-degenerate neutrinos p_j − p_k ≃ −∆m²_jk/2p, this is just the standard formula for the probability of neutrino oscillations in vacuum.

The above QFT-based considerations also allow one to shed some light on the meaning of the normalization condition imposed on the oscillation probability in the QM wave packet approach, which looks rather arbitrary within the QM framework. As was pointed out in Sec. 4.1, the QM and QFT approaches can be matched if the QM quantities f_jP and f_jD are identified with the QFT functions Φ_jP(E_j, p) and Φ*_jD(E_j, p), respectively. The latter, however, bear information not only on the properties of the emitted and absorbed neutrinos, but also on the production and detection processes. The QM normalization procedure that is tailored to obtain an expression for the oscillation probability satisfying the unitarity condition can then easily be seen to be equivalent, in the limit (94), to the division of the overall rate of the process by the production rate and the detection cross section, as in eq. (96).

Some additional comments

In this section we comment on some issues pertaining to the description of neutrino oscillations in the external and intermediate wave packet approaches that were not discussed or were only briefly mentioned above. We also show how one can relax some assumptions usually adopted in the QM and QFT approaches.

Unequal mean momenta of the produced and detected neutrino states

In the vast majority of derivations of the neutrino oscillation probability within the QM wave packet framework, it was assumed that the mean momenta of the produced and detected neutrino states, P and P′, coincide. There are, however, no reasons for this to be the case. Indeed, the mean momenta of the emitted and absorbed neutrino states are determined by the kinematics and experimental conditions of the neutrino production and detection processes, respectively, and those are in general different. Let us now examine the consequences of P ≠ P′ using, as before, the case of Gaussian wave packets as an example.

It was shown in Sec. 4.3 that the neutrino wave packets derived in QFT can be cast into the form usually adopted for neutrino wave packets in the QM approach if one uses for the latter the effective (in general anisotropic) momentum uncertainties, shifted mean momenta and modified normalization factors. Since the expressions for the QM wave packets look simpler, we will study the implications of unequal mean momenta of the produced and detected neutrino states within the QM formalism. We will also ignore a possible anisotropy of the neutrino momentum uncertainties; taking it into account would just complicate the calculations without changing the essence of the result.

We will now calculate the amplitude (15) corresponding to the emission and absorption of the neutrino mass eigenstate ν_j. To do so, we expand the energy E_j(p) around a momentum p_0 which we do not specify for now:

\[ E_j(p) \;\simeq\; E_j(p_0) + v_j\,(p - p_0), \qquad v_j \equiv \frac{\partial E_j(p)}{\partial p}\bigg|_{p = p_0}. \qquad (98) \]

An important point is that, strictly speaking, using such an expansion in the integral in (15) is only justified if P and P′ are not too far from each other and if p_0 = O(P, P′), which we will assume. As we shall see, under these conditions the final result will be insensitive to the choice of the expansion point p_0. In eq.
(98) we have neglected higher-order terms in the expansion; this corresponds to neglecting the spreading of the neutrino wave packets, which is unimportant for our argument. The Gaussian wave packets for the produced neutrino states are given by eq. (10), and the detected wave packets have a similar form, with σ_pP and P replaced by σ_pD and P′, respectively. Substituting these expressions and the expansion (98) into eq. (15) yields, after a simple Gaussian integration, the amplitude of eq. (99). Here p̄ is the weighted mean momentum and σ_p the effective momentum width,

\[ \bar p \;\equiv\; \frac{\sigma_{pD}^2\, P + \sigma_{pP}^2\, P'}{\sigma_{pP}^2 + \sigma_{pD}^2}, \qquad \sigma_p^2 \;\equiv\; \frac{\sigma_{pP}^2\, \sigma_{pD}^2}{\sigma_{pP}^2 + \sigma_{pD}^2}, \qquad (100) \]

and we have used the relation E_j(p_0) + v_j(p − p_0) ≃ E_j(p), which follows from (98). As can be seen from eq. (99), the dependence on the expansion point p_0 has completely disappeared from the amplitude. The meaning of the obtained result is quite transparent: as usual, the amplitude A_j(T, L) contains a normalization factor, a plane-wave factor calculated at some mean momentum (in this case p̄) and the envelope factor exp[−(L − v_jT)²/4σ_x²]. However, on top of this, it contains the extra factor exp[−(P − P′)²/4(σ_pP² + σ_pD²)], which is a reflection of the approximate conservation of the mean neutrino momentum. It suppresses the amplitude unless the difference of the mean momenta of the produced and detected neutrino states is small compared to the effective total momentum width (σ_pP² + σ_pD²)^{1/2}. Note that this width is dominated by the larger of σ_pP and σ_pD, unlike the momentum width σ_p defined in (100), which is dominated by the smaller of them.

Non-central collisions

In our discussion of the external wave packet formalism, as well as in all other treatments of this topic we are aware of, it was assumed that the peaks of the wave packets of all the external particles participating in neutrino production (or detection) meet at the same space-time point. In other words, it was assumed that at production these peaks are all located at the point x = x_P at the same time t = t_P, whereas at detection the peaks of the wave packets of the external particles are all located at x = x_D at the same time t = t_D. Is this always true, and how crucial is this assumption?

While in some situations, such as neutrino production in decays of unstable particles, this assumption may indeed be justified, it does not in general hold when neutrinos are born in scattering processes, where the collisions of the wave packets in the initial state may be non-central. Let us consider non-central collisions assuming, as before, only two external particles at production and two at neutrino detection (the generalization to an arbitrary number of external particles is straightforward). One obvious consequence is that neutrino production and detection are only possible if the minimum distances between the peaks of the participating wave packets do not significantly exceed the sizes of, correspondingly, the production and detection regions.

Consider the production process. We shall now assume that the wave packets of the external particles P_i and P_f are given by expressions of the type (8), with the phase factors in the integrands being e^{−iq(x−x_a)} and e^{−ik(x−x_b)}, respectively. This means that the peak of the wave packet describing P_i is located at x = x_a at the time t = t_a, and the peak of the wave packet of P_f is located at x = x_b at the time t = t_b. We will consider the positions of these peaks at the same time, i.e. we will take t_a = t_b. It is natural to choose t_P to be the value of this common time corresponding to the minimum distance between the two peaks, i.e.
t_a = t_b = t_P. The coordinate x_P can then be chosen to lie anywhere between x_a and x_b on the line connecting them; in particular, one can choose x_P = x_a or x_P = x_b. Neutrino detection can be considered quite similarly.

One can now repeat the calculations presented in Sec. 3, arriving at a transition amplitude that can again be written in the form (24). However, the expressions for Φ_jP(p_0, p) and Φ_jD(p_0, p) in eq. (25) have to be modified: their integrands should be multiplied, respectively, by e^{iq(x_P−x_a)−ik(x_P−x_b)} and e^{iq′(x_D−x_c)−ik′(x_D−x_d)}, where x_c and x_d are the positions of the peaks of the wave packets representing D_i and D_f at the time t_D when the distance between these two peaks reaches its minimum. This can be taken into account by redefining the momentum distribution functions of the wave packets according to

\[ f_{P_i}(\mathbf q) \to f_{P_i}(\mathbf q)\, e^{i\mathbf q(\mathbf x_P - \mathbf x_a)}, \qquad f_{P_f}(\mathbf k) \to f_{P_f}(\mathbf k)\, e^{-i\mathbf k(\mathbf x_P - \mathbf x_b)}, \qquad (101) \]

and similarly for the detection process. The newly defined momentum distribution functions share a crucial feature with the old ones: they decrease rapidly when the deviations of the corresponding momenta from their peak values exceed the relevant momentum uncertainties (i.e. the widths of the peaks) σ_pPi, σ_pPf, σ_pDi or σ_pDf. In addition, the new momentum distributions do not exhibit fast oscillations when the momentum variations are smaller than, or of the order of, the corresponding widths of the peaks. Indeed, consider neutrino production. Our choice of x_P implies that |x_P − x_a|, |x_P − x_b| ≤ |x_a − x_b|. On the other hand, as we mentioned above, production is only possible when the distance |x_a − x_b| between the peaks of the wave packets of the external particles is smaller than, or of the order of, the size of the production region, which is of the order of [max{σ_pPi, σ_pPf}]^{−1}. A similar argument applies to neutrino detection. Therefore the variation of the exponents of the exponential factors in eq. (101) is ≲ 1, and these factors do not undergo fast oscillations across the peaks of the momentum distributions.

From the above considerations it follows that the properties of the redefined momentum distributions are essentially the same as those of the old ones. All the results of the present paper therefore apply to the case of non-central collisions as well, if one substitutes the original momentum distribution functions by those redefined according to eq. (101).

Discussion and summary

In this paper we have compared the quantum mechanical approach to neutrino oscillations, where neutrinos are described by wave packets, with the quantum field theoretical method, where they are represented by propagators connecting the neutrino production and detection vertices of a Feynman diagram, whereas the external particles are described by wave packets in order to localize the process in space and time. We have shown how the neutrino wave packets underlying the QM approach can be derived in QFT by comparing the QM and QFT expressions for the transition amplitude. Equivalently, the wave packet representing the emitted neutrino can be obtained as the convolution of the neutrino source (the production amplitude) with the retarded neutrino propagator, in accord with the well-known result of QFT. Quite analogously, the wave packet of the detected neutrino can be obtained as the convolution of the neutrino detection amplitude with the advanced neutrino propagator, with the result then taken at the time corresponding to neutrino detection.
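Schematically, this convolution statement can be written as follows (our notation, with the spin structure suppressed; a sketch of the standard Green's-function relation rather than a reproduction of the paper's equations):

\[
\psi_j^{prod}(x) \;=\; \int d^4x'\; G_R^{(j)}(x - x')\, J_P(x'),
\qquad
G_R^{(j)}(x) \;=\; \int \frac{d^4p}{(2\pi)^4}\;
\frac{e^{-ip\cdot x}}{(p^0 + i\epsilon)^2 - \mathbf p^2 - m_j^2},
\]

where J_P is the source term built from the production amplitude; for detection one uses the advanced Green's function G_A (obtained by ε → −ε), evaluated at the detection time.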
We have studied the general properties of QFT-derived wave packets representing the produced neutrino states and demonstrated that the wave packets of the mass eigenstates ν_j depend on the index j only through the neutrino energy E_j, and that in the momentum representation they are given by the production amplitude multiplied by "smeared delta functions" describing the approximate conservation of mean energy and mean momentum at production. The widths of these "smeared delta functions" are determined by the largest among the corresponding widths of the external particles involved in the neutrino production process. Similar conclusions apply to the wave packets of the detected neutrino states. We also identified the conditions under which general neutrino wave packets can be approximated by Gaussian ones.

Using Gaussian wave packets as an example, we then demonstrated that the neutrino wave packets derived in QFT can be cast into the form they are usually assumed to have in the QM formalism, provided that:

(i) The momentum uncertainty of the QM approach is replaced by the effective one, which depends not only on the true momentum uncertainty at production (or detection), but also on the corresponding energy uncertainty, as well as on the neutrino velocity and the effective velocity of the neutrino production (or detection) region. Moreover, these momentum uncertainties are different in different directions, i.e. they are anisotropic. The longitudinal effective momentum uncertainties of the produced and detected neutrino states are dominated by the energy uncertainties characterizing, respectively, the neutrino production and detection processes, whereas the transverse effective momentum uncertainties coincide with the corresponding true momentum uncertainties.

(ii) The mean momentum of the neutrino state is shifted from its naively expected value.

(iii) The wave packets of different mass eigenstates acquire (in general different) extra overall factors.

Thus, the simplistic QM wave packet approach may need QFT-motivated modifications; however, once these modifications have been made, one can still work within the QM framework without losing any essential physical content.

We have also studied the energy uncertainties characterizing the neutrino wave packets in the case of unstable neutrino sources and have shown that in general these uncertainties depend both on the decay rate of the parent particle and on the inverse time scale of the overlap of the wave packets of the external particles; the neutrino energy uncertainty is dominated by the larger of the two.

In the last part of the paper, we have discussed in detail the normalization of the QM and QFT expressions for the oscillation probability P_αβ(L). We have seen that in the QM framework P_αβ(L) has to be normalized by hand in order to fulfill the unitarity relation Σ_β P_αβ(L) = 1. There are two reasons why this ad hoc procedure is unavoidable in the QM approach. First, as we have demonstrated, no independent normalization of the produced and detected neutrino states can lead to the correct normalization of the oscillation probability, because the overlap integral of these states is always smaller than unity in the realistic case when their momentum distributions are different. The overlap integral depends on the characteristics of the produced and detected states in a non-factorizable way, and so the problem cannot be cured by just modifying the normalization of these states.
Second, the QM formalism involves an integration over the unobserved difference of the neutrino detection and production times, T = t_D − t_P, which leads to yet another undefined factor - the time interval by which one has to divide the result in order to recover the correct dimension of the oscillation probability. In the QM method both these problems are solved by imposing unitarity of the oscillation probability by hand.

We have demonstrated how the QFT approach avoids all the normalization problems of the QM formalism and naturally leads to the correctly normalized oscillation probability that automatically satisfies the unitarity condition, provided that neutrinos are ultra-relativistic or quasi-degenerate in mass. If this requirement is not fulfilled, the interaction rate cannot be factorized into the production rate, propagation (oscillation) probability and detection cross section, so that the oscillation probability is undefined. In that case one would have to deal instead with the overall rate of the neutrino production-propagation-detection process. The QFT approach also allows one to understand the physical meaning of the QM normalization recipe: by imposing unitarity by hand one implicitly rids the calculated transition probability of the probabilities of neutrino production and detection, thus extracting the sought oscillation probability.

A comment on the integration over T is in order. Such an integration is involved in both the QM and QFT approaches to neutrino oscillations. In the QM framework, it has to be introduced to account for the fact that the neutrino's time of flight is not measured (or at least not measured accurately enough) in realistic experiments. At the same time, in our QFT treatment of neutrino oscillations, it emerges naturally from the observation that in real situations one has to deal with continuous fluxes of incoming particles (or with ensembles of neutrino emitters and absorbers in the case of bound stationary initial states) rather than with individual acts of neutrino production, propagation and detection, in which single wave packets of the external particles of each type are involved.

Can one still sensibly define an unintegrated oscillation probability P_αβ(T, L) for such a single act? We will argue now that the probability P_αβ(T, L) is not a useful quantity, since it is unmeasurable (or almost unmeasurable). An important point here is that in practice both L and T can only be measured with some accuracy. If we consider the T-integrated probability P_αβ(L), which depends (in addition to the neutrino energy) only on L, then this quantity is well defined only if the error ∆L in the measurement of L is small compared to the variations of L over which the probability changes significantly. This means that this error must be small compared to the neutrino oscillation length l_osc, and this condition is normally easily satisfied. If one considers the case when the time T is measured whereas the distance L is not (even though it is hard to imagine how such a situation could be realized in practice), then one would have to integrate P_αβ^tot(T, L) over L, and the resulting probability would be a function of T. The situation would then be similar - the error ∆T in the determination of T would have to be small compared to the change of T over which P_αβ(T) varies significantly, which is ∼ l_osc/v, with v the neutrino velocity.
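To put these accuracy requirements in perspective, here is a back-of-the-envelope numeric comparison of the two scales involved; all parameter values are illustrative assumptions:

    import numpy as np

    # Compare the accuracy scales discussed in the text:
    #   measuring P(L) requires       Delta L << l_osc
    #   measuring P(T, L) would need  Delta L << sigma_x  and
    #                                 Delta T << sigma_x / v
    hbar_c = 1.973e-7        # eV*m (hbar*c = 197.3 MeV*fm)

    E = 3.0e6                # neutrino energy [eV] (assumption)
    dm2 = 7.5e-5             # Delta m^2 [eV^2] (assumption)
    l_osc = 4 * np.pi * E / dm2 * hbar_c   # oscillation length [m]

    sigma_e = 1.0            # production energy uncertainty [eV] (assumption)
    sigma_x = hbar_c / (2 * sigma_e)       # wave packet length [m]

    print(f"l_osc   ~ {l_osc:.2e} m")      # ~1e5 m: easy to resolve Delta L
    print(f"sigma_x ~ {sigma_x:.2e} m")    # ~1e-7 m: hopeless to resolve

With these (assumed) numbers the wave packet is about twelve orders of magnitude shorter than the oscillation length, which is why P_αβ(T, L) is unmeasurable in practice while P_αβ(L) is not.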
The situation would be quite different if one considered the unintegrated probability P_αβ(T, L), which depends on both T and L - in this case the requirements on ∆L and ∆T would be far more demanding. Indeed, P_αβ(T, L) is substantially different from zero only when |L − vT| ≲ σ_x, where σ_x is the spatial length of the neutrino wave packet. This means that for uncorrelated variations of L and T this probability varies significantly when L changes by σ_x (which can be extremely small) and T changes by σ_x/v. The latter quantity is essentially given by the largest of the time scales of the neutrino production and detection processes. Thus, the un-integrated probability P_αβ(T, L) can only be accurately measured if the distance L is measured with an accuracy better than the length of the neutrino wave packet and, simultaneously, the time between neutrino emission and detection is measured with an error that is small compared to the duration of these processes. Such a possibility appears rather unrealistic, at the very least.

It follows from our results that there are both intricate relations and important differences between the QM and QFT approaches to neutrino oscillations. In the following table, we compare the main features of the two approaches.

1. QM approach: Simple and transparent description of neutrino oscillations. Neutrino production and detection processes are not properly taken into account. Simplified description of neutrino energy and momentum uncertainties.
   QFT approach: Most complete description of neutrino production, propagation and detection. Accurate treatment of neutrino energy and momentum uncertainties. The formalism is more complicated than that of the QM approach.

2. QM approach: Produced and detected neutrino states are flavour eigenstates, defined according to |ν_α⟩ = Σ_j U*_αj |ν_j⟩.
   QFT approach: Only mass eigenstates are considered. (In fact, defining flavour eigenstates in QFT poses great difficulties because they do not form a physically meaningful Fock space [33].)

3. QM approach: Mass eigenstates composing neutrino flavour eigenstates are described by wave packets whose form is postulated rather than derived and whose parameters (momentum uncertainties) are estimated from the properties of the production and detection processes. Derivation of the neutrino wave functions is not possible, since they depend on the dynamics of neutrino production and detection, and particle creation and annihilation cannot be described in QM.
   QFT approach: Because neutrinos are only in the intermediate states, their wave functions are not necessary for the formalism (but can be derived from the production and detection amplitudes according to well-defined rules). The wave functions of the external particles accompanying neutrino production and detection have to be known. Undetected external particles can be described by plane waves.

4. QM approach: The oscillation amplitude is obtained by evolving the produced neutrino state in time and then projecting it onto the detected state.
   QFT approach: The amplitude of the combined process of neutrino production, propagation and detection is computed according to the Feynman rules. Time evolution of the neutrino states in QM corresponds to their on-shell propagation in QFT; projection in QM corresponds to the integration over the momentum of the intermediate neutrino in QFT.

5. QM approach: The oscillation probability has to be normalized by hand by imposing the unitarity condition. (The physical meaning and justification of this normalization procedure is elucidated in QFT.)
   QFT approach: The oscillation probability P_αβ(L) that is properly normalized and satisfies the unitarity constraint is automatically obtained from the formalism in the case when neutrinos are ultra-relativistic or quasi-degenerate. Otherwise P_αβ(L) is undefined.

In summary, we have explicated the close relation between the quantum mechanical and quantum field theoretical approaches to neutrino oscillations and have shown how QFT - apart from providing expressions for oscillation probabilities and event rates in its own right - can be used to derive the input parameters required for the QM approach and to elucidate some QM procedures which were not properly justified or fully understood within that approach. We have also clarified several subtle points regarding neutrinos from unstable sources, the case of unequal mean momenta of the produced and detected neutrino states and the normalization of the oscillation probability.

Appendix B

In the notation of eq. (B3),

\[ C_j(E) \;\equiv\; \frac{\Phi_P(E, p_j\,\mathbf l)\;\Phi_D(E, p_j\,\mathbf l)}{2E}\; e^{-iET + i p_j(E) L}. \]

Integrating in (B3) by parts, we find the expression (B4) for B_jk, containing an overall factor 2πi, a boundary term and an integral term. The first term on the right-hand side of (B4) vanishes because so do the functions Φ_P,D at E → ±∞. Therefore eq. (B4) means B_jk = −B*_kj. On the other hand, from the definition (B2) of B_jk it follows that B_jk = B*_kj. Hence B_jk = 0, and I_2 + I_3 vanishes.
Choroidal thickness in patients with coronary artery disease

Purpose: To evaluate choroidal thickness (CTh) in patients with coronary artery disease (CAD) compared to healthy controls.

Design: Cross-sectional.

Methods: Setting: Ambulatory clinic of a large city hospital. Patient population: Thirty-four patients had documented CAD, defined as a history of >50% obstruction in at least one coronary artery on cardiac catheterization, a positive stress test, ST elevation myocardial infarction, or a revascularization procedure. Twenty-eight age-matched controls had no self-reported history of CAD or diabetes. Patients with high myopia, dense cataracts, and retinal disease were excluded. Observation procedures: Enhanced depth imaging optical coherence tomography and a questionnaire regarding medical and ocular history. Main outcome measures: Subfoveal CTh and CTh 2000 μm superior, inferior, nasal, and temporal to the fovea in the left eye, measured by 2 readers.

Results: CTh was significantly lower in patients with CAD compared to controls at the subfoveal location (252 vs. 303 μm, P = 0.002) and at all 4 cardinal macular locations. The mean difference in CTh between the 2 groups ranged from 46 to 75 μm and was greatest in the inferior location. Within the CAD group, CTh was significantly lower temporally (P = 0.007) and nasally (P<0.001) than subfoveally, consistent with the pattern observed in controls. On multivariate analysis, CAD was negatively associated with subfoveal CTh (P = 0.006) after controlling for diabetes, hypertension, and hypercholesterolemia.

Conclusions and relevance: Patients with CAD have a thinner macular choroid than controls, with preservation of the normal spatial CTh pattern. Decreased CTh might predispose patients with CAD to high-risk phenotypes of age-related macular degeneration, such as reticular pseudodrusen, and could serve as a potential biomarker of disease in CAD.

Introduction

The choroid supplies blood to the outer one-third of the neuroretina and the retinal pigment epithelium (RPE) and represents the sole provider of oxygen and nutrients to the avascular fovea. Despite its function in maintaining the retina, details of the choroidal circulation remained largely unknown due to the poor resolution and reproducibility of previous choroidal imaging techniques, such as indocyanine green angiography [1] and ultrasound [2]. Imaging of the choroid was dramatically improved with the development of spectral domain optical coherence tomography (SD-OCT) and was further augmented with the advent of enhanced depth imaging SD-OCT (EDI SD-OCT) by Spaide and colleagues in 2008 [3]. Developing techniques, such as swept source optical coherence tomography (SS-OCT) and OCT angiography [4], have allowed segments of the choroid to be visualized down to nearly the capillary level, opening up a new world of research in this previously underexplored ocular tissue.
The imaging techniques described above have allowed for the study of the choroid in both a qualitative and a quantitative manner. In particular, much attention has been paid to choroidal thickness (CTh), a structural parameter that is typically defined as the distance between the outer border of the hyperreflective RPE and the hyperreflective inner border of the sclera on SD-OCT [3]. CTh can be easily measured using inbuilt calipers on OCT imaging software, making it accessible in research and clinical settings. While the true correlation between CTh and in vivo choroidal function, such as choroidal blood flow, remains uncertain [5], CTh is the closest objective marker of choroidal health available with present imaging techniques, making it a topic of great interest in outer retinal health. CTh has been found to decrease significantly with age [6] and to vary with numerous systemic and ocular diseases [7]. The healthy choroid typically measures 250-400 μm subfoveally [8], with a decrease in thickness in the temporal and nasal directions [8,9].

The relationship between the choroid and cardiovascular disease (CVD) is of particular interest due to its possible use as a biomarker of CVD and in identifying patient cohorts at increased risk for outer retinal disease. Because the choroid is a highly vascular end organ with the greatest blood flow per mm³ in the body [10], it might be susceptible to arteriosclerotic processes common in other end organs. However, studies of CTh in patients with systemic diseases have shown variable results. Severe hypertensive retinopathy with serous retinal detachments has been associated with hypertensive choroidopathy and choroidal thickening [11]. Uncomplicated hypercholesterolemia without other vascular disease has been associated with choroidal thickening [12], whereas cigarette smoking [13,14], ocular ischemic syndrome [15], chronic heart failure [16], and systemic essential hypertension [17,18] have been linked to a thinner choroid. Carotid artery stenosis [19,20] and diabetes without diabetic retinopathy [21,22] have shown contradictory associations with CTh.

In this study, we compared macular CTh in 34 patients with CAD to macular CTh in 28 healthy controls. Decreased CTh in patients with CAD would support a connection between cardiac disease and outer retinal diseases such as age-related macular degeneration (AMD).

Subject recruitment and imaging

Subjects and controls were recruited between January 2014 and September 2015 from the outpatient cardiac and primary care clinics of a large city hospital. New York University School of Medicine Institutional Review Board approval was obtained (Federalwide Assurance #00004952). Inclusion criteria for patients with CAD were as follows: clinically documented history of cardiac catheterization demonstrating greater than 50% obstruction in at least one coronary artery, a positive stress test, ST segment elevation myocardial infarction (MI), or a revascularization procedure (stent or coronary artery bypass graft). Controls included patients without a documented or self-reported history of CAD (including the procedures/conditions listed above) or CAD-equivalent conditions, including peripheral artery disease, history of stroke, or diabetes. Patients with high myopia (> 6D), AMD, advanced cataracts, or a history of retinal vascular disease, retinal dystrophy, retinal surgery, or laser photocoagulation were excluded from both the CAD and control groups. Patients with CAD were age-matched to controls.
After obtaining written informed consent, all subjects completed a detailed questionnaire regarding ocular and medical history, with a focus on CVD. Subjects then underwent near infrared and EDI SD-OCT imaging of both eyes using the Heidelberg Spectralis HRA+OCT (Heidelberg Engineering, Inc., Franklin, MA, USA) with eye-tracking ability. Macular volume scans consisted of 16 horizontal lines, each line an average of 9 B-scans, in a 15° by 20° rectangular pattern. Images with quality < 20 dB were excluded.

Image analysis

CTh was measured by 2 trained independent graders (MA and LC) using a built-in ruler tool in the Heidelberg Eye Explorer software (Fig 1). CTh measurements were averaged between the two readers. The left eye of each subject was selected for analysis. Readers were blinded to CAD status at the time of image reading. CTh was defined as the distance between the outer border of the hyperreflective RPE and the hyperreflective inner surface of the sclera. CTh was measured below the fovea, which was defined as the lowest point of the retina visible on macular SD-OCT slices, and 2000 μm away from the fovea in 4 cardinal macular regions: superior, inferior, temporal, and nasal. The superior and inferior points corresponded to 8 slices above and 8 slices below the foveal slice; the temporal and nasal points were identified using the ruler tool at the foveal slice (Fig 1). In cases of poor image slice quality, non-centered scans, or scans in which the sclerochoroidal border was not visible, no measurement was taken at the point in question.

Statistical analysis

Statistical analysis was performed using Microsoft Excel (Microsoft Corp., Redmond, WA, USA) and SPSS 22.0 (IBM Corp., Armonk, NY, USA). For all tests, a p-value less than 0.05 was considered statistically significant. An inter-observer correlation coefficient was calculated for the CTh measurements by the two readers. In addition, measurements were repeated by one of the readers (MA) for a subset of images to calculate an intra-observer correlation. CTh was compared pointwise between patients with CAD and controls, and the macular pattern of CTh was also compared between patients with CAD and controls. Multivariate linear regression was conducted to evaluate the relative effects of potential confounders on CTh. (An illustrative sketch of these comparisons is given after the first Results paragraph below.)

Results

Complete data were collected for 34 patients with documented CAD and 28 healthy controls. The mean age was 60.9 ± 6.8 years (range 45-76 years) for patients with CAD and 59.9 ± 5.2 years (range 51-71 years) for controls (P = 0.51; mean difference: 1 year, 95% confidence interval (CI): -4.1, 2.18). Characteristics of the study and control groups are shown in Table 1. Baseline demographics, including gender and ethnicity, were comparable between the 2 groups. Patients with CAD were more likely to have hypertension, hyperlipidemia, and diabetes compared to controls. Prevalences of various cardiovascular diagnoses in the CAD group are shown in Fig 2. Nearly 70% of the CAD population had suffered an MI, while the remainder had other evidence of CAD, such as a history of an abnormal stress test or a previous positive cardiac catheterization. The inter-observer correlation coefficient was 0.79 for the 2 CTh readers. The intra-observer correlation coefficient was 0.95 for CTh reader MA. A significantly thinner choroid was observed in patients with CAD at the subfoveal location and at all 4 cardinal macular locations (Fig 3).
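As a rough illustration of the analysis pipeline described in the Methods (a sketch only: the variable names and file layout are hypothetical, this is not the authors' actual code, and scipy/statsmodels are assumed available):

    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    # Hypothetical table: one row per subject, with per-reader CTh values
    # and binary covariates, mirroring the comparisons in the Methods.
    df = pd.read_csv("cth_measurements.csv")   # hypothetical file

    # Inter-observer agreement: correlation between the two readers
    r, _ = stats.pearsonr(df["cth_subfoveal_reader1"],
                          df["cth_subfoveal_reader2"])
    print(f"inter-observer r = {r:.2f}")

    # Average the two readers, as described in the Methods
    df["cth_subfoveal"] = df[["cth_subfoveal_reader1",
                              "cth_subfoveal_reader2"]].mean(axis=1)

    # Pointwise CAD vs control comparison (two-sample t-test)
    cad, ctrl = df[df.cad == 1], df[df.cad == 0]
    t, p = stats.ttest_ind(cad["cth_subfoveal"], ctrl["cth_subfoveal"])
    print(f"subfoveal CTh: t = {t:.2f}, P = {p:.3f}")

    # Multivariate linear regression: CAD effect on subfoveal CTh,
    # controlling for diabetes, hypertension and hypercholesterolemia
    model = smf.ols("cth_subfoveal ~ cad + diabetes + hypertension + "
                    "hypercholesterolemia", data=df).fit()
    print(model.summary())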
Differences in CTh between the CAD and control populations varied at each macular point from 46 to 75 μm, depending on the location, with the inferior location showing the greatest difference and the nasal and temporal locations showing the least difference. CTh at the 4 cardinal macular locations was compared to subfoveal CTh for subjects with CAD and controls to assess the pattern of CTh. In subjects with CAD, there was no significant difference between subfoveal CTh and CTh at the superior or inferior locations; in contrast, the choroid was significantly thinner at the nasal and temporal locations when compared to the fovea (Table 3A). A similar pattern was observed in the control group, with the superior, inferior, and subfoveal locations having similar CTh and the nasal and temporal locations being thinner than the subfoveal CTh (Table 3B). Multivariate linear regression was conducted to evaluate the effect of diabetes, hypertension, and hypercholesterolemia on subfoveal CTh. A strong negative association between CAD and CTh remained even after controlling for all 3 potential confounders (P = 0.006).

Discussion
CAD is the leading cause of death worldwide in both men and women [23], accounting for 1 in every 4 deaths in the United States [24] and 31% of deaths worldwide [25]. Despite the use of multiple clinical markers and risk factors for CAD, there remains a continued interest in finding new clinical and examination tools to better assist in risk stratification for CAD. This is of particular importance in women, for whom typical risk stratification tools, such as the Framingham Risk Score, often fail to detect underlying cardiac disease [26], possibly due to the preponderance of atypical symptoms and coronary microvascular disease [27,28]. The use of ocular examination as a method of CAD risk stratification has been proposed due to its unique ability to view the vasculature of the posterior segment in vivo and in a noninvasive manner, thus providing a snapshot of vascular health. Until now, much focus has been on investigating the connection between retinal vasculature changes and CVD, likely due to the ease of viewing these vessels on clinical examination. There is a known connection between retinal arteriolar narrowing and cardiovascular events, at least in some subpopulations [29]. However, the use of retinal vasculature as a biomarker for CAD has been problematic due to the difficulty in quantifying retinal vascular findings in a standardized way [29]. The ability to easily visualize the choroid clinically using EDI SD-OCT provides new opportunities for research into both quantitative risk stratification in CAD using CTh and improved understanding of outer retinal health in CAD patients.

The choroid is typically described as having 5 layers: Bruch's membrane, the choriocapillaris, Haller's and Sattler's vascular layers, and the suprachoroidea, or suprachoroidal space (a 10-15 μm layer of giant melanocytes interspersed between flattened processes of fibroblastic cells) [30,31]. The choriocapillaris is a network of fenestrated capillaries 20-40 μm in diameter arising from medium-sized arteries in Sattler's layer and larger arteries in Haller's layer [10]. As shown by Hayreh, using in vivo fluorescein angiography studies, the choroid is arranged in a lobular pattern, with each end artery supplying a single segment and no anastomoses between these segments [32].
With the advent of EDI SD-OCT, it is now quick and easy to visualize the choroid, which is the major blood supply of the outer neuroretina and RPE, and to measure CTh. Our major finding was a strong, independent negative association between history of CAD and CTh. CTh is known to be affected by a variety of systemic and ocular factors, of which age and axial length are 2 major ones [33]. Gender has also been associated with CTh differences, with most studies showing that men have greater CTh than women, likely due to hormonal factors and sympathetic tone [7,34]. Our CAD and control groups showed no significant difference in age or gender distribution, reducing the likelihood that these factors accounted for the differences in CTh that we observed.

The seemingly obvious connection between the vascular components of the choroid and other vascular beds of the body has produced a number of studies on the relationship between CTh and various cardiovascular diseases and risk factors. A single study of 56 patients with congestive heart failure showed lower subfoveal CTh compared to age- and gender-matched controls [16]. Although hypertensive retinopathy has been associated with increased CTh [11], correlations between CTh and systemic hypertension in healthy retinas have been inconsistent, with one study showing significantly thinner CTh compared with healthy controls [17] and another showing no significant association [18]. Similarly, internal carotid artery stenosis has been variably associated with CTh, with one study showing a positive correlation between extent of stenosis and CTh [19] and another showing an inverse relationship [20]. A study by Agladioglu et al. noted an inverse relationship between internal carotid artery diameter and CTh; however, this finding was in healthy patients without stenosis [35]. A single study evaluated the relationship between hypercholesterolemia and CTh, showing CTh to be significantly higher in patients with increased total cholesterol compared to controls; however, all cases of hypercholesterolemia were treated [12]. In our CAD group, subfoveal CTh was significantly lower than that of normal controls, even after correction for the presence of hypertension, hypercholesterolemia, and diabetes.

The possible physiologic basis for a relationship between CTh and CAD is intriguing. The term CAD generally refers to atherosclerotic disease of medium to large vessels, and choroidal vessel diameter is more on par with the coronary microvessels than with the large-diameter coronary vessels [29,36]. Microvascular coronary disease occurs when there is demonstrable coronary ischemia in the absence of the angiographically obstructive atherosclerosis seen in our CAD group. Study of the contribution of the coronary microvasculature to pathogenesis and events in patients with CAD has been limited by the challenge of observing the coronary microvasculature, which typically requires myocardial biopsy. Techniques such as myocardial contrast echocardiography may allow inference into microvascular function based on flow parameters, but this inference is not straightforward. Further studies will be required to parse out the contribution of coronary microvascular disease to the connection between CAD and decreased CTh. Our finding of significantly lower CTh in patients with CAD has possible implications for retinal disease in patients with CAD.
Both the photoreceptors and the entire fovea are highly dependent on the choroid for function, with over 90% of the oxygen provided to the photoreceptors coming from the choroidal circulation [37]. A large number of studies have investigated the connection between CAD and AMD, producing mixed results. In a study by Duan et al., patients with choroidal neovascularization were 26% more likely to develop MI compared with controls after adjusting for age, gender, race, and hypertension at baseline [38]. Other studies have found similar relationships between early and late AMD and CVD and its risk factors [39][40][41][42]. However, conflicting studies have shown no relationship [43], or even an inverse relationship, between CVD and AMD [44][45][46][47]. Recent studies have suggested that reticular macular disease, a high-risk sub-phenotype of AMD consisting of reticular pseudodrusen and decreased CTh, may have an even stronger correlation with CAD than does typical AMD [48,49]. Decreased CTh in the absence of retinal abnormalities in CAD may represent the precursor to reticular macular disease.

CTh is known to be thickest at the subfoveal region, with thinning occurring in the nasal and temporal directions [8,9]. Some lesions are known to differentially affect specific regions of the retina, such as reticular pseudodrusen, which are most often found in the superior macula [48]. Reticular pseudodrusen also have a newly emerging association with CAD, and for this reason, we were particularly interested in understanding the topographical pattern of decreased CTh observed in CAD patients compared with controls. This typical topographical pattern was replicated in the control population of our study and, importantly, also in the CAD population.

Our study has a number of limitations. The study groups were relatively small. Although patients with high myopia were excluded from the study groups, we did not collect quantitative axial length data and therefore cannot account for variations in CTh due to myopia of less than 6D, and thus cannot rule out any smaller effects of axial length on CTh. We did not have information on carotid artery stenosis for our CAD or control groups. In addition, because we recruited patients with CAD during the afternoon clinic and control patients during both the morning and afternoon clinics, we were unable to control for diurnal variation in CTh, which is thought to be 20-30 μm from morning to evening [50][51][52]. However, on further analysis of our imaging timings, the average difference between the time of imaging of CAD patients compared to controls was approximately 2 hours; assuming the reported 20-30 μm diurnal range accrues roughly linearly over the waking day (on the order of 2 μm per hour), this would equate to a difference of 5 μm or less between the 2 groups. Strengths of the study include well-characterized, prospectively recruited subjects with CAD from cardiology clinics and carefully selected, age-matched controls. Poor-quality imaging data were excluded. The differences in CTh between the groups were highly significant, and these differences were found at 5 measurement points in the macula.

In conclusion, we evaluated CTh in a CAD group compared to age-matched controls, finding an independent, negative association between CTh and CAD. These findings suggest that CTh may serve as an important disease marker in CAD, providing information on both systemic cardiovascular health and susceptibility to diseases of the outer retina and RPE. Our findings warrant future research on the connection between the choroid and other vital vascular systems of the body.
Further studies may employ SS-OCT imaging in patients with CAD to better understand which choroidal layers are contributing to the decreased CTh we observed.
Influence of Water Content on the Flow Consistency of Dredged Marine Soils

At present, dredged marine soils (DMS) are generally considered geo-waste in Malaysia. DMS are also known for their high water content and low shear strength. Lightly solidified soils such as soil-cement slurry and flowable fill are known as controlled low strength materials (CLSM). On site, CLSM is tested for consistency using an open-ended cylinder pipe; the vertical and lateral displacement from the test determine the quality and workability of the CLSM. In this study, manufactured kaolin powder was mixed with different percentages of water. Cement was also added to compare the natural soil with solidified soil samples. Two flowability test methods were used, namely the conventional lift method and an innovative drop method. The lateral displacement, or soil spread diameter, was recorded and averaged. Tests showed that the soil spread diameter increased almost linearly with the amount of water. The binder-added samples showed no significant difference from the non-binder samples. The mixing water content and the percentage of fines also influenced the soil spread diameter.

Introduction
Removal of dredged marine soils (DMS) from the sea bed is required in order to clear the passageway of ships. In Malaysia, about 300,000 m³ of DMS were gathered and removed as part of maintenance dredging [1]. Soft and problematic soils like DMS possess low shear strength and high water content; generally, the shear strength of DMS is less than 50 kPa [2]. Under such conditions, any development made on top of this unimproved soil risks slope failure and non-uniform settlement. In addition, transporting DMS to designated dumping sites has monetary and environmental implications. For these reasons, DMS are likely to be disposed of rather than reused. Numerous studies show that amending DMS with binders and fillers improves its geotechnical properties. Besides being reused as potential construction materials such as brick and cement [3,4], DMS has primarily been used as reclamation fill. Binders and fillers such as cement, lime, bottom ash, fly ash and steel slag [5][6][7][8] have enabled DMS to be reused as reclamation fill. Large reclamation projects in Australia, Japan and Singapore have successfully utilized DMS as backfill material [9][10][11]. Several similar terms describe the modified fluid-like solid used as reclamation fill. Super geomaterial (SGM) and composite geomaterial (CGM) are both mixtures of dredged soil, binder, granular materials and lightweight materials [12,13]. Self-compacting material (SCM) and controlled low strength material (CLSM) are related terms for mixtures that include less binder and granular material [14,15]. Despite the different terms used, their function as engineered, flowable fills remains the same. The present study relates the flow consistency of soil to its water content: an increasing amount of moisture produces a more fluid-like material, easing the deployment of fills.

Materials
Kaolin and DMS are soils that contain a high percentage of fines and are categorized as fine-grained soils. Table 1 shows the physical properties of the related samples. In this study, manufactured kaolin FM-C powder was used to resemble DMS. Different percentages of water were added to the soil sample, spanning the range from semi-solid to liquefied form. Manufactured cement powder was also used.
The addition of cement to the soil mixture was intended to reveal differences in flow consistency relative to the non-solidified soil sample. Cement dosages of 5% and 10% mark the minimum and maximum binder contents in this study.

Aspect ratio
The test procedure was based on the standard [16], in which an open-ended cylinder is filled with the soil mixture. The sizes and dimensions of the ring pipe cylinders used in other studies are listed in Table 2. The diameter-to-height aspect ratio of the cylinder was used to compare the present cylinder with the standard one. The ring pipe cylinder used in this study is 52 mm in diameter and 30 mm in height. As calculated, the ratio of the present cylinder was less than that of the standard cylinder (0.00037 < 0.51). Hence, the dimensions of the ring pipe cylinder used in the present study are permissible.

Flowability test methods
Batches of soil mixture were mixed with 0.75WL, 1.00WL, 1.25WL, 1.50WL, 1.75WL and 2.00WL of the liquid limit (WL). The variation in water content was intended to examine the behavior of soil at such levels of saturation. Another series of soil mixtures incorporating 5% and 10% of cement was mixed with the same predetermined moisture contents. The flowability test was then performed by two methods, namely the conventional lift method and the innovative drop method. The lift method consists of filling the soil mixture inside a ring cylinder. As the ring cylinder is slowly lifted, the soil sample slumps under its own weight and spreads laterally. Note that before filling, the ring was lubricated with oil or water. Due to the cohesive nature of fine-grained soil, much of the soil sample adhered to the wall of the cylinder. Therefore, an innovative method that allows the soil sample to be dropped from a raised level was devised. Similarly, the soil sample was filled inside the ring cylinder on a base platform; by removing the platform, the soil sample dropped from a height of 175 mm, as shown in Figure 1. The drop method is comparable with the lift method since the soil sample also spreads under gravity. Only the average value of the soil spread diameter was measured and recorded.

Influence of water content and binder
The average values of soil spread diameter (SD) were plotted against water content (WC) as shown in Figure 2. Clearly, the soil spread diameter was influenced by the water content: the spread diameter increased with increasing moisture. The flowability of the soil-binder mixtures was also measured to examine whether distinct spread diameter values occurred. With the addition of cement, a smaller spread diameter was observed at the increasing dosages of 5% and 10%; however, the differences were not significant. It is likely that the cemented soil samples reacted with water, causing cement hydration.

Influence of method
Flowability by the innovative drop method resulted in a larger spread diameter than the conventional lift method. With more water content, the soil matrix tends to loosen and break apart, and samples in the drop method fall under their gravitational weight from a raised level; both factors contributed to the larger displacement. Even so, both methods show an increasing trend of spread diameter against water content.

Influence of ring cylinder
Comparison values of the data for kaolin and the samples from [12] are tabulated in Table 3.
The index values of mixing water content to liquid limit (WCM/WL) of kaolin were predetermined. The index values of spread diameter to initial diameter (SD/D) were clearly lower than the others. Based on the dimensions of the ring cylinders used, only the ring height for kaolin differs from the others; despite this difference, the dimensions of the ring cylinders were not the main reason for the different values. Even if the ring height for the kaolin sample were the same, the SD/D value would still be below the range of the other samples. As illustrated in Fig. 3, the normalized graph of SD/D against WCM/WL shows the distinct values of kaolin versus the other soil samples. According to Table 1, the mixing water content (WCM) and percentage of fines of kaolin were lower than those of the other samples. Evidently, higher water content and finer samples produce larger SD values.

Conclusion
In this study, series of soil mixtures with different water percentages and binder contents were tested. The purpose of this study was to examine the influence of moisture on the soil samples, whether non-solidified or solidified. The findings are summarized as follows (a numerical sketch of these relationships is given after the list):
- Higher water content resulted in a larger spread diameter.
- Adding cement at the minimum and maximum dosages of 5% and 10% affected the spread diameter accordingly, most likely due to the hydration reaction between cement and water.
- Non-solidified and solidified samples showed no significant difference in the spread diameter-water content relationship.
- Despite the different test methods used, both methods showed a similar trend line of spread diameter against water content.
- Mixing water content and percentage of fines affected the soil spread diameter.
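The following Python sketch illustrates, on made-up numbers, the two quantitative threads above: the near-linear SD-WC relationship and the diameter-to-height check of the ring cylinder. The liquid limit and the spread diameters are hypothetical placeholders, not the study's measurements.

```python
# Sketch of the mix design and the spread-diameter trend; the liquid
# limit and S_D values below are assumed for illustration only.
import numpy as np

W_L = 40.0  # liquid limit (%), assumed
multipliers = np.array([0.75, 1.00, 1.25, 1.50, 1.75, 2.00])
water_contents = multipliers * W_L  # mixing water contents WC (%)

# Hypothetical averaged spread diameters S_D (mm) for one test method.
s_d = np.array([60, 75, 92, 108, 121, 138])

# The reported near-linear S_D-WC relationship suggests a first-order fit.
slope, intercept = np.polyfit(water_contents, s_d, 1)
print(f"S_D ~ {slope:.2f} * WC + {intercept:.1f}")

# Diameter-to-height ratio computed directly from the stated dimensions
# of the ring cylinder used in this study (52 mm x 30 mm).
print("D/H =", 52 / 30)
```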
Subjective QoE assessment method for 360° videos

This paper proposes a subjective assessment method for video quality and sense of presence for 360° videos. Since it is necessary to view 360° videos with a wide field of view, we adopted a subjective assessment method that allows repeated viewing of 10-second videos. First, by analysis of the subjects' observation behavior (i.e., head movement), it was clarified that the appropriate number of observation repetitions is 2. The relationship between video quality and sense of presence was then quantified using the proposed method. Finally, we determined the number of subjects necessary to derive stable evaluation results.

Introduction
In recent years, with the development of virtual reality (VR) video technology, various services and applications using VR videos have become widespread, especially in the fields of entertainment, architecture, medical science, education, tourism, etc. Designing VR systems based on quality of experience (QoE) is important for users' comfort. To do this, it is necessary to understand the QoE characteristics of VR videos from various viewpoints. Conventional studies have mainly focused on video quality, sense of presence, immersion, and viewing safety as QoE factors for VR videos. Duan et al. [1] analyzed the effects of the coding rate, video resolution, and frame rate on VR video quality. Tran et al. [2] studied appropriate objective quality metrics for 360° videos. Kim et al. [3] proposed an objective VR sickness assessment network. However, subjective quality assessment methods for VR videos are not well discussed or established. In the absolute category rating (ACR) method, a typical subjective quality assessment method for 2D/3D videos, video quality is assessed using a test sequence of approximately 10 s [4]. In ITU-T and VQEG, however, it has been argued that 10 s may be too short a viewing time for evaluating the quality of 360° video. This paper proposes a subjective assessment method in which the same video sequence is repeatedly viewed so that the video can be viewed from various directions and evaluated. First, by analyzing the subjects' head movement to measure observation behavior, the appropriate number of observation repetitions was clarified. The relationship between video quality and sense of presence was then quantified using the proposed method. Finally, we determined the number of subjects necessary to derive stable evaluation results.

2 Head movement analysis during subjective assessment
Head movement measurement
It is necessary to secure sufficient time to view 360° videos from various directions to stably assess video quality and sense of presence. Therefore, a QoE assessment method in which a 10-second video sequence is observed multiple times is a candidate. To clarify the subjects' viewing behavior when the number of viewings was changed, the subjects' head movement during the assessment test was measured. In this study, an environment for viewing 360° video was constructed using Unity. An HTC VIVE was used as the head-mounted display (HMD), and head movement was measured using the HMD accelerometer. As video content, 9 kinds of 10-second video sequences were shot with a fixed 360° camera. The quality of each video was varied to create three grades, giving a total of 27 test conditions.
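As a rough illustration of how a per-trial head-movement magnitude could be derived from the 90 Hz HMD samples described above, the following Python sketch accumulates absolute frame-to-frame yaw changes on a synthetic trace. The signal and the aggregation rule are assumptions, since the paper does not publish its exact derivation.

```python
# A minimal sketch of deriving the per-trial head-movement magnitude
# from yaw angles sampled at 90 Hz; the trace below is synthetic.
import numpy as np

fs = 90          # HMD sampling rate (Hz)
duration = 10    # length of one test sequence (s)
t = np.arange(fs * duration) / fs

# Synthetic yaw trace (degrees): slow scanning plus small jitter.
rng = np.random.default_rng(1)
yaw = 40 * np.sin(2 * np.pi * 0.15 * t) + rng.normal(0, 0.3, t.size)

# Total movement: accumulated absolute frame-to-frame angular change.
total_movement = np.abs(np.diff(yaw)).sum()
print(f"total yaw movement over {duration} s: {total_movement:.1f} deg")
```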
For subjective assessment, we used three methods in which the number of viewings of the test video sequence was varied from 1 to 3, as shown in Fig. 1. If a subject views the same video content multiple times, the viewing behavior may change; for this reason, each subject assessed different video contents with different qualities when the number of viewings was changed. A total of 9 conditions were assessed in three experiments with different viewing times, so the results for one assessment method included a total of 54 conditions. The subjects were 18 non-experts in video quality (16 male and 2 female college students). In the head movement measurement, pitch, yaw, and roll angles were sampled at 90 Hz from the HMD accelerometer to analyze the head movements of the subjects during viewing. In this analysis, however, roll angles, which had little influence on the analysis of viewing behavior, were excluded. From these data, the average amount of movement of the subject's head over 10 s was derived. After these experiments were completed, we asked the subjects to answer a questionnaire about the appropriate number of viewings for assessing 360° videos.

Experimental results
As a result of the measurements, the maximum amount of head movement was 180° for the pitch angle and 430° for the yaw angle. Because the yaw angle of many subjects was larger than the pitch angle, only the yaw angle data were used to analyze head movement. When the number of viewings was 1, 2, and 3, the average amount of movement was 417.5°, 370.5°, and 373.3°, respectively. There was a statistically significant difference at the 5% level between the result for 1 viewing and the results for 2 or 3 viewings. This indicates that head movement stabilized when subjects had time to view the video sequence multiple times. Moreover, according to the results of the questionnaire about the appropriate number of viewings for assessing 360° video, the average preferred number of viewings was 2.28. Therefore, it is appropriate to set the number of viewings to 2.

Evaluation of relationship between video quality and sense of presence for 360-degree videos
In this section, the relationship between video quality and sense of presence for 360° videos is derived using the proposed subjective assessment method described in the previous section. Moreover, the number of subjects required to obtain stable evaluation results is determined.

Subjective assessment test
The environment for subjective assessment was the same as that described in the previous section, except that the SteamVR Media Player was used as the application to play the 360° videos. The test sequences were 10-second videos shot by a fixed camera, and the following three types were prepared by considering the spatial definition and movement of the videos (video coding method: H.264/MPEG-4 AVC, video resolution: 3840 × 1920p, frame rate: 30 fps):
• Penguins: a crowd of penguins all around,
• Elephant: one of several elephants in a meadow approaching,
• Seal: a seal swimming in the sea.
These videos were re-encoded to change their qualities in four grades; therefore, the number of test conditions was 12. The subjective assessment method proposed in the previous section was used, with the number of viewings of the test sequence set to 2. Video quality was evaluated on a five-grade quality scale (5: excellent, 4: good, 3: fair, 2: poor, 1: bad).
Moreover, the sense of presence was defined as "the feeling of seeing the real thing and/or the feeling of being there" and was evaluated on a five-grade scale (5: extremely present, 4: present, 3: neither, 2: absent, 1: not at all). The subjects were 25 non-experts in video quality (22 male and 3 female college students). The subjective video quality and sense of presence were represented as mean opinion scores (MOS) calculated by averaging the scores of the 25 subjects. Figure 2 shows that the MOS for sense of presence tends to be higher than that for video quality, regardless of the type of video content. That is, the sense of presence decreases as the video quality deteriorates, but the amount of decrease in the sense of presence tends to be less than that of the video quality. This is because, even if the video quality is slightly lower, the subject can view the 360° video from a free viewpoint, and a certain sense of presence is maintained.
Fig. 2. Relationship between video quality and sense of presence.

Number of subjects required for subjective assessment
The relationship between the number of subjects and the stability of the MOS was analyzed using the results obtained in subsection 3.2. The stability was expressed as the mean of the 95% confidence interval (MCI) for the MOS. The stability of the MOS for fewer than 25 subjects was derived by averaging the MCI obtained from randomly selected subjects in three patterns. Figure 3 shows the relationship between the number of subjects and the average MCI: the average MCI decreased as the number of subjects increased.
Fig. 3. Relationship between the number of subjects and the averaged MCI.
According to ITU-T Recommendation P.915 [5], which defines the quality assessment method for 3D videos, the required number of subjects for assessing 3D videos was derived based on an MCI of about 0.32 when 2D videos were assessed by 24 subjects. Therefore, we determined the required number of subjects for 360° videos as the number at which the MCI falls to 0.32 or less. As a result, it was found that the number of subjects needed to assess the video quality and the sense of presence was at least 17 and 23, respectively. Hence, it is more difficult to assess the sense of presence than the video quality.

Conclusion
This paper proposed a subjective assessment method for 360° videos in which the same video sequence is viewed repeatedly. The results of measuring the subjects' head movements and the questionnaire showed that the number of repetitions should be 2. Using the proposed assessment method, we quantified the relationship between video quality and sense of presence for 360° videos and analyzed the number of subjects required to obtain stable evaluation scores. As a result, it was found that the sense of presence decreases more slowly than the video quality for 360° videos. Moreover, from the viewpoint of the stability of evaluation scores, it was shown that 18 or more subjects are required for video quality assessment and 23 or more subjects for sense of presence assessment. In the future, we need to clarify the evaluation characteristics of video quality and sense of presence when 360° videos are distributed over a network and to establish a subjective assessment method for operability in virtual space.
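As a closing illustration of the MOS and MCI computations underlying the subject-count analysis above, here is a minimal Python sketch on synthetic ratings. The normal-approximation confidence interval is an assumption, as the paper does not state how its 95% intervals were computed.

```python
# Illustrative MOS and mean-confidence-interval (MCI) computation on
# synthetic five-grade ratings; values are not the study's data.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_conditions = 25, 12
scores = rng.integers(1, 6, size=(n_subjects, n_conditions))  # grades 1..5

mos = scores.mean(axis=0)  # mean opinion score per test condition

# Half-width of the 95% CI per condition (normal approximation);
# averaging the half-widths over conditions gives the MCI.
ci = 1.96 * scores.std(axis=0, ddof=1) / np.sqrt(n_subjects)
print("MOS:", np.round(mos, 2))
print("MCI:", ci.mean())
```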
Toward Scalable and Transparent Multimodal Analytics to Study Standard Medical Procedures: Linking Hand Movement, Proximity, and Gaze Data

This study employed multimodal learning analytics (MMLA) to analyze behavioral dynamics during the ABCDE procedure in nursing education, focusing on gaze entropy, hand movement velocities, and proximity measures. Utilizing accelerometers and eye-tracking techniques, behaviorgrams were generated to depict the various procedural phases. The results identified four primary phases characterized by distinct patterns of visual attention, hand movements, and proximity to the patient or instruments. The findings suggest that MMLA can offer valuable insights into procedural competence in medical education. This research underscores the potential of MMLA to provide detailed, objective evaluations of clinical procedures and their inherent complexities.

INTRODUCTION
Measurement and data collection instruments structure how we gather research data, whereas models and theories structure how we define what qualifies as valuable information [48]. Once integrated into scientific practice, instruments inspire new theoretical concepts and pave the way for their acceptance within the scientific community [19]. Learning analytics (LA) involves collecting and analyzing educational data to better understand and improve learning [10], and multimodal multichannel trace data is suggested to hold promising potential in providing richer insights into domains of learning across various educational settings [3]. However, learning and its processes are complex. Thus, the more comprehensively and transparently data on learners, environments, and interactions can be traced [49], the better the possibilities for analysis and utilization that emerge. Utilizing diverse forms and sources of data, in other words, multiple data modalities [4], can enhance the precision and scope of understanding learner behaviors in their contexts [14].

Despite the potential for educational research, multimodal and multichannel data collection presents methodological challenges such as instrumentation errors, lack of accuracy and replicability, handling data with varying dimensions (e.g., sampling rates, temporal alignment), securing internal and external validity, and ensuring the reliability of measures [3,52]. Also, when collecting such data, a major issue with many commercial and proprietary measurement systems is the lack of financial scalability, methodological transparency, and control over the underlying algorithms used for data collection and analysis, which can lead to questions about reliability, validity, and ethics [12,52]. Open-source technologies and accessible APIs of hardware instruments can provide promising approaches for constructing scalable and transparent measurement systems [e.g., 32]; however, such systems are often in prototyping stages and might sacrifice accuracy or portability for affordability [35].
This study represents a case of exploratory experimentation [6,13,45] aiming to construct instrumentation for multimodal measurement and analysis of behavior in the context of nurse education. Efficient teamwork is essential in health care, and multimodal approaches to analyzing complex dynamical behavior could provide insight, for example, into collaborative practices between health care professionals in educational settings and the field [53]. Specifically, this study describes a minimum viable experiment (MVE) [13] to discover regularities concerning the complex dynamical behavior of a person conducting a medical ABCDE examination procedure (see Section 2.3). The research aims to answer the following research question: What elements of the ABCDE procedure can be reconstructed from multimodal hand movement, proximity, and gaze data by mainly utilizing affordable technology?

BACKGROUND
2.1 Multimodal learning analytics
Multimodal learning analytics involves collecting, synchronizing, and analyzing various high-frequency data sources like video, logs, audio, and biosensors to study learning in various settings [4]. Different kinds of multimodal multichannel data streams are the key ingredients of MMLA, and Molenaar et al. [37] categorized them as physiological, behavioral, and contextual data. Physiological data, such as heart rate (HR) and electrodermal activity (EDA), have been associated with, for example, cognitive load management [e.g., 29] and emotional states [e.g., 51]. Behavioral data obtained, for example, using eye tracking and wearable motion detectors can capture aspects of learners' activities like movement accuracy and situation awareness as they engage with learning content [e.g., 7, 39]. Contextual data like video recordings, positioning data, and self-reported measures offer insights into learners' interactions and experiences within various environments and learning situations [e.g., 17, 21, 47].

MMLA is expected to produce relevant, accountable, and actionable representations and interpretations while respecting the privacy of the stakeholders [2,14]. For this purpose, the use of interpretable and hyperparameter-free predictive models can produce minimal overhead, maximizing methodological transparency; an example of such a method is the Extreme Minimal Learning Machine (EMLM) with full data as reference points and Mean-Absolute-Sensitivity (MAS) to estimate feature importances [33]. The review by Ouhaichi et al. [40] concluded that there are four trending themes in MMLA research: 1) addressing different contexts of learning, 2) focusing on self-regulated and collaborative learning processes, 3) encapsulation of multisensory affections from heterogeneous data, and 4) use of modern tools and methods for data analysis. Specifically, MMLA research conducted in real educational contexts, in other words, in the wild, is suggested to hold the potential for providing personalized learning experiences [35]. Regarding MMLA, this research aims to integrate behavioral data from hand movement, proximity, and eye tracking utilizing scalable and transparent approaches to facilitate research on collaborative learning processes in the wild.

Behavioral dynamics in health care
Eye tracking has become an instrumental tool in the medical field, for example, in understanding cognitive load and assessing practitioner efficiency. Tokuno et al.
[47] reviewed cognitive load assessment tools in surgical education, revealing a range of subjective and objective measures. Subjective tools included questionnaires like the NASA Task Load Index (NASA-TLX), while objective measures encompassed physiological parameters like heart rate variability, gaze entropy, gaze velocity, and pupil size. Gaze metrics have also been utilized to assess non-technical skills like situation awareness in health care [e.g., 7]. Ahmadi et al. [1] evaluated the mental workloads of Intensive Care Unit (ICU) nurses during their 12-hour shifts, focusing on how stress impacts their eye movement metrics. Their results suggested that periods of high stress are associated with increased eye fixations and gaze entropy and decreased saccade duration and pupil diameter. Wright et al. [50] utilized mobile eye tracking to analyze visual attention patterns during an ultrasound-guided anesthesia procedure to differentiate between proficiency levels of practitioners. Their results showed that experienced medical professionals had fewer visual fixations, spent less time on the procedure, and exhibited less visual entropy, suggesting that eye tracking can offer objective measures for assessing procedural competence and distinguishing expertise levels.

Proxemics, the study of how humans perceive and use space, and examinations of body movements can provide useful information on behaviors in medical education. Momen and Fernie [38] already utilized the hardware of a wireless Sony game controller, including a 3-axis accelerometer, to identify six nursing activities around a patient in order to improve hand hygiene prompts. By attaching five sensors to a nurse's body and analyzing the movements, the research found that the 1-Nearest Neighbour classifier was the most effective in identifying the activities. Morita et al. [39] used Bluetooth accelerometers and optical position tracking to examine microsurgical technical skills. Fernandez-Nieto et al. [17] pointed out the importance of spatial abilities in nursing, especially in effective team interactions and clinical procedures. Using indoor positioning sensors, their research transformed raw positioning data from nursing education classes into meaningful proxemics constructs like co-presence in interactional spaces, socio-spatial formations, and presence in spaces of interest, with the aim of facilitating nurses' reflection, learning, and professional development in simulation-based training. However, indoor positioning systems often require stationary installations bound to a specific space.

Overall, the multimodal analytical advancements in medical education emphasize the importance of real-time assessment and its challenges. Cloude et al. [9] considered metacognition and self-regulation in clinical reasoning and argued that medical education faces challenges in effectively analyzing learning during activities, as most educational settings utilize intermittent assessments that miss real-time information on knowledge, skills, and abilities, highlighting a need for approaches like MMLA. Furthermore, based on an MMLA implementation in nursing education, Martinez-Maldonado et al. [35] pointed out that MMLA systems need to be trustworthy and address data incompleteness while balancing high-quality data capture with the portability and affordability of sensors, and must consider users' concerns about potential distractions and inconvenience due to being monitored.
The ABCDE approach
Healthcare professionals utilize various standard procedures when diagnosing patients. In this study, we focus on one such procedure that goes by the acronym ABCDE, which stands for Airway, Breathing, Circulation, Disability, and Exposure. It refers to a systematic protocol primarily used in emergency medicine but applicable to other healthcare areas [46]. It serves as a universal approach for patient assessment and directs medical professionals, particularly nurses, in conducting an efficient and comprehensive assessment of a critically ill patient's condition [42]. The completion of the assessment involves five stages consisting of different simultaneous and continuous assessment and treatment steps [46]. The procedure starts with an airway assessment to ensure the patient has a clear breathing passage. The patient's respiratory rate and quality are then examined during the breathing analysis. The patient's blood pressure and heart condition are evaluated during the circulation inspection. In the disability stage, neurological function is examined, typically through a quick assessment of the patient's responses. The final step involves a prompt but thorough examination of the patient's body to look for any additional symptoms of disease or trauma. The main aim of the ABCDE approach is that healthcare professionals can accurately prioritize treatments and interventions by consistently following a protocol that simplifies complex clinical situations, allowing them to establish common situational awareness among the medical team and save valuable time [46].

Despite the wide use of the ABCDE approach in various clinical settings, Schoeber et al. [42] found that healthcare professionals' theoretical knowledge of the approach varies based on the professionals' type of department, profession category, and age. The result suggests a need to examine more closely the underlying individual differences beyond theoretical knowledge. For example, eye tracking has been successfully used in evaluating medical professionals' performance of the ABCDE approach. Fernández-Méndez et al. [16] utilized eye tracking to study how lifeguards performed the ABCDE approach. They found that the lifeguards' performance was misaligned with the multimodal data: none of the lifeguards completed the approach correctly, but most of their visual fixations during the assessment procedure were shared between the areas essential for the approach, indicating that eye tracking could be a valuable method for evaluating the performance of medical procedures. Lee et al. [31] utilized eye tracking, log data, and self-reported cognitive load measurements to assess the performance of the ABCDE approach between experts and novices in a medical simulation game. Their results indicated that experts outperformed novices regarding speed, accuracy, and cognitive load, associated with higher prior knowledge.
MATERIALS AND METHODS
3.1 Experimental setting
Two nurse educators specialized in critical care conducted the ABCDE procedure on an actor patient. The experiment was conducted in a classroom simulating a real medical examination room, with real medical equipment and a hospital bed (Figure 1). An actor patient played the role of a patient who had arrived from an appendectomy, a common surgical operation. The task of the participating nurses was to conduct the ABCDE procedure to evaluate the patient's condition. The participant was required to work close to the patient's bed and to utilize the instrument table positioned 6 meters away from the center of the hospital bed. Multimodal measurement was used to record the participants' hand movements, gaze dynamics, and proximity data. Both participants performed the procedure twice, and data were collected for the initial and repeated experiments.

Apparatus
The measurement system (Figure 2) consisted of wireless and wired sensors and recording devices connected to a Raspberry Pi 4 (8 GB) microcomputer that served as a hub for collecting and synchronizing the data streams and forwarding them to a recording laptop through the Lab Streaming Layer (LSL). The system's architectural design aims to be extendable for adding additional measurement instruments and scalable for measuring multiple subjects. Apart from the eye-tracking device, the devices were relatively affordable and accessible and utilized open-source technologies. In this study, the system was capable of real-time measurement and synchronization of five data modalities: hand movements using wireless accelerometers, proximity estimation based on the Bluetooth Low Energy (BLE) signal, eye tracking using Tobii Pro Glasses 3, video recording, and discrete markers used for real-time annotation. Markers were used as a reference point for evaluating the latency of each individual measurement device. In general, the highest latency of the system was assessed to be approximately 50 ms. The wearable accelerometers and the eye tracker were wireless, allowing free and safe movement of the participants.

Hand movement. The micro:bit is a small, versatile, affordable, and programmable open-source ARM-based microcontroller intended for educational and learning purposes, focusing on teaching children the fundamentals of programming and electronics. It includes a variety of sensors, input and output options, and an environment for block-based programming. For example, the micro:bit contains a built-in 3-axis accelerometer that can detect motion, orientation, and tilt. Promoting the constructionist approach by encouraging building interactive projects and engaging with technology, the micro:bit facilitates hands-on learning and fosters creativity and problem-solving skills (Austin et al., 2020). The micro:bit has been used in several studies relating to computing education (e.g., Andersen, 2022). However, to our knowledge, it has not been utilized as a measurement device for scientific work. This study aimed to pilot and evaluate the micro:bit as a scientific instrument. Thus, two micro:bit devices were connected to an add-on shield to enable battery power and wireless wristband use. The devices were attached to the wrists of the participants. The built-in 3-axis accelerometer measured hand movements at a sampling rate of 40 Hz. The devices sent the raw accelerometer signal values over a BLE connection to a third micro:bit connected to the RPi receiver.
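A minimal MicroPython sketch of the wrist unit's role is given below: sampling the built-in accelerometer at roughly 40 Hz and broadcasting the readings wirelessly. Note that this sketch uses the micro:bit's simple radio broadcast module rather than the BLE link described above, and the group id, message format, and timing are assumptions, not the authors' firmware.

```python
# Hypothetical wrist-unit firmware sketch (MicroPython for micro:bit).
# Samples the built-in 3-axis accelerometer at ~40 Hz and broadcasts
# readings; the receiver micro:bit must be configured to the same group.
from microbit import accelerometer, sleep
import radio

radio.on()
radio.config(group=7)  # assumed group id, shared with the receiver

while True:
    x, y, z = accelerometer.get_values()  # raw readings in milli-g
    radio.send("{},{},{}".format(x, y, z))
    sleep(25)  # ~25 ms period, i.e., roughly 40 Hz
```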
Proximity. Spatial behavior in terms of proximity was measured using the Received Signal Strength Indicator (RSSI) of the accelerometers, which were connected via BLE to the third micro:bit serving as a receiver. RSSI in Bluetooth technology is a metric that quantifies the power level of a received radio signal. It is commonly used to estimate the distance between devices, as signal strength typically decreases with increasing distance. By employing wireless Bluetooth-based instruments and RSSI, researchers can utilize proximity-based methods [e.g., 36]. RSSI values can be influenced by various factors, such as environmental conditions and obstacles that interfere with radio waves [e.g., 27]. In this study, no significant structures interfered with the Bluetooth signal. Thus, the raw RSSI values between the hand movement sensors and the receiver were used, and the signal was calibrated based on the closest and farthest distances to the point of interest (POI). The third, receiver micro:bit was placed on the chest of the actor patient, serving as the POI (Figure 1). The farthest point was chosen to be the table containing some of the medical instruments the participants had to use in the procedure. The experiment was designed spatially so that the participant moved mainly around the patient's bed and between the bed and the medical instrument table.

Gaze. A Tobii Pro Glasses 3 eye-tracking device (sampling rate 50 Hz) was used to record participants' eye movements. The raw signal was the (x, y) coordinate of the participant's gaze in the visual measurement plane of the device. The coordinate values were continuous and in [0, 1]. Blinks were coded as missing values because they caused interruptions in the measurement signal. The Tobii Pro Glasses 3 API was used to communicate with the eye-tracking device. The synchronization of the accelerometer and eye-tracking signals was verified by asking the subject to fixate on a stationary point and perform a slow vertical head movement while a micro:bit was attached to the forehead of the subject wearing the eye-tracking glasses. A similar approach has been used, for example, to synchronize eye-tracking and motion-capture systems [5]. Figure 3 shows the synchronized vertical head movement (up and down) measured using the micro:bit and the slowly changing vertical eye movement when fixating on a stationary point. The use of raw gaze signals in this study allowed context-free analysis without the need to define areas of interest (AOI).

Video and markers. The video recorder setup consisted of a laptop and a webcam. The webcam stored the raw video file on the laptop's local hard drive and sent the video stream's frame numbers over LSL to the recording laptop. Synchronized frame numbers enabled the synchronization of the video with the other multimodal data. The laptop was also used to send and synchronize keyboard markers over LSL for live annotation of the experiment. Markers were used to sequence the multimodal data according to the steps and phases of the ABCDE procedure. Synchronized video and markers served as ground truth for validating the analysis results.
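The following Python sketch illustrates the proximity operationalization described above: take the stronger of the two wrist-sensor RSSI values and discretize it against calibrated thresholds. The threshold values are illustrative placeholders, not the study's calibration constants.

```python
# Sketch of RSSI-based proximity discretization; thresholds are assumed.
import numpy as np

def proximity_state(rssi_rh, rssi_lh, near_dbm=-55.0, far_dbm=-75.0):
    """Return 'patient', 'table', or None (intermediate space)."""
    p = np.maximum(rssi_rh, rssi_lh)   # strongest of the two wrist signals
    state = np.full(p.shape, None, dtype=object)
    state[p >= near_dbm] = "patient"   # close to the POI on the chest
    state[p <= far_dbm] = "table"      # near the instrument table
    return state

rssi_rh = np.array([-50.0, -62.0, -80.0])
rssi_lh = np.array([-53.0, -70.0, -78.0])
print(proximity_state(rssi_rh, rssi_lh))  # ['patient' None 'table']
```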
Analysis procedure
To reduce the incoherence in the RSSI signal, proximity was operationalized based on the strongest signal of the accelerometers in the right hand (rh) and left hand (lh) for each time point t: p_t = max(RSSI_rh(t), RSSI_lh(t)). Based on the calibration measurements in the experiment, the signal was discretized as a binary variable to indicate the time points when the participant was working beside the patient and beside the medical instrument table. A missing value suggested that the participant was located somewhere in the intermediate space. Hand movements were operationalized using the velocity of the movement. Before calculating velocity, the signal was preprocessed by applying a Savitzky-Golay filter for denoising [24,41].

Entropy provides a useful metric for understanding the degree of variability, disorder, or unpredictability in the studied data or system. For example, entropy has been used to examine the development of attention to faces [18] and webpage aesthetics [20]. Stationary gaze entropy, reflecting the overall spatial dispersion of gaze, was used to operationalize gaze dynamics between explorative (i.e., wider gaze dispersion) and exploitative (i.e., limited gaze dispersion) phases, where lower entropy is interpreted to indicate more exploitative, spatially focused, and coherent visual focus [18,22,30,44].

In the context of information theory, entropy quantifies the uncertainty or randomness of a set of outcomes or events. Entropy can be quantified using the Shannon entropy [43], defined as the average Shannon information content of an outcome [34]. In other words, it quantifies the average amount of information needed to describe an outcome from a random variable following a given probability distribution. The measured gaze data concern two coordinate variables. Using the logarithm base 2, Shannon entropy is measured in bits [43], and the joint entropy of two variables is [34]:

H(X, Y) = -Σ_x Σ_y p(x, y) log2 p(x, y).

The raw gaze signal was preprocessed using cubic spline imputation [e.g., 18] to deal with the missing coordinate values caused by blinks. A probability distribution of the continuous gaze measurement signal was needed for calculating the joint entropy. To create such a distribution, the data were discretized into equally sized bins representing the state space of gaze behavior. In other words, the discretization divided the measurement plane of the eye tracker into a 100 x 100 matrix, each cell depicting the probability of gazing at that section of the visual plane during a specific time period. Entropy was calculated over a sliding window of 5 seconds. To evaluate the robustness of the approach, different discretization group sizes (i.e., 10, 25, 50, 75) and sliding windows (i.e., 2, 3, 4, 6 s) were tested; the results were qualitatively the same.

The measurements were visualized using a behaviorgram, a graphical representation that visually depicts patterns of behavior, interactions, or activities over time. Like exploratory data analysis, visual analytics aims to uncover knowledge and acquire insight from complex data sets [11]. Behaviorgrams can be used in visual analytics to understand the behavior of individuals or groups in contexts such as psychology and human-computer interaction [e.g., 8]. The custom extended behaviorgram (Figure 4) presented in this study exploits dimensional stacking and the dense pixel technique [25,26] to visualize temporal relationships of all the measured dimensions.
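The following Python sketch mirrors, under simplifying assumptions, the two operationalizations just described: Savitzky-Golay denoising before deriving hand velocity, and windowed joint Shannon entropy of discretized gaze coordinates. The crude drift-corrected integration and the non-overlapping windows are simplifications, not the authors' exact pipeline.

```python
# Illustrative signal-processing sketch; parameters are assumptions.
import numpy as np
from scipy.signal import savgol_filter

def hand_velocity(accel_mag, fs=40, window=11, poly=3):
    """Savitzky-Golay denoising of an acceleration-magnitude signal,
    followed by a crude, de-drifted cumulative integral to velocity."""
    smooth = savgol_filter(accel_mag, window_length=window, polyorder=poly)
    return np.cumsum(smooth - smooth.mean()) / fs

def gaze_entropy(gx, gy, fs=50, win_s=5, bins=100):
    """Joint Shannon entropy (bits) of gaze coordinates in [0, 1]^2,
    computed over consecutive (non-overlapping) windows."""
    n = int(fs * win_s)
    ent = []
    for i in range(0, len(gx) - n + 1, n):
        h, _, _ = np.histogram2d(gx[i:i + n], gy[i:i + n],
                                 bins=bins, range=[[0, 1], [0, 1]])
        p = h.ravel() / h.sum()
        p = p[p > 0]                       # 0 * log(0) treated as 0
        ent.append(-(p * np.log2(p)).sum())
    return np.array(ent)

# Tiny demo on synthetic data: 10 s of gaze samples at 50 Hz.
rng = np.random.default_rng(3)
gx, gy = rng.random(500), rng.random(500)
print(gaze_entropy(gx, gy))
```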
The central axis of the behaviorgram represents temporal hand movement velocities as an accelerograph. The accelerograph is asymmetrical about the central line, the lower part representing the right hand and the upper part representing the left hand. The color coding of the accelerograph exploits the dense pixel technique, a sort of heatmap that depicts higher RSSI signal strength in a brighter color, indicating higher proximity to the POI. The accelerograph's upper temporal segment illustrates the participant's binary position (i.e., beside the patient, beside the instrument table). The lower segment depicts the gaze entropy, where the mean entropy was set as the threshold for marking a segment as denoting low entropy (i.e., more coherent and spatially focused visual perception). Furthermore, the extended behaviorgram was reduced to a more simplified behaviorgram (Figures 5 and 6). The simplified behaviorgram captures proximity and combines the dimensions of hand movement and gaze entropy, specifically illustrating the participant's behavioral dynamics with respect to the patient. Behaviorgrams were discretized into broad behavioral phases based on video observation and marker annotations.

RESULTS
Based on visual analytics, the expected steps in the ABCDE procedure, and validation against the video recordings and annotations, the behaviorgrams were found to consist of four phases (Figures 5 and 6). In Phase I, the nurse retrieved medical instruments and attached them to the patient, which involved hand movements close to the patient while maintaining high visual focus when handling the instruments. Phase II consisted of monitoring respiration frequency by observing the patient's chest, visually confirming other vital signs from the medical monitor (IIa), and using a stethoscope to auscultate the patient's chest (IIb). In Phase II, the nurses were positioned either close to the patient or in the intermediate space between the patient and the instrument table. Phase II corresponds to the Breathing assessment in the ABCDE approach, and it is characterized by high visual attention as measured using gaze entropy, because the tasks require focusing on the patient and the monitor. Phase IIa involved only visual observations without hand movements, while in Phase IIb some hand movements can be seen in the behaviorgram because of the use of the stethoscope.

The latter part of the ABCDE procedure (Phase III) involved fetching instruments from the instrument table and performing small examination operations close to the patient (e.g., measuring body temperature and giving medication). Thus, Phase III corresponds to the Circulation, Disability, and Exposure assessments in the ABCDE approach, and it is characterized by changes in proximity and alternating hand activity combined with low gaze entropy (i.e., coherent visual perception). Phase IV consisted of retrieving a checklist and reviewing the patient's condition according to the list. This phase was characterized by changes in proximity, a few short periods of hand movements, and visual focus.
Accelerographs representing hand movement velocity showed specific dynamical patterns based on the phases of the ABCDE procedure. The preparation (Phase I) involved attaching medical instruments to the patient, which is seen as a continuous period of high hand movement in all behaviorgrams. Specifically, hand disinfection clearly showed up as high peaks in velocity combined with a greater distance from the patient, because it was performed at the instrument table. For example, Nurse 1 performed three hand disinfections in Phase III of the repeated experiment (Figure 5b). On the other hand, the phases in which the nurse mainly observed the patient's condition visually were characterized by low hand activity (Phases IIa, IIb, and IV).

DISCUSSION AND CONCLUSION
The results of this study underscore the potential of using multimodal learning analytics for understanding behavioral dynamics in the medical field. Utilizing relatively affordable technology and visual analytics, the research was able to trace the different phases of the ABCDE procedure and discern the behavioral patterns associated with each phase. The clear co-occurrence of hand movement activity, gaze entropy, and spatial location across the various stages suggests that these metrics provide insights into the dynamics of the procedure. Notably, low gaze entropy indicated periods of consistent visual perception throughout the procedure, suggesting that medical professionals frequently alternate between explorative and exploitative gazes, especially during intricate procedures. This is particularly significant when considering the importance of visual attention in medical tasks and how it can influence the outcome of procedures.

Integrating the multimodal data into a behaviorgram revealed distinct visual patterns based on the different phases of the ABCDE procedure. The results showed that the preparation for the procedure, the breathing assessment, the Circulation/Disability/Exposure phase, and the review phase could be identified. Different periods of eye-hand coordination can be distinguished when combining gaze entropy with information from the accelerograph (i.e., high hand activity and high visual focus, low hand activity and high visual focus). Specifically, phases characterized mainly by visual observation displayed visual focus and reduced hand activity, thus allowing for the differentiation between manual and observational phases of the procedure. The proximity measure captured the movement of the nurse between the patient and the instrument table. In general, the multimodal behaviorgrams and the results based on visual analytics could be linked with actual behavioral dynamics during the procedure.

The results highlight that multimodal multichannel data collection approaches could and should be examined for validity before feeding the data to complex machine learning and artificial intelligence algorithms. Before engaging with more complex analysis techniques, it can be helpful to utilize visual analytics to examine the potential patterns in the data. Such an approach can assist in validating the measurement procedures and facilitate transparency of the more complex analyses. The results provided initial evidence of validity and reliability: the results aligned with visual analytics and the observed behavior in marker-annotated video recordings for both the initial and repeated examinations for both subjects. In other words, the results showed evidence of within-subject and between-subjects similarity.
The multimodal multichannel measurement in this study utilized relatively affordable technology (i.e., Raspberry Pi, micro:bit), enabling the techniques to be scaled to multiple subjects. The results showed that the micro:bit has the potential to produce accurate multimodal measurement data while the Raspberry Pi functions as a recording device. The expensive part of the instrumentation was the eye-tracking device; however, more affordable devices may be introduced to the market as technology advances. Scalable and transparent measurement and analysis of behavioral dynamics can enable research in the wild, which refers to approaches to studying and understanding human behavior and technology interactions in real-world, everyday settings, as opposed to controlled lab environments [23,35]. For example, such approaches can enable research in medical situations where an observer cannot enter the room of a patient [e.g., 15]. Furthermore, Kolbe and Boos [28] pointed out the limitations of traditional team research methods in healthcare, which often focus on static descriptions rather than dynamic team processes over time. They suggested that more profound insights into the intricacies of teamwork can be achieved by adopting methodological approaches that consider dynamics, such as event- and time-based observations, social sensor-based measurement, and micro-level coding. Thus, potential applications of the approach presented in this research could include the analysis of situation awareness, professional noticing, and joint visual attention in collaborative tasks, understanding the dynamics of patient care, and exploring how medical instruments are handled in real-world scenarios. These insights could inform training programs, process improvements, and even technology design for healthcare contexts. However, it is worth being aware of and clearly defining the limits and scope of multimodal approaches; in other words, "noting one's paradigm's relatively well-marked perimeter is a hallmark of sound and responsible science" [48, p. 288]. In conclusion, this research adds value to medical education by emphasizing the importance of integrating multimodal measures to comprehensively understand medical professionals' behavior during standard procedures.

Limitations and future research

Like other similar studies implementing an exploratory approach, this study can be criticized for its lack of a control group and the generalizability of its results. Furthermore, while the behaviorgram provides a comprehensive overview of behavior over time, it does not capture nuanced processes underlying specific actions. Finally, the study utilized a very small sample size, and the generalizability of the results to other medical procedures beyond the ABCDE approach remains to be explored. Future studies need to utilize more comprehensive analysis techniques and delve deeper into individual differences among professionals and how these might influence the observed behaviors. Furthermore, a critical issue for future work is to examine how to bridge observed behavioral dynamics with cognitive functions and outcomes.

Figure 1: Experimental setting where the point of interest (POI) indicates the reference point for proximity estimation. The gray area indicates an area close to the patient, whereas the blue area is a specific area far from the patient.

Figure 2: An overview of the apparatus used to measure and record all five data streams.
Figure 4: An example of a behaviorgram fusing the dynamics of the multimodal data.
Combining thermal, tri-stereo optical and bi-static InSAR satellite imagery for lava volume estimates: the 2021 Cumbre Vieja eruption, La Palma

Determining the outline, volume and effusion rate during an effusive volcanic eruption is crucial, as these are major controlling factors of lava flow lengths, the prospective duration and hence the associated hazards. We present for the first time a multi-sensor thermal-and-topographic satellite data analysis for estimating lava effusion rates and volume. At the 2021 lava field of Cumbre Vieja, La Palma, we combine VIIRS + MODIS thermal data-based effusion rate estimates with DSM analysis derived from optical tri-stereo Pléiades and TanDEM-X bi-static SAR data. This multi-sensor approach allows us to overcome the limitations of single-methodology studies and to achieve both high-frequency observation of relative short-term effusion rate trends and precise total volume estimates. We find a final subaerial lava volume of $212 \times 10^{6} \pm 13 \times 10^{6}\ \mathrm{m}^{3}$ with a MOR of 28.8 ± 1.4 m^3/s. We identify an initially sharp eruption rate peak, followed by a gradually decreasing trend, interrupted by two short-lived peaks in mid/end November. A high eruption rate accompanied by weak seismicity was observed during the early stages of the eruption, while during the later stage the lava effusion trend coincides with seismicity. This article demonstrates geophysical monitoring of eruption rate fluctuations, which allows speculation about changes of an underlying pathway during the 2021 Cumbre Vieja eruption.

…eruptive events, as was the case during the 2021 Cumbre Vieja, La Palma volcanic eruption. Satellite-based volcano monitoring often relies on thermal, optical and Synthetic Aperture Radar (SAR) data analysis. Thermal Earth Observation (EO) provides valuable information for estimating the lava effusion rate 8 and has been a well-established technique for volcano monitoring since the early 1980s, beginning with the NASA Land Remote Sensing Satellite (Landsat) Thematic Mapper (TM) series and the Advanced Very High Resolution Radiometer (AVHRR) onboard the National Oceanic and Atmospheric Administration (NOAA) satellites 9,10. Important developments in automated thermal hotspot detection approaches are based on the Moderate Resolution Imaging Spectrometer (MODIS), provided by the Middle InfraRed Observation of Volcanic Activity (MIROVA) system 11 and the MODIS volcano detection algorithm (MODVOLC) 12. The high capabilities of the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor, the MODIS successor mission, for thermal anomaly detection were confirmed in refs. 13,14. Other automated hotspot detection systems such as HOTVOLC 15 and HOTSAT 16 enable volcanic activity analysis in near real time as they are based on high temporal resolution data from geostationary satellites (e.g., the Geostationary Operational Environmental Satellite (GOES) and Meteosat). Thermal satellite imagery has been used to investigate a variety of thermally emitting volcanogenic phenomena, such as lava lakes 17, active lava flows 14 and lava domes 18.
Harris and Rowland as well as Harris and coauthors give a comprehensive review of the relationship between effusion rates and the thermal emission of lava flows, and of how to derive lava effusion rates from thermal satellite imagery 5,19. As described in detail in Coppola and coauthors 20, based on the original heat balance approach of Pieri and Baloga 21, two main approaches for lava effusion rate estimation from thermal satellite data have been reported in the literature: the thermal infrared (TIR) data method 22,23 and the mid infrared (MIR) data technique 24. Here, the following terms are used as defined in Harris and coauthors 23: The effusion rate is the instantaneous rate at which lava is erupted at any time. The mean output rate (MOR) is the entire erupted lava volume (after the end of the eruption) divided by the total eruption duration. The time-averaged discharge rate (TADR) is the lava volume emplaced averaged over a given time period. According to Wright and coauthors as well as Harris and coauthors, for satellite data-based analysis of effusion rates the TADR is considered the most suitable quantity, as satellite sensors measure changes in lava volume not over the whole eruption duration but over a given time period prior to each satellite image acquisition 22,23. Examples of recent eruptions studied with thermal EO data-based TADR estimates are the 2014-2015 Holuhraun eruption 20, the 2018 eruption and sector collapse at Anak Krakatau 25 and the 2018 Kīlauea volcano eruption 14.

High-resolution (HR) and very high-resolution (VHR) optical satellite imagery are ideally suited for the detailed analysis of, e.g., lava flows and pyroclastic density currents 26. Optical stereo data enable the generation of an up-to-date digital surface model (DSM) of the lava flow topography. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) can acquire stereoscopic images at 15 m spatial resolution for deriving DSMs; the time difference between the two along-track ASTER images is only a few seconds 27. Modern sensors such as Pléiades, with its tri-stereo acquisition capability, enable the generation of a detailed DSM of the volcano surface. For example, Bagnardi and coauthors compared topographic information derived from post-eruptive Pléiades imagery and pre-eruptive TanDEM-X data to measure the erupted lava area, volume, and the MOR of the 2014-2015 Fogo Volcano eruption 28.

In contrast to optical and thermal sensors, SAR is the only system that provides useful information on the Earth's surface almost completely independent of weather and daylight, including during explosive eruption events when the visibility and applicability of optical and thermal sensors are limited by meteorological or volcanic ash clouds, respectively. Analysis of SAR amplitude data is a well-established tool for volcano monitoring, even when major changes occur on the surface of the volcano, e.g. due to explosive eruptions 29,30, and allowed the monitoring of the aligned craters at Cumbre Vieja 31. SAR interferometry (InSAR) can be applied to measure slow terrain motion in unvegetated, snow-free areas 32. However, classical repeat-pass InSAR cannot be applied at sites of strong surface change between the two SAR acquisitions from which the interferometric phase is derived. Topographic differences calculated from two DSMs, one derived from a pre-eruption InSAR pair and a second one from a post-eruption InSAR pair, give information about the erupted lava volume and the MOR 33,34.
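As a quick consistency check of the MOR definition above, the final figures reported later in this study (a subaerial volume of 212 × 10^6 m^3 over the official eruption duration of 85 days and 8 h) reproduce the stated mean output rate; the short Python snippet below is purely illustrative.

volume_m3 = 212e6                    # final subaerial lava volume (m^3)
duration_s = (85 * 24 + 8) * 3600    # official eruption duration: 85 d 8 h
print(f"MOR = {volume_m3 / duration_s:.1f} m^3/s")   # -> MOR = 28.8 m^3/s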
However, for detailed TADR estimates, more information is required, i.e. information about the lava flow topography during an ongoing eruption. Here, bi-static InSAR data acquisitions are very useful. For instance, in bi-static mode the TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) mission provides two SAR images acquired over the same area at the same time (i.e. there is no temporal de-correlation of the interferometric phase), which enables the generation of an up-to-date DSM of the study site 35. For example, Poland used a time series of TanDEM-X acquisitions and generated differential DSMs to measure the TADR of subaerial lava at Kīlauea Volcano, Hawai'i 36. Hence, multi-sensor remote sensing, i.e., a combination of data from different Earth observation disciplines, can make a significant contribution to the understanding of volcanic processes 37. Example studies are described by Walter and coauthors investigating the 2018 flank collapse of Anak Krakatau 25, Plank and coauthors analyzing the 2018 dome collapse at Kadovar 18, and Shevchenko and coauthors describing the 2018-2019 eruption episode at Shiveluch volcano 38. In our study, we follow a multi-sensor data approach and describe for the first time a combination of thermal, tri-stereo optical and bi-static InSAR satellite imagery for analyzing lava effusion rates and volume. The methods are described in detail in the "Methods" section. We investigated the 2021 Cumbre Vieja, La Palma eruption, which is briefly described in the next section.

The 2021 Cumbre Vieja, La Palma eruption. On September 11, 2021, a seismic swarm began and gradually intensified over the following days 39. This seismic swarm indicated a magma pathway that propagated upward, culminating in the opening of an 800 m long fissure on the mid-western flank of Cumbre Vieja, more precisely in the area of Hoya de Tajogaite, El Paso. The eruption intensified over the next weeks and was characterized by lava fountaining from multiple vents aligned in NW-SE direction 31, with sometimes four to five vents being active simultaneously, Strombolian explosions, and advancing lava flows towards the western coast of La Palma Island (Fig. 1). Dense geophysical and geochemical monitoring allowed recording temporal changes in high quality 39 and assessment of possible volcano-tectonic control 40. The lava flows entered the ocean from September 28 onwards and initiated the formation of three lava deltas that eventually connected to one large and one smaller lava delta. The lava flows were regularly traced by the Copernicus Emergency Management Service (EMS), yielding a lava area of over 12.4 km^2. Around 3000 houses were destroyed and over 7000 people were displaced 41-43. The eruption had a duration of 12 weeks, four times the duration of the last eruption in 1971. Locally, lava flows exceeded 30 m in thickness, but systematic volume and discharge estimates were not available. Due to the high-impact hazards, the area was hardly approachable by field sensors, and only localized drone flights allowed estimation of the dimension and evolution of surface changes. Therefore, we acquired and analyzed multi-sensor satellite data. During the initial dike intrusion period, multi-temporal differential SAR interferometry analysis of PAZ, TerraSAR-X/TanDEM-X, Cosmo-SkyMed and Sentinel-1 showed over 40 cm of deformation along the line-of-sight of the ascending passes of the affected slope towards the sea 44.
To study the spatio-temporal evolution of lava flows and ash deposition, the ensuing eruption was monitored with SAR amplitude and VHR multispectral imagery (e.g., Pléiades and GeoEye). For example, the Rapid Mapping service of Copernicus EMS was activated and produced 64 products (https://emergency.copernicus.eu/mapping/list-of-components/EMSR546). The spatio-temporal evolution of the lava flow area is well documented by the EMS maps. However, these maps do not answer the question of the temporal evolution of lava effusion rate and volume. In this study, we answered this question by investigating multi-sensor satellite imagery.

Results

We jointly analyzed thermal MODIS and VIIRS, VHR optical tri-stereo Pléiades and bi-static TanDEM-X SAR satellite data to investigate the 2021 Cumbre Vieja eruption event. Eventually, we compare trend changes to a seismic catalogue acquired by independent methods to develop a conceptual model explaining the observations. The data and methods are described in detail in the "Material and methods" section.

Lava volume estimates from VHR satellite imagery. Comparison of the TanDEM-X and Pléiades DSMs with the pre-eruption LiDAR DSM enabled the generation of lava flow thickness maps (Fig. 2) and the calculation of the lava flow volumes (Table 1) for the dates of the VHR satellite data acquisitions. First, in order to correct for potential offsets, the TanDEM-X and Pléiades DSMs were compared with the pre-eruption LiDAR DSM in areas close to the lava flow but unaffected by it. Second, the offset-corrected TanDEM-X and Pléiades DSMs were cut to the lava flow areas according to the corresponding Copernicus (EMS) mapping (cf. Table 2), and the lava volumes were derived for these areas by difference calculation between the co-/post-eruption TanDEM-X and Pléiades DSMs and the pre-eruption LiDAR DSM (cf. "DSM generation and lava volume estimates from optical tri-stereo data" and "DSM generation and lava volume estimates from bi-static TanDEM-X SAR data" sections for details). The uncertainty values reported are based on height differences of the satellite data-based DSMs to the pre-eruption LiDAR DSM within areas not affected by the eruption. Figure 2 also shows the formation of two lava deltas at the southern part of the lava flow (October, Fig. 2a), which then connected to one large delta (November, Fig. 2b). A third lava delta formed later at the northern part of the lava flow (December, Fig. 2c). Two profiles of the pre-eruption and post-eruption topography, as well as of the lava flow thickness, are shown in Fig. 3. The uphill vent area and the coastal lava deltas show the highest lava flow thickness.

Lava effusion rate and volume estimates from thermal satellite imagery. Figure 4 shows the TADR derived via the combined analysis of MODIS and VIIRS thermal imagery. This analysis is based on the empirical relationship between the radiative power measured by the thermal sensor over the lava field (volcanic radiative power, VRP) and the silica content of the lava, which regulates the viscosity and thereby the flowing properties of the lava 24 (cf. "Lava effusion rate and volumes estimates from thermal satellite imagery" section for more details). The first 4 days of the eruption showed relatively low effusion rates of ~1.2 m^3/s, but from September 24, 2021 onwards, and especially from September 27 onwards, a strong increase of the effusion rates up to values of 42.7 ± 21.3 m^3/s was observed.
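Returning briefly to the DSM-based volume estimates above: the offset-correction and differencing procedure can be summarized in a compact sketch. The array names, masks, and pixel size below are illustrative assumptions, not values from this study.

import numpy as np

def lava_volume(pre_dsm, post_dsm, lava_mask, stable_mask, pixel_area=25.0):
    """Lava volume (m^3) from DSM differencing; pixel_area in m^2
    (e.g. 25.0 for a 5 m x 5 m grid). stable_mask marks terrain
    unaffected by the eruption and is used to estimate and remove
    any vertical offset between the two DSMs."""
    offset = np.nanmean((post_dsm - pre_dsm)[stable_mask])
    thickness = (post_dsm - offset) - pre_dsm   # offset-corrected difference
    return float(np.nansum(thickness[lava_mask]) * pixel_area)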
Seismic data showed a reactivation of the 12-15 km cluster on September 27, 2021 (00:00 UTC), indicating the activation of a deeper magma source. Therefore, from that date onwards, we assume a change of the lava composition from tephrite to basanite, as was observed in previous eruptions at La Palma (e.g. in 1949 and 1971 45,46). This lava composition change is accompanied by a decrease in lava viscosity (cf. Table 2). A change of lava composition that causes a decrease of the viscosity leads to an increase of the effusion rate 14,24. For the TADR we used the silica content of tephrite at the beginning of the eruption and then replaced it with the silica content of basanite from September 27, 2021 onwards. From the beginning of October until the beginning of December 2021, we observed an average effusion rate of 13.3 m^3/s, with peaks of increased effusion rates on October 21, November 15 and December 2. Then, the effusion rate continuously declined down to 0.1 m^3/s from December 15 until the end of the observation period on December 25, 2021. The official eruption end date was December 13, 2021. Thus, these very low eruption rates from December 15 until December 25 are due either to residual heat from the recent lava flow emplacement or to minor but non-zero amounts of material continuing to flow in well-insulated lava tubes.

Thus, we can identify two main phases: a rising phase during the first 2 weeks of the eruption (Phase I, cf. Fig. 4), followed by a long waning phase (Phase II, cf. Fig. 4) lasting 10 to 12 weeks (depending on the end of the eruption, as mentioned above). The waning phase is interrupted by short pulses: a smaller increase of the effusion rate during mid-October (Phase IIb, dashed-outline blue box in Fig. 4), and two stronger pulses during mid-November (Phase IIc) and again in late November/beginning of December (Phase IId; blue boxes in Fig. 4). Figure 4 also shows the cumulative lava volume (cf. "Lava effusion rate and volumes estimates from thermal satellite imagery" section) and its temporal evolution as estimated from thermal satellite imagery by computing the integral of the consecutive TADR estimates. We see a strong increase of the lava volume at the end of September 2021, as shown in the TADR estimates. This is followed by an almost linear growing trend until December 15, when the lava volume reached a steady state (flattening of the volume curve). The thermal data estimates give a final erupted lava volume of 103 × 10^6 ± 51 × 10^6 m^3, which results in a MOR of 12.3 ± 6.1 m^3/s for the period of high thermal activity (97 days) or a MOR of 13.9 ± 7.0 m^3/s for the official eruption period (85 days and 8 h), respectively.

Considering the first 5 weeks of the waning phase (Phases IIa and IIb, cf. Fig. 4), we can apply a simple mathematical model, $\mathrm{TADR} = -17.91\,\mathrm{m^3\,s^{-1}} \times \ln(d) + 77.422\,\mathrm{m^3\,s^{-1}}$, with d representing the day of the eruption, explaining 82.54% of the observed trend. Moreover, if we only consider these first 5 weeks of the waning trend (Phases IIa and IIb), we could predict the duration of the eruption with 88.23% confidence. That is, based only on the information available during Phases IIa and IIb, we would expect the eruption to end after 75 days, which is 10 days earlier than the official eruption end (cf. previous paragraph). The reason for the underestimation of the eruption duration is the two short effusion rate pulses (Phases IIc and IId) during mid-November and again in late November/beginning of December.
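Setting the fitted logarithmic model to zero reproduces the 75-day forecast quoted above; the snippet below simply solves TADR(d) = 0 for d and is a restatement of the fit, not an independent result.

import math

a, b = -17.91, 77.422          # fitted coefficients (m^3/s)
d_end = math.exp(-b / a)       # day on which TADR(d) reaches zero
print(f"predicted eruption end: day {d_end:.0f}")   # -> day 75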
"Comparison with seismic observations" section discusses the Phases IIc and IId in detail and compares the TADR measurements with seismic observations. Lava volume estimates from multi-sensor satellite data. Based on the VHR TanDEM-X lava volume measurements, we calibrated the thermal estimates of Fig. 4. This calibration was performed by replacing the thermal volume estimates by the next available VHR TanDEM-X lava volume measurement acquired during the eruption (cf. "Methods" section "Lava volume estimates from multi-sensor satellite data"). This calibrated time series combines information about short-term eruption rate changes, derived from the high frequent thermal observations, with precise estimates of the absolute volume (TanDEM-X) whenever available. Figure 5 shows the original and the calibrated thermal data-based volume time series together with the volume measurements derived from the VHR TanDEM-X and Pléiades satellite data. The calibrated thermal estimates slightly still www.nature.com/scientificreports/ www.nature.com/scientificreports/ underestimate the final lava volume measured by Pléiades. But, considering the relatively high uncertainty of the empirical thermal methodology (cf. "Lava effusion rate and volumes estimates from thermal satellite imagery" section), we see that the calibrated thermal estimates are within the uncertainties of the Pléiades estimates. Table 1. Volume estimates based on VHR satellite data (*the software packages CATENA and Agisoft Metashape follow different processing schemes, cf. "DSM generation and lava volume estimates from optical tri-stereo data" section). Copernicus EMS only reports about the final erupted (subaerial) lava volume. In contrast to this, in our study we analyzed in addition to the post-eruptive Pléiades tri-stereo dataset, also a series of co-eruptive TanDEM-X bi-static datasets and a dense time series of thermal EO data (MODIS and VIIRS), which enabled us to measure not only the final erupted lava volume, but also its spatio-temporal evolution. Using a structure-from-motion approach, Civico and coauthors generated a detailed (0.2 m resolution) post-eruptive DSM of the 2021 Cumbre Vieja lava flows based on drone imagery 47 . The authors report for the subaerial deposit of lava flows and proximal fallout a volume of 217 × 10 6 ± 6.6 × 10 6 m 3 and for the subaerial lava flows alone a volume of 177 × 10 6 ± 5.8 × 10 6 m 3 (cf. Fig. 5). The first mentioned lava volume matches well with our Pléiades data-based measurements of the final lava volume of 212 × 10 6 ± 13 × 10 6 m 3 (CATENA DSM) and 208 × 10 6 ± 11 × 10 6 m 3 (Agisoft Metashape DSM) and is also very close to the aforementioned measurements of Copernicus EMS. The second mentioned volume of Civico and coauthors 47 , i.e. the volume of the lava flows alone, is ca. 15 to 18% lower than our or the Copernicus EMS estimates. All the methodologies for differential DSM based volume measurements, VHR optical satellite, bi-static InSAR or the drone measurements of Civico and coauthors 47 rely on a good match of the pre-eruption and the co-/post-eruption DSM. For this precise information about areas not covered by ash deposits is required. This is much better possible with the very precise drone DSM compared to the lower resolution satellite data-based DSMs. Comparison of the different approaches. Each of the single EO methodologies (thermal, VHR optical tri-stereo, bi-static InSAR) applied in this study for lava volume estimates has their advantages and disadvantages. 
The advantage of thermal EO is its very high observation frequency (when combining MODIS and VIIRS, up to eight observations per day for the location of La Palma), which enables the measurement and analysis of relative short-term eruption rate changes. The drawback of thermal EO data-based lava volume and TADR estimates is that cooling effects and crust growth at the lava surface lead to an underestimation of the total lava flow volume, especially when lava flows in tubes beneath a crusted surface, which acts as an insulator above the hot liquid lava 49. With thermal EO, high temperatures can only be measured at the vents and at fresh cracks. Consequently, thermal EO alone leads to an underestimation of the erupted lava volume.

VHR satellite data from optical or SAR sensors enable a more precise lava volume estimation, independent of the insulating effects of the lava crust. VHR optical stereo or tri-stereo EO allows in general very good DSM generation and therefore precise lava volume estimates. The qualifier "in general" is used because the dark surface of lava flows does not always provide high enough contrast to find a dense network of matching points between the two or three image pairs (for stereo or tri-stereo acquisitions, respectively). Bagnardi and coauthors reported similar findings for Fogo Volcano, where a low density of matching points was found due to the low texture caused by volcanic ash cover 28. The pre-processing image correction in Agisoft Metashape made it possible to solve this problem in the second approach. Another limitation of optical data for volcano monitoring is the requirement of clear-sky conditions. The location of La Palma as an oceanic island and the huge amount of volcanic ash emitted during several phases of the eruption did not allow a clear-sky tri-stereo acquisition during the eruption event; one was only realized after the end of the eruption.

Bi-static InSAR EO provides good lava volume estimates, with higher uncertainty than the VHR optical (tri-)stereo data but, in contrast to the thermal estimates, unbiased. As SAR sensors provide information on the Earth's surface independent of weather or visibility conditions, we were able to acquire several useful TanDEM-X acquisitions during the eruption event. Besides some technical problems in December 2021, which did not allow the acquisition of more TanDEM-X bi-static datasets, another limitation is that the current mission state of TanDEM-X does not allow regular and systematic bi-static monitoring of all active volcanoes around the globe. All acquisitions have to be manually tasked, and conflicts between different customers/data requestors have to be resolved. This reduces the number of possible bi-static acquisitions over the AOI. Future missions such as Tandem-L would be a great step forward regarding operational and global volcano monitoring.

The advantage of the multi-sensor approach for lava volume estimation presented in this study is that we combine the advantages of the single methods and overcome their limitations. The combined analysis of thermal, bi-static InSAR and (tri-)stereo optical data enables both high-frequency observation (to study the relative short-term effusion rate trends) and more precise estimates of the absolute lava volume. Comparison of the calibrated with the original thermal volume estimates shows that the calibrated values are above the uncertainty (± 50%) of the original estimates (Fig. 5).
This ± 50% of the radiant density was introduced by Coppola and coauthors 24 in order to account for the effects that bulk rheology has on the spreading rate of active lava (cf. "Lava effusion rate and volumes estimates from thermal satellite imagery" section). For the 2021 La Palma eruption analyzed in this study, a factor of 2 (i.e. + 100%) applied to the original thermal estimates, instead of the factor 1.5 (+ 50% uncertainty) proposed by Coppola and coauthors 24, would show a relatively good fit with the VHR (TanDEM-X and Pléiades) measurements (see blue dashed line in Fig. 5). With this factor 2, one would correct the aforementioned underestimation of the lava volume by thermal EO due to cooling and crust growth at the lava surface and the effects of lava flowing in tubes beneath a crusted surface. However, it is important to mention that this factor 2 is so far only valid for the 2021 La Palma eruption. More investigations of other eruption events (with combined thermal and VHR optical/InSAR data analysis) are necessary in the future before this factor 2 should be applied in general.

Assuming the calibrated thermal time series, or better the factor-2-multiplied thermal time series, as a realistic volume estimate, we performed an inverse calculation of Eqs. (1) and (2) to model the silica content X_SiO2 (cf. red dotted line in Fig. 5) that would be necessary to obtain the same volume estimates as for the "factor 2" estimate (cf. the good match of the red dotted and blue dashed lines in Fig. 5). The resulting silica content values of the inverse modelling are 50.0 wt% (before September 27, 2021, cf. "Lava effusion rate and volumes estimates from thermal satellite imagery" section) and 46.5 wt% (from September 27 onwards). The first value of the inverse modelling (50.0 wt%) is definitely too high compared to the laboratory silica content measurements of real lava samples from La Palma (cf. Table 2). The second value (46.5 wt%) is also higher than those reported in Table 2, but it is at the maximum of the range measured by Castro and Feissel 50. However, 46.5 wt% is definitely higher than the basanite-tephrite composition characterized by low SiO2 contents (< 45 wt%) that is typical of Cumbre Vieja's most recent eruptions 45. Consequently, it is not a matter of "wrong" or too low silica content values when estimating the lava volume evolution based on thermal EO data; rather, the insulating effect of the growing crust at the lava surface and the phenomenon of hot liquid lava flowing in tubes result in an underestimation of the total lava flow volume.

Figure 5: TanDEM-X volume estimates (yellow diamonds) were used for calibrating the thermal volume time series. Our own final lava volume estimates by Pléiades tri-stereo DSM (green circle: CATENA, black circle: Agisoft Metashape). The blue dashed line simulates the original thermal data estimates multiplied by a factor of 2. The red dotted line shows the thermal volume estimates when modelling an increased silica content (cf. "Comparison of the different approaches" section). Copernicus EMS volume estimate based on Pléiades tri-stereo (February 3, 2022, red circle). Pléiades data-based volume estimates by 42 (orange circle). Post-eruptive drone data-based volume measurements by 47 are shown by the red cross (only lava) and black cross (lava and proximal fallout) (cf. "Comparison with independent measurements" section).
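The inverse modelling can be sketched compactly. Note that the power-law form and coefficients used below, c_rad = 6.45 × 10^25 · X_SiO2^(-10.4), are quoted from the relation published by Coppola and coauthors rather than restated in this article, so treat them as an assumption. Under that relation, doubling the inferred volume (i.e. halving c_rad) raises the modelled silica content by a factor of 2^(1/10.4) ≈ 1.07; the input values below are illustrative choices that reproduce the quoted 50.0 and 46.5 wt% results.

# Inverse modelling sketch: silica content needed for the thermal volume
# estimate to double (the "factor 2" correction discussed above).
# Assumed relation (Coppola et al.): c_rad = 6.45e25 * x_sio2**(-10.4)
factor = 2.0
for x_sio2 in (46.8, 43.5):    # illustrative tephrite/basanite contents (wt%)
    x_inv = x_sio2 * factor ** (1 / 10.4)
    print(f"{x_sio2} wt% -> {x_inv:.1f} wt%")   # ~50.0 and ~46.5 wt%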
Therefore, a multi-sensor approach as described in this study is necessary to obtain precise estimates of the lava volume together with detailed information about its temporal evolution.

Comparison with seismic observations. Comparison of surface observations with seismic data has helped to better understand activity changes and interactions of adjacent systems 51,52. Seismicity changes observed during the Tajogaite eruption on La Palma mainly occur at depths of 10-12 km or at 34 km, and reflect pressure changes at depth, which may ultimately be responsible for the observed changes in eruption dynamics, eruption location and the propagation at the summit craters 31. Similar relationships between eruption changes and crustal seismicity were proposed for Kīlauea 53, for the 2014 Bárðarbunga eruption 54, and for volcanoes in Kamchatka 51,52. Possibly a pressure surge causes the communication between the deep and shallow processes, which during the Tajogaite eruption guided the differentiation of phases of activity 31, where profound changes in activity occur during (i) multiple collapses of a crater wall and venting activity, and (ii) the development of new and clustered craters. According to the geomorphology and seismology comparison in Muñoz and coauthors 31, the activity first shows initial linear craters above the erupting dike in NW-SE orientation, then extension of the crater row, and finally a disaggregation of the craters due to a new dike intrusion. The transition between these phases is marked or preceded by pronounced increases in seismic activity and compares well with some of the phases we identify in this work.

Figure 6 compares the temporal evolution of the lava effusion rate (TADR) with seismic measurements (earthquake counts). Only shallow earthquakes with a depth of ≤ 15 km were considered; thus, any seismicity at a deeper level (such as the 34 km seismic zone) is excluded from this analysis. The ≤ 15 km depth range covers the depth of the shallow magma reservoir as well as any pathways to the surface. Earthquake events at the shallow cluster occurring within 1 day of the timing of the TADR estimate (12 h before to 12 h after) were considered in the analysis. We identify an initially strongly diverging trend, where the lava effusion rate strongly increases whereas the seismicity is rather low during the onset of the eruption. After October 5, 2021 (00:00), this anticorrelated behavior inverts to a correlated behavior in the long term, where declines in lava effusion rate are associated with weakened seismicity. However, of particular interest are two periods in the late evolution stage of the eruption, in mid-November and in late November/beginning of December. Contrary to the aforementioned trend, these two periods again show much higher effusion rates, also compared to the periods before, between and after them. Figures 4 and 5 also show a deviation from the expected lava flow volume trend for these periods (Phases IIc and IId). This increase of the effusion rate is also reflected by an increase of seismic activity within these periods. This phenomenon is relatively well visible in the mid-November period and very pronounced in the late-November/early-December period. Interpreting these effects and couplings is important. The lack of correlation during the early stages of the eruption may indicate that the large TADR emission occurs along a magma pathway that is already open.
In fact, independent studies suggest that the bulk of deformation occurred prior to the eruption onset 39. The following correlated gradual decline and correlated short peaks in seismicity and TADR indicate a coupling of eruption rates to the opening of the magma pathway. It is worth noting that the observed changes in effusion rate and thermal radiant energy in mid-November and late November/early December could be correlated with the appearance of a new fracture system and isolated eruptive vents deviating from the main summit cone 40,55. This underlines the value of integrated and multi-sensor monitoring for understanding the dynamics of basaltic eruptions.

Conclusions

Information about the lava effusion rate during an effusive eruption is very important as it is a major factor controlling the length of a lava flow. This study described for the first time a multi-sensor satellite data approach for the estimation of lava effusion rates and volume as well as their temporal evolution. We jointly analyzed time series information from thermal MODIS and VIIRS, VHR optical tri-stereo Pléiades and bi-static TanDEM-X SAR data. By combining their advantages, our multi-sensor approach overcomes the limitations of the single EO methods, which include, for example, underestimation of the absolute lava volume by thermal EO due to lava crust formation and lava flowing in tubes, the need for clear-sky conditions for thermal and optical EO, and the relatively long repeat cycle of VHR satellite sensors. We used the precise VHR lava volume measurements to calibrate the more frequent thermal data-based volume estimates. Consequently, the multi-sensor data analysis enables both high-frequency observation (to study the relative short-term effusion rate trends) and precise estimates of the absolute lava volume.

We investigated the 2021 Tajogaite eruption at Cumbre Vieja, the largest recorded eruption event on La Palma Island. The final subaerial lava volume was estimated at 212 × 10^6 ± 13 × 10^6 m^3, which gives a mean output rate (MOR) of 28.8 ± 1.4 m^3/s. Independent measurements by Belart and Pinel 48, Civico and coauthors 47 and Copernicus EMS 42 confirm our estimates of the total erupted lava volume. We identify phases of the eruption by short-term pulses of higher effusion rates. The initial phase is accompanied by weak seismicity only. Periods of strong lava effusion rates in the late eruption phase, however, coincide with strong seismicity, run contrary to the general declining trend and deviate from the expected lava volume trend. These may reflect changes in the underground magmatic plumbing system observable via satellite data analysis. The results show the added value of volumetric lava flow monitoring and underline that eruption rate fluctuations may be geophysically monitored, allowing speculation about changes of an underlying pathway during the 2021 Cumbre Vieja eruption.

Material and methods

Data. VHR optical tri-stereo Pléiades, bi-static TanDEM-X SAR and thermal MODIS and VIIRS satellite data were jointly analyzed to investigate the 2021 Cumbre Vieja eruption event.

Optical tri-stereo imagery. Due to the location of La Palma as an oceanic island and regular cloud or ash coverage, a completely clear-sky Pléiades tri-stereo dataset could not be acquired during the eruption. The first clear-sky Pléiades tri-stereo dataset (acquired on December 31, 2021, cf. Table 3) could only be realized after the end of the eruption.
This acquisition was the only one that captured the entire lava field without any disturbance by meteorological or volcanic ash clouds.

TanDEM-X bi-static SAR data. Three TanDEM-X bi-static acquisitions could be realized over the Cumbre Vieja lava field during the eruption event: October 15, November 17 and November 22 (cf. Table 3). Unfortunately, temporary technical problems with the TanDEM-X satellite in December 2021 did not allow further acquisitions during the eruption event. The TanDEM-X satellite constellation consists of the two SAR satellites TerraSAR-X and TanDEM-X flying in a tandem constellation with a 120 to 500 m baseline. In the case of a bi-static acquisition, one of these satellites transmits a radar signal to the Earth's surface and both satellites receive the SAR backscatter simultaneously at slightly different incidence angles. This enables the generation of SAR interferograms with high coherence over all land surfaces, as no changes occur on the ground between the two simultaneously acquired SAR images 35.

It is assumed that the lava composition of the 2021 eruption event is very similar to that of previous eruptions 45,46: the eruption started with more viscous tephrite and then, after activation of a deeper magma source, changed to less viscous basanite. We calculated the average silica content (SiO2) of the aforementioned previous eruptions (Table 2) and used these values for the thermal satellite data-based lava volume estimates (cf. "Lava effusion rate and volumes estimates from thermal satellite imagery" section).

Pre-eruption LiDAR DSM.

Methods. DSM generation and lava volume estimates from optical tri-stereo data. The Pléiades tri-stereo dataset consists of three scenes acquired at different looking angles during one overflight over the Cumbre Vieja lava field. The Pléiades tri-stereo data were processed using (1) the Semi-Global-Matching algorithm implemented in the DLR software environment CATENA 58 and (2) Agisoft Metashape v1.8 to derive a 0.50 m resolution DSM of the lava field and its surroundings as of December 31, 2021, testing two different approaches. Dark surfaces such as lava fields do not always provide high enough contrast. The same applies to ash-covered areas due to the similar and smooth texture of the ash 28. This partly led to missing matching points between the single tri-stereo acquisitions, causing small gaps in the DSM derived with CATENA. We filled these few small gaps within the Pléiades DSM by inverse distance weighting (IDW) interpolation of the neighboring height values. The DSM generated in Agisoft Metashape, however, has no gaps because we used the preprocessing contrast correction provided by the software.

Prior to lava volume estimation via differencing of the post-eruption CATENA Pléiades DSM and the pre-eruption LiDAR DSM (cf. "Pre-eruption LiDAR DSM" section), an area in the southern neighborhood of the lava field, where almost no volcanic ash was deposited, was selected. Differencing of the post- and pre-eruption DSMs showed an average vertical offset of −5.5 m for this area, where actually no topographic changes occurred during the eruption event. Consequently, the post-eruption DSM was corrected for this offset in order to guarantee the same base height within unaffected areas. A cross-check within the unaffected areas gives an RMSE of 1.1 m for the offset-corrected DSM. The offset of the Agisoft Metashape Pléiades DSM was measured in eight different areas around the lava flow, with a final RMSE of 0.9 m.
This value was later distributed over the area of the lava field to further estimate the lava volume error. Next, both Pléiades DSMs (CATENA and Agisoft Metashape) as well as the pre-eruption LiDAR DSM were cut to the extent of the final lava flow area as mapped by the corresponding Copernicus (EMS) map (cf. Table 3). Then, the cut pre-eruption LiDAR DSM was subtracted from each cut post-eruption Pléiades DSM to compute the final lava flow volume.

DSM generation and lava volume estimates from bi-static TanDEM-X SAR data. The three bi-static TanDEM-X datasets were processed with the ENVI® SARscape® software to derive DSMs from the Coregistered Single look Slant range Complex (CoSSC) TanDEM-X data via SAR interferometric processing. The processing workflow includes interferogram generation and flattening, phase filtering (using the Goldstein phase filter) and unwrapping (using the minimum cost flow approach), phase-to-height conversion and geocoding. The pre-eruption LiDAR DSM was used as base information for the bi-static TanDEM-X DSM generation. All TanDEM-X DSMs were processed to 5 m spatial resolution. Offset correction of the DSMs and the lava volume calculation were done for all three TanDEM-X DSMs as described for the CATENA Pléiades DSM in the "DSM generation and lava volume estimates from optical tri-stereo data" section, using the lava flow extents of the corresponding Copernicus (EMS) maps (cf. Table 3). Finally, the lava volumes were derived via differencing of the cut, corrected TanDEM-X DSMs and the cut pre-eruption LiDAR DSM.

Lava effusion rate and volumes estimates from thermal satellite imagery. The VIIRS and MODIS data were analyzed as follows: (1) Only data with low scan angles were considered, as high satellite zenith angles strongly influence the reliability of volcanic hotspot detection, possibly inducing distortion effects 11. MODIS data with zenith scan angle values < 40.00° were selected 11. VIIRS data acquired at a scan angle ≤ 31.59° were considered, as this scan angle region corresponds to the first aggregation region of VIIRS, where three native pixels are aggregated along the scan direction to form one data sample in the Level 1 image 56. (2) In order to allow analysis of potential cloud cover, only daytime MODIS and VIIRS data were considered. The influence of reflected sunlight is very low in the MIR region, particularly at the high temperatures related to great effusive events like the 2021 La Palma eruption, and need not be considered 8. Then, only images showing clear-sky conditions over the lava field were selected, to avoid underestimation of the lava effusion rate due to (partial) cloud cover or airborne volcanic ash over the study site. Next, the volcanic radiative power (VRP) was calculated for the MODIS and VIIRS hotspots using the MIR approach described by Wooster and coauthors 59. This MIR approach assumes that the measured heat flux is related only to lava portions having a radiating temperature above 600 K. The approach is especially valid for hot bodies with an integrated temperature between 600 and 1500 K. According to Wright and coauthors, these conditions apply to most active lava bodies, such as lava flows 60. Then, the total VRP over all hotspots detected within each single daytime (see above) overflight of MODIS and VIIRS was calculated. Next, the overflight with the maximum total VRP per day was selected and considered in the following processing steps.
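A minimal end-to-end sketch of the thermal pipeline just described, including the conversion to TADR and cumulative volume formalized in Eqs. (1)-(4) of the next subsection, might look as follows. The radiant-density coefficients are taken from the relation published by Coppola and coauthors (c_rad = 6.45 × 10^25 · X_SiO2^(-10.4)) and, like all names below, should be treated as illustrative assumptions rather than this study's actual implementation.

import numpy as np

def daily_max_vrp(t_days, vrp_watts):
    """Keep, for each day, the overflight with the maximum total VRP."""
    t, v = np.asarray(t_days, float), np.asarray(vrp_watts, float)
    idx = [np.argmax(np.where(np.floor(t) == d, v, -np.inf))
           for d in np.unique(np.floor(t))]
    return t[idx], v[idx]

def tadr_from_vrp(vrp, x_sio2):
    """TADR = VRP / c_rad, averaged over the +/-50% c_rad bounds (Eqs. 1-2).
    x_sio2 in wt% (e.g. ~45 for basanite/tephrite)."""
    c_rad = 6.45e25 * x_sio2 ** (-10.4)       # assumed Coppola et al. relation
    return 0.5 * (vrp / (0.5 * c_rad) + vrp / (1.5 * c_rad))

def cumulative_volume(t_days, tadr):
    """Integrate consecutive TADR estimates to a volume series (Eqs. 3-4)."""
    dt = np.diff(t_days) * 86400.0            # days -> seconds
    v_t = 0.5 * (tadr[:-1] + tadr[1:]) * dt   # trapezoidal integration
    return np.concatenate(([0.0], np.cumsum(v_t)))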
We selected the maximum total VRP per day in order to compensate for potential missed detections of hotspots due to thin ash or meteorological clouds that were not caught in the aforementioned visibility check of the satellite imagery. Following the empirical approach of Coppola and coauthors, the TADR and the erupted lava volume were estimated 24. This approach directly links the TADR with the VRP (Eqs. (1), (2)):

$$\mathrm{TADR} = \frac{\mathrm{VRP}}{c_{\mathrm{rad}}} \quad (1)$$

$$c_{\mathrm{rad}} = 6.45 \times 10^{25} \times \left(X_{\mathrm{SiO_2}}\right)^{-10.4} \quad (2)$$

Here, c_rad (in J/m^3), called the radiant density, represents the empirical relationship between radiant and volumetric flux for the analyzed thermally emitting lava, and X_SiO2 is the silica content (normalized to 100%) of the erupted lava investigated (cf. Table 2). According to Coppola and coauthors, an uncertainty of ± 50% in c_rad has to be considered to account for the anticipated significant effects that bulk rheology has on the spreading and cooling processes of active lava 24. Consequently, two TADR estimations were performed, with (1) c_rad,min = 0.5 × c_rad and (2) c_rad,max = 1.5 × c_rad. The final TADR is the mean of both. The erupted lava volume between two sequential satellite acquisitions t_i and t_j can be calculated by computing the integral of the TADR between t_i and t_j (Eq. (3)). The final erupted lava volume V is the cumulative sum of V_t (Eq. (4)).

$$V_{t} = \int_{t_i}^{t_j} \mathrm{TADR}\,\mathrm{d}t \quad (3)$$

$$V = \sum_{t} V_{t} \quad (4)$$

Lava volume estimates from multi-sensor satellite data. The lava volume estimates based on the VHR satellite imagery (TanDEM-X and Pléiades) are more reliable than the volume estimates from thermal EO data (cf. the detailed explanation in the "Results" and "Discussion" sections). Consequently, we used the TanDEM-X volume estimates derived from TanDEM-X acquisitions during the eruption to calibrate the volume estimates from thermal EO data. This was done by replacing the thermal volume estimates with the value of the next available TanDEM-X volume measurement. The calibrated time series is then the high-frequency thermal EO data volume estimates, corrected with the more precise measurements (TanDEM-X) whenever available (Fig. 5). The uncertainties of the thermal EO estimates were carried over into the final calibrated time series.

Data availability

Original satellite data are available via DLR (TanDEM-X) and Airbus/ESA (Pléiades). Information derived from the satellite data is available from the corresponding author on request.
Role of A‐Site Composition in Charge Transport in Lead Iodide Perovskites

As the power conversion efficiency and stability of solar cells based on metal halide perovskites continue to improve, the community increasingly relies on compounds formed of mixed cations and mixed halides for the highest performing devices. The result is that device engineers now have a potentially infinite number of compositions to choose from. While this has provided a large scope for optimization, it has increased the complexity of the field, and the rationale for choosing one composition over another remains somewhat empirical. Herein, the distribution of electronic properties for a range of lead iodide perovskite thin films is mapped. The relative percentages of methylammonium, formamidinium, and cesium are varied, and the electronic properties are measured with time-resolved microwave conductivity, a contactless technique enabling extraction of electronic properties of isolated films of semiconductors. It is found that a small amount of Cs leads to larger carrier mobilities and longer carrier lifetimes and that compositions with a tolerance factor close to 0.9 generally show lower performance than those closer to 0.8 or 1.0.

DOI: 10.1002/aesr.202200120

Time-resolved microwave conductivity (TRMC) measures the photoconductance of an isolated semiconductor film (e.g., deposited on quartz glass) as a function of time, in response to irradiation by a pulsed optical source. [20,21] Examples of TRMC transient photoconductance (ΔG) measured in this study are shown in Figure 1 for various incident laser fluence values. Figure 1a shows example data for pure MAPbI3, Figure 1b for MA0.5FA0.5PbI3, and Figure 1c for (MA0.17FA0.83)0.95Cs0.05PbI3. While the exact fluence values are different for each compound, the same behavior is observed: a rapid increase in ΔG when the laser pulse is incident, followed by a slower decay as free charges recombine. The TRMC figure of merit is ϕΣμ, where ϕ is the carrier generation yield and Σμ = (μ_e + μ_h) is the sum of the average mobilities of photogenerated carriers over the illuminated sample area, with μ_e the average electron mobility and μ_h the average hole mobility. With knowledge of the sample absorbance and the incident number of photons, ϕΣμ can be extracted from transient ΔG data. [22] ϕΣμ has the same units as carrier mobility but is often lower than Σμ because the conversion of absorbed photons to electron-hole pairs is rarely 100% efficient.
For example, some photons will be absorbed but form bound excitons and recombine before being able to separate into electrons and holes. [23,24] However, the exciton binding energy for most perovskites is often below the thermal energy at room temperature, and ϕ is therefore close to unity. [25] Hence the TRMC figure of merit can often be interpreted in a similar way to the sum of carrier mobilities in perovskites.

Figure 2a shows the mean value of ϕΣμ for three identically processed samples, measured as a function of incident laser fluence. ϕΣμ decreases as a function of incident laser fluence at high fluence values but is less sensitive at lower values of fluence. This is because bimolecular and Auger recombination during the laser pulse reduce the peak observable carrier concentration when the carrier density is high. Models have been developed to account for this and enable representative (fluence-independent) values of ϕΣμ to be extracted from such data. [26] The red line in Figure 2a shows an example of such a fit.

Reproducibility is another issue that not only inhibits commercialization of perovskites, [27] but also makes it challenging to make consistent and reproducible statements on perovskite properties. [28,29] In this study we prepared three samples for each composition and draw conclusions based on averages of the figure of merit, rather than on individual samples.

The mobility of charge carriers in solar cells is a critical factor in performance, as faster moving charges are more likely to be extracted before recombining than slower charges. [30] Another crucial parameter which impacts extraction is the average lifetime of charges in the film, τ. [31] Lifetime often depends on carrier type and comprises contributions from monomolecular, bimolecular, and Auger processes. [32] For this reason, τ typically depends on carrier density, and defining a single value of τ for all conditions is challenging. Because TRMC transient data encapsulate electronic properties over several orders of magnitude of charge density, [26] they rarely follow a single exponential decay, [33] and hence attribution of a single value of τ from TRMC data is not obvious. Instead, the photoconductance half life, τ_1/2, is often used as a proxy for τ. [34] The half life is defined as the time it takes for the photoconductance to fall from its maximum value, ΔG_max, to half of the maximum, ΔG_max/2. Figure 2b shows τ_1/2 as a function of incident laser fluence. As with ϕΣμ, τ_1/2 decreases with increasing fluence, as bimolecular and Auger recombination processes increase the rate of recombination as carrier density increases. For the purposes of this study, we have simply used the unweighted average of τ_1/2 over all measured values of fluence. We acknowledge that this value will be skewed by the higher number of measurements made at higher fluence, but since all samples were measured with comparable fluence values, and all exhibit a similar dependence of τ_1/2 on fluence, we view this as a reasonable, albeit crude, way to compare relative recombination rates between samples.

Role of A-Site Composition on Electronic Properties

Analogous measurements to those depicted in Figure 2 were carried out on a range of perovskite thin films with various compositions. The extracted mean figure of merit as a function of A-site composition is shown in Figure 3a.
This is a ternary plot, meaning the vertices represent compositions with 100% MA, FA, and Cs, respectively, on the A-site. The lines moving away from these vertices represent lower values (e.g., a horizontal line through the triangle represents 50% Cs). The points show the compositions which were studied, and the colors are a linear interpolation between these points. For each composition studied, the average from three identically processed samples was used to evaluate the value plotted.

A wide range of factors affect mobility in perovskites, including microstructure, [35] trap states, [36] carrier density, [26] temperature, [37] and interface properties. [38] However, we do make some interesting observations when considering just the A-site composition in this way. Pure MAPbI3, FAPbI3, and CsPbI3 exhibit relatively high mobilities. It is possible that this is due to lower in-grain energetic disorder in these systems. As mixed-cation compounds will have a random cation at each A-site, not all unit cells will be the same, and we would hence expect the coherence of wavefunctions to be affected over the grain. More experiments (or calculations) would of course be useful to better understand this.

All measurements were carried out immediately after film fabrication and were generally concluded within 1 h. For the quasistable [8] compound CsPbI3, the sample was also measured one day later to determine whether it had degraded. After one day, a TRMC signal was not observable for the degraded film, suggesting it was no longer optoelectronically active. We can therefore conclude that the signal measured straight after deposition (and plotted in Figure 3) stems at least partially from CsPbI3 in a perovskite form. We cannot, however, say with any certainty that this is pure CsPbI3, but we can say there is some CsPbI3 present; otherwise, we would see no signal. For this reason, we interpret this particular vertex as the minimum ϕΣμ, rather than a representative value.

In most double-cation mixtures, the highest ϕΣμ is obtained when the two compounds are mixed in equal ratio (e.g., MA0.5FA0.5PbI3). The exception is FACs, where the highest value was obtained for a mixture of 75% FA and 25% Cs. While exact attribution of a certain value of ϕΣμ to a certain composition is not the objective of this study, it is important to note that incorporation of a small amount of cesium has been shown to yield a significant improvement in solar cell device performance. [16,39,40] In the case of triple-cation systems, mobility was generally observed to decrease as the proportion of cesium increased. It is also important to note that the highest and the lowest average mobilities are both found in the triple-cation system.

Figure 3b shows the average half life as a function of A-site composition in the same perovskite thin films. In contrast with the mobility, lower values are found at the three vertices. Because the representative τ_1/2 values plotted in Figure 3b are an unweighted average of τ_1/2 extracted at all fluences, and more measurements are taken at higher fluence than at low fluence (e.g., see Figure 2b), the data in Figure 3b are more representative of bimolecular processes than of monomolecular processes. [26] Unlike time-resolved photoluminescence (TRPL), TRMC is not sensitive to the decay of bound excitons; it is only sensitive to free charges, which often results in decay transients being observed over dissimilar timescales in the same compounds. [41]
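Extracting the half-life proxy from a measured transient is straightforward; the sketch below assumes a sampled ΔG(t) trace and linearly interpolates between samples. The function name and the interpolation choice are ours, not the authors'.

import numpy as np

def half_life(t, dG):
    """Time for the photoconductance to fall from its maximum,
    dG_max, to dG_max / 2 (linear interpolation between samples)."""
    i_max = int(np.argmax(dG))
    target = dG[i_max] / 2.0
    decay_t, decay_g = t[i_max:], dG[i_max:]
    below = np.nonzero(decay_g <= target)[0]
    assert below.size > 0, "trace does not decay to half its maximum"
    j = below[0]                              # first sample at/below half max
    f = (decay_g[j - 1] - target) / (decay_g[j - 1] - decay_g[j])
    return decay_t[j - 1] + f * (decay_t[j] - decay_t[j - 1]) - t[i_max]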
Similarly, constant-bias techniques like transient photovoltage (TPV) [42] are also likely to probe dynamics not observed in zero-bias techniques such as TRMC or TRPL. Therefore different transient techniques can yield decay curves over very different timescales: from picoseconds [37] to milliseconds. [43] For this reason, a comparison of lifetime between techniques must be carried out with care. [32] It is important to reiterate that the data presented in Figure 3 are the mean of three identically processed samples for every condition and that TRMC measures electronic properties over a macroscopic area (defined by the laser spot size). We know that microstructure [29,35] and phase [44] have a significant impact on charge transport in this class of materials, so we would expect differences in parameters depending on processing, but we interpret the presented values as the representative average for each composition. Relationship Between Properties in Lead Iodide Perovskite Thin Films While the composition clearly does play a role in both mobility and lifetime of carriers in perovskites, it is not immediately clear why this is the case. Since microstructure is known to be affected by composition, this is a possible explanation. However, the relationship between microstructure and carrier dynamics is complex. Grain boundaries have been argued to be beneficial, neutral, and detrimental to certain aspects of perovskite solar cell performance. [45][46][47] For example, there is evidence from Kelvin probe force microscopy that grain boundaries help the separation of photogenerated carriers. [48] Similarly, atomic force microscopy experiments have demonstrated that charge generation and transport are not significantly different in grains and at grain boundaries. [49] This can be rationalized by the fact that defects in perovskites can be shallow and/or inactive. [50] If grain boundaries in this class of materials were relatively benign, [49] it would also be consistent with the general observation in the literature of diffusion lengths exceeding average grain sizes. [51,52] In contrast to this, there exists abundant experimental [53,54] and computational [55] evidence that charge recombination is accelerated at grain boundaries, and broadly the consensus in the community is that grain boundaries are not desirable in perovskite solar cells. [56][57][58][59] In our case, it is unlikely that microstructure alone is responsible for the variations in Figure 3. We have shown that, for our processing and measurement protocols at least, surface morphology is not strongly correlated with ϕΣμ. [29] Another possibility is that the average structure of the perovskite is modified, and the electronic structure also changes. A simple way to quantify the structure of perovskites is using the Goldschmidt tolerance factor, [60] defined by Equation (1):

TF = (rA + rX) / (√2 (rB + rX)) (1)

Note that here we parameterize the tolerance factor with the symbol TF, rather than the more commonly used t, to avoid ambiguity with time in our TRMC measurements. Here rA and rB are the ionic radii of the A- and B-site cations, respectively, and rX is the ionic radius of the anion. For perovskite structures, TF is bound between 0.8 and 1. A TF between 0.9 and 1 is known to result in a cubic structure, and a TF between 0.8 and 0.9 in an orthorhombic phase. The tolerance factor can be indicative but is not a unique parameterization strategy, since two completely different structures, with different cation mixtures, may result in the same tolerance factor value. For instance, MA0.4FA0.6PbI3 and FA0.8Cs0.2PbI3 have the same tolerance factor. Moreover, MA and FA cations are organic compounds, rather than atoms, making it more challenging to define their ionic radii. Here we have calculated the Goldschmidt tolerance factor using the following radii: rMA = 217 pm, rFA = 253 pm, rCs = 181 pm, rPb = 133 pm, and rI = 220 pm. Since TF = 0.9 represents the boundary between cubic and orthorhombic phases, we could split our data into two groups: those with 0.8 < TF ≤ 0.9 and those with 0.9 < TF ≤ 1. Instead, to consider all compounds in a single analysis, we have quantified our structures using the parameter |TF − 0.9|, that is, how far away from the cubic/orthorhombic boundary the structure is.
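To make the degeneracy just mentioned concrete, the Python sketch below evaluates Equation (1) with the radii quoted in the text. Treating the effective A-site radius of a mixture as the composition-weighted average of the pure-cation radii is our assumption here (it reproduces the MA0.4FA0.6/FA0.8Cs0.2 example); the function and variable names are illustrative.

import math

R_A = {"MA": 217.0, "FA": 253.0, "Cs": 181.0}  # pm, as quoted in the text
R_PB, R_I = 133.0, 220.0                       # pm

def tolerance_factor(a_site):
    # Effective A-site radius: composition-weighted average (assumed linear mixing)
    r_a = sum(frac * R_A[cat] for cat, frac in a_site.items())
    # Goldschmidt tolerance factor, Equation (1)
    return (r_a + R_I) / (math.sqrt(2.0) * (R_PB + R_I))

for label, comp in [("MAPbI3", {"MA": 1.0}),
                    ("FAPbI3", {"FA": 1.0}),
                    ("CsPbI3", {"Cs": 1.0}),
                    ("MA0.4FA0.6PbI3", {"MA": 0.4, "FA": 0.6}),
                    ("FA0.8Cs0.2PbI3", {"FA": 0.8, "Cs": 0.2})]:
    tf = tolerance_factor(comp)
    print("%-16s TF = %.3f   |TF - 0.9| = %.3f" % (label, tf, abs(tf - 0.9)))

Both mixed compositions evaluate to TF ≈ 0.919, which is exactly why |TF − 0.9| cannot serve as a unique label for a structure.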
Figure 4a shows a scatterplot of ϕΣμ plotted against the parameter |TF − 0.9|, and Figure 4b shows τ plotted against |TF − 0.9|. While there is significant scatter, there is a slight positive correlation between both ϕΣμ and |TF − 0.9|, and τ and |TF − 0.9|, of 0.28 and 0.21, respectively. This suggests that structures with better-defined cubic or orthorhombic phases could generally lead to better transport properties throughout the film. Of course, it is important not to read too much into relationships with this much variance, as there are clearly many other factors which play a role in transport besides this parameter. A more detailed analysis with a larger number of compositions would help determine statistical significance (if any).
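The regression lines and correlation values reported for Figure 4 are standard statistics; the sketch below shows how they would be computed in Python. The arrays here are placeholders standing in for the per-sample (ϕΣμ, |TF − 0.9|) pairs, not the actual measurements.

import numpy as np

# Placeholder data standing in for the points of Figure 4a
tf_dist = np.array([0.00, 0.01, 0.02, 0.03, 0.05, 0.07, 0.09, 0.11])  # |TF - 0.9|
phi_mu = np.array([11.0, 14.0, 9.0, 16.0, 13.0, 18.0, 15.0, 21.0])    # cm^2 V^-1 s^-1

r = np.corrcoef(tf_dist, phi_mu)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(tf_dist, phi_mu, 1)  # simple linear regression ("red line")
print("r = %.2f, fit: phi_mu = %.1f * |TF - 0.9| + %.1f" % (r, slope, intercept))

With the real per-sample data, the same two calls would yield the quoted correlations (0.28 and 0.21) and the fitted lines shown in the figure.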
Conclusion In this work, TRMC was applied to evaluate electronic properties of lead iodide perovskite thin films. The mobility-yield product and a proxy for lifetime were extracted and plotted for a range of cationic compositions, illustrating a strategy which can be built upon in the future. We found a general enhancement in mobility for the compounds with a single cation at the A-site, MAPbI3, FAPbI3, and CsPbI3, but triple-cation compounds containing a small amount of cesium also exhibited high mobilities. By correlating these electronic properties with the Goldschmidt tolerance factor, we found a small positive correlation with mobility when the tolerance factor is between 0.9 and 1, but negatively correlated behavior was observed for tolerance factors less than 0.9. This result suggests that as the structure becomes more stable, as quantified by the tolerance factor, the electronic performance generally improves. Experimental Section Methylammonium Lead Iodide (MAPbI3) Thin Films: Lead iodide (PbI2), methylammonium iodide (MAI), and dimethyl sulfoxide (DMSO) were mixed in a 1:1:1 molar ratio and then dissolved in dimethylformamide (DMF) at 3 mmol mL−1. Chlorobenzene was used as antisolvent, and the films were then annealed at 100°C for 10 min. MAI was purchased from GreatCell Solar and PbI2 was purchased from Sigma Aldrich. Cesium Lead Iodide (CsPbI3) Thin Films: 0.8 M CsI was dissolved in a mixture of DMF/DMSO at 4:1 volume ratio. CsI was purchased from Sigma Aldrich. The precursor solution was left stirring overnight. Films were spin cast onto clean quartz substrates. Two-step spin casting was applied: 1000 rpm for 10 s, then ramped up to 6000 rpm with an acceleration of 4000 rpm for 1 min. Toluene was dripped 30 s before the spin casting ends. The films stayed in air for 30 min before the annealing step. Films were annealed at 340°C for 10 min. MAxFA1−xPbI3 Thin Films: Two master solutions, MAPbI3 and FAPbI3, were prepared. They were then mixed in the appropriate ratio after being filtered individually. MAxCs1−xPbI3 Thin Films: MAI and CsI salts were mixed in the appropriate ratio and then dissolved in a mixture of DMF/GBL at 97:3 volume ratio at 120 mg mL−1. Toluene was used as antisolvent. FAxCs1−xPbI3 Thin Films: FAI and CsI salts were mixed in the appropriate molar ratio and then dissolved in a mixture of DMF/DMSO at 3:1 volume ratio. Ether was used as antisolvent. Time-Resolved Microwave Conductivity: The TRMC system used in this study was described in a previous report, and a brief description is provided here for completeness. [22] Microwaves were generated using a Sivers IMA VO4280X/00 voltage-controlled oscillator (VCO). The signal had an approximate power of 16 dBm and a tunable frequency between 8 and 15 GHz. The VCO was powered with an NNS1512 TDK-Lambda constant 12 V power supply, and the output frequency was controlled by a Stahl Electronics BSA-Series voltage source. The sample was mounted inside the cavity at a maximum of the electric field component of the standing microwaves, using a 3D-printed PLA sample holder. Microwaves reflected from the cavity were then incident on a zero-bias Schottky diode detector (Fairview Microwave SMD0218). The detected voltage signal was amplified by a Femto HAS-X-1-40 high-speed amplifier (gain = ×100). The amplified detector voltage was measured as a function of time by a Tektronix TDS 3032C digital oscilloscope. A Continuum Minilite II pulsed neodymium-doped yttrium aluminium garnet (Nd-YAG) laser was used to illuminate the sample. The laser pulse had a wavelength of 532 nm, a FWHM of ≈5 ns, and a maximum fluence incident on the sample of ≈10^15 photons cm−2 pulse−1. An external trigger link was used to trigger the oscilloscope before the laser fired. The photoconductance and the TRMC figure of merit ϕΣμ were evaluated from changes in the detector voltage using standard analysis. [20] UV-Vis Spectroscopy: UV-visible absorption spectra were obtained using a Shimadzu UV-2600 UV-visible spectrometer over the range 200 to 1000 nm for all samples. Statistical Analysis: Preprocessing of Data: Noise was subtracted from raw voltage-versus-time data by running a measurement with the beam blocked. No other filtering or preprocessing of transient data took place. No data points were removed. Statistical Analysis: Data Presentation: Data points used in the ϕΣμ-versus-fluence and τ1/2-versus-fluence plots (Figure 2) were means, and error bars were standard deviations. Points in Figure 3 were mean values of ϕΣμ and τ1/2, and color gradients were linear interpolations between points, carried out with OriginLab Origin. Points shown in Figure 4 were individual data points (not mean values). Statistical Analysis: Sample Size: Three samples were used for each mean and standard deviation of ϕΣμ and τ1/2 plotted in Figure 2. Three samples were used for each mean ϕΣμ and τ1/2 value plotted in Figure 3. Statistical Analysis: Statistical Methods: Means and standard deviations were evaluated using standard techniques. The red lines in Figure 4 were simple linear regressions. Correlation values quoted in the text were standard statistical correlations. Statistical Analysis: Software Used for Statistical Analysis: Microsoft Excel.
2022-09-17T15:07:00.268Z
2022-09-15T00:00:00.000
{ "year": 2022, "sha1": "4dfc716c3878680fe988c514956e401ef760bb6f", "oa_license": "CCBY", "oa_url": "https://discovery.ucl.ac.uk/10156559/1/Adv%20Energy%20and%20Sustain%20Res%20%202022%20%20Hong%20%20Role%20of%20ASite%20Composition%20in%20Charge%20Transport%20in%20Lead%20Iodide%20Perovskites%20(1).pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "4bc533f68029f5e1e8a5732690d86ceebfd6d624", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
218515321
pes2o/s2orc
v3-fos-license
Regional Logistics Network Design in Mitigating Truck Flow-Caused Congestion Problems Sino-US Global Logistics Institute, Shanghai Jiaotong University, Shanghai 200240, China School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, Sichuan 610031, China National United Engineering Laboratory of Integrated and Intelligent Transportation, National Engineering Laboratory of Big Data Application in Integrated Transportation, School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, Sichuan 610031, China Introduction Traffic congestion has become a critical social issue in many metropolises and cities worldwide. Recently, both government and academia have put increasing effort into easing traffic congestion. Some of the major cities in China have achieved success in traffic congestion reduction: compared with 2016, the maximum traffic congestion index during peak hours was reduced by 4.8% [1]. Such progress may be attributed to specific demand management mechanisms and control policies, public transport system optimization, and emerging individual travel modes, such as ride-sharing, demand-responsive transport, and shared bikes [2]. In other words, the reduction in traffic congestion is due to the optimization of individual/passenger travel. However, from the perspective of roadway freight transport, there is a rising impact on urban congestion and greenhouse gas emissions. Truck flow accounts for 30% of the traffic flow on China's expressways [3]. In urban areas, the truck proportion is also increasing due to the development of e-commerce and the regional economy. Consequently, researchers seek to discover new ways of achieving traffic congestion relief in the roadway freight transport system [4]. It should be noted that the spatiotemporal imbalance between the demand and supply of individual trips is not easy to solve. However, it is possible to adjust the spatiotemporal distribution of trucks by regional logistics network (RLN) design, which refers to rescheduling delivery plans, relocating logistics facilities, and optimizing regional logistics networks [5]. Several measures can be taken to relieve traffic congestion caused by trucks. Such measures include controlling roadway freight by adjusting in-city distribution or delivery time windows so that trucks avoid traveling during commute time, as well as locating logistics centers far from the cities [6,7], since the distribution of truck flow is determined by commercial demand, which is not easily reduced. Moreover, cargoes should be transported to the right destination at the proper time. Thus, after the demand and supply locations are determined, the choices of travel time and routes are usually limited. In China, there are strict rules and policies on vehicle type, travel time, and truck routes in the cities. Given that the design and planning of the logistics network predetermine the potential truck flows and routes, the logistics network should be designed and planned in integration with the existing traffic flow in the urban traffic network. In this context, finding a mechanism to solve roadway traffic flow problems through RLN design is a critical issue to be addressed, which is essential to mitigate traffic congestion in a given region and to reduce truck travel delay. Generally, the constraint of the RLN design problem is to satisfy the regional logistics demand in a given time period.
Then, the objectives of RLN design include determining the number and locations of facilities and producing the transport plan among the facilities, in which the regional transport status is barely considered. Figure 1 displays a real logistics network of Chengdu, China, showing the background traffic status. It is clear that, for fixed locations of logistics centers, there is limited space for selecting the travel routes of trucks, indicating that the truck flow will affect the original regional transport network and may intensify regional traffic congestion. In other words, it is necessary to take the regional transport status into account in the RLN design problem. To fill the gap in existing research on RLN, this paper develops a novel RLN design model for mitigating truck flow-caused congestion problems. We attempt to use the delivery time to link the background traffic flow status and the logistics system, and the delivery time is directly related to the RLN service level. Therefore, this study also falls into the service decay problem category in RLND, in which the service capacity gradually declines with the increase in the distance or travel time between the logistics facilities and demand points. We capture the service decay of each facility in the RLN by formulating a service uncover degree function (UDF). In the UDF, the delivery time is measured by an improved Bureau of Public Roads (BPR) function (an impedance function), which calculates the truck delivery time by considering the background traffic state. By setting demand shortage penalty costs, we model the RLN design problem as a minimum-cost problem. In the proposed model, besides the classic decision problems in RLN (e.g., facility location and transport planning), the decision on the degree of service and the corresponding truck-type selection and loading are integrated into the model. Last but not least, a Lagrangian relaxation algorithm is developed for the model, and typical examples and sensitivity analyses demonstrate the effectiveness of the model and the algorithm. The remainder of this study is structured as follows. After reviewing the literature in Section 2, we describe the problem and develop a novel UDF in Section 3. Then, the network design model and the double-layer Lagrangian relaxation heuristic algorithm for solving the described problem are presented in Section 4. Numerical experiments and key-factor sensitivity are analyzed in Section 5. In Section 6, the discussion and conclusions of the study are presented. Literature Review Logistics network design (LND) is widely applied in research and practice. Forward and reverse LND, supply chain design, service network design, and regional logistics network design (RLND) are branches of LND. There are various sophisticated theories on the application of LND in system engineering, operational research, and logistics engineering. LND mainly refers to solutions to a series of optimization problems in a logistics network system at the long-term strategic planning level, such as facility location, capacity decision, distribution planning, and vendor selection [8]. LND research can be applied in either the enterprise/private sector or the government/public sector. In the private sector, firms may apply LND models and algorithms when developing a logistics network or supply chain network for new markets or when replanning the network for an existing market.
The objective is to optimize the total costs of facility location, distribution planning, and supply chain organization while satisfying the demands of the final customer market [9]. However, RLND is mainly applied from the public sector perspective, optimizing the social logistics network by minimizing the logistics cost of the whole society to provide decision support for regional logistics network (RLN) planning. Thus, it is more realistic to consider traffic congestion in RLND, because RLND helps to solve regional logistics problems and similar transportation planning problems, which influence regional logistics and transportation infrastructures. In addition, a comprehensively new regional logistics network can lead to stronger competition among industries and offer location advantages. Most existing studies pay attention to the joint optimization of road network design and distribution based on different location problems, which provides a solid theoretical background for RLND. In general, the RLN design problem is difficult to tackle due to its nondeterministic polynomial-time hardness (NP-hardness) properties. Notably, it is even more complicated if we consider the background traffic flow. From the literature, the RLN design problem is almost a complete theoretical system; researchers have mainly focused on developing more practical models and more efficient algorithms. The most popular algorithms for solving RLND problems include intelligent optimization algorithms, heuristic algorithms, and network flow algorithms [10][11][12][13][14][15][16][17]. Yet, few studies have considered the background traffic flow state. Concerning the fusion of the logistics system and the background transport network, motivated by cost reduction in the company sector, some studies have managed congestion problems in product logistics networks or supply chain networks. Bai et al. first introduced the BPR function in modeling travel time and congestion costs in refinery location decisions [18]. Konur and Geunes utilized a game model to provide an analytical characterization of the effects of traffic congestion costs on equilibrium distribution flows [19]. Jouzdani et al. applied fuzzy linear programming in modeling dynamic travel times during traffic congestion and solved dynamic dairy facility location and supply chain planning problems [20]. More recently, Mohammad et al. managed congestion in a biomass supply chain network through dynamic freight routing using multimodal facilities at different periods of a year [21]. However, unlike these reviewed studies, the optimization model proposed in this study designs a regional logistics network, avoids increasing the level of regional traffic congestion, and promotes the service level of each facility simultaneously. Regarding the literature on the service decay function of facilities in the logistics network, a logistics facility has a specific service coverage radius: when a demand point lies within this coverage radius, the demand point is assumed to be covered entirely, but it is considered not wholly covered when it lies outside the coverage radius [22][23][24]. The service level degree corresponds to the extent to which the demand nodes are covered. The most relevant models to date are the gradual recession coverage model, the stepwise coverage model, the stochastic range decay coverage model, and minimum-maximum regression models.
Generally, in such models, the logistics facility is given initial upper and lower bounds for the service distance. When the travel distance between the customer/demand point and the facility is within the lower bound distance, the demand point is completely covered; when the distance exceeds the upper bound, the demand point is not covered. For demand points between the upper and lower bounds, a decay function is built to describe the coverage of the demand point. The critical variable in the classic decay coverage function is the shortest distance between the demand point and the facility point. This function is essentially a distance-covering function between the demand point and the facility point; that is, the service capacity gradually declines with the increase in the distance between the facilities and demand points. However, in reality, logistics demand concerns not only a simple spatial distance but also the delivery time and delivery volume of the facility through diverse modes of transportation. Concerning these logistics demand characteristics, a noncoverage function of logistics demand based on time and logistics delivery quantity has been constructed, which has been proved to be applicable in facility location problems [25]. In equation (1), T_j and t_j represent the upper bound and lower bound of the response time of the logistics network, respectively, which are required at the customer/demand point; q_ij is the service volume from the supply point to the demand point; and t_ij is the shortest delivery time from the facility to the corresponding node. The upper bound is the maximum delivery time limit of the customer/demand point. The lower bound means that the customer/demand point can be fulfilled by the corresponding facility rapidly; in other words, the facility can provide a high service level to the corresponding customer/demand point. When the service time is within the upper and lower bounds, the demand point can be served, but its degree of service suffers a corresponding recession. Yu et al. further applied this function in network design models of the fresh agricultural product supply chain and pointed out that this function is equivalent to a logistics service-quality evaluation function [26]. Although the function can describe the service-level relationship between the facility and the demand point well, it uses the free-flow travel time between the facility and the demand point for the transport of goods; that is, the road traffic network is neglected in the cargo transport process. Nevertheless, this assumption is inconsistent with the actual situation. Correspondingly, when the service time exceeds the upper bound demand time, the noncoverage is a considerably large constant. When only the free-flow transportation time is taken into account, the freight volume q_ij can meet the demand within the required time. However, when the traffic flow is considered, the actual transportation time will increase, the delivery time inequality will not always hold, and the function needs to be redefined. To capture the delivery time better, in the next section, we first develop an uncover degree function (UDF) by combining the service decay function and the BPR function. Formulating the Service Uncover Degree Function To enhance the understanding of the interface between regional truck flows and regional transportation systems, we applied truck trajectory data of Chengdu city from August 2017 to October 2017.
The left part of Figure 2 shows the spatial distribution of weekly truck flows in Chengdu city. Remarkably, the trucks are clustered on a limited set of roads in the city. Regardless of the day of the week, the trucks mainly traveled on the third ring road and the fourth ring expressway, as well as some main trunk roads connecting the logistics centers of the city. The right part of Figure 2 shows the ranking of the most congested roads in Chengdu city in 2017. The fourth ring expressway is ranked number 1, and the third ring road is ranked number 4. Therefore, a considerable number of trucks passed through the most congested roads. To model and capture the service decay in travel time, we first need to model the relationships between the background traffic flow, the logistics freight flow, and the travel time. Figure 3 gives a typical OD pair and its corresponding real route. Since the BPR function is the most widely accepted traffic delay function in both China and the U.S., we use the BPR function as the time function to estimate the travel time of each OD pair in the RLN as follows:

t_ij(x_ij) = t0_ij [1 + α (x_ij / c_ij)^β], (2)

where x_ij is the traffic flow between OD pair (i, j); t_ij(x_ij) is the actual road travel time when the traffic flow is x_ij; t0_ij is the free-flow time; c_ij is the road capacity; and α and β are constants that depend on government regulation (e.g., according to the recommendation of the US Department of Transportation, the general road constants can be set to α = 0.15 and β = 4; in China, the same values are applied). Based on the above analysis, the uncover degree function of the logistics demand point under the condition of traffic delay is deduced as equation (3), where b_ij is the average value of the traffic flow on road (i, j) when the corresponding logistics facilities are not constructed, and y_ij is the traffic flow on road (i, j) when the corresponding logistics facilities are constructed. Introducing the constant δ to express the conversion factor between the traffic flow and the freight volume, we have the following expression:

y_ij = δ q_ij. (4)

Then, we obtain the UDF of the problem as follows: supposing that a unit of traffic can transport 1/δ units of goods, we substitute (2), (3), and (4) into function (1) and overwrite the noncover value beyond the upper bound time with a sufficiently large value M. To facilitate observation of the relationship between the UDF and the distribution time in the logistics network, let q_ij = 1. Considering that the road traffic has already reached its capacity bottleneck, (b_ij + δq_ij)/c_ij = 1. The upper and lower bounds of the distribution time of the demand nodes are set to 24 and 12 h, respectively. MATLAB R2010a was employed to plot the relationship between the travel time and the uncover degree. As shown in Figure 4, when t_ij ≤ t_j, the demand point is regarded as having high coverage: the logistics network can respond to the logistics demand of the demand point quickly, and the noncoverage value can be taken as zero. When t_j < t_ij ≤ T_j, the demand point is assumed to be under good coverage, and the uncover degree value decays with increasing transport time. When t_ij > T_j, the demand point is not covered by any corresponding facility, and the logistics network can hardly respond to the logistics needs. This situation should be avoided in an actual logistics network design process. Therefore, we manually set a value sufficiently large to describe the uncover degree.
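As a minimal sketch of the behavior just derived, the Python function below evaluates the UDF for a single facility-demand pair: the delivery time comes from the BPR function of equation (2), the noncover value is zero below the lower time bound, decays between the bounds, and jumps to the large constant M above the upper bound. The linear decay shape and all numeric values are assumptions for illustration, since the source describes the decay only qualitatively (Figure 4).

def bpr_time(t_free, flow, capacity, alpha=0.15, beta=4):
    # BPR travel time of equation (2): congestion-delayed travel time
    return t_free * (1.0 + alpha * (flow / capacity) ** beta)

def uncover_degree(t_free, b_ij, q_ij, c_ij, t_low, t_up, delta=0.1, M=1.0e3):
    # Delivery time under background flow b_ij plus the induced truck flow delta*q_ij
    t_ij = bpr_time(t_free, b_ij + delta * q_ij, c_ij)
    if t_ij <= t_low:
        return 0.0                              # high coverage: rapid response
    if t_ij <= t_up:
        return (t_ij - t_low) / (t_up - t_low)  # assumed linear decay with travel time
    return M                                    # effectively uncovered

# Saturated road ((b_ij + delta*q_ij)/c_ij = 1), bounds of 12 h and 24 h, q_ij = 1
print(uncover_degree(t_free=12.0, b_ij=1990.0, q_ij=1.0, c_ij=1990.1, t_low=12.0, t_up=24.0))

With these inputs the congested delivery time is 12 × 1.15 = 13.8 h, giving an uncover degree of 0.15, which matches the qualitative shape plotted in Figure 4.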
Models. The RLND model constructed in this study aims to minimize the design and operation costs of the logistics network while satisfying the regional logistics demand. As a gathering point of regional logistics demand is usually the best candidate for a logistics facility location, it is assumed that each demand point in the logistics network is also a candidate site for facility location. For the treatment of road traffic flow, it is assumed that the road background traffic flow is accurate data, which can be obtained from historical records. We now introduce the symbols shown in Table 1. (From Table 1: the location decision variable equals 1 when a facility is located at node i and 0 otherwise; q_ij is the decision variable for the freight volume on road (i, j) during the planning period, described as the amount of cargo from facility i to demand point j, in tons; Q_i is the turnover during the planning period when facility point i is also a candidate for facility location, in tons.) The following equations are used in the mathematical models. The objective function aims to minimize the costs of road building, transportation, road expansion, facility turnover, and the fixed costs of facility location, together with the penalty costs of the demand point uncover degree based on the traffic flow. Constraints (7) and (8) provide a balance between the supply and demand of the network flow. Constraint (9) guarantees that the supply of the logistics network is not less than the total demand. Constraint (10) describes the relationship between the road expansion variables and road capacity: for the edge set of roadways to be built, expansion is needed only when the actual traffic flow exceeds the capacity. Constraint (11) states that the road capacity should not be less than the sum of the background traffic flow and the added freight traffic flow for the edge set of roads with no need for expansion. Constraint (12) bounds the demand point uncover degree function. Constraints (13) and (14) guarantee that the variables are positive and range between 0 and 1. Algorithms. To solve this problem, a heuristic algorithm based on Lagrangian relaxation (HALR) is used to provide upper and lower bounds for the original problem. Subsequently, the corresponding algorithm is developed to solve the NP-hard problem in the model expressed in (6)-(14). Accounting for the characteristics of the model (6)-(14), constraints (8) and (10) are used for relaxation. The Lagrange multiplier μ_ij and the nonnegative Lagrange multiplier μ_ij′ are introduced, and the relaxation of the original problem is given as (15). For any values of μ_ij and μ_ij′, objective function (15) is a lower bound of the original problem. Furthermore, (15) is rewritten as a function of the Lagrangian multipliers and decomposes into the following two subproblems. Subproblem 1 (equation (16)): this subproblem refers to the general logistics network design and can be solved using the developed heuristic algorithm. Subproblem 2 (equation (17)): this subproblem refers to the traffic flow assignment.
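A minimal sketch of the multiplier update inside such a relaxation is given below, assuming a Polyak-type step size driven by the current upper and lower bounds. The multipliers μ for the relaxed equality constraint (8) stay unrestricted, while the multipliers μ′ for the relaxed inequality constraint (10) are projected back to the nonnegative orthant. Variable names and the step-size rule are illustrative, not the authors' exact implementation.

import numpy as np

def subgradient_step(mu, mu_prime, g_eq, g_ineq, z_upper, z_lower):
    # g_eq, g_ineq: constraint violations of the current relaxed solution
    g = np.concatenate([g_eq, g_ineq])
    step = (z_upper - z_lower) / max(float(np.dot(g, g)), 1e-12)  # Polyak step size
    mu = mu + step * g_eq                                  # unrestricted (equality)
    mu_prime = np.maximum(0.0, mu_prime + step * g_ineq)   # projected (inequality)
    return mu, mu_prime

Each outer iteration solves the two subproblems with the current multipliers, measures the constraint violations, and calls this update, tightening the lower bound z(μ, μ′) toward the best upper bound found.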
Based on the above analysis, the algorithm steps for the original problem are designed as follows: (1) Initialize the input data according to the free-flow times between the nodes of the network, and remove the connection edges for which t0_ij > T_j. (2) Construct the Lagrangian multipliers μ_ij and μ_ij′, and relax the original problem as shown in (15); in addition, relax the 0-1 constraint (14). (3) Let k equal 100 for a small-scale problem or 500 for a large-scale problem, take ε as a small nonnegative real number, run a subgradient search to solve subproblem model (16), and obtain an approximate optimal solution. (4) Substitute q_ij* into objective function (17) and obtain an expression of the form Σ_{i∈S} Σ_{j∈D} q_ij*(·), where μ_ij′ is the transport time on the virtual edge, that is, t(v) = μ_ij′, which can be solved using the Frank-Wolfe method. We then obtain the upper-bound solutions of the two subproblems, z1 and z2, by updating the iterations; recall that the number of iterations is 100 (small-scale problem) or 500 (large-scale problem). When z − z(μ, μ′) ≤ 3%, the optimal solution is output and the algorithm is terminated.
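Because step (4) assigns traffic with the Frank-Wolfe method, the sketch below runs the method on a deliberately tiny instance: one OD pair served by two parallel routes with BPR travel times. The network, demand, and the predetermined step size 2/(k + 2) are illustrative assumptions; a full instance such as Sioux-Falls would replace the route choice with shortest-path searches over the whole graph.

import numpy as np

def bpr(t0, x, c, alpha=0.15, beta=4):
    return t0 * (1.0 + alpha * (x / c) ** beta)

t0 = np.array([10.0, 15.0])      # free-flow times of the two routes
cap = np.array([1000.0, 1500.0])
demand = 2000.0

x = np.array([demand, 0.0])      # feasible all-or-nothing starting flow
for k in range(1, 200):
    times = bpr(t0, x, cap)
    y = np.zeros(2)
    y[int(np.argmin(times))] = demand    # all-or-nothing assignment to the cheapest route
    x = x + (2.0 / (k + 2.0)) * (y - x)  # Frank-Wolfe step with predetermined step size
print("equilibrium flows:", np.round(x, 1), "travel times:", np.round(bpr(t0, x, cap), 2))

At convergence the two route times equalize, which is the user-equilibrium condition that the traffic flow assignment subproblem seeks.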
Experimental Design. To observe the impact of traffic flow delay on the logistics network design problem and to compare the experimental results, two classical logistics network design problems are adopted to test the model and the algorithm. As shown in Figures 5(a) and 5(b), the network structures are the classical six-point problem considering road expansion and the Sioux-Falls network problem. The site selection cost and unit operating cost are marked beside each node, and the unit transport cost of each transport line is marked on the line. The traffic flow status of the six-node network is presented in the table part of Figure 5(a). For the Sioux-Falls network, the free-flow travel time of each path and the background traffic flow are displayed in the gray-scale maps on the right of Figure 5(b). In the six-point problem, the dotted lines indicate roads to be constructed, whereas the solid lines signify existing roads to be expanded. In the Sioux-Falls network, road construction is complete, and only the expansion problem of the roads between the nodes is considered; only roads with a current capacity of 2000-4000 pcu/h will be expanded. The upper and lower limits of the logistics network response time are assumed to be 24 and 12 h, respectively, and the penalty cost of the logistics network is taken as 40 yuan per unit of product. The unit construction cost of the road is 20 yuan, and the expansion cost based on traffic flow is 0.133 yuan/pcu (estimated at a payback period of 30 years). Calculation Results and Analysis. (1) The computational results displayed in Table 2 were obtained by encoding the HALR algorithm and applying a Genetic Algorithm (GA) toolbox in MATLAB R2010(a) on a personal computer with an Intel Core(TM) i7, 1.80 GHz, and 4 GB RAM. The results show the approximate optimal solution and the difference between its upper and lower bounds. For the small-scale problem, an optimal solution with higher precision can be obtained, whereas there is some redundancy between the upper and lower bounds for the large-scale problem. Concerning the computational performance comparison between the genetic algorithm (GA) and the algorithm we proposed in Section 4 (HALR), the results show that although the GA can achieve more accurate results, its computational time is much longer. (2) To further demonstrate the practicability of the proposed model and algorithm, experiments are designed for the Sioux-Falls network to solve the traffic network design problem without considering the road traffic flow, i.e., subproblem 1. Moreover, comparative analyses are made on the total cost, location, road construction, road expansion, and traffic congestion, respectively. Taking the road traffic flow saturation ϕ_ij = (δq_ij + b_ij)/c_ij, that is, the ratio of road traffic volume to road capacity, to reflect traffic congestion, the overall ratio between the traffic flow and capacity in the network reached 0.927. Therefore, only the roads with newly constructed logistics network facilities and the edges with ϕ ≥ 0.927 are counted as critical congested roads. We select the number of these edges and the value max(ϕ) in the network to represent the degree of network traffic congestion. A higher number of congested edges and a value of max(ϕ) closer to 1 indicate more saturated road traffic, and excessive traffic volume quickly leads to traffic congestion. As presented in Table 3, the traffic network design problem without considering the traffic flow neglects the road background traffic flow, and the road design capacity directly replaces the road capacity. Eliminating the congestion penalty cost and disregarding the transport time, the total cost and the number of facilities are reduced by 8.6% and 20%, respectively. However, the total network transport time is increased by 35%, and the number of congested roads is increased by 157%. The maximum road saturation is close to 1, which leads to traffic congestion in the original network. In contrast, when considering the background traffic status and service level in the RLND problem, the optimal results of our model meet the distribution time constraints in the regional logistics network. In addition, the model can balance the distribution truck flow in a more saturated traffic network, although this is achieved with a slight sacrifice of construction and operation cost savings. Furthermore, the optimal facility location and distribution plan will not cause traffic overload, realizing the sustainable development of the logistics network. Sensitivity Analysis. To clarify the interaction between the number of facilities and the uncover degree (UD), as well as the effect of variation in the conversion factor between traffic flow and freight volume on network design decisions, we carried out a sensitivity analysis using the data of the Sioux-Falls network. The results are plotted with MATLAB R2010(a) based on the calculated data. Figure 6 shows the relationship between the UD and the number of facilities in the logistics network. The UD decreases with the increase in the number of facilities over a particular range. When the number of facilities reaches 20, the logistics network demand is completely covered; thus, the UD is 0. When the number of facilities is within the range (5, 20), the UD decreases with the increase in the number of facilities. When the number is 5 or less, the UD reaches its peak, as the demand of the network cannot be satisfied. Figure 7 displays the effect of the conversion factor between traffic flow and freight volume on the UD. The curve shows a gradually increasing trend. There are three UD stable areas, namely (0.03, 0.06), (0.12, 0.17), and (0.23, 0.26), within which the UD does not change. Recall that δ = y_ij/q_ij, where y_ij is the truck flow and q_ij is the freight volume; thus, the freight load per truck is 1/δ, in tons. It is noted that a smaller traffic flow yields better results for the same freight volume, which signifies that a smaller δ is more desirable. The relationship with the UD reveals that we can choose an optimal truck-type selection and loading to minimize both the total costs and the UD.
Within a UD stable area, the minimal δ is the optimal choice for a truck-operating company selecting the truck type. The three stable areas correspond to freight loads of roughly (16.7, 33.3), (5.9, 8.3), and (3.8, 4.3) tons, respectively (computed as 1/δ from the stable δ ranges). There are also three stable areas in the rising curve of the number of facilities, which are (0.01, 0.025), (0.06, 0.13), and (0.16, 0.22). Typically, the number of facilities increases with the increase in δ, but the stable areas lie within the ideal range, such as (0.06, 0.13). A higher freight load can be reached without increasing the facility cost by choosing the minimal δ = 0.06. In other words, selecting the truck type with a load capacity of 16.7 tons is the optimal solution among trucks with load capacities ranging from 7.7 to 16.7 tons. For overall optimality, the δ ranges (0.12, 0.13) and (0.16, 0.17) are the overlapping stable areas for both the UD and the number of facilities, which indicates that trucks with loading capacities of 6.3 tons (corresponding to δ = 0.16) and 8.3 tons (corresponding to δ = 0.12) could be the optimal truck types for an RLN considering traffic congestion. Conclusion In this study, the effect of traffic flow on regional logistics network design is analyzed. Based on the study of the partial coverage function and the traffic flow-delay BPR function, the uncover degree function (UDF) of logistics facilities is constructed through an analysis of the real travel time between pairs of facilities and customer/demand points. Using this function, an integrated optimization model of the regional logistics network is developed. The two-layer Lagrangian relaxation heuristic algorithm is applied in solving the design problem of the logistics network for the six-point problem and the Sioux-Falls network problem. The performance analysis demonstrates the validity of the model and the algorithm for problems of both scales. The results of the sensitivity analysis show the relationship between demand point coverage and the number of facility locations in the logistics network, as well as how both vary with the traffic volume and with fluctuations of the traffic-flow conversion factor. The model can guide the practical planning and design of truck-type selection and loading schemes in a regional logistics network. The simplified model can also direct a logistics company in specifying a real-time distribution plan, minimizing logistics costs, ensuring timely delivery, and reducing congestion when distribution vehicles travel in the background transportation network. In addition, the pressure on the regional transport network can be reduced owing to the proper choice of truck travel times. From the managerial view, this study shows the following: (1) The proposed models and algorithms can support the decision maker in planning a logistics network with a small increase in the total cost, together with a higher service level based on service time and without causing traffic congestion. (2) The proposed model can replicate the effect of traffic congestion on the design of the logistics network to some extent. (3) Considering the effect of traffic congestion, the total transportation time, the path flow/capacity ratio in the logistics network design, and the number of congested routes in the regional logistics network can be optimized. (4) Generally, the uncover degree of customer/demand points increases with the conversion factor of traffic flow and freight volume δ.
However, there are stable areas of the uncover degree over specific ranges of δ: within these ranges, the uncover degree remains stable as δ increases. (5) Similarly, the number of facility locations increases with the conversion factor of traffic flow and freight volume δ. However, some stable areas for the number of facilities also have specific ranges of δ: within these ranges, the number of facilities that need to be constructed remains stable as δ increases. Future studies should focus on estimating real travel times from truck trajectory data. Additionally, a data-driven model for solving the regional logistics network design problem should be formulated. Data Availability The data used to support the findings were obtained from the National Engineering Laboratory of Big Data Application in Integrated Transportation, Chengdu, Sichuan, China. Most of the data of this study are included within the article. Conflicts of Interest The authors declare no conflicts of interest.
2020-04-30T09:11:24.921Z
2020-04-29T00:00:00.000
{ "year": 2020, "sha1": "859a70f09aeb304301002ae8c369586d161f9784", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jat/2020/5197025.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9beceeeffdc43dcf9cc509ad19e0abaf44322e7a", "s2fieldsofstudy": [ "Engineering", "Business", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
236690333
pes2o/s2orc
v3-fos-license
Artificial intelligence: confidence index in Russia and the world, prospects for implementation. Artificial intelligence technologies are being implemented in various fields, replacing the human mind with the help of specially designed algorithms. These systems are able to learn in the course of their functioning, freeing us from routine work and saving time and material resources. The article presents the results of research on trust in breakthrough digital technologies as an important condition for their use, including in social life. The research revealed a high demand for «smart» technologies alongside an insufficient level of knowledge in this area and a lack of interest in professional development. The article identifies the factors causing a negative attitude towards innovation. Under the current conditions of the pandemic, a tendency has been revealed toward an increasing need for solutions using artificial intelligence and machine learning technologies, including for ensuring information security. Introduction The penetration of artificial intelligence (AI) technologies into various spheres of social life can change the social reality not only of the inhabitants of the most technologically developed countries but of the whole world. National programs and strategies are being developed to advance AI, create benefits, and accelerate innovative change. China was one of the first to approve a new-generation artificial intelligence development plan (2017). In Russia, the «National Strategy for the Development of Artificial Intelligence for the Period up to 2030» was approved in 2019, and the federal project «Artificial Intelligence» was developed. AI technologies imitate the human mind and have the ability to understand, analyze, and learn from data using specially designed algorithms. Artificial intelligence systems are able to memorize patterns of human behavior and adapt in accordance with a person's preferences. One example of artificial intelligence that has entered our lives is «smart» video cameras capable of recognizing documents [1] and identifying a person [2]. Visual navigation systems have found application both in unmanned vehicle control [3] and in algorithms for the movement of humanoid robots [4]. There are many examples of the use of AI, and at the same time, a number of questions remain relevant: 1) Are we ready for the widespread introduction of artificial intelligence, for the development of technologies that allow machines to reproduce human capabilities much more accurately? 2) Can AI be entrusted with making decisions related to human life? 3) How will the introduction of AI systems affect the demand for specialists in various fields in the labor market? 4) What role does education play in the formation of the specialists of the future? Materials and methods The aim of the presented study is to determine the level of trust of residents of different countries in artificial intelligence technologies, as one of the conditions for the use of AI systems in various fields. The analysis was based on statistics published in open sources. The tasks were set to determine: 1) the peculiarities of the attitude of Russian citizens to «smart» technologies; 2) the factors causing negative attitudes towards AI; 3) possible ways to overcome distrust in innovation.
Results The particular attention paid by government and commercial organizations to citizens' opinion of artificial intelligence is due to the fact that the trust of the population is the most important condition determining the possibility of using AI technologies in various fields. One use case for AI is unmanned vehicle control. Errors in unmanned control technology can lead to tragic consequences [3] and serious economic and environmental damage. The first serious road traffic accident (RTA) involving a "self-driving" hybrid crossover from Google, in which three employees were injured, occurred in 2015. An accident with a Tesla Model S electric car with the autopilot on, in which the driver died, occurred in May 2016 in the USA. Unfortunately, the list of accidents continues to grow, but experts admit that the introduction of unmanned technologies in transport can lead to a significant reduction in the number of victims of car accidents. Note that Russian experts of the national technology initiative "Autonet" are preparing proposals for amendments to the rules of the road: unmanned vehicles would be able to receive priority on the roads as public transport does. In 2020, the company "Romir", part of the Mile Group, together with the international research community GlobalNR, studied the attitude of citizens of 10 countries to the robotization of road transport: India became the leader in the approval index (71%) of the idea of replacing human drivers with robots, while UK residents were the least supportive of this idea (27%). Only a third of the Russians surveyed spoke in favor of the introduction of unmanned technologies. 69% of the world's respondents and 57% of Russians believe that unmanned technologies will become a reality in the next ten years. A study of the attitude of office workers of American companies to digitalization and robotization, organized by ABBYY, a global developer of solutions in the field of intelligent information processing and linguistics, revealed that half of the employees surveyed are ready to transfer some of their functions to machines. At the same time, a third of the respondents do not trust digital "employees": 27% would not delegate any work to artificial intelligence, and 32% do not believe that AI could cope with any task better than they can. The majority of Russians surveyed by the All-Russian Public Opinion Research Center have a positive or neutral attitude toward the spread of artificial intelligence technologies. However, only 29% of respondents were able to define AI, and 38% could name areas of its application. Russian respondents declare a high level of readiness for personal use of services based on AI technologies, primarily public services (68%) and leisure and entertainment (54%). Some 54% of the Russians surveyed are ready to entrust the solution of everyday tasks to artificial intelligence, 52% would resort to digital medical care and diagnostics, and 44% would like to undergo training using AI technologies. It should be especially noted that 68% of Russian respondents are not afraid of humans being replaced by «smart» technologies in their own profession. The study showed that more than 50% of respondents are not interested in advanced training in the field of artificial intelligence, since they do not fully understand the essence of the technologies and the consequences of their implementation.
The negative attitude of respondents to artificial intelligence is caused by doubts that artificial intelligence systems are capable of performing the tasks assigned to them with the required quality, including in the field of ensuring the security of personal data. People are afraid of violations of personal space, hacking of information systems, and theft of personal data. Analysis of statistical data [5] allowed us to conclude that it is the personnel of companies, including top managers and line managers, who are the main culprits in information security incidents, allowing confidential information to leak (Fig. 1). Cybersecurity Insiders reports that 90% of organizations in the world feel vulnerable to insider attacks. The activity of such attacks is growing with the development and increasing sophistication of digital technologies, as well as with the increase in the number of employees with access to confidential data. To implement information security threats, cybercriminals use malicious software (ransomware, spyware and adware, banking Trojans, etc.) in combination with the exploitation of web vulnerabilities and social engineering methods (Fig. 2), for which fertile ground has been created by the complex epidemiological and economic situation in the world [6,7]. The year 2020, amid the pandemic, changed the principles of interaction between employees of many organizations around the world. In the context of remote work, even the most conservative companies connected to EDI (electronic data interchange) services and switched to fully contactless document exchange. Phishing is currently the most popular social engineering technique. According to Verizon's 2020 Data Breach Investigations Report (DBIR), in 60% of cases it was credentials (logins and passwords) that were the targets of attacks, and in 96% of cases email was used to influence employees of companies and individuals. The author believes that the introduction of artificial intelligence technologies for processing information resources will reduce the number of employees who have access to confidential data and reduce the volume of leaks. It must be taken into account that AI systems can have vulnerabilities inherent in modern software [8] and can be attacked by information security intruders. To reduce the possible damage from the successful implementation of threats, an adaptive information protection system [9] and algorithms for the early detection of malicious activity [10] can be used; currently, one of the key directions in protection against cyberattacks is the use of artificial intelligence [11,12]. According to the Capgemini Research Institute, 64% of organizations with annual revenues of more than $1 billion say AI technologies can reduce the cost of detecting and responding to information threats, and about 75% say they achieve faster response times. The analysis showed that artificial intelligence technologies are in demand for processing large amounts of data, including restricted-access information, and for protecting information systems from cyberattacks. Discussion Progress has been made in the world towards the creation and application of artificial intelligence technologies; it is a matter of time before AI systems become ubiquitous. With all the advantages of AI, there is an obstacle to its development and implementation: people express distrust of new technologies and often do not have the necessary knowledge.
Users of information systems often confuse the concepts of artificial intelligence and robotics [13,14]. Non-AI programs execute a specific sequence of instructions, whereas AI systems, mimicking the human mind, are able to learn. The education system should help in the formation of competencies in the field of breakthrough digital technologies, such as the Internet of Things, artificial intelligence, virtual reality, wireless communications, and augmented reality. Serious steps have been taken in Russia to ensure confidence in artificial intelligence systems on the part of the consumers of their results: the state standard GOST R 59276-2020 «Artificial intelligence systems. Methods for ensuring trust. General» entered into force on March 1, 2021. As many as possible of the factors that can lead to a decrease in reliability must be eliminated in AI systems. Methods for ensuring trust at all stages of the life cycle of these systems should complement each other, since each of them has both advantages and disadvantages. To improve the efficiency of using AI systems in solving applied problems, the standard GOST R 59277-2020 «Artificial intelligence systems. Classification of artificial intelligence systems» also entered into force on March 1, 2021. It describes the main features of these systems and helps to determine the directions of their standardization. According to the National Strategy for the Development of Artificial Intelligence until 2030, by 2024 the level of participation of Russian specialists in the international exchange of knowledge and their contribution to the creation of open AI libraries should increase significantly. By 2030, software must be developed that uses AI technologies to solve problems in various fields of activity. Conclusion The study revealed the problems and prospects of the development of artificial intelligence technologies. Data from statistical surveys reflecting people's attitudes to «smart» technologies were analyzed. The presented analysis showed that residents of different countries generally have a positive attitude towards the introduction of artificial intelligence systems. The mistrust that exists is caused by fears about the quality of solving the tasks entrusted to AI, including in the field of information security. The author proposed transferring functions for processing restricted information to AI systems, since it was revealed that insiders are the main channel of data leakage. AI technologies can help protect against cyberattacks and reduce potential damage by reducing the time it takes to detect and respond to information threats.
Deregulation of complement components C4A and CSMD1 peripheral expression in first-episode psychosis and links to cognitive ability

Up-regulation of the complement component 4A (C4A) in the brain has been associated with excessive synaptic pruning and increased schizophrenia (SZ) susceptibility. Over-expression of C4A has been observed in SZ postmortem brain tissue, and the gene encoding a protein inhibitor of C4A activity, CUB and Sushi multiple domains 1 (CSMD1), has been implicated in SZ risk and cognitive ability. Herein, we examined C4A and CSMD1 mRNA expression in peripheral blood from antipsychotic-naive individuals with first-episode psychosis (FEP; n = 73) and mentally healthy volunteers (n = 48). Imputed C4 locus structural alleles and C4A serum protein levels were investigated. Associations with symptom severity and cognitive domain performance were explored. A significant decrease in CSMD1 expression levels was noted among FEP patients compared to healthy volunteers, further indicating a positive correlation between C4A and CSMD1 mRNA levels in healthy volunteers but not in FEP cases. In addition, C4 copy number variants previously associated with SZ risk correlated with higher C4A mRNA levels in FEP cases, which confirms the regulatory effect of C4 structural variants on gene expression. Evidence also emerged for markedly elevated C4A serum concentrations in FEP cases. Within the FEP patient group, higher C4A mRNA levels correlated with more severe general psychopathology symptoms and lower CSMD1 mRNA levels predicted worse working memory performance. Overall, these findings suggest C4A complement pathway perturbations in individuals with FEP and corroborate the involvement of CSMD1 in prefrontal-mediated cognitive functioning.

Supplementary Information The online version contains supplementary material available at 10.1007/s00406-022-01409-5.

Introduction

A large number of investigations have reported biochemical alterations of the immune system in patients with schizophrenia (SZ), providing support for the immune/inflammatory hypothesis of SZ as a putative pathophysiological mechanism increasing vulnerability to the illness [1-3]. Similarly, dysregulated immune/inflammatory responses and differential expression of immune-related genes have been observed in individuals with a first episode of psychosis (FEP) [4-7], indicating that immune aberrations may exist even at the early stages of psychosis, further corroborating the view that underlying immunological deficiencies could play a role in the development or progression of psychotic disorders [8,9]. In line with the above notion, genetic evidence has also emerged highlighting the involvement of genes coding for immune system components in SZ pathology [10,11]. The involvement of complement system alterations in SZ etiopathogenesis, likely through increased activity of the classical complement pathway, has long been considered an indication of disturbed innate immunity in SZ which may negatively affect neurodevelopmental processes [12]. On another front, the exacerbation of immune/inflammatory reactions during the acute phase of psychotic illness could be viewed as a compensatory or protective physiological mechanism that may alleviate psychotic symptoms and cognitive impairment [13]. Environmental influences and psychosocial distress may also induce secondary physiological processes, including an increase of inflammatory responses [14,15].
From a genomic perspective, fine-mapping of associations derived from large-scale genome-wide association studies (GWAS) in SZ suggested that the complement component C4 constitutes a potential immune mediator which increases SZ susceptibility [11]. Specifically, structurally distinct alleles at the C4 gene locus, which encodes the complement component C4, have been genetically linked to SZ risk and associated with elevated C4A isotype gene expression in postmortem brain tissue from SZ patients. Over-expression of C4/C4A mRNA transcripts has been observed in multiple brain regions of SZ patients compared to healthy individuals [16,17]. It is noted that the C4 protein product is a member of the classical complement pathway that is activated during the innate immune response and forms a proteolytic protein cascade that clears cellular debris, enhances inflammation, and is involved in the engulfment or elimination of pathogens [18]. Moreover, evidence from animal studies indicates that higher C4A isotype expression is implicated in excessive synaptic pruning in the brain, likely contributing to behavioral disturbances and cognitive deficits [19]. In accordance with the aforementioned findings, higher genetically predicted C4 gene expression is associated with poor memory performance in both SZ patients and healthy individuals [20]. With respect to differences in psychotic symptoms, increased C4A mRNA levels in peripheral blood cells have been shown to correlate with greater severity of psychopathology in SZ patients, specifically delusional symptoms [21]. Additionally, up-regulation of C4A protein levels has been found in plasma as well as in cerebrospinal fluid from patients with SZ [22-25]. Preliminary findings also suggest that higher total C4 plasma levels may predict unfavorable treatment response among FEP patients followed up for twelve months [26].

It is of interest that among the loci most strongly associated with SZ in recent GWAS is the CUB and Sushi Multiple Domains 1 (CSMD1) genetic locus, which encodes a protein inhibitor of C4 activity in neural tissues [27-29]. Further, reduced CSMD1 mRNA expression has been observed in peripheral blood of SZ patients [30], and common genetic variation within CSMD1 has been associated with psychosis proneness in the general population [31], as well as with memory functioning in SZ and healthy individuals [32,33].

Prompted by earlier findings implicating C4A and CSMD1 peripheral aberrations in SZ, the aim of the present study was to examine C4A and CSMD1 mRNA expression levels, as well as serum C4A protein levels, in peripheral blood from well-characterized antipsychotic-naïve FEP cases and mentally healthy individuals. In addition, genetically predicted C4 expression was estimated, and we evaluated whether gene expression correlates with symptom severity at the early course of the illness and with cognitive performance.

Participants

In the current study, we included 73 unrelated cases (mean age 25.0 ± 7.2 years; 69% males) with non-affective first-episode psychosis (FEP), recruited as part of the collaborative Athens FEP Research Study [34]. Clinical diagnoses were established based on the International Classification of Diseases, 10th Revision (ICD-10) diagnostic criteria (WHO, 1992) [35]. Consensus diagnoses for all cases were obtained from trained psychiatrists on the basis of detailed clinical records, and individuals fulfilling diagnostic criteria for Schizophrenia-spectrum disorders (ICD-10 codes: F20-F29) were examined.
All cases at the time of admission were antipsychotic-naïve, and blood sampling was performed for basic biochemical examination and subsequent genomic analysis. Symptom severity at admission was assessed using the Positive and Negative Syndrome Scale (PANSS) [36]. Of the 73 FEP cases included in the study, 58 (79.5%) were hospitalized. Detailed demographic and clinical information is presented in Table S1.

A total of 48 unrelated healthy volunteers (mean age 26.5 ± 4.8 years; 60.4% males) with no history of psychiatric disorder donated blood samples during annual routine biochemical examination and served as the control group. Each volunteer underwent a brief medical interview by trained physicians to assess the presence of major mental illness and other neurological or immunological disorders. Written informed consent was obtained from every individual after a detailed description of the research objectives, and the study protocol was approved by the ethics committee and the Institutional Review Board at Eginition University Hospital (Athens, Greece).

Cognitive assessment

General cognitive ability was estimated using the Greek version of the Wechsler Adult Intelligence Scale (WAIS, Fourth Edition) [37,38], a comprehensive neurocognitive test comprising ten basic subtests grouped into four indexes representing distinct cognitive domains: verbal comprehension, perceptual reasoning, working memory and processing speed. Subtest raw scores were converted into age-corrected scaled scores using available Greek norms to determine the individual index scores and the full-scale IQ score. The assessment of neurocognitive functioning was performed within 3-4 weeks following admission to the research protocol by trained clinical neuropsychologists at Eginition University Hospital. In the current study, we enrolled 59 FEP cases with available neuropsychological data for further analysis.

RNA extraction and gene expression analysis

Total RNA was extracted from peripheral blood mononuclear cells (PBMCs) using the NucleoSpin RNA Blood kit (Macherey-Nagel, Düren, Germany), according to the protocol provided by the manufacturer. The purity and integrity of total RNA were evaluated using UV measurements and denaturing agarose gel electrophoresis, respectively. Reverse transcription (RT) reactions were performed using the PrimeScript First-Strand cDNA Synthesis Kit (TAKARA Bio, Japan) at 37 °C for 30 min followed by 85 °C for 5 s. The mRNA expression levels of C4A and CSMD1 were measured using semi-quantitative real-time polymerase chain reaction (RT-qPCR) on an ABI Prism 7000 instrument (Applied Biosystems, Foster City, CA, USA). Every cDNA sample was mixed with specific sets of primers and the qPCR master mix (KAPA SYBR FAST Universal Kit, Sigma-Aldrich, Germany); cycling consisted of 2 min at 50 °C and 2 min at 95 °C, followed by 40 cycles of 15 s at 95 °C and 60 s at 60 °C. Finally, a standard dissociation protocol was used to ensure that each amplicon was a single product. All reactions were run in triplicate to ensure reproducible results. To evaluate differences in gene expression between groups, the fold change was calculated for each gene applying the comparative Ct (2^−ΔΔCt) method. Relative mRNA expression levels were estimated by calculating delta Ct (cycle threshold) values, using the Ct values of the GAPDH housekeeping gene for normalization. Under-expressed genes are shown as the negative inverse of the fold change and over-expressed genes as the fold change.
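To make the comparative Ct calculation concrete, the following is a minimal Python sketch of the 2^−ΔΔCt fold-change computation described above, with the triplicate averaging, GAPDH normalization, and sign convention for under-expressed genes following the description in the text. The Ct values shown are hypothetical, not data from the study.

```python
import numpy as np

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Comparative Ct (2^-ddCt) fold change of a target gene in cases vs. controls.

    Each argument is an array of Ct values (e.g., triplicate qPCR measurements);
    the reference gene (here GAPDH) is used for normalization.
    """
    # delta Ct = Ct(target) - Ct(reference), averaged over replicates
    d_ct_case = np.mean(ct_target_case) - np.mean(ct_ref_case)
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    dd_ct = d_ct_case - d_ct_ctrl   # delta-delta Ct
    fc = 2.0 ** (-dd_ct)            # relative expression in cases vs. controls
    # report under-expression as the negative inverse of the fold change
    return fc if fc >= 1.0 else -1.0 / fc

# Hypothetical triplicate Ct values for a target gene and GAPDH
print(fold_change(
    ct_target_case=[29.0, 29.1, 28.9], ct_ref_case=[18.0, 18.1, 17.9],
    ct_target_ctrl=[28.6, 28.7, 28.5], ct_ref_ctrl=[18.0, 18.2, 18.1],
))  # ~ -1.41, i.e. roughly 1.4-fold under-expression in cases
```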
Primer sequences were as follows: C4A forward, 5′-GGC TCA CAG CCT TTG TGT TG-3′; C4A reverse, 5′-CCC TGC ATG CTC CTG TCT AA-3′; CSMD1 forward, 5′-GTC TGG GCT CGT GGA TAT GT-3′; CSMD1 reverse, 5′-CAG GTC TCG GAA GGA CAG AG-3′; GAPDH forward, 5′-CGA GAT CCC TCC AAA ATC AA-3′; GAPDH reverse, 5′-TTC ACA CCC ATG ACG AAC AT-3′.

Genotyping and C4 structural allele imputation

Genome-wide genotyping of unrelated FEP cases was performed using the Infinium Omni2.5 BeadChip array (Illumina Inc., San Diego, USA) at the Johns Hopkins Center for Inherited Disease Research (CIDR). Standard quality control (QC) processing of the genotype data was performed using PLINK v1.90 [39]. High-quality genotyped single-nucleotide polymorphisms (SNPs) with call rate > 95%, minor allele frequency > 1%, and Hardy-Weinberg equilibrium deviation p > 10^−6 were retained for further analyses. Samples with low genotype rates (< 95%) were also excluded. To infer C4 structural alleles, 12,052 SNPs spanning the MHC region were extracted and utilized for imputation of C4 copy number structural variants, following the procedures previously described by Sekar et al. (2016) and using the MHC haplotypes HapMap3 CEU reference panel, as recommended by the aforementioned group (http://mccarrolllab.org/resources/resources-for-c4/). Four common C4 haplotype groups were derived (BS, AL-BS, AL-BL, AL-AL) based on the combination of the C4 structural elements (C4A, C4B, C4L, C4S) that each individual carries, and genetically predicted C4A expression levels were estimated in accordance with prior studies [11,40]. In our FEP sample, the C4A predicted expression values ranged between 0 and 1.61 (mean: 1.26; standard deviation: 0.22).

Quantification of C4A serum levels

Human complement C4A protein levels in serum samples were determined using an enzyme-linked immunosorbent assay (ELISA) kit (AssayGenie, Dublin, Ireland) following the manufacturer's instructions. Peripheral blood samples were obtained from every participant at admission (between 8.00 and 11.00 am) and centrifuged within 30 min after the blood draw to collect serum for routine biochemical analysis. Serum aliquots were kept frozen at −70 °C until further analysis. ELISA measurements were performed in a subset of FEP cases (n = 62) with available serum samples, and triplicates were tested for each sample to calculate C4A mean concentration values for downstream statistical analyses.

Statistical analyses

Demographic characteristics were compared between groups by applying either Pearson's chi-square (categorical variables) or Mann-Whitney U tests (continuous variables), as appropriate. To evaluate gene expression differences between FEP cases and healthy volunteers, the comparative Ct method (2^−ΔΔCt) was utilized and fold-change differences were calculated. Differences in relative mRNA expression levels (log-transformed ΔCt values) between groups, and in C4A serum levels, were examined using one-sided Mann-Whitney U tests, as prior evidence has shown altered expression levels of C4A and CSMD1 in SZ [11,16,30]. Within-group correlations between C4A and CSMD1 relative gene expression were tested by applying Spearman's rank correlation coefficients (rho). Linear regression models adjusted for age and gender were applied to test for associations between relative gene expression levels and PANSS subscale scores or WAIS-IV cognitive domain scores.
Correlations between genetically predicted C4 copy number structural alleles (C4 haplotype groups) and mRNA transcript levels were estimated by linear regression analysis, as previously reported [40]. The significance threshold was set at p < 0.05. All analyses were performed using R version 4.1.2.

Results

Demographic and clinical characteristics

In the present study, we included a total of 73 FEP cases and 48 mentally healthy volunteers. Detailed demographic and clinical information for the FEP group is presented in Supplementary Table S1. No significant differences were observed with regard to gender (χ2 = 0.515; p = 0.473) or age (Mann-Whitney test p = 0.205) between the FEP and healthy groups. All FEP cases were antipsychotic-naïve at the time of inclusion in the study and 80% were hospitalized. According to the ICD-10 diagnostic criteria, 88% of cases received a schizophrenia (SZ) diagnosis (F20 ICD-10 code). The remaining cases (12%) were classified as psychosis-spectrum disorders (F23 or F28 ICD-10 codes; Supplementary Table S1).

C4A and CSMD1 mRNA expression levels in peripheral blood

The relative expression of C4A and CSMD1 mRNA levels in PBMCs was compared between FEP cases and healthy volunteers. As shown in Fig. 1, we observed that C4A was marginally over-expressed in FEP cases; however, this difference did not reach statistical significance (one-sided Mann-Whitney test p = 0.132). CSMD1 gene expression was found significantly reduced among FEP cases compared to healthy volunteers (1.4-fold change; one-sided Mann-Whitney test p = 0.004). Female FEP cases showed a trend toward higher C4A expression levels compared to male cases (Mann-Whitney test p = 0.067), whereas this association was not present in healthy participants (Mann-Whitney test p = 0.749). Sex differences were not observed for CSMD1 expression levels in either the FEP or the healthy group. Furthermore, all the above associations remained unchanged after adjustment for ICD-10-based diagnostic classifications. Within-group analyses revealed a significant positive correlation between C4A and CSMD1 mRNA expression levels in healthy volunteers (rho = 0.40, p = 0.005), yet this relationship was not detected in FEP cases (rho = 0.16, p = 0.197).

C4 structural genetic variation correlates with C4A mRNA levels

We further examined the correlation between C4A/C4B locus structural alleles (haplotype groups: BS, AL-BS, AL-BL, AL-AL) and measured mRNA levels in PBMCs, in an effort to validate previous findings in the human brain [11] demonstrating an association between genetically predicted C4A/C4B haplotype status and gene expression. Our results indicated that among FEP cases (n = 71), carriers of the SZ-risk AL-AL haplotype showed higher C4A mRNA levels (β = 0.26; p = 0.02) (Fig. 2), whereas no association was found with CSMD1 mRNA levels (β = −0.03; p = 0.817), as expected. In agreement with recent evidence from an independent FEP study [22], we observed that the AL-BL haplotype group was the most common among FEP cases (56%), compared to a much lower frequency (41%) reported within healthy individuals in the original study by Sekar et al. (2016). Likewise, the SZ low-risk BS haplogroup (7% frequency in the Sekar et al. study) reached a comparable frequency in this FEP sample (8.2%).
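For illustration, the following is a minimal sketch of the group-comparison and correlation tests named in the Statistical analyses section, using SciPy. The arrays are randomly generated placeholders, not the study data, and the direction of the one-sided alternative depends on how the ΔCt values are transformed (higher ΔCt means lower expression).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder relative-expression values (log-transformed dCt); not real data
csmd1_fep = rng.normal(loc=0.9, scale=0.3, size=73)   # 73 FEP cases
csmd1_hv  = rng.normal(loc=1.0, scale=0.3, size=48)   # 48 healthy volunteers
c4a_hv    = rng.normal(loc=1.0, scale=0.3, size=48)

# One-sided Mann-Whitney U test of lower CSMD1 expression in FEP cases
u_stat, p_mw = stats.mannwhitneyu(csmd1_fep, csmd1_hv, alternative="less")

# Spearman rank correlation between C4A and CSMD1 within one group
rho, p_rho = stats.spearmanr(c4a_hv, csmd1_hv)

print(f"Mann-Whitney U={u_stat:.1f}, one-sided p={p_mw:.3f}")
print(f"Spearman rho={rho:.2f}, p={p_rho:.3f}")
```

The age- and gender-adjusted associations with PANSS and WAIS-IV scores described in the text would, in the same spirit, be ordinary linear regressions with age and gender entered as covariates.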
Associations with symptom severity and cognitive performance

Within-group analyses were applied to evaluate the potential impact of C4A and CSMD1 gene expression on PANSS subscale scores at admission, as well as on cognitive performance in FEP cases. As shown in Fig. 4, significant positive correlations were observed between C4A mRNA expression levels and PANSS general psychopathology (β = 0.29; p = 0.016) and total (β = 0.28; p = 0.019) symptom scores. With regard to cognitive performance, increased CSMD1 mRNA levels were associated with better performance on working memory (β = 0.27; p = 0.037). C4A mRNA and C4A serum protein levels did not correlate significantly with either PANSS baseline subscale scores or cognitive indices (Fig. 5; Supplementary Table S2).

Fig. 4 Association between C4A and CSMD1 peripheral gene expression levels and PANSS subscale severity scores at admission among patients with FEP (*two-sided p < 0.05)

Fig. 5 Association between C4A and CSMD1 peripheral gene expression levels and WAIS-IV-assessed neuropsychological indexes in patients with FEP (*two-sided p < 0.05)

Discussion

The results of the present study add to a growing body of evidence implicating aberrant immune/inflammatory responses in patients with SZ and in those experiencing FEP [1, 4-6, 26, 41]. We provide evidence for common transcriptional regulation of C4A and CSMD1 among healthy individuals, suggesting co-expression of the two genes and related biological functions in the C4A-dependent complement pathway. Importantly, our results indicate C4A cascade alterations in un-medicated cases with FEP, suggesting an abnormal innate immune reaction during the early course of psychosis [5,26]. It is stressed, though, that it is difficult to precisely identify the exact etiopathological mechanisms which induce C4 complement activation and the related immunological dysregulation in FEP. The relationship between environmental stress or adverse life events and the enhancement of immune/inflammatory responses has been documented in psychotic disorders [14,42,43], and plausibly explains the immune exacerbations following an acute episode of psychosis [13]. Moreover, up-regulation of the complement system has been observed in rodents following exposure to stressful conditions [44].

The observed gene expression differences in FEP patients are in accordance with previous studies indicating higher C4/C4A expression levels in postmortem brain tissue [11, 16, 17] and significantly lower CSMD1 expression levels in peripheral blood from SZ patients [30]. A trend for increased C4A expression levels in PBMCs, although not statistically significant, has been observed in an earlier study which examined C4A blood mRNA levels in patients with SZ and psychotic bipolar disorder under medication with antipsychotics [21]. We argue that antipsychotic treatment could potentially impact gene expression levels; therefore, future investigations with larger sample sizes and well-characterized un-medicated patients are needed to delineate whether increased C4A expression stems from illness-specific etiopathological mechanisms or from unidentified non-specific drug-induced phenomena. This is, to the best of our knowledge, the first study reporting C4A isotype mRNA expression levels in peripheral blood in relation to genetically predicted C4A gene expression.
In particular, we validate the positive correlation previously seen in brain postmortem tissue between SZ risk increasing C4A structural variation (i.e., AL-AL haplotype) and experimentally determined mRNA levels in immune cells [11]. The above observation supports the regulatory effect of distinct C4A copy number variants on gene transcription and has significant implications for future genetic studies aiming to estimate C4A expression profiles and investigate the contribution of deregulated C4/ C4A expression on psychotic disorders. It is of interest that prior evidence has shown that genetically predicted C4A expression associates with cognitive impairment in SZ patients and differences in brain imaging measures, such as cortical activation and thickness in healthy individuals [20,40]. In this work, there was no indication for a correlation between either predicted C4 structural alleles or measured mRNA levels and cognitive performance among FEP cases. Notably, the most frequent C4 structural variant, that is AL-BL haplogroup, was associated with higher SZ risk in the original study by Sekar et al. (2016) and was found to predispose to microbial infections in SZ patients [45], which naturally could activate innate immune responses and complement system up-regulation. In addition, we provide evidence that FEP cases are characterized by highly elevated serum C4A protein levels, compared to mentally healthy volunteers. This observation confirms the results of an earlier study reporting increased C4A levels in SZ patients measured by highly sensitive mass spectrometry methodology [23]. It is mentioned, however, that most studies so far have estimated peripheral total C4 protein levels in cases diagnosed with FEP or SZ reporting no significant differences in serum [24,25,46], but elevated levels in cerebrospinal fluid [24]. Our findings outline that a specific dysregulation of C4A isotype levels may exist in psychosis, at least in the very early stages of the illness, which could not be detected using methods that estimate total amount of complement C4 (i.e., C4A and C4B levels) [6]. It remains to be determined in future studies whether the observed increase in C4A protein levels is attributed to underlying biochemical alterations with considerable biological impact on the disposition to psychosis. To this perspective, findings from FEP cases followed-up for an entire year showed that higher baseline C4 serum levels may represent an early indicator of poor treatment response [26], suggesting that peripheral C4 concentration could potentially inform clinicians for optimal treatment intervention in individuals with FEP. Our analyses did not reveal significant correlation between C4A serum protein and mRNA levels in this population, which is possibly attributed to complicated cellular mechanisms involved in gene transcription regulation, mRNA processing and protein synthesis rates [47,48]. Moreover, we acknowledge as an additional limiting factor the slightly smaller number of FEP cases included in the C4A protein assays that might have reduced the statistical power of the analysis. Likewise, C4 haplogroup analysis among FEP cases did not reveal an association between C4A copy number status and C4A protein levels in serum, as opposed to an earlier study which examined a limited number of plasma samples from medicated SZ patients [22]. 
Prompted by previous reports supporting a relationship between C4A and CSMD1 gene expression deviations and symptom dimensions as well as cognitive function [20,21,30,49], we attempted to assess the relationships with symptom severity and cognitive performance in cases with FEP. In our sample, nominally significant associations were noted between C4A peripheral mRNA levels and PANSS general psychopathology as well as total symptom severity, which corroborates prior evidence indicating a relationship between higher C4A expression levels in PBMCs and more severe psychotic symptomatology [21], although it should be stressed that our limited sample size does not permit definite conclusions. Further, recent findings from a FEP cohort did not observe correlations between genetically predicted C4 expression, estimated from C4 haplotype status, and symptom severity [22], implying that additional evidence is essential to elucidate whether altered C4/C4A expression could serve as a meaningful clinical biomarker [50]. Importantly, our findings indicate that lower peripheral CSMD1 expression levels correlate with poorer performance on prefrontal-mediated cognitive domains, in particular working memory. Common genetic variation within CSMD1 has been credibly associated with SZ in recent GWAS meta-analyses [51] and implicated in human cognitive impairment as well as in reduced brain functional activation [33,52]. We postulate that lower CSMD1 expression negatively impacts cognitive functioning in FEP cases by compromising the C4-dependent complement pathway, which has been involved in synaptic pruning processes and neurodevelopmental mechanisms [11,53]. It is noteworthy that functional studies have reported that CSMD1 constitutes a complement cascade regulatory factor, acting as an inhibitor of C4-dependent cellular processes in neural tissues [28,29]. The biological relationship between the C4A and CSMD1 components and their role in shared biochemical processes of the complement system is strengthened by studies examining psychosis-related behavioral and cognitive phenotypes in mice. Specifically, over-expression of C4/C4A components might contribute to the enhancement of anxiety-like behaviors, social impairment and working memory deficits [19,54], whereas depletion of the CSMD1 genetic locus induces emotional and cognitive defects [55]. Therefore, it becomes evident that the C4A and CSMD1 complement factors likely operate in common cellular pathways with opposing functions [29]. From this perspective, the observed inverse direction of gene expression patterns for C4A and CSMD1 in FEP cases denotes a transcriptional dysregulation of the two complement-related genes at psychosis onset and/or in a drug-naïve state. In conclusion, this study suggests a hyperactivity of the C4/C4A complement pathway in un-medicated individuals diagnosed with FEP, predominantly SZ [12,56]. Furthermore, the results add further support to an altered immune/inflammatory response in at least a subset of individuals with FEP, which might contribute to the development of psychotic symptoms [5,6,9]. The exact biological importance of complement pathway dysregulation in the occurrence of psychosis is still not well understood [50], although recent evidence has linked complement system alterations to early-life brain synaptic abnormalities and neurodevelopmental deficits, perhaps related to increased SZ predisposition [53,56,57].
Additional research efforts aiming to characterize the immune/inflammatory profile early in the course of psychotic illness may shed more light on the influence of complement factors in the pathogenesis of SZ and related psychosis-spectrum disorders [56].

Acknowledgements The authors are thankful to the participants of the present study and their family members for the valuable information they kindly provided.

Author contributions AH, MG and NCS conceived and designed the study. SF, CN and KK participated in recruiting the subjects. SF, PS, AEN, KK and NCS participated in clinical and neuropsychological assessments and in data acquisition and management. AH and MG participated in data analysis. AH and MG wrote the first draft of the paper. All authors read and approved the final manuscript.

Funding This study was supported by research funding from the Theodor-Theohari Cozzika Foundation (Athens, Greece).

Conflict of interest The authors report no biomedical financial interests or potential conflicts of interest.

Ethical approval The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Clinical Research Ethics committee of Eginition University Hospital, National and Kapodistrian University of Athens.

Consent to participate All participants signed an informed consent prior to their inclusion in the study.

Consent for publication All participants signed an informed consent regarding publishing their data.
Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction

Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets, which waste energy and worsen traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values and near time points via LSTM), spatial view (modeling local spatial correlation via a local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate the effectiveness of our approach over state-of-the-art methods.

Introduction

Traffic is the pulse of a city that impacts the daily life of millions of people. One of the most fundamental questions for future smart cities is how to build an efficient transportation system. To address this question, a critical component is an accurate demand prediction model. The better we can predict demand on travel, the better we can pre-allocate resources to meet the demand and avoid unnecessary energy consumption. Currently, with the increasing popularity of taxi requesting services such as Uber and Didi Chuxing, we are able to collect massive demand data at an unprecedented scale. The question of how to utilize big data to better predict traffic demand has drawn increasing attention in AI research communities. In this paper, we study the taxi demand prediction problem: how to predict the number of taxi requests for a region at a future timestamp using historical taxi request data.

In the literature, there has been a long line of studies in traffic data prediction, including traffic volume, taxi pick-ups, and traffic in/out flow volume. To predict traffic, time series prediction methods have frequently been used. Representatively, autoregressive integrated moving average (ARIMA) and its variants have been widely applied for traffic prediction (Li et al. 2012; Moreira-Matias et al. 2013; Shekhar and Williams 2008). Based on time series prediction methods, recent studies further consider spatial relations (Deng et al. 2016; Tong et al. 2017) and external context data (e.g., venue, weather, and events) (Pan, Demiryurek, and Shahabi 2012; Wu, Wang, and Li 2016).
While these studies show that prediction can be improved by considering various additional factors, they still fail to capture the complex nonlinear spatial-temporal correlations. Recent advances in deep learning have enabled researchers to model complex nonlinear relationships and have shown promising results in computer vision and natural language processing (LeCun, Bengio, and Hinton 2015). This success has inspired several attempts to use deep learning techniques on traffic prediction problems. Recent studies (Zhang, Zheng, and Qi 2017; Zhang et al. 2016) propose to treat the traffic in a city as an image and the traffic volume for a time period as pixel values. Given a set of historical traffic images, the model predicts the traffic image for the next timestamp, with a convolutional neural network (CNN) applied to model the complex spatial correlation. Yu et al. (2017) propose to use Long Short-Term Memory (LSTM) networks to predict loop sensor readings and show that the proposed LSTM model is capable of modeling complex sequential interactions. These pioneering attempts show superior performance compared with previous methods based on traditional time series prediction. However, none of them consider the spatial relation and the temporal sequential relation simultaneously.

In this paper, we harness the power of CNN and LSTM in a joint model that captures the complex nonlinear relations of both space and time. However, we cannot simply apply CNN and LSTM to the demand prediction problem: if we treat the demand over the entire city as an image and apply a CNN to it, we fail to achieve the best result. We find that including weakly correlated regions when predicting a target region actually hurts performance. To address this issue, we propose a novel local CNN method which only considers spatially nearby regions. This local CNN method is motivated by the First Law of Geography, "near things are more related than distant things" (Tobler 1970), and it is also supported by observations from real data that demand patterns are more correlated for spatially close regions.

While the local CNN method filters out weakly correlated remote regions, it fails to consider the case in which two locations are spatially distant but similar in their demand patterns (i.e., in the semantic space). For example, residential areas may have high demands in the morning when people transit to work, and commercial areas may have high demands on weekends. We propose to use a graph of regions to capture this latent semantic, where an edge represents the similarity of demand patterns for a pair of regions. Regions are then encoded into vectors via a graph embedding method, and such vectors are used as context features in the model. In the end, a fully connected neural network component is used for prediction.

Our method is validated on large-scale real-world taxi demand data from Didi Chuxing. The dataset contains taxi demand requests through the Didi service in the city of Guangzhou in China over a two-month span, with about 300,000 requests per day on average. We conducted extensive experiments to compare with state-of-the-art methods and have demonstrated the superior performance of our proposed method. In summary, our contributions are summarized as follows:

• We proposed a unified multi-view model that jointly considers the spatial, temporal, and semantic relations.

• We proposed a local CNN model that captures local characteristics of regions in relation to their neighbors.
• We constructed a region graph based on the similarity of demand patterns in order to model correlated but spatially distant regions. The latent semantics of regions are learned through graph embedding.

• We conducted extensive experiments on a large-scale taxi request dataset from Didi Chuxing. The results show that our method consistently outperforms the competing baselines.

Related Work

Traffic prediction problems can involve predicting any traffic-related data, such as traffic volume (collected from GPS or loop sensors), taxi pick-ups or drop-offs, traffic flow, and taxi demand (our problem). The problem formulation for these different types of traffic data is the same: essentially, the aim is to predict a traffic-related value for a location at a timestamp. In this section, we discuss the related work on traffic prediction problems.

The traditional approach is to use time series prediction methods. Representatively, autoregressive integrated moving average (ARIMA) and its variants have been widely used for traffic prediction (Shekhar and Williams 2008; Li et al. 2012; Moreira-Matias et al. 2013). Recent studies further explore the utility of external context data, such as venue types, weather conditions, and event information (Pan, Demiryurek, and Shahabi 2012; Wu, Wang, and Li 2016; Tong et al. 2017). In addition, various techniques have been introduced to model spatial interactions. For example, Deng et al. (2016) used matrix factorization on road networks to capture correlations among road-connected regions for predicting traffic volume. Several studies (Tong et al. 2017; Idé and Sugiyama 2011; Zheng and Ni 2013) also propose to smooth the prediction differences for nearby locations and time points via regularization, exploiting the close space and time dependency; these studies assume that traffic in nearby locations should be similar. However, all of these methods are based on time series prediction and fail to model the complex nonlinear relations of space and time.

Recently, the success of deep learning in computer vision and natural language processing (LeCun, Bengio, and Hinton 2015; Krizhevsky, Sutskever, and Hinton 2012) has motivated researchers to apply deep learning techniques to traffic prediction problems. For instance, one study designed a neural network framework using context data from multiple sources to predict the gap between taxi supply and demand; the method uses extensive features but does not model the spatial and temporal interactions. A line of studies applied CNN to capture spatial correlation by treating the entire city's traffic as images. For example, Ma et al. (2017) utilized CNN on images of traffic speed for the speed prediction problem, and Zhang et al. (2016) and Zhang, Zheng, and Qi (2017) proposed to use residual CNN on images of traffic flow. These methods simply apply CNN to the whole city and use all regions for prediction; we observe that utilizing irrelevant regions (e.g., remote regions) for prediction of the target region might actually hurt the performance. In addition, while these methods use traffic images of historical timestamps for prediction, they do not explicitly model the temporal sequential dependency. Another line of studies uses LSTM for modeling sequential dependency.
Yu et al. (2017) proposed to apply the Long Short-Term Memory (LSTM) network and an autoencoder to capture the sequential dependency for predicting traffic under extreme conditions, particularly for peak-hour and post-accident scenarios. However, they do not consider the spatial relation. In summary, the biggest difference between our proposed method and the literature is that we consider both the spatial relation and the temporal sequential relation in a joint deep learning model.

Preliminaries

In this section, we first fix some notations and define the taxi demand problem. We follow previous studies (Zhang, Zheng, and Qi 2017) and define the set of non-overlapping locations $L = \{l_1, l_2, \dots, l_i, \dots, l_N\}$ as rectangular partitions of a city, and the set of time intervals as $I = \{I_0, I_1, \dots, I_t, \dots, I_T\}$; 30 minutes is used as the length of a time interval. Alternatively, more sophisticated partitions can also be used, such as partitioning by road network (Deng et al. 2016) or hexagonal partitioning. However, this is not the focus of this paper, and our methodology still applies. Given the set of locations $L$ and time intervals $I$, we further define the following.

Taxi request: A taxi request $o$ is defined as a tuple $(o.t, o.l)$, where $o.t$ is the timestamp and $o.l$ the location of the request.

Demand: The demand is defined as the number of taxi requests at one location per time point, i.e., $y^i_t = |\{o : o.t \in I_t \wedge o.l \in l_i\}|$, where $|\cdot|$ denotes the cardinality of the set. For simplicity, we use the index of time intervals $t$ to represent $I_t$, and the index of locations $i$ to represent $l_i$, for the rest of the paper.

Demand prediction problem: The demand prediction problem aims to predict the demand at time interval $t+1$, given the data until time interval $t$. In addition to historical demand data, we can also incorporate context features such as temporal features, spatial features, and meteorological features (refer to the Dataset Description section for more details). We denote the context features for a location $i$ and a time point $t$ as a vector $e^i_t \in \mathbb{R}^r$, where $r$ is the number of features. Therefore, our final goal is to predict $y^L_{t+1} = \mathcal{F}(Y^L_{t-h,\dots,t}, E^L_{t-h,\dots,t})$, where $Y^L_{t-h,\dots,t}$ are historical demands and $E^L_{t-h,\dots,t}$ are context features for all locations $L$ for time intervals from $t-h$ to $t$, with $t-h$ denoting the starting time interval. We define our prediction function $\mathcal{F}(\cdot)$ on all regions and previous time intervals up to $t-h$ to capture the complex spatial and temporal interactions among them.

Proposed DMVST-Net Framework

In this section, we provide details of our proposed Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework, i.e., our prediction function $\mathcal{F}$. Figure 1 shows the architecture of our proposed method. Our proposed model has three views: spatial, temporal, and semantic.

Spatial View: Local CNN

As mentioned earlier, including regions with weak correlations to predict a target region actually hurts the performance. To address this issue, we propose a local CNN method which only considers spatially nearby regions. Our intuition is motivated by the First Law of Geography (Tobler 1970): "near things are more related than distant things". As shown in Figure 1(a), at each time interval $t$ we treat one location $i$ with its surrounding neighborhood as one $S \times S$ image (e.g., a $7 \times 7$ image in Figure 1(a)) having one channel of demand values (with $i$ at the center of the image), where the size $S$ controls the spatial granularity. We use zero padding for locations at the boundaries of the city.
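To make the demand definition and this local-image construction concrete, here is a minimal Python sketch, assuming the 20 × 20 grid and 30-minute intervals used in the experiments; the function and variable names are illustrative, not taken from the original implementation.

```python
import numpy as np

def demand_tensor(requests, n_rows=20, n_cols=20, n_intervals=48 * 54):
    """Count taxi requests per region and 30-minute interval.

    `requests` is an iterable of (interval_index, row, col) tuples, i.e. the
    (o.t, o.l) pairs from the text with the location mapped to grid indices.
    The default interval count assumes 48 intervals/day over 54 days
    (02/01/2017 to 03/26/2017).
    """
    y = np.zeros((n_intervals, n_rows, n_cols), dtype=np.float32)
    for t, r, c in requests:
        y[t, r, c] += 1.0
    return y

def local_image(y, t, r, c, S=7):
    """S x S crop (S odd) of the demand map at interval t, centered on region
    (r, c), with zero padding at the city boundary."""
    pad = S // 2
    padded = np.pad(y[t], pad_width=pad, mode="constant")  # zeros outside the city
    return padded[r:r + S, c:c + S]                        # center lands on (r, c)
```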
As a result, we have an image as a tensor (having one channel) $Y^i_t \in \mathbb{R}^{S \times S \times 1}$ for each location $i$ and interval $t$. The local CNN takes $Y^i_t$ as input and, after $K$ convolutional layers followed by flattening and a fully connected layer, outputs a spatial representation $s^i_t \in \mathbb{R}^d$ for location $i$ at time interval $t$.

Temporal View: LSTM

The temporal view models sequential relations in the demand time series. We propose to use a Long Short-Term Memory (LSTM) network as our temporal view component. LSTM (Hochreiter and Schmidhuber 1997) is a type of neural network structure which provides a good way to model sequential dependencies by recursively applying a transition function to the hidden state vector of the input. It was proposed to address the exploding or vanishing gradients of the classic Recurrent Neural Network (RNN) in long-sequence training (Hochreiter et al. 2001). LSTM learns sequential correlations stably by maintaining a memory cell $c_t$ in time interval $t$, which can be regarded as an accumulation of previous sequential information. In each time interval, the LSTM takes an input $g^i_t$ together with $h_{t-1}$ and $c_{t-1}$, and the information is accumulated to the memory cell when the input gate $i^i_t$ is activated. In addition, LSTM has a forget gate $f^i_t$: if the forget gate is activated, the network can forget the previous memory cell $c^i_{t-1}$. The output gate $o^i_t$ controls the output of the memory cell. In this study, the architecture of the LSTM follows the standard formulation:

$i^i_t = \sigma(W_i g^i_t + U_i h^i_{t-1} + b_i)$,
$f^i_t = \sigma(W_f g^i_t + U_f h^i_{t-1} + b_f)$,
$o^i_t = \sigma(W_o g^i_t + U_o h^i_{t-1} + b_o)$,
$c^i_t = f^i_t \circ c^i_{t-1} + i^i_t \circ \tanh(W_c g^i_t + U_c h^i_{t-1} + b_c)$,
$h^i_t = o^i_t \circ \tanh(c^i_t)$,

where $\circ$ denotes element-wise multiplication and the $W$, $U$, $b$ are learnable parameters. As Figure 1(b) shows, the temporal component takes representations from the spatial view and concatenates them with context features. More specifically, we define

$g^i_t = s^i_t \oplus e^i_t$,

where $\oplus$ denotes the concatenation operator; therefore, $g^i_t \in \mathbb{R}^{r+d}$.

Semantic View: Structural Embedding

Intuitively, locations sharing similar functionality may have similar demand patterns; e.g., residential areas may have a high number of demands in the morning when people transit to work, and commercial areas may expect high demands on weekends. Similar regions may not necessarily be close in space. Therefore, we construct a graph of locations representing functional (semantic) similarity among regions. We define the semantic graph of locations as $G = (V, E, D)$, where the set of locations $L$ are the nodes $V = L$, $E \subseteq V \times V$ is the edge set, and $D$ is the set of similarities on all the edges. We use Dynamic Time Warping (DTW) to measure the similarity $\omega_{ij}$ between node (location) $i$ and node (location) $j$:

$\omega_{ij} = \exp(-\alpha\, \mathrm{DTW}(i, j))$,

where $\alpha$ is the parameter that controls the decay rate of the distance (in this paper, $\alpha = 1$), and $\mathrm{DTW}(i, j)$ is the dynamic time warping distance between the demand patterns of the two locations. We use the average weekly demand time series as the demand patterns; the average is computed on the training data in the experiments. The graph is fully connected because every two regions can be reached. In order to encode each node into a low-dimensional vector while maintaining the structural information, we apply a graph embedding method to the graph. For each node $i$ (location), the embedding method outputs an embedded feature vector $m_i$. In addition, in order to co-train the embedded $m_i$ with our whole network architecture, we feed the feature vector $m_i$ to a fully connected layer, defined as

$\hat m_i = W_{fe}\, m_i + b_{fe}$,

where $W_{fe}$ and $b_{fe}$ are both learnable parameters. In this paper, we use LINE for generating the embeddings (Tang et al. 2015).

Prediction Component

Recall that our goal is to predict the demand at $t+1$ given the data till $t$. We join the three views together by concatenating $\hat m_i$ with the output $h^i_t$ of the LSTM:

$q^i_t = h^i_t \oplus \hat m_i$.

Note that the output of the LSTM $h^i_t$ contains the effects of both the temporal and the spatial view.
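Before turning to the prediction layer, the sketch below shows one way the three views can be wired together in Keras. The values $S = 9$, $h = 8$, $K = 3$ convolutional layers with 64 filters of size 3 × 3, and $d = 64$ come from the Preprocessing and Parameters section below; the number of context features `r`, the LSTM width (128), and the embedding dimension (32) are assumptions, and the whole block is an illustrative reconstruction rather than the authors' released implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

S, h, r, d, m_dim = 9, 8, 10, 64, 32   # r and m_dim are illustrative choices

images  = layers.Input(shape=(h, S, S, 1))   # local demand images for h intervals
context = layers.Input(shape=(h, r))         # context features e_t for h intervals
embed   = layers.Input(shape=(m_dim,))       # pre-trained LINE embedding m_i

# Spatial view: the same local CNN applied at every time step
cnn = models.Sequential([
    layers.Conv2D(64, 3, padding="same", activation="relu",
                  input_shape=(S, S, 1)),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(d, activation="relu"),      # s_t: spatial representation
])
s = layers.TimeDistributed(cnn)(images)      # (batch, h, d)

# Temporal view: LSTM over g_t = s_t concatenated with e_t
g = layers.Concatenate()([s, context])       # (batch, h, d + r)
h_t = layers.LSTM(128)(g)                    # (batch, 128)

# Semantic view: embed m_i through a fully connected layer, then join views
m_hat = layers.Dense(d)(embed)
q = layers.Concatenate()([h_t, m_hat])
y_hat = layers.Dense(1, activation="sigmoid")(q)   # normalized demand in [0, 1]

model = models.Model([images, context, embed], y_hat)
```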
Then we feed $q^i_t$ to a fully connected network to get the final prediction value $\hat y^i_{t+1}$ for each region. We define our final prediction function as

$\hat y^i_{t+1} = \sigma(W_{ff}\, q^i_t + b_{ff})$,

where $W_{ff}$ and $b_{ff}$ are learnable parameters and $\sigma(x)$ is the sigmoid function, $\sigma(x) = 1/(1+e^{-x})$. The output of our model is in $[0, 1]$, as the demand values are normalized; we later denormalize the prediction to get the actual demand values.

Loss function

In this section, we provide details about the loss function used for jointly training our proposed model (a code sketch of this loss and of the evaluation metrics is given after the Preprocessing and Parameters section below). The loss function we used is defined as

$\mathcal{L}(\theta) = \frac{1}{\xi}\sum_{i}\big(y^i_{t+1}-\hat y^i_{t+1}\big)^2 + \gamma\Big(\frac{1}{\xi}\sum_{i}\frac{|y^i_{t+1}-\hat y^i_{t+1}|}{y^i_{t+1}}\Big)^2, \qquad (9)$

where $\theta$ are all learnable parameters in DMVST-Net, $\gamma$ is a hyperparameter, and $\xi$ is the number of training samples. The loss function consists of two parts: a mean square loss and the square of a mean absolute percentage loss. In practice, mean square error is more sensitive to predictions of large values; to avoid the training being dominated by large-value samples, we additionally minimize the mean absolute percentage loss. Note that, in the experiments, all compared regression methods use the same loss function as defined in Eq. (9) for a fair comparison. The training pipeline is outlined in Algorithm 1: repeat {randomly select a batch of instances $\Omega_{bt}$ from $\Omega$; optimize $\theta$ by minimizing the loss function Eq. (9) with $\Omega_{bt}$} until the stopping criterion is met. We use Adam (Kingma and Ba 2014) for optimization, and we use TensorFlow and Keras (Chollet and others 2015) to implement our proposed model.

Experiment

Dataset Description

In this paper, we use a large-scale online taxi request dataset collected from Didi Chuxing, one of the largest online car-hailing companies in China. The dataset contains taxi requests from 02/01/2017 to 03/26/2017 for the city of Guangzhou. There are 20 × 20 regions in our data, each of size 0.7 km × 0.7 km, with about 300,000 requests each day on average. The context features used in our experiments are similar to the types of features used in (Tong et al. 2017): temporal features (e.g., the average demand value in the last four time intervals), spatial features (e.g., longitude and latitude of the region center), meteorological features (e.g., weather condition), and event features (e.g., holidays). The data from 02/01/2017 to 03/19/2017 (47 days) is used for training, and the data from 03/20/2017 to 03/26/2017 (7 days) is used for testing. We use half an hour as the length of the time interval. When testing the prediction result, we use the previous 8 time intervals (i.e., 4 hours) to predict the taxi demand in the next time interval. In our experiments, we filter out samples with demand values less than 10. This is a common practice in industry, since in real-world applications people do not care about such low-demand scenarios.

Evaluation Metric

We use the Mean Absolute Percentage Error (MAPE) and the Root Mean Square Error (RMSE) to evaluate our algorithm, defined as follows:

$\mathrm{MAPE} = \frac{1}{\xi}\sum_{i=1}^{\xi}\frac{|\hat y^i_{t+1}-y^i_{t+1}|}{y^i_{t+1}}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{\xi}\sum_{i=1}^{\xi}\big(\hat y^i_{t+1}-y^i_{t+1}\big)^2},$

where $\hat y^i_{t+1}$ and $y^i_{t+1}$ are the predicted and real demand of region $i$ for time interval $t+1$, and $\xi$ is the total number of samples.

Methods for Comparison

We compared our model with the following methods, tuning the parameters for all of them and reporting the best performance.

• Historical average (HA): Historical average predicts the demand using the average of previous demands at the given location in the same relative time interval (i.e., the same time of the day).
• Autoregressive integrated moving average (ARIMA): ARIMA is a well-known model for forecasting time series which combines moving average and autoregressive components.

• Linear regression (LR): We compare our method with different versions of linear regression: ordinary least squares regression (OLSR), Ridge regression (i.e., with $\ell_2$-norm regularization), and Lasso (i.e., with $\ell_1$-norm regularization).

• ST-ResNet (Zhang, Zheng, and Qi 2017): ST-ResNet is a deep learning based approach for traffic prediction. The method constructs a city's traffic density maps at different times as images, and CNN is used to extract features from the historical images.

We used the same context features for all regression methods above. For fair comparison, all methods (except ARIMA and HA) use the same loss function as our method, defined in Eq. (9). We also studied the effect of the different view components proposed in our method.

• Temporal view: For this variant, we used only the LSTM, with context features as input. Note that if we do not use any context features but only the demand value of the last timestamp as input, the LSTM does not perform well; it is necessary to use context features to enable the LSTM to model the complex sequential interactions of these features.

• Temporal view + Semantic view: This method captures both temporal dependency and semantic information.

• Temporal view + Spatial (Neighbors) view: In this variant, we used the demand values of nearby regions at time interval $t$ as $\hat s^i_t$ and combined them with context features as the input of the LSTM. We wanted to demonstrate that simply using neighboring regions as features cannot model the complex spatial relations as well as our proposed local CNN method.

• Temporal view + Spatial (LCNN) view: This variant considers both the temporal view and the local spatial view. The spatial view uses the proposed local CNN for modeling the neighboring relation. Note that when our local CNN uses a local window large enough to cover the whole city, it is the same as the global CNN method. We studied the performance for different parameters and show that if the size is too large, the performance is worse, which indicates the importance of locality.

• DMVST-Net: Our proposed model, which combines the spatial, temporal, and semantic views.

Preprocessing and Parameters

We normalized the demand values for all locations to $[0, 1]$ using Max-Min normalization on the training set. We used one-hot encoding for discrete features (e.g., holidays and weather conditions) and Max-Min normalization to scale the continuous features (e.g., the average demand value in the last four time intervals). As our method outputs a value in $[0, 1]$, we applied the inverse of the Max-Min transformation obtained on the training set to recover the demand value. All experiments were run on a cluster with four NVIDIA P100 GPUs. The size of each neighborhood considered was set to $9 \times 9$ (i.e., $S = 9$), which corresponds to 6 km × 6 km rectangles. For the spatial view, we set $K = 3$ (number of layers), $\tau = 3 \times 3$ (size of filter), $\lambda = 64$ (number of filters used), and $d = 64$ (dimension of the output). For the temporal component, we set the sequence length $h = 8$.
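As promised above, here is a minimal sketch of the Eq. (9)-style training objective and of the MAPE/RMSE evaluation metrics; the value of gamma is an illustrative assumption, and the loss assumes the normalized demand values are strictly positive (the demand filter of at least 10 requests keeps the raw values away from zero).

```python
import numpy as np
import tensorflow as tf

def dmvst_loss(y_true, y_pred, gamma=0.5):
    """Eq. (9)-style objective: mean squared error plus gamma times the
    square of the mean absolute percentage error (gamma is illustrative).
    Usable as a Keras loss, e.g. model.compile(optimizer="adam", loss=dmvst_loss)."""
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    mape = tf.reduce_mean(tf.abs(y_true - y_pred) / y_true)  # assumes y_true > 0
    return mse + gamma * tf.square(mape)

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_pred - y_true) / y_true)

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
```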
Performance Comparison

Comparison with state-of-the-art methods. Table 1 shows the performance of the proposed method compared to all other competing methods. DMVST-Net achieves the lowest MAPE (0.1616) and the lowest RMSE (9.642) among all the methods, a 12.17% (MAPE) and 3.70% (RMSE) relative improvement over the best-performing baseline. More specifically, HA and ARIMA perform poorly (with a MAPE of 0.2513 and 0.2215, respectively), as they rely purely on historical demand values for prediction. The regression methods (OLSR, LASSO, Ridge, MLP and XGBoost) further consider context features and therefore achieve better performance. Note that the regression methods use the same loss function as our method, defined in Eq. (9); however, they do not model the temporal and spatial dependency, and consequently our proposed method significantly outperforms them. Furthermore, our proposed method achieves 18.01% (MAPE) and 6.37% (RMSE) relative improvement over ST-ResNet. Compared with ST-ResNet, our proposed method further utilizes LSTM to model the temporal dependency while at the same time considering context features; in addition, our use of the local CNN and the semantic view better captures the correlations among regions.

Comparison with variants of our proposed method. Table 2 shows the performance of DMVST-Net and its variants. First, we can see that both Temporal view + Spatial (Neighbor) view and Temporal view + Spatial (LCNN) view achieve a lower MAPE than the temporal view alone (a reduction of 0.63% and 6.10%, respectively). The result demonstrates the effectiveness of incorporating the spatial view, and in particular of the proposed local CNN.

Performance on Different Days

Figure 2 shows the performance of different methods on different days of the week. Due to space limitations, we only show MAPE here; we reach the same conclusions for RMSE. We exclude the results of HA and ARIMA, as they perform poorly, and we show Ridge regression results as they perform best among the linear regression models. The figure shows that our proposed method DMVST-Net outperforms the other methods consistently on all seven days, demonstrating that our method is robust. Moreover, we can see that predictions on weekends are generally worse than on weekdays. Since the average number of demand requests is similar (45.42 and 43.76 for weekdays and weekends, respectively), we believe the prediction task is harder for weekends, where demand patterns are less regular. For example, we can expect residential areas to have high demand in the morning hours on weekdays, as people transit to work; such regular patterns are less likely on weekends. To evaluate the robustness of our method, we look at the relative increase in prediction error on weekends as compared to weekdays, defined as $|\bar w_k - \bar w_d|/\bar w_d$, where $\bar w_d$ and $\bar w_k$ are the average prediction errors on weekdays and weekends, respectively. The results are shown in Table 3. For our proposed method, the relative increase in error is the smallest, at 4.04%. At the same time, the variant considering the temporal view only (LSTM) has a relative increase in error of 4.77%, while the increase is more than 10% for Ridge regression, MLP, and XGBoost. The more stable performance of the LSTM can be attributed to its modeling of the temporal dependency. ST-ResNet has a more consistent performance (relative increase in error of 4.41%), as the method further models the spatial dependency. Finally, our proposed method is more robust than ST-ResNet.

Influence of Sequence Length for LSTM

In this section, we study how the sequence length of the LSTM affects performance. Figure 3a shows the prediction error (MAPE) with respect to the length.
Influence of Sequence Length for LSTM

In this section, we study how the sequence length for LSTM affects performance. Figure 3a shows the prediction error (MAPE) with respect to the length. We can see that when the length is 4 hours, our method achieves the best performance. The decreasing trend in MAPE as the length increases up to that point shows the importance of considering the temporal dependency. Furthermore, as the length increases beyond 4 hours, the performance slightly degrades but mainly remains stable. One potential reason is that considering a longer temporal dependency requires more parameters to be learned, so training becomes harder.

Influence of Input Size for Local CNN

Our intuition was that applying the CNN locally avoids learning relations among weakly related locations. We verified that intuition by varying the input size S for the local CNN. As the input size S becomes larger, the model may fit relations over a larger area. In Figure 3b, we show the performance of our method with respect to the size of the surrounding neighborhood map. We can see that with three convolutional layers and a map size of 9 × 9, the method achieves the best performance. The prediction error increases as the size decreases to 5 × 5, possibly because locally correlated neighboring locations are not fully covered. Furthermore, the prediction error increases significantly (by more than 3.46%) as the size increases to 13 × 13 (where each area covers more than approximately 40% of the space in Guangzhou). The result suggests that locally significant correlations may be averaged out as the size increases. We also increased the number of convolutional layers to four and five, as the CNN then needs to cover a larger area. However, we observed similar trends in prediction error, as shown in Figure 3b. The input size at which the method performs best remains consistent (a map size of 9 × 9).

Conclusion and Discussion

In this paper, we proposed a novel Deep Multi-View Spatial-Temporal Network (DMVST-Net) for predicting taxi demand. Our approach integrates the spatial, temporal, and semantic views, which are modeled by the local CNN, LSTM, and semantic graph embedding, respectively. We evaluated our model on a large-scale taxi demand dataset. The experimental results show that our proposed method significantly outperforms several competing methods. As deep learning methods are often difficult to interpret, it is important, particularly for policy makers, to understand what contributes to the improvement; for future work, we plan to further investigate the sources of the performance improvement of our approach for better interpretability. In addition, since the semantic information is modeled only implicitly in this paper, we plan to incorporate more explicit information (e.g., POI information) in future work.
The Smallest SU($N$) Hadrons

If new physics contains new, heavy strongly-interacting particles belonging to irreducible representations of SU(3) different from the adjoint or the (anti)fundamental, it is a non-trivial question to calculate the minimum number of quarks/antiquarks/gluons needed to form a color-singlet bound state ("hadron") with the new particle. Here, I prove that for an SU(3) irreducible representation with Dynkin label $(p,q)$, the minimal number of quarks needed to form a product that includes the (0,0) representation is $2p+q$. I generalize this result to SU($N$), with $N>3$. I also calculate the minimal total number of quarks/antiquarks/gluons that, bound to a new particle in the $(p,q)$ representation, give a color-singlet state: $n_g=\lfloor (2p+q)/3 \rfloor$ gluons, $n_{\bar q}=\lfloor (2p+q-3n_g)/2 \rfloor$ antiquarks, and $n_q=2p+q-3n_g-2n_{\bar q}$ quarks (with the exception of the $\overline{6}\sim(0,2)$ and of the $\overline{10}\sim(0,3)$, for which 2 and 3 quarks, respectively, are needed to form the most minimal colorless bound state). Finally, I show that the possible values of the electric charge $Q_H$ of the smallest hadron $H$ containing a new particle $X$ in the $(p,q)$ representation of SU(3) and with electric charge $Q_X$ are $-(2p+q)/3\le Q_H-Q_X \le 2(2p+q)/3$.

Introduction

In Quantum Chromo-Dynamics (QCD), a gauge theory with gauge group SU(3) that describes the strong nuclear force in the Standard Model of particle physics, color confinement is the phenomenon that color-charged particles cannot be isolated, i.e. cannot subsist as stand-alone asymptotic states. From a group-theoretical standpoint, quarks belong to the fundamental representation of SU(3), antiquarks to the antifundamental representation, and the force mediators, gluons, to the adjoint representation¹. Color confinement can thus be stated in group-theoretic language as the phenomenon that asymptotic, physical states must belong to the singlet (trivial) representation of SU(3), which I indicate below as 1 ∼ (0, 0). For instance, in real life, physical states of strongly-interacting particles include mesons, which are quark-antiquark states belonging to the singlet representation resulting from 3 ⊗ 3̄ = 8 ⊕ 1, and baryons, which are three-quark states belonging to the singlet representation resulting from 3 ⊗ 3 ⊗ 3 = 10 ⊕ 8 ⊕ 8 ⊕ 1. In addition, glueballs, bound states of two gluons, could also exist [4], since 8 ⊗ 8 = 27 ⊕ 10 ⊕ 10̄ ⊕ 8 ⊕ 8 ⊕ 1. Here, I am interested in which bound states would form around a hypothetical new particle X charged under SU(3) and belonging to some irreducible representation of SU(3) with Dynkin label (p, q). Specifically, I address two questions. The first, simple question is how many "quarks" would be needed to form a colorless bound state, i.e. what is the minimal number of copies of the fundamental representation such that the direct product of those copies and of the (p, q) contains the trivial representation (0, 0). The answer is 2p + q: I prove this in two different ways below. I then generalize this result to SU(N). Secondly, I pose the slightly less trivial question of what is the minimal number of "elementary constituents", i.e. quarks, antiquarks and gluons, needed to form a colorless bound state with the new particle X.
¹ In what follows, I use both the notation d, indicating a representation of dimension d, with d̄ the corresponding conjugate representation, and the notation (p, q), with p and q non-negative integers; I use the convention that the bar corresponds to representations where q > p. The dimension of the representation (p, q) is d = (p + 1)(q + 1)(p + q + 2)/2. For instance, quarks belong to the irreducible representation 3 ∼ (1, 0), antiquarks to 3̄ ∼ (0, 1), and gluons to 8 ∼ (1, 1) (for details see [1, 2]; for an exhaustive review of Lie groups see e.g. [3]).

I list the results for all SU(3) representations with dimension smaller than 100. As a corollary, I show that if the new particle X, of label (p, q), has electric charge Q_X, the "smallest hadron" H containing X has electric charge −(2p + q)/3 ≤ Q_H − Q_X ≤ 2(2p + q)/3. Also as a corollary, assuming Q_X = 0, I list all possible (p, q) irreducible representations such that the "smallest" hadron can be electrically neutral.

The reasons why the questions above are interesting include the fact that, at least in the real world, the "smallest" hadrons (protons, neutrons, pions) are also the lightest ones in the spectrum, and there are good reasons to believe that the same could be true for a new exotic heavy state. Limits on new strongly-interacting states imply that the mass of the X must be much higher than the QCD scale Λ_QCD [5-7]. Thus any state containing more than one X, such as, for instance, the color-singlet XX̄, would be significantly heavier than any bound state of X with quarks, antiquarks or gluons. Additionally, such exotic hadrons could be stable, and under some circumstances could even be the dark matter, or a part thereof (see e.g. [8-10]). However, charge neutrality would restrict, through the arguments made here, which irreducible representations the X could belong to.

The remainder of the paper is organized as follows: in the next section (sec. 2) I provide two proofs that the minimal number of fundamental representations of SU(3) whose direct product with (p, q) contains the trivial representation is 2p + q, and I generalize the result to SU(N); in the following section (sec. 3) I calculate the composition of the smallest hadron in SU(3); the final sec. 4 concludes.

The minimal direct product of fundamental representations of SU(N) containing the trivial representation

Irreducible representations of SU(N) are conveniently displayed with Young tableaux via the following rules (for more details, see e.g. [11-14]): (i) the fundamental representation is represented by a single box; (ii) Young tableaux for SU(N) are left-justified rows of boxes, at most N − 1 of them, such that no row is longer than the row above it; (iii) any column with N boxes can be crossed out, as it corresponds to the trivial (singlet) representation. Any irreducible representation can be obtained from direct products of the fundamental representation; the direct product of two representations proceeds via the following rules: (i) label the rows of the second representation's tableau with indices a, b, c, ..., one index per row; (ii) attach all boxes from the second to the first tableau, one at a time, following the order a, b, c, ..., in all possible ways; the resulting Young tableau is admissible if it obeys the rules above and if there is no more than one a, b, c, ... in every column; (iii) two tableaux with the same shape should be kept only if they have different labeling; (iv) a sequence of indices, read along the rows from right to left and from top to bottom,
is admissible if at any point in the sequence at least as many a's have occurred as b's, at least as many b's have occurred as c's, etc.; all tableaux with indices in any row, from right to left, arranged in a non-admissible sequence must be eliminated. The direct product of k fundamentals is especially simple, since it entails the repeated attachment of one additional box at a time, up to k new boxes, to any row for which that operation produces an admissible tableau (for instance, one cannot attach a box to a row containing as many boxes as the row above).

In the case of SU(3), Young tableaux have only two rows (after crossing out columns of three boxes) and can be labeled with the Dynkin indices (p, q), where q is the number of boxes in the second row and p the number of additional boxes in the first row with respect to the second (thus, the first row has p + q boxes). The dimensionality of the representation is given by

d(p, q) = (p + 1)(q + 1)(p + q + 2)/2.   (2.1)

The direct product of the fundamental and a generic irreducible representation (p, q) generally includes

(1, 0) ⊗ (p, q) = (p + 1, q) ⊕ (p − 1, q + 1) ⊕ (p, q − 1),   (2.2)

where the last two representations only exist if p ≥ 1 and q ≥ 1, respectively. As a result, to obtain the singlet representation (0, 0) from (p, q) we need exactly p copies of the fundamental to bring p → 0 (visually, by adding the extra boxes all to the second row); these bring us to the representation (0, q + p); at that point, we attach q + p boxes to the third row (i.e. multiply by an additional q + p fundamentals) to obtain the singlet representation. The operational sequence outlined above is also the most economical since, as Eq. (2.2) shows, p can only decrease by one unit for each additional fundamental representation factor, but doing so costs an increment of one unit to q; similarly, q can also only decrease by one unit at a time. Thus the minimal number k of fundamental representations needed to obtain a representation that includes the singlet from the direct product of a given representation (p, q) and k copies of the fundamental is k = 2p + q. Visually, one simply needs to fill the Young tableau of the representation (p, q) to a rectangle of 3 × (p + q) boxes; this requires 3p + 3q − (2q + p) = 2p + q additional boxes, or copies of the fundamental representation, as shown in fig. 1.

This result is easily generalized, by the same argument, to SU(N), where irreducible representations are labeled by (p_1, p_2, ..., p_{N−1}) and the number of fundamental representations is given by

k_N = p_{N−1} + 2 p_{N−2} + ... + (N − 2) p_2 + (N − 1) p_1.   (2.3)

A more formal proof of the statement above can be obtained from the Schur-Weyl duality² [15]: the direct product of k copies of the fundamental representation N of SU(N) decomposes into a direct sum of irreducible representations labeled by all ordered partitions λ_1 ≥ λ_2 ≥ ... ≥ λ_i of k, with i ≤ N. The question of whether, given a representation X, the product X ⊗ N^{⊗k} contains the trivial representation is then equivalent to asking whether X̄ is contained in the Schur-Weyl duality sum. But given that, for a representation X ∼ (p_1, p_2, ..., p_{N−1}), the conjugate representation is X̄ ∼ (p_{N−1}, p_{N−2}, ..., p_2, p_1), whose Young tableau contains exactly k_N = p_{N−1} + 2p_{N−2} + ... + (N − 2)p_2 + (N − 1)p_1 boxes, the X̄ certainly belongs to the Schur-Weyl duality decomposition for k = k_N; this also proves that k_N is the smallest possible number k such that X ⊗ N^{⊗k} contains the trivial representation, since k_N − 1 boxes would not suffice to produce the Young tableau of X̄ in the Schur-Weyl duality decomposition.
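The counting results of this section are easy to encode. The sketch below (the function names are ours) computes the dimension of an SU(3) irrep, the minimal number of fundamentals k = 2p + q, and the SU(N) generalization k_N.

```python
def dim_su3(p, q):
    # Dimension of the SU(3) irrep with Dynkin label (p, q).
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def min_fundamentals_su3(p, q):
    # Minimal k such that (p, q) x 3^k contains the singlet.
    return 2 * p + q

def min_fundamentals_suN(dynkin):
    # k_N = sum_i (N - i) * p_i for the label (p_1, ..., p_{N-1});
    # equal to the number of boxes in the conjugate Young tableau.
    N = len(dynkin) + 1
    return sum((N - i) * p for i, p in enumerate(dynkin, start=1))

assert dim_su3(1, 1) == 8 and min_fundamentals_su3(1, 1) == 3  # adjoint
assert min_fundamentals_suN([1, 0]) == 2                       # SU(3) fundamental
```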
The minimal number of gluons, quarks, antiquarks

Here I will prove that, with two exceptions, one can always substitute the product of two SU(3) fundamentals, (1, 0)^{⊗2}, for one antifundamental 3̄ ∼ (0, 1), and the product of three fundamentals, (1, 0)^{⊗3}, for one adjoint 8 ∼ (1, 1). This procedure yields the minimal number of quarks/antiquarks/gluons from the results of the previous section.

Figure 2: With two exceptions, one is always allowed to trade 2 copies of the fundamental representation for one antifundamental, and 3 copies for one adjoint.

The proof is as follows: in the previous section I proved that for a representation X ∼ (p, q), the minimal number of fundamentals one needs to multiply X by to obtain a representation that contains the singlet representation is 2p + q, i.e.

X ⊗ 3^{⊗(2p+q)} = (0, 0) ⊕ Y,

where Y is a generic direct sum of irreducible representations. Since 3 ⊗ 3 = 6 ⊕ 3̄, I can write

X ⊗ 3^{⊗(2p+q)} = (X ⊗ 3^{⊗(2p+q−2)} ⊗ 6) ⊕ (X ⊗ 3^{⊗(2p+q−2)} ⊗ 3̄).

I will prove that for any representation Z, the product Z ⊗ 6 contains the singlet representation if and only if Z = 6̄ ∼ (0, 2). Therefore, with the exception of the case of the 6̄, the singlet representation must be contained in the direct product X ⊗ 3̄ ⊗ 3^{⊗(2p+q−2)}, and one can eliminate two fundamentals in favor of one antifundamental (see fig. 2).

Let Z ∼ (p, q). The representation Z ⊗ 6 is obtained by adding two boxes to any of the three rows, which gives the following possibilities (I indicate with [x, y, z] x boxes added to the first row, y to the second row, z to the third row):

[2, 0, 0]: (p + 2, q), [1, 1, 0]: (p, q + 1), [1, 0, 1]: (p + 1, q − 1), [0, 2, 0]: (p − 2, q + 2), [0, 1, 1]: (p − 1, q), [0, 0, 2]: (p, q − 2).

Since p, q are non-negative integers, the only resulting representations that could correspond to the singlet representation are (p − 1, q) and (p, q − 2), for Z = 3 ∼ (1, 0) and Z = 6̄ ∼ (0, 2), respectively. Since 3 ⊗ 3̄ = 8 ⊕ 1, the (1, 0) allows for the substitution of two fundamentals for one antifundamental; however, since 6̄ ⊗ 3 = 15̄ ⊕ 3̄, the 6̄ ∼ (0, 2) representation does not allow for the substitution of two fundamentals for one antifundamental, and its minimal number of elementary constituents to produce a color-neutral hadron is two quarks.

The proof for the substitution of three fundamentals for one adjoint follows along similar lines. Here, note that

3 ⊗ 3 ⊗ 3 = 10 ⊕ 8 ⊕ 8 ⊕ 1,

so I can write, similarly to what was done above,

X ⊗ 3^{⊗(2p+q)} = X ⊗ 3^{⊗(2p+q−3)} ⊗ (10 ⊕ 8 ⊕ 8 ⊕ 1).

By the proof in the preceding section, X ⊗ 3^{⊗(2p+q−3)} cannot contain the singlet representation (since the minimal number of fundamental factors is 2p + q, not 2p + q − 3); therefore, unless X ⊗ 3^{⊗(2p+q−3)} ⊗ 10 contains the singlet representation, the substitution of three fundamentals for one adjoint is allowed. As above, I consider the product Z ⊗ 10 for a generic representation Z ∼ (p, q). Using the same notation as above, multiplication by the 10 is obtained by adding three boxes to any of the three rows, which gives the following possibilities:

[3, 0, 0]: (p + 3, q), [2, 1, 0]: (p + 1, q + 1), [2, 0, 1]: (p + 2, q − 1), [1, 2, 0]: (p − 1, q + 2), [1, 1, 1]: (p, q), [1, 0, 2]: (p + 1, q − 2), [0, 3, 0]: (p − 3, q + 3), [0, 2, 1]: (p − 2, q + 1), [0, 1, 2]: (p − 1, q − 1), [0, 0, 3]: (p, q − 3).

Inspection of the cases above, noting again that p and q are non-negative integers, indicates that the only candidate representations to give a singlet when multiplied by a 10 are 1 ∼ (0, 0), 8 ∼ (1, 1) and 10̄ ∼ (0, 3). Now, since 1 ⊗ 10 = 10 and 8 ⊗ 10 = 35 ⊕ 27 ⊕ 10 ⊕ 8, we conclude that the only representation for which the substitution of the product of three fundamentals for one adjoint is not allowed is the 10̄, for which the minimal number of elementary constituents is three quarks (a quark-antiquark pair would not suffice, since 10̄ ⊗ 3 ⊗ 3̄ = 35̄ ⊕ 27 ⊕ 10̄ ⊕ 10̄ ⊕ 8).
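The minimal-constituent counting just derived, including the two exceptional representations, can be written down directly; the sketch below follows the prescription quoted in the abstract (gluons first, then antiquarks, then quarks).

```python
def smallest_hadron(p, q):
    """Minimal (n_g, n_qbar, n_q) bound to an SU(3) irrep (p, q),
    with the exceptions 6bar ~ (0, 2) and 10bar ~ (0, 3), which
    need 2 and 3 quarks, respectively."""
    if (p, q) == (0, 2):
        return (0, 0, 2)
    if (p, q) == (0, 3):
        return (0, 0, 3)
    k = 2 * p + q
    n_g = k // 3                  # trade three quarks for one gluon
    n_qbar = (k - 3 * n_g) // 2   # trade two quarks for one antiquark
    n_q = k - 3 * n_g - 2 * n_qbar
    return (n_g, n_qbar, n_q)

print(smallest_hadron(1, 1))  # (1, 0, 0): X + gluon for the adjoint
print(smallest_hadron(1, 0))  # (0, 1, 0): X + antiquark for the fundamental
print(smallest_hadron(0, 2))  # (0, 0, 2): the 6bar exception
```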
With the results outlined above, assuming the electric charge Q_X of a new hypothetical strongly-interacting particle X belonging to a representation X ∼ (p, q) is known, it is possible to calculate both the electric charge Q_H of the "smallest" hadron and, more generally, that of any hadron containing X. Given the number of quarks n_q, antiquarks n_q̄ and gluons n_g listed in Tab. 1, the possible values of the charge of the smallest hadron H satisfy

−(n_q + 2 n_q̄)/3 ≤ Q_H − Q_X ≤ (2 n_q + n_q̄)/3,

since each quark contributes +2/3 or −1/3, each antiquark +1/3 or −2/3, and each gluon 0. This can be equivalently expressed in terms of the Dynkin label (p, q) as

−(2p + q)/3 ≤ Q_H − Q_X ≤ 2(2p + q)/3.

Any other hadron H′ could only have electric charge Q_H′ = Q_H + k for integer k. Notice that all and only the representations of the form X ∼ (k + 3n, k + 3m) (with the exception of the single case k = 0, n = 0, m = 1, i.e. (0, 3)) exclusively contain gluons in their "smallest hadron" (equivalently, the direct product X ⊗ 8^{⊗(2n+m+k)} contains the trivial representation). Thus, it is only those representations (including the (0, 3)) that will yield hadronic bound states with integer charge if the "new physics particle" is neutral or of integer charge. I indicate those representations in green in Tab. 1. Notice that this set of representations includes all real (self-adjoint) representations (p, p).

Conclusions

I proved that the smallest number of copies k of the fundamental representation (1, 0) of SU(3) such that the direct product (p, q) ⊗ (1, 0)^{⊗k} contains the trivial representation (0, 0) is k = 2p + q; I generalized this result to SU(N), where for the irreducible representation (p_1, p_2, ..., p_{N−1}) one finds k_N = p_{N−1} + 2p_{N−2} + ... + (N − 2)p_2 + (N − 1)p_1. I showed that in SU(3) one can "trade" any two fundamentals in the direct product of 2p + q fundamentals for one antifundamental (with the exception of the representation 6̄ ∼ (0, 2), for which such a substitution is not allowed) and any three fundamentals for one adjoint (with the exception of the representation 10̄ ∼ (0, 3), for which such a substitution is not allowed); finally, I showed that if a new strongly-interacting particle X in the (p, q) representation of SU(3) has electric charge Q_X, the possible values of the electric charge of its "smallest hadron" H are −(2p + q)/3 ≤ Q_H − Q_X ≤ 2(2p + q)/3, and those of any other hadron H′ are Q_H′ = Q_H + k for integer k.

Table 1: List of all irreducible representations of SU(3) with dimension smaller than 100, with the minimal number of gluons, antiquarks and quarks needed to form a color-singlet hadron. The smallest hadrons for representations in green only contain gluons and, if the "new physics particle" belonging to that representation is electrically neutral, would also be electrically neutral.
Comparison of planetary bearing load-sharing characteristics in wind turbine gearboxes

In this paper, the planetary load-sharing behavior and fatigue life of different wind turbine gearboxes subjected to rotor moments are examined. Two planetary bearing designs are compared: one design using cylindrical roller bearings with clearance and the other using preloaded tapered roller bearings to support both the carrier and the planet gears. Each design was developed and integrated into a 750 kW gearbox; in dynamometer tests, the loads on each planet bearing row were measured and compared to finite-element models. Bearing loads were not equally shared between the cylindrical roller bearings supporting the planets, even in pure torque conditions, with one bearing supporting up to 46 % more load than expected. A significant improvement in planetary bearing load sharing was demonstrated in the gearbox with preloaded tapered roller bearings, with maximum loads 20 % lower than in the gearbox with cylindrical roller bearings. Bearing life was calculated with a representative duty cycle measured from field tests. The predicted fatigue life of the eight combined planet and carrier bearings for the gearbox with preloaded tapered roller bearings is 3.5 times greater than for the gearbox with cylindrical roller bearings. The influence of other factors, such as carrier and planet bearing clearance, gravity, and tangential pin position error, is also investigated. The combined effect of gravity and carrier bearing clearance was primarily responsible for unequal load sharing. Reducing carrier bearing clearance significantly improved load sharing, while reducing planet clearance did not. Normal tangential pin position error did not impact load sharing due to the floating-sun design of this three-planet gearbox.

Introduction

Although the cost of energy from wind has declined tremendously during the past 3 decades (US Department of Energy, 2018), wind power plant operation and maintenance (O&M) costs are higher than anticipated and remain an appreciable contributor to the overall cost of wind energy. Wind power plant O&M averages USD 10 per megawatt hour at recently installed wind plants, accounts for 20 % or more of the wind power purchase agreement price, and generally increases as the wind plant ages (Wiser and Bolinger, 2017). Approximately half of the total wind plant O&M costs are related to wind turbine O&M (Lantz, 2013), and a sizeable portion of these costs is related to the reliability of the wind turbine drivetrain (Kotzalas and Doll, 2010; Greco et al., 2013; Keller et al., 2016).

Most of the wind turbines installed in the United States utilize a geared drivetrain with a multi-stage gearbox including one or more planetary stages. These gearboxes must operate in a challenging, dynamic environment different from that of other industrial applications (Struggl et al., 2014). In general, wind turbine gearboxes do not achieve their expected design life (Lantz, 2013), even though they commonly meet or exceed the criteria specified in gear, bearing, and wind turbine industry standards as well as third-party certifications. Planet bearing failures, although not the most frequent type of failure (Sheng, 2017), are extremely costly because they typically require replacement of the entire gearbox with
a large crane and thus merit investigation. In planetary gearboxes, equal load distribution between the planet gears and their bearings is required to achieve the predicted design life. Unequal load sharing between planetary gears due to manufacturing and assembly errors has been extensively examined in the past 3 decades (Winkelmann, 1987; Lamparski, 1995; Predki and Vriesen, 2005; Cooley and Parker, 2014), with analytic models often validated through finite-element models or experimental measurements at fixed locations or even on rotating gearing (Mo et al., 2016; Nam et al., 2016). More recently, and specifically for wind turbine gearboxes, the ability of a floating sun gear to absorb the consequences of geometrical imperfections has been studied (Nejad et al., 2015; Iglesias et al., 2016). The effects of gravity and the drivetrain tilt angle on planet gear load-sharing and tooth-wedging behavior have been examined (Guo et al., 2014; Qiu et al., 2015). Gravity is an important factor, as it introduces fundamental excitations in the rotating carrier frame of planetary gear sets. The effect of carrier bearing clearance on planetary load sharing subject to rotor moments has also been studied (Crowther et al., 2011; LaCava et al., 2013; Guo et al., 2014, 2015). Rotor moments impact planet load sharing, gear and bearing alignment, and bearing contact conditions and stress (Park et al., 2013; Gould and Burris, 2016; Dabrowski and Natarajan, 2017). Steady-state rotor moments and gravity result in a once-per-revolution variation in bearing load in the rotating carrier frame, which both increases fatigue and could cause wear or skidding (Guo et al., 2014; Gould and Burris, 2016). Although it is generally agreed that a three-planet gear set with a floating central member has equal load sharing regardless of manufacturing errors (Cooley and Parker, 2014), in the wind turbine application bearing clearance, gravity, and rotor moments can in fact cause unequal load sharing between planet gears.

Although load sharing between planetary gears has been examined extensively, the distribution of loads between the two or more bearing rows supporting each planet has not. In this paper, the load-sharing characteristics between the bearing rows supporting the planetary gears of two different wind turbine gearbox designs are examined and compared. This work extends previous works by the authors (LaCava et al., 2013; Guo et al., 2015; Keller et al., 2017a, b) by examining a wind turbine gearbox planetary section supported by preloaded tapered roller bearings (TRBs) in addition to one supported by full-complement and typical caged cylindrical roller bearings (CRBs) that operate with clearance. Loads predicted by design tools are compared to test measurements across a wide range of field-measured loading conditions, and the resultant planetary section fatigue life for a duty cycle of a typical turbine is also compared. The physical phenomenon responsible for the unequal load sharing of the planet bearings is identified.

Gearbox design and test program

The National Renewable Energy Laboratory Gearbox Reliability Collaborative (GRC) has been investigating the root causes of premature wind turbine gearbox failures for over a decade. A modular 750 kW wind drivetrain from a NEG Micon 750/48 wind turbine featuring a three-stage gearbox in a three-point mounted configuration, still representative of most utility-scale drivetrain architectures, has been used for this effort, as shown in Fig. 1.
In the three-point mounted configuration, the rotor and main shaft are primarily supported by a double-row spherical roller main bearing. The main shaft is connected to the planet carrier of the gearbox, which is supported by two torque arms that are mounted to the bedplate with elastomeric bushings. The two torque arms, along with the main bearing, provide a total of three points of support. The three-point mounted configuration transfers torque and rotor moments through the gearbox, which is an important design consideration (Guo et al., 2017).

The GRC gearbox design has a single-input planetary stage followed by two parallel-shaft stages. The output shaft of the gearbox is connected to the generator with a flexible coupling. The rated rotor speed is 22.1 rpm, and with a ratio of 81.491, the gearbox increases the output speed to 1800 rpm (Oyague, 2011; Link et al., 2011). The planetary stage features a floating sun to help equalize the load distribution among the three equally spaced planets, accomplished with a hollow low-speed shaft that has an internal spline connection to the sun pinion (Guo et al., 2013). Using this drivetrain and gearbox architecture, the GRC has investigated planetary gear and bearing failure modes and load-sharing characteristics through a dedicated research and test campaign. Two different gearbox designs were purposefully developed, manufactured, and tested. As shown in Fig. 2, their primary difference is the type of bearings supporting the carrier and planets. One design features planet CRBs with C3 clearance and full-complement carrier CRBs with CN clearance, while the other features planet and carrier TRBs under preload.

In many applications, a small preload, creating a small negative operating clearance, can optimize roller loads and maximize bearing life (Oswald et al., 2012). These preloaded bearings, along with interference-fitted planet pins, improve planet alignment and load-sharing characteristics. A semi-integrated planet bearing design also increases capacity and eliminates outer-race fretting. Other than these planetary system changes, including updated gear tooth microgeometry, the gearbox designs are nearly identical. The front and rear housing components, originally from a commercially available Jahnel-Kestermann PSC 1000-48/60 gearbox, and the intermediate- and high-speed stage gearing are used in each gearbox. Physical parameters of the planetary bearings are given in Table 1. The models used to design the gearbox with TRBs indicated that it has over 3 times the predicted planetary stage L10 life of the gearbox with CRBs, similar to the projected increase in fatigue life in other industrial applications (Flamang and Clement, 2003; Lucas, 2005).

In two separate test campaigns, the gearboxes were mounted in the GRC drivetrain and installed in a dynamometer at the National Wind Technology Center, as shown in Fig. 3.
Steady-state, constant-speed drivetrain operations were conducted throughout a range of power levels, from offline to the full 750 kW electrical power and 325 kilonewton meter (kNm) input torque. Vertical and lateral forces were applied with hydraulic actuators to an adapter in front of the main bearing, resulting in bending moments of up to ±300 kNm measured on the main shaft. This range of moments was derived from measurements on the same drivetrain when installed in an NEG Micon NM 750/48 turbine at an operational wind plant (Link et al., 2011). Unique to the GRC program is that all engineering drawings, models, and resulting test data are publicly available (Keller and Wallen, 2015, 2017).

Each gearbox was extensively instrumented, focusing primarily on planetary stage load-sharing characteristics. A total of 36 strain gage pair measurements were evenly placed between the upwind and downwind bearings of the three planets (A, B, and C) for each gearbox. Most of the measurements were in the expected bearing load zones, as shown in Fig. 4. The helical planetary gearing causes an overturning moment on the planets, resulting in a ±20° offset of the center of each load zone from the bearing top dead center (TDC). The measurements were made at identical circumferential locations for the upwind and downwind bearings of the gearbox with CRBs. Two measurements were taken along the bearing inner-race width to investigate the axial load distribution between the upwind and downwind bearing rows (Link et al., 2011). Conversely, for the gearbox with preloaded TRBs, the measurements focused on the circumferential load distribution, with only one axial measurement on each bearing inner race. Additionally, one planet (B) has measurements at 10 circumferential locations per bearing row, 9 of which span the expected load zone. The other two planets (A and C) have measurements at four circumferential locations per bearing row (Keller and Wallen, 2017).

The roller load at each measurement location is determined by converting the average strain range with calibration factors determined from dedicated bench tests (van Dam, 2011; Keller and Lucas, 2017). Several thermocouples were also installed on the bearing inner races of each gearbox.
Gearbox modeling

Gearbox models were developed in two different finite-element, commercial software applications to predict planetary loads and load zones. The Transmission3D software application implements a three-dimensional, contact-mechanics model (Transmission3D, 2018). The gearbox is represented with deformable bodies, including the ring gear and gearbox housing, as their flexibility can affect gear misalignment and load-sharing characteristics. Gear and bearing contacts, including piece-wise clearance nonlinearities, are modeled with a hybrid of finite elements to predict far-field displacements and a Green's function model to predict displacements in the contact region. Known bearing clearances, preload, and pin position errors were included in the model. The RomaxWind software application implements a beam finite-element representation of the shafts and a solid finite-element representation of the gearbox housing, gear blanks, carrier, and torque arms (RomaxWind, 2018). The gears and bearings were modeled with semi-analytical formulations that account for misalignment, area of contact under load, microgeometry, radial and axial clearances, and material properties. Static nonlinear analysis is performed for prescribed loading conditions, and the global deflections are solved simultaneously. Additionally, modified bearing L10 fatigue life calculations are made for a predetermined drivetrain torque, thrust, and pitch and yaw moment spectrum (Keller et al., 2017a).

Results and discussion

In this section, the planetary bearing loads and load-sharing characteristics predicted by the models and measured in dynamometer tests are compared for both gearboxes. The fatigue life of the planetary bearing designs is also calculated. Finally, several parametric design studies are examined for the gearbox with CRBs to understand the factors contributing to its load-sharing characteristics.

Planet bearing load zones

In this section, the bearing load zones for each gearbox are compared when the planet is at the bottom of the ring gear. The load zones for the pure torque condition are compared to those for the highest pitch moments. As shown in Fig. 5, the upwind planet CRB load zone increases in size as the applied pitch moment increases. In general, the upwind planet bearing supports up to twice the load of the downwind bearing. The downwind planet CRB load zone is not significantly affected by the applied pitch moment. The theoretical maximum roller load (Harris and Kotzalas, 2006) of approximately 45 kilonewtons (kN) for these bearings generally correlates with the measurements and model predictions.

In contrast, as shown in Fig. 6, the planet TRB load zones maintain their size and orientation regardless of the applied pitch moment. The more circular shape of the load zones reflects the preload in the bearings and the rigidity of the planetary system in general. The measured load zone magnitudes and orientations correlate well with the predictions, including the ±20° offset of the load zone from TDC. The theoretical maximum roller load (Harris and Kotzalas, 2006), also approximately 45 kN, again correlates with the measurements and predictions. The RomaxWind model assumes rigid bearing races, while Transmission3D includes the flexibility of the races. This results in a more circular load zone prediction from RomaxWind compared to an elliptical load zone from Transmission3D.
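The "theoretical maximum roller load" cited above is commonly estimated with Stribeck's formula. The sketch below is illustrative only: the constant k = 4.6 applies to roller bearings with zero clearance (values near 5 are often used when clearance is present), and the radial load and roller count are hypothetical, not the tested bearings' geometry.

```python
from math import cos, radians

def stribeck_max_roller_load(F_r, Z, alpha_deg=0.0, k=4.6):
    """Stribeck estimate of the maximum roller load in a radially
    loaded rolling bearing: Q_max ~ k * F_r / (Z * cos(alpha))."""
    return k * F_r / (Z * cos(radians(alpha_deg)))

# Hypothetical numbers chosen to land near the ~45 kN quoted above:
Q_max = stribeck_max_roller_load(F_r=220e3, Z=23)  # ~44 kN per roller
```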
Planet bearing loads

The upwind and downwind planet bearing loads can be calculated for each gearbox. For the instrumented CRBs, a direct calibration factor is used to determine the total bearing load (van Dam, 2011; Harris and Kotzalas, 2006) from only the TDC measurement. For the instrumented TRBs, a spline fit is used to map the entire load zone and determine the total bearing load (Keller and Lucas, 2017; Keller et al., 2017b). The total load supported by both bearings, which is the vector summation of the upwind and downwind bearing loads, can also be calculated.

Figure 7 compares the measured and predicted loads, nondimensionalized by the assumed load (i.e., one-sixth or one-third of the load at the planet center resulting from input torque), over a complete revolution of the planet carrier for the pure torque condition. The 0° location indicates the planet is at the top of the ring gear in its rotation, and the 180° location is at the bottom of the ring gear, which is the same position shown in Figs. 5 and 6. The CRB loads fluctuate over the rotation and are also out of phase because of the combined effect of planet and carrier bearing clearances, gravity, and the resulting gear misalignment (LaCava et al., 2013). The maximum measured load carried by the upwind bearing is 1.43, or 43 % more than the assumed load. The minimum measured load carried by the downwind bearing is only 0.61, or 39 % less than the assumed load. In this condition, the upwind bearing is accumulating more fatigue than expected; conversely, the downwind bearing has an increased risk of skidding for a portion of the carrier rotation. Because the bearing loads are nearly 180° out of phase, there is much less fluctuation in the total load than in the individual row loads. The maximum total measured bearing load is only 6 % greater than assumed. The planet TRB loads are much more consistent over the carrier rotation due to the preload in the bearings and, to some extent, the interference-fitted planet pins that also reduce misalignment. The maximum and minimum measured row loads differ from the assumed load by only 12 %, whereas the maximum measured total bearing load is only 1 % more than assumed. There is good agreement between these measured loads and those predicted by Transmission3D for both gearboxes.

In contrast, Fig. 8 compares the same loads but with a large negative pitch moment. The measured upwind CRB load is relatively constant over the rotation but 25 % greater than assumed. The downwind load behavior is very similar to the pure torque condition. The net effect is that the total measured bearing load fluctuates slightly more than in the pure torque condition and is 15 % more than assumed. The measured TRB loads again fluctuate very little: only 8 % for the downwind row and 2 % for the total bearing load.
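The nondimensionalization used in Figs. 7 and 8 divides each measured row load by the ideal share from input torque alone. A minimal sketch follows; the carrier-pin radius value is a placeholder assumption, not the GRC gearbox dimension.

```python
def assumed_row_load(torque_Nm, r_carrier_m, n_planets=3, rows=2):
    """Ideal load per bearing row: the tangential force at the planet
    center, T / (n_planets * r_c), split evenly over the two rows
    (i.e., one-sixth of the total, as described above)."""
    return torque_Nm / (n_planets * r_carrier_m) / rows

def nondimensional_load(measured_N, torque_Nm, r_carrier_m=0.27):
    # Values > 1 indicate a row carrying more than its ideal share.
    return measured_N / assumed_row_load(torque_Nm, r_carrier_m)
```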
Figure 9 summarizes the measured upwind and downwind planet CRB loads for all the pitch moment cases. The pitch moment changes the upwind bearing loads significantly; however, it does not affect the downwind bearing loads at all. The behavior of the upwind bearing loads can be separated into three categories. Pure torque and positive pitch moments all have essentially the same effect, resulting in the largest variation over the carrier rotation and the largest overall magnitude of the upwind bearing load. Conversely, pitch moments beyond −200 kNm elevate the mean upwind bearing load with much less fluctuation over the carrier rotation. The −100 kNm pitch moment case is a transition between these two categories.

The bearing loads shown in these figures contain both a constant difference from the assumed load and a fluctuating component. The loads are not equally shared in practice. The constant difference is a result of deformations, displacements, and manufacturing deviations causing consistently higher loads on one planet than on the others (Cooley and Parker, 2014). The fluctuating component is a result of the rotor moments and gravity, exacerbated by planet and carrier bearing clearances and the resulting misalignment in the gearbox with the CRBs, causing a once-per-revolution load variation over the carrier rotation (Guo et al., 2015).

Planet bearing load sharing

The accurate estimation of planet bearing loads is a crucial step in calculating the planetary load-sharing factor, also called the planetary mesh load factor (Kγ). Ideally, all planets share torque equally and the planetary mesh load factor equals 1. However, because of positional-type errors and variations in tooth stiffness, International Electrotechnical Commission standard 61400-4 assumes this factor is 1.1 for three-planet wind turbine gearboxes. In this study, the maximum load throughout the main shaft rotation shown in Figs. 7-9, which accounts for both constant load differences and the fluctuating load from gravity and rotor moments, is examined for comparison to this assumption.

Figure 10 compares the maximum individual bearing row load and maximum total bearing load for both gearboxes over the complete range of pitch moments. The maximum total measured CRB load ranges from 1.07 in pure torque to just over 1.15 for large negative pitch moments, very close to the assumed planetary mesh load factor of 1.1. However, as shown previously, the measured CRB load carried by the upwind bearing far exceeds this, reaching 1.43 on average and as high as 1.46 in one test. Counterintuitively, this highest load occurs in the pure torque condition and is not increased by the pitch moment, as also demonstrated in Fig. 9. The upwind measured CRB load does decrease with negative pitch moments; however, it never falls below 1.26. The wide variation in the maximum CRB load can be contrasted with the consistency of the maximum TRB load. In general, the maximum TRB loads are all much closer to the assumed planetary mesh load factor of 1.1. The maximum measured downwind TRB load is 1.13 in pure torque and no more than 1.17 even for a large positive pitch moment, much lower than the maximum CRB load. A significant reduction of the maximum loads and improvement in load sharing was achieved with the design changes in the gearbox with TRBs.
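One common way to evaluate the mesh load factor from measurements like these is to compare the largest instantaneous planet (or row) load to the ideal equal share at each carrier azimuth; a sketch, under that assumed definition, follows.

```python
import numpy as np

def mesh_load_factor(loads):
    """Planetary mesh load factor K_gamma from an array of shape
    (n_azimuths, n_planets): the worst ratio of an individual load
    to the ideal equal share. IEC 61400-4 assumes 1.1 for
    three-planet gearboxes."""
    ideal = loads.sum(axis=1, keepdims=True) / loads.shape[1]
    return float(np.max(loads / ideal))
```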
To better understand the planet bearing load-sharing behavior shown in Fig. 10, the effect of pitch moments on carrier bearing loads is explored in Fig. 11. Here, only the predicted loads from the model are available; measurements of the carrier bearing loads were not acquired in the tests. Carrier bearing loads are also nondimensionalized by the average of the assumed total planet bearing load. Beyond a ±100 kNm pitch moment, the downwind carrier CRB load increases, while the planet CRB load does not; the downwind carrier CRB essentially supports all the additional load. Within a ±100 kNm pitch moment, the planet CRBs carry any moment load, while the carrier CRBs are both unloaded. For this gearbox, the upwind carrier CRB does not carry any load regardless of the pitch moment. This behavior is a direct result of the relative clearances of all the carrier and planet CRBs. In contrast, because of their preloaded condition, both the upwind and downwind carrier TRBs support loads for any applied pitch moment. It is clear from Figs. 10 and 11 that for the gearbox with CRBs, pitch moments can relieve the gravity load of the main shaft and planetary system from the downwind carrier CRB and shift it to the planet CRBs. However, for the gearbox with TRBs, pitch moments are essentially entirely reacted by the carrier bearings. In the three-point mount drivetrain configuration, the carrier bearing is expected to be part of the load path from the wind turbine rotor to the bedplate. In this ideal situation, the planetary gear system carries only torque and is not impacted by other loads, resulting in improved load sharing between planets and between the upwind and downwind rows. From this analysis, the planet carrier TRBs carry the moment loads as expected, whereas the planet carrier CRBs do not.

For comparison to pitch moments, Fig. 12 shows the planet loads over the full range of yaw moments for both gearboxes. The maximum upwind CRB loads again occur in pure torque conditions. Both positive and negative yaw moments decrease the maximum upwind CRB load slightly; however, the measured load remains above 1.35. The total measured bearing load follows a more intuitive pattern, in which it is a minimum of 1.08 at pure torque and increases slightly to 1.13 with either positive or negative yaw moments. Yaw moments have little effect on any of the TRB loads. The maximum measured load of 1.13 occurs for the downwind bearing with a positive yaw moment.

Since the largest disparity in load sharing is evident even in pure torque conditions, Fig. 13 examines the maximum individual bearing row load and maximum total bearing load for both gearboxes over the complete range of pure torque conditions tested. Generally, the load increases as torque decreases, although it increases more for the CRBs than for the TRBs. The maximum measured total planet bearing load increases from approximately 1.1 at full torque to 1.3 at 25 % torque for both gearboxes. However, the measured CRB load carried by the upwind bearing increases from 1.43 on average at full torque to as high as 1.85 at 25 % torque. In contrast, the maximum measured downwind TRB load increases from 1.13 at full torque to just 1.40 at 25 % torque.

As shown in this section, the load-sharing characteristics of the planet TRBs were significantly improved compared to those of the planet CRBs. However, the use of preload in these bearings does raise the question of their temperature characteristics. The measurements from the thermocouples on the bearing inner races of each gearbox, referenced to the gearbox sump temperature, are examined in Fig. 14.
In this figure, the average of all thermocouples on each of the planets is examined for the full range of gearbox operating torque and applied moments. The planet bearing temperatures are approximately 5 °C cooler than the sump temperatures for both gearboxes. There is little to no difference in the temperature of the planet bearing inner races between the two gearboxes, and thus most likely little to no impact on gearbox efficiency. This is not necessarily a surprise, as the planets spin at a relatively low speed compared to the bearings supporting the intermediate stage and the 1800 rpm output shaft of the gearbox. These higher-speed bearings generate significantly more heat and cause the gearbox sump temperature to be higher than the planet bearing operating temperature. For reference, the absolute temperatures of the planet bearings of each gearbox were in the range of 50 to 65 °C, while the gearbox sump temperature typically ranged from 55 to 70 °C.

Planet bearing fatigue life

The predicted planetary fatigue life for each gearbox was calculated using a representative drivetrain torque and pitch and yaw moment spectrum derived from field measurements (Keller et al., 2017b). The modified bearing L10 life was calculated per Deutsches Institut für Normung International Organization for Standardization 281 Beiblatt 4 (now superseded by International Organization for Standardization technical specification 16281), including a systems life modification factor. As shown in Fig. 15, the average fatigue life of the upwind planet bearing was increased by a factor of 6 using the TRBs when compared to the CRBs, in addition to a smaller life extension for both the upwind and downwind planet bearings due to the larger bearing capacity of the semi-integrated design. The modified L10 life for the eight planetary bearings in total (all six planet bearings and the two carrier bearings) is also shown, combined using a Weibull slope of 1.125 (Zaretsky et al., 2007). The predicted planetary stage bearing fatigue life was increased by a factor of 3.5 using the TRBs when compared to the CRBs. The overall planetary stage bearing life is driven by the lowest-life components, which in this case are the planet bearings in both gearboxes. The carrier bearings have a much longer fatigue life and thus are not shown individually.
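The combination of the eight bearing lives into a planetary-stage life uses the strict-series Weibull formula; the sketch below assumes that formula with the Weibull slope e = 1.125 cited above, and the individual lives are illustrative.

```python
def system_L10(lives, e=1.125):
    """Strict-series system life: L_sys = (sum_i L_i^(-e))^(-1/e).
    The lowest-life components dominate the result."""
    return sum(L ** (-e) for L in lives) ** (-1.0 / e)

# Illustrative lives for six planet bearings and two carrier bearings:
print(system_L10([8, 8, 9, 9, 10, 10, 60, 60]))  # well below any individual life;
                                                 # the planet bearings dominate
```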
Parametric studies

The previous section examined planetary load-sharing characteristics in detail. The upwind and downwind planet bearing loads were not shared equally in the gearbox with CRBs, even in pure torque conditions. In this section, the major factors responsible for the disturbed load sharing in this gearbox are examined through parametric studies of bearing clearances, gravity, and pin position error.

Effect of bearing clearances

The planet CRB loads were predicted with reduced clearance settings in the carrier and planet CRBs individually, as listed in Table 1, for comparison to the original model clearances. As shown in Fig. 16, reducing the carrier CRB clearance from CN to C2 resulted in a noticeable improvement in load-sharing characteristics for both the upwind and downwind bearings, especially for positive pitch moments. For example, the predicted upwind CRB load decreases from 1.49 to 1.22 at a +300 kNm pitch moment, a reduction of 18 %. In contrast, reducing the planet CRB clearance from C3 to CN did not significantly reduce the upwind planet CRB loads for positive pitch moments, and it increased the downwind planet CRB loads.

Effect of gravity

As shown previously, the interplay between the gravity load of the main shaft and planetary system and the pitch moment has a significant effect on both the planet and carrier CRB loads. Figure 17 examines this further by eliminating gravity from the model. Gravity has a significant influence on the planet CRB loads, comparable to the effect of carrier CRB clearance. Without gravity, the upwind planet CRB load is reduced dramatically, anywhere from 0.20 to 0.37 over the entire range of pitch moments, including a reduction from 1.40 to 1.07 at pure torque. The effect of gravity on the downwind planet CRB load is small. The effects of gravity, planetary clearances, and nontorque loads on three-point mounted wind turbine gearboxes are unavoidable and should be considered in their design. The effect of gravity on planet bearing loads can be mitigated by using carrier bearings with reduced clearances if possible.

Effect of pin position error

Tangential pin position error is one of the more common manufacturing deviations known to affect planetary load sharing (Cooley and Parker, 2014). In this parametric study, the effect of a 15 µm tangential pin position error, a magnitude commonly considered in other applications (Singh, 2009), on the load sharing of the gearbox with CRBs is assessed. As shown in Fig. 18, the pin position error changes the upwind planet CRB loads by less than 4 %, and only for positive pitch moments. This effect is much smaller than the load fluctuations caused by other factors, which agrees with analytical results (Singh, 2009). It does not significantly change the upwind planet CRB loads in pure torque or with negative pitch moments, nor the downwind CRB loads. Ideally, pin position error should not disturb load sharing given an adequately floating sun in a three-planet gearbox such as the GRC test article.
Conclusions

This study compared two wind turbine gearbox planetary bearing system designs: a conventional design representative of most gearboxes used in three-point mounted drivetrains and a new design tailored for increased planetary bearing fatigue life. The two designs differ in the choice of carrier and planet bearings: the first uses planet CRBs with clearance, and the second uses preloaded TRBs. Both gearboxes were designed, built, and instrumented and then tested in a dynamometer under the same set of controlled loading conditions, including pitch and yaw moments. The resulting planet bearing load measurements were correlated with predictions from finite-element models of both gearboxes.

The gearbox design using CRBs with clearance did not demonstrate equal load sharing between the planet CRBs, with one bearing supporting up to 46 % more load than expected. This unequal load sharing occurred, counterintuitively, in pure torque conditions and for positive pitch moments. The gearbox design with preloaded TRBs demonstrated improved planetary load-sharing characteristics compared to the gearbox with CRBs. The preloaded TRBs significantly reduced the planet bearing loads from a maximum of 1.46 to 1.17, a 20 % reduction, in pure torque conditions. Furthermore, pitch and yaw moments did not significantly affect the upwind and downwind TRB row loads. Parametric studies indicate that the unequal load sharing in the gearbox with CRBs is primarily a result of the combined effects of gravity, pitch and yaw moments, and bearing clearances, and that it can be substantially improved by reducing the clearance of the carrier bearings. This reduction and equalization of the planet bearing loads, along with slightly larger capacity bearings through a semi-integrated design, resulted in a modified L10 life 3.5 times greater for the gearbox with preloaded TRBs than for the gearbox with CRBs.

Figure 3. Installation of the GRC drivetrain in the dynamometer. Photo by Mark McDade, NREL 32734.
Figure 4. Planet bearing load measurements for the gearbox with CRBs (a) and TRBs (b).
Figure 10. Maximum planet CRB (a) and TRB (b) loads for all pitch moments.
Figure 11. Maximum carrier CRB (a) and TRB (b) loads for all pitch moments.
Figure 12. Maximum planet CRB (a) and TRB (b) loads for all yaw moments.
Figure 13. Maximum planet CRB (a) and TRB (b) loads for all torque levels.
Figure 14. Differential between the gearbox sump temperature and the average of the planet bearing inner-ring temperatures.
Figure 15. Fatigue life for the planet bearings (a) and the planetary bearing stage (b).
Figure 16. Effect of carrier (a) and planet (b) bearing clearance on maximum planet CRB loads.
Figure 17. Effect of gravity on maximum planet CRB loads.
Figure 18. Effect of tangential pin position error on maximum planet CRB loads.
Table 1. Parameters of the planetary bearings.
Transverse momentum correlations in relativistic nuclear collisions

From the correlation structure of transverse momentum pt in relativistic nuclear collisions we observe for the first time temperature/velocity structure resulting from low-Q² partons. Our novel analysis technique does not invoke an a priori jet hypothesis. pt autocorrelations derived from the scale dependence of ⟨pt⟩ fluctuations reveal a complex parton dissipation process in RHIC heavy ion collisions. We also observe structure which may result from collective bulk-medium recoil in response to parton stopping.

Introduction

Central Au-Au collisions at RHIC may generate a color-deconfined medium (quark-gluon plasma or QGP) [1]. Some theoretical descriptions predict abundant low-Q² gluon production in the early stages of high-energy nuclear collisions, with rapid parton thermalization as the source of the colored medium [2,3,4]. Nonstatistical fluctuations of the event-wise mean pt, ⟨pt⟩ [5,6], may isolate fragments from low-Q² partons and determine the properties of the corresponding medium. A recent measurement of ⟨pt⟩ fluctuations in Au-Au collisions at 130 GeV revealed a large excess of fluctuations compared to independent-particle pt production [6].

In this paper we describe the event-wise structure of transverse momentum pt produced in relativistic nuclear collisions at RHIC. We discuss the role of low-Q² partons as Brownian probe particles in heavy ion collisions. We compare joint autocorrelations on (η, φ) to conventional leading-particle techniques for parton fragment analysis. We present experimental evidence from mean-pt fluctuations and corresponding pt autocorrelations for local temperature/velocity structure in A-A collisions which can be interpreted in terms of parton dissipation in the A-A medium and same-side recoil response of the bulk medium to parton stopping. Finally, we review the energy dependence of mean-pt fluctuations from SPS to RHIC and its implications.

Low-Q² partons as Brownian probes

In 1905 the microscopic structure of ordinary matter was addressed theoretically by Einstein, who introduced the concept of a (Brownian) probe particle large enough to be observed visually, yet small enough that its motion in response to the molecular dynamics of a fluid might also be observed [7]. Those two constraints specified the one-micron probe particles used by Jean Perrin to confirm molecular motion in fluids [8,9]. The Langevin equation

v̇(t) = −v(t)/τ + a_stoch(t) + a_mcs(t)

models the motion of a Brownian probe in a thermalized fluid medium of point masses qualitatively smaller than the probe particle [10,11]. The accelerations are Gaussian-random with zero mean; a_stoch(t) is isotropic and a_mcs(t) ⊥ v(t) (and ∝ v). The first term models collective dissipation of probe motion (viscosity), the second models individual probe collisions with medium particles, and the third simulates multiple Coulomb scattering of a fast probe particle. A solution of that equation for unit initial speed in the x direction, starting at the (x, y) origin, is shown in the first two panels of Fig. 1. Speed is dissipated with time, leading to equilibration with the medium: fluctuations of velocity about zero and random walk of the probe. An example of such motion is shown in the third panel: an electron track in a time projection chamber exhibits multiple Coulomb scattering along its trajectory, terminating in a random walk represented by the circled ball of charge at the trajectory endpoint [12].
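A minimal numerical sketch of the quoted Langevin equation, using a simple Euler-Maruyama step, reproduces the qualitative behavior described above (dissipation of the initial speed, then random walk). The noise strength D is an illustrative assumption, and the a_mcs term is omitted for brevity.

```python
import numpy as np

def langevin_2d(tau=1.0, D=0.2, dt=0.01, n=5000, seed=0):
    """Integrate dv = -(v / tau) dt + sqrt(2 D dt) * xi for a 2D
    Brownian probe with unit initial speed along x, returning the
    probe trajectory (the a_mcs term is not included here)."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    x = np.zeros(2)
    path = np.empty((n, 2))
    for i in range(n):
        v = v - (v / tau) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)
        x = x + v * dt
        path[i] = x  # initial drift along x dissipates into a random walk
    return path

trajectory = langevin_2d()
```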
In 2005 we seek the microscopic structure and local dynamics of the QCD medium formed in RHIC heavy ion collisions. The point-mass concept of Einstein's Brownian probe must be extended to partonic probes, possibly with internal degrees of freedom and experiencing complex non-point interactions with medium degrees of freedom. This problem requires novel analysis techniques closely coupled to the Langevin equation and its associated numerical methods. The analog in heavy ion collisions to Einstein's Brownian probe is the low-Q² parton, visualized for the first time by methods presented in this paper. In contrast to Einstein's notion of a particle of exceptional size observed indefinitely in equilibrium with microscopic motions of a thermalized particulate medium, the QCD Brownian probe is identical to medium particles but possesses an exceptional initial velocity relative to the medium with which it interacts for a brief interval. Do probe manifestations in the hadronic system reveal 'microscopic' degrees of freedom of the medium? Is the medium locally or globally equilibrated? What are its fluid properties? Joint autocorrelations vs conditional distributions Conventional study of QCD jets in elementary collisions is inherently model-dependent. Scattered partons with large transverse momentum are associated individually with concentrations of transverse momentum or energy localized on angle variables (η, φ). In heavy ion collisions, where such identification is impractical, jet studies are based on a high-pt 'leading particle' which may estimate a parton momentum direction and some fraction of its magnitude. The leading-particle momentum is the basis for two-particle conditional distributions on transverse momentum and angles. Those distributions reveal medium modifications to parton production and fragmentation as changes in the single-particle pt spectrum (RAA) and in the fragment-pair relative azimuth distribution (away-side jet disappearance), referred to collectively as jet quenching [13]. The leading-particle approach is based on perturbative concepts of parton hard scattering as a point-like binary interaction and parton energy loss as gluon bremsstrahlung. We can then ask how the medium is modified by parton energy loss and what happens to low-Q² partons, in a Q² regime where the pQCD assumption of point-like interactions breaks down and where the parton may have an effective internal structure. In other words, how can we describe parton dissipation as a transport process, including bulk-medium degrees of freedom? To access low-Q² partons we have developed an alternative analysis method for jet correlations employing autocorrelation distributions which do not require a leading- or trigger-particle concept. The autocorrelation principle is illustrated in Fig. 2. Projections of the two-particle momentum space of 130 GeV Au-Au collisions onto subspaces (η1, η2) and (φ1, φ2) (left panels) indicate that correlations on those spaces are approximately invariant on sum variables ηΣ ≡ η1 + η2 and φΣ ≡ φ1 + φ2, in which case autocorrelations on difference variables η∆ ≡ η1 − η2 and φ∆ ≡ φ1 − φ2 retain nearly all the information in the unprojected distribution [14]. The autocorrelation concept was first introduced to solve the Langevin equation, to extract deterministic information from stochastic trajectories.
In time-series analysis the autocorrelation of a time series x(t) is the lag-dependent average A(τ) ≡ ⟨x(t) x(t + τ)⟩, which retains the correlation information that depends only on the time difference τ. The same principle can be applied to ensemble-averaged two-particle momentum distributions which are approximately invariant on their sum variables [15]. Distributions on angle space (η1, η2, φ1, φ2) can be reduced to joint autocorrelations on difference variables (η∆, φ∆). For example, joint autocorrelations in the right-most two panels of Fig. 2 correspond to the (η1, η2) and (φ1, φ2) distributions in the left-most four panels. Joint autocorrelations for relativistic nuclear collisions retain almost all correlation structure on a visualizable 2D space and provide access to parton fragment angular correlations with no leading-particle condition, sampling a minimum-bias parton distribution. Jet correlations are thus revealed with no a priori jet hypothesis, providing access to the low-Q² partons which serve as Brownian probes of the QCD medium. The p-p reference system The reference system for low-Q² partons in A-A collisions is the hard component of correlations in p-p collisions. The single-particle pt spectrum for p-p collisions can be decomposed into soft and hard components on the basis of event multiplicity dependence [16]. Event multiplicity determines statistically the fraction of p-p collisions containing observable parton scattering (the hard component). Hard components for ten multiplicity classes in the first panel of Fig. 3, obtained by subtracting a fixed soft-component spectrum model S0, are plotted on transverse rapidity yt ≡ ln{(mt + pt)/m0}. The approximately gaussian distributions on yt may be compared with conventional fragmentation functions plotted on the logarithmic variable ξ ≡ ln{Ejet/pt} [17]. Such single-particle structures motivated a study of two-particle correlations on (yt1, yt2). An example in Fig. 3 (second panel) reveals structures at smaller and larger yt. Soft and hard correlation components on yt, interpreted as longitudinal string fragments (smaller yt) and transverse parton fragments (larger yt), produce corresponding structures in joint angular autocorrelations on (η∆, φ∆). In the third panel, string-fragment correlations for unlike-sign pairs are determined by local charge and transverse-momentum conservation (the sharp peak at the origin is from conversion electrons). Minimum-bias parton fragments in the fourth panel produce classic jet correlations, with a same-side (φ∆ < π/2) jet cone at the origin and an away-side (φ∆ > π/2) ridge corresponding to the broad distribution of parton-pair centers of momentum. Similar-quality parton fragment distributions on (η∆, φ∆) can be obtained for both pt values of a hadron pair down to 0.35 GeV/c (parton Q/2 ∼ 1 GeV). The criteria for partons as Brownian probes are 1) Q² large enough that resulting hadron correlations are statistically significant and uniquely assigned to parton fragments, and 2) Q² small enough that correlations are significantly modified by local medium dynamics. In the QCD context the medium itself is formed from low-Q² partons. In the low-Q² regime 'partons' may not interact as point color charges, and complex couplings to the medium, e.g., tensor components of the velocity field (Hubble expansion), may be important. Non-perturbative aspects of low-Q² parton collisions should be accessible via low-pt fragment angular autocorrelations and two-particle yt distributions.
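To make the pair-counting construction concrete, the following is a minimal sketch using toy events rather than STAR data; event multiplicities, the injected cluster and all binning choices are illustrative assumptions. Sibling and mixed pairs are histogrammed on the difference variables (η∆, φ∆), and their normalized ratio exposes the injected same-side correlation with no leading-particle condition.

```python
# Minimal sketch (toy events, not STAR data): a joint autocorrelation on
# difference variables estimated by pair counting, comparing sibling pairs
# (same event) to mixed pairs (different events).
import numpy as np

rng = np.random.default_rng(1)

def toy_event(n=100):
    """Uniform (eta, phi) background plus a small correlated cluster."""
    eta = rng.uniform(-1, 1, n)
    phi = rng.uniform(-np.pi, np.pi, n)
    eta[:5] = 0.1 * rng.standard_normal(5)   # 'jet-like' cluster at the origin
    phi[:5] = 0.1 * rng.standard_normal(5)
    return eta, phi

def pair_hist(ev_a, ev_b, bins, exclude_self=False):
    eta_d = ev_a[0][:, None] - ev_b[0][None, :]
    phi_d = (ev_a[1][:, None] - ev_b[1][None, :] + np.pi) % (2 * np.pi) - np.pi
    if exclude_self:                          # drop i == j self-pairs
        mask = ~np.eye(len(ev_a[0]), dtype=bool)
        eta_d, phi_d = eta_d[mask], phi_d[mask]
    h, _, _ = np.histogram2d(eta_d.ravel(), phi_d.ravel(), bins=bins)
    return h

bins = [np.linspace(-2, 2, 21), np.linspace(-np.pi, np.pi, 21)]
events = [toy_event() for _ in range(200)]
sib = sum(pair_hist(e, e, bins, exclude_self=True) for e in events)
mix = sum(pair_hist(events[i], events[i + 1], bins) for i in range(len(events) - 1))

ratio = (sib / sib.sum()) / (mix / mix.sum())   # sibling/mixed pair ratio
print("same-side bin near (0, 0): r =", round(ratio[10, 10], 3))  # > 1
```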
pt fluctuations and prehadronic temperature/velocity structure Event-wise pt fluctuations generally result from local event-wise changes in the shape of the single-particle pt spectrum, as illustrated in Fig. 4 (first panel). In each collision, a distribution of 'source' temperature and/or velocity on (η, φ) determines the local parent pt spectrum shape. Each hadron pt samples a spectrum shape determined by the sample location, as shown in Fig. 4 (second panel). The local parent shape can be characterized schematically by parameter β(η, φ), interpreted loosely as 1/T or v/c for the local pre-hadronic medium. Variation of either or both parameters relative to an ensemble mean results in pt fluctuations. A similar situation is encountered in studies of the cosmic microwave background (CMB), as shown in Fig. 4 (third panel) [18]. The temperature distribution β on the unit sphere is represented by the microwave power density (a local spectrum integral rather than a mean). The β(θ, φ) structure for that single event is directly observable due to large photon numbers. In contrast, for a single heavy ion collision as in Fig. 4 (fourth panel) the parent distribution is sparsely sampled by ∼ 1000 final-state hadrons, and parent properties are not accessible on an event-wise basis. Interpreting pt fluctuations has two aspects: 1) study equivalent two-particle number correlations on pt or yt, which reveal medium modification of the two-particle parton fragment distribution; those correlations are directly related to a distribution on (β1, β2) sensitive to in-medium parton dissipation; 2) invert the scale or bin-size dependence of pt fluctuations to obtain pt autocorrelations on (η, φ) which reveal details of the event-wise β(η, φ) distribution. We first consider properties of β(η, φ) as a random variable and its relation to two-particle correlations on pt or yt. We then employ pt autocorrelations derived from pt fluctuations to infer aspects of the β(η, φ) distribution which depend only on the separation of pairs of points on (η, φ). Parton Dissipation in the A-A Medium pt fluctuations can be related to a 1D distribution on temperature/velocity parameter β and a corresponding two-point distribution on (β1, β2). Each entry of those distributions corresponds to an event-wise pt spectrum in a single bin or pair of bins on (η, φ). The frequency distribution on β represents variation of the single-particle pt spectrum shape. For gaussian-random fluctuations the relative variance of the β distribution is σβ²/β0² ≡ 1/n, where n is the exponent of the Lévy distribution A/[1 + β0(mt − m0)/n]^n describing the average pt spectrum shape [19]. The shape of the single-particle spectrum is thus related to the event-wise temperature/velocity distribution. Other aspects of shape determination, such as collective radial flow, also contribute to exponent n. We therefore consider the two-point distribution on (β1, β2). Given the correspondence between the fluctuation distribution on β and the shape of the single-particle spectrum on pt, we seek the relation between the distribution on (β1, β2) and the shape of the two-particle distribution on (pt1, pt2). The distribution on (β1, β2) provides information about the correlation structure of event-wise β distributions.
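The quoted relation σβ²/β0² = 1/n can be checked directly: if β is Gamma distributed with mean β0 and shape parameter n, the mixture of Boltzmann spectra exp{−β(mt − m0)} integrates exactly to the Lévy form [1 + β0(mt − m0)/n]^−n. A minimal numerical sketch (our illustration; parameter values arbitrary):

```python
# Minimal sketch (our illustration, not from the paper): a Gamma-distributed
# inverse slope beta with mean beta0 and shape n has relative variance 1/n,
# and the ensemble-averaged Boltzmann spectrum exp(-beta*(mt - m0)) equals
# the Levy form (1 + beta0*(mt - m0)/n)**(-n) quoted in the text.
import numpy as np

rng = np.random.default_rng(2)
beta0, n = 4.0, 12.0                       # 1/GeV and Levy exponent (illustrative)
betas = rng.gamma(shape=n, scale=beta0 / n, size=200_000)
print("relative variance:", betas.var() / beta0**2, "vs 1/n =", 1 / n)

mt_m0 = np.linspace(0.0, 2.0, 9)           # mt - m0 in GeV
mixture = np.exp(-np.outer(betas, mt_m0)).mean(axis=0)   # averaged spectra
levy = (1 + beta0 * mt_m0 / n) ** (-n)
print("max deviation:", np.max(np.abs(mixture - levy)))  # Monte Carlo precision
```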
The two-particle Lévy distribution on (pt1, pt2), constructed as a Cartesian product of two single-particle distributions with Lévy exponent n, represents a mixed-pair reference distribution (pairs drawn from different but similar events). We can also define a two-particle object Lévy distribution representing sibling pairs (pairs formed within single events), with exponents nΣ and n∆ representing variances on sum and difference axes (βΣ, β∆). The ratio of object and reference distributions reveals a saddle-shaped structure whose curvatures measure temperature/velocity correlations on (η, φ). Ratios of sibling to mixed pair densities for 130 GeV Au-Au collisions are shown in Fig. 5 (first two panels) plotted on variable X(pt) [20]. Those panels are dominated by a Lévy saddle, a 2D manifestation of two-particle pt spectrum shape variation due to velocity and temperature fluctuations in the parent distribution. The saddle is an intermediate shape in the dissipation process; its curvatures reflect the correlation structure of the (β1, β2) distribution, especially its covariance as discussed in [20]. The saddle curvatures on sum and difference variables, measured by 1/nΣ − 1/n and 1/n∆ − 1/n, represent the variance excesses beyond independent-particle production on those axes. The integral of correlations on (pt1, pt2), measured by the saddle-curvature difference 1/nΣ − 1/n∆, is equivalent to pt fluctuations measured in the corresponding detector acceptance [6]. With increasing Au-Au centrality the curvature on the difference axis increases strongly, while that on the sum axis approaches zero [20]. More recently, we have transitioned from the per-pair correlation measure r − 1 plotted on variable X(pt) to the per-particle density ratio ∆ρ/√ρref plotted on transverse rapidity yt. We wish to follow, within a single context, the transition from parton fragment distributions in elementary collisions to correlations from parton dissipation in a bulk medium. Fig. 5 (last two panels) shows ∆ρ/√ρref on (yt1, yt2) for peripheral and central Au-Au collisions at 200 GeV. The logarithmic yt interval [1, 4.5] corresponds to linear pt ∼ [0.15, 6] GeV/c. Peripheral collisions produce a 2D minimum-bias parton fragment distribution peaked at yt ∼ 2.5 (pt ∼ 1 GeV/c), similar to p-p collisions but without small-yt correlations from string fragmentation. As centrality increases the fragment distribution is transported to smaller yt and approaches a shape corresponding to the Lévy saddle on X(pt) × X(pt). In this format we can study the transition with A-A centrality between two extreme cases: 1) in vacuo distributions of string and parton fragments and 2) gaussian-random variation of β on (η, φ) for a nearly-equilibrated system. Parton dissipation in the A-A bulk medium is represented by the transition between those extremes. pt fluctuations and pt autocorrelations The previous section describes pt fluctuations in terms of two-particle number densities on (pt1, pt2) or its logarithmic equivalent (yt1, yt2), the issue being modification of the two-particle parton fragment distribution with changing A-A centrality. One can also express pt fluctuations in terms of two-particle pt distributions on (η, φ) which reveal different aspects of the underlying two-particle number distribution on vector momentum. This section describes a procedure to determine the correlation structure of the β(η, φ) distribution as a temperature/velocity distribution on the prehadronic medium.
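Before turning to that procedure, a minimal sketch (assumed normalizations, not STAR code) of two ingredients used above: the conversion from the per-pair measure r − 1 to the per-particle density ratio ∆ρ/√ρref, taking ∆ρ ≡ ρsib − ρref, and the transverse rapidity map yt (pion mass assumed).

```python
# Minimal sketch (assumed normalizations): with drho = rho_sib - rho_ref and
# r = rho_sib / rho_ref, the per-particle density ratio is algebraically
# drho / sqrt(rho_ref) = sqrt(rho_ref) * (r - 1), evaluated bin by bin.
import numpy as np

def per_particle(sib_hist, mix_hist):
    """drho/sqrt(rho_ref) from sibling and mixed-pair densities."""
    rho_ref = mix_hist                        # mixed-pair reference density
    r = sib_hist / np.maximum(mix_hist, 1e-12)
    return np.sqrt(rho_ref) * (r - 1.0)

def yt(pt, m0=0.13957):
    """yt = ln((mt + pt)/m0), mt = sqrt(pt^2 + m0^2); pion mass, GeV/c."""
    mt = np.hypot(pt, m0)
    return np.log((mt + pt) / m0)

# the logarithmic interval [1, 4.5] ~ linear pt in [0.15, 6] GeV/c
print(np.round(yt(np.array([0.15, 1.0, 6.0])), 2))
```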
Fluctuations in bins of a given size or scale are determined by two-particle correlations with characteristic lengths less than or equal to the bin scale. By measuring fluctuation magnitudes as a function of bin size one can recover some details of the two-particle correlation structure: those aspects which depend on the separation of pairs of points, not on their absolute positions. The relation between fluctuations and correlations is given by the integral equation [15] ∆σ²pt:n(m δη, n δφ) = 4 Σ_{k,l=1}^{m,n} δη δφ K_{mn;kl} ∆ρ(pt:n; k δη, l δφ)/√ρref(n)  (1) with kernel K_{mn;kl} ≡ [(m − k + 1/2)/m] · [(n − l + 1/2)/n] representing the 2D macrobin system, where ∆σ²pt:n is a variance excess at macrobin scale (m δη, n δφ) and ∆ρ(pt:n; η∆, φ∆)/√ρref(n) is an autocorrelation density ratio. That equation can be inverted numerically to obtain the pt autocorrelation (illustrated in the sketch at the end of this section). pt autocorrelations can also be determined directly by pair counting. In Fig. 7 the peripheral Au-Au result from the previous section (first panel) is compared to the minimum-bias p-p result (second panel) and to p-p collisions with nch ≥ 9 (third panel). The last panel shows the charge-dependent (like-sign minus unlike-sign pairs) pt autocorrelation for the same event class, reflecting charge-ordering along the jet thrust axis during parton fragmentation. This is the first determination of pt correlations in p-p collisions. Local velocity structure and same-side recoil Whether derived from pair counting or from fluctuation inversion, the resulting pt autocorrelations can be separated into several components. We first subtract multipoles on azimuth (azimuth sinusoids independent of pseudorapidity), revealing structure associated with parton scattering and fragmentation. Fig. 8 shows the resulting pt autocorrelation for 20-30% central Au-Au collisions at 200 GeV (first panel) and a three-component model fit to that distribution (second panel) including a same-side (φ∆ < π/2) positive peak, a same-side negative peak and an away-side (φ∆ > π/2) positive peak. The fit is excellent, with residuals at the percent level. The third panel shows the result of subtracting the positive same-side model peak (representing parton fragments) from the data in the first panel. The shape of the negative same-side peak is very different from that of the positive peak; there is thus negligible systematic coupling in the fit procedure. The fourth panel shows the data distribution of the third panel plotted in a cylindrical format, suggesting an interpretation in terms of temperature/velocity correlations. Histogram values of the pt autocorrelation effectively measure correlations (covariances) of blue or red shifts of local pt spectra relative to the ensemble mean spectrum at pairs of points separated by (η∆, φ∆). The negative same-side peak can therefore be interpreted as a systematic red shift of local pt distributions adjacent to the positive fragment peak. The red shift can in turn be interpreted as recoil of the bulk medium in response to stopping the parton partner of the observed parton (the positive same-side peak). This detailed picture of parton dissipation, stopping and fragmentation in A-A collisions, including the recoil response of the dissipative bulk medium suggested in the fourth panel, is accessed for the first time with joint pt autocorrelations.
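As promised above, a numerical sketch of the inversion of Eq. (1). The relation is linear in the autocorrelation, so in the noise-free case a direct linear solve suffices; bin counts and the toy autocorrelation are arbitrary choices here, and real, statistically noisy fluctuation data would require smoothing or regularization.

```python
# Minimal numerical sketch of Eq. (1) (our illustration): build the linear
# kernel relating the pt autocorrelation A(k, l) on microbins to the variance
# excess at each macrobin scale (m, n), then invert the linear system.
import numpy as np

M, N = 12, 12                           # microbins on eta and phi
d_eta, d_phi = 2.0 / M, 2 * np.pi / N   # microbin widths

rows = []
for m in range(1, M + 1):               # macrobin scales m*d_eta, n*d_phi
    for n in range(1, N + 1):
        K = np.zeros((M, N))
        for k in range(1, m + 1):
            for l in range(1, n + 1):
                K[k - 1, l - 1] = (m - k + 0.5) / m * (n - l + 0.5) / n
        rows.append(4 * d_eta * d_phi * K.ravel())
F = np.array(rows)                      # forward kernel, shape (M*N, M*N)

# toy autocorrelation: a same-side 2D gaussian peak on (eta_D, phi_D)
k, l = np.meshgrid(np.arange(M) * d_eta, np.arange(N) * d_phi, indexing="ij")
A_true = np.exp(-(k**2 + l**2) / 0.5).ravel()

dsig2 = F @ A_true                      # fluctuation scale dependence
A_rec = np.linalg.solve(F, dsig2)       # inversion recovers the autocorrelation
print("max reconstruction error:", np.abs(A_rec - A_true).max())
```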
Reconstructing Event-wise Temperature/Velocity Structure We now consider the relation of pt autocorrelations to individual collision events. In Fig. 9 we repeat the WMAP CMB distribution of microwave power on the unit sphere, picturing a single Big Bang 'event' which has a large statistical depth and can therefore be directly observed [18]. Information relevant to cosmological theory is extracted as a power spectrum on polar angle (second panel), formally equivalent (within a Fourier transform) to an autocorrelation according to the Wiener-Khinchine theorem. In some studies, CMB angular autocorrelations and cross-correlations have been determined directly [21]. In our study of heavy ion collisions we obtain angular pt autocorrelations as in the third panel. Due to sparse sampling we cannot directly visualize the temperature/velocity structure of individual collision events as for the CMB survey. The local microwave power density of the CMB survey is analogous to local pt in an Au-Au collision. For individual collisions, and especially for smaller bin sizes, the event-wise mean values are not significant. However, given pt autocorrelations we can simulate event-wise velocity/temperature distributions. We estimate the number of hard parton scatters within the STAR acceptance in a central Au-Au collision as 20-40, based on an analysis of p-p collisions [16]. Combining that frequency estimate with shape information from the autocorrelation, and introducing some statistical variation of peak structure about the autocorrelation mean value, we can produce simulated events as shown in Fig. 9 (fourth panel): distributions on primary angle variables (η, φ), whereas the autocorrelation is on difference variables (η∆, φ∆). This exercise illustrates that while Au-Au collisions at RHIC may be locally equilibrated prior to kinetic decoupling, they remain highly structured due to copious parton scattering which is not fully dissipated. Access to that structure requires pt autocorrelations in addition to angular number autocorrelations on (η, φ) to provide the full picture.
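The Wiener-Khinchine equivalence invoked above is easy to demonstrate on a one-dimensional periodic toy signal (our illustration, not the WMAP or STAR pipeline): the autocorrelation obtained by direct lag averaging matches the inverse Fourier transform of the power spectrum.

```python
# Minimal sketch (textbook relation): Wiener-Khinchine on a periodic toy
# "event" on azimuth. The autocorrelation from direct lag averaging equals
# the inverse FFT of the power spectrum.
import numpy as np

rng = np.random.default_rng(3)
phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
signal = np.cos(2 * phi) + 0.3 * rng.standard_normal(phi.size)  # sinusoid + noise

# direct circular autocorrelation by lag averaging
auto_direct = np.array([np.mean(signal * np.roll(signal, s))
                        for s in range(phi.size)])

# via the power spectrum (Wiener-Khinchine)
power = np.abs(np.fft.fft(signal)) ** 2 / phi.size**2
auto_fft = np.fft.ifft(power * phi.size).real

print(np.allclose(auto_direct, auto_fft))   # True
```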
Energy dependence of pt fluctuations and parton scattering Given this close connection between parton scattering and fluctuations, the collision-energy dependence of pt fluctuations may reveal previously inaccessible parton dynamics at lower collision energies. In Fig. 10 (first panel) we show the centrality dependence (ν measures the mean participant path length in nucleon diameters) of pt fluctuations for four RHIC energies and a summary (crosshatched region) of SPS fluctuation measurements at 12.6 and 17.3 GeV [22], all at the full STAR acceptance (CERES measurements are extrapolated). In the second panel the pseudorapidity scale dependence of fluctuations at full azimuth acceptance is shown for central collisions at six energies. Extrapolation of the CERES data in the first panel is illustrated by the dashed lines at the bottom of the second. Fluctuation measure ∆σpt:n is related to the variance difference in Eq. (1) by ∆σ²pt:n ≡ 2σ̂pt ∆σpt:n, with σ̂²pt the single-particle pt variance. To good approximation ∆σpt:n ≃ Φpt, and both are per-particle fluctuation measures. Φpt was used for the CERES fluctuation measurements. For either measure we observe a dramatic increase in pt fluctuations from SPS to RHIC energies. The centrality dependence in the first panel suggests that fluctuations for p-p and peripheral A-A collisions saturate near 60 GeV, whereas there is a monotonic increase for the more central collisions. The scale dependence in the second panel illustrates how measurements with different detector acceptances are related. Measurements over common scale intervals should correspond. At RHIC energies we have demonstrated that pt fluctuations are dominated by fragments from low-Q² parton collisions. The energy dependence of ∆σpt:n or Φpt is shown in the third panel of Fig. 10, plotted vs √sNN. We observe that pt fluctuations vary almost linearly with log{√sNN/10.5 GeV} (solid curve in that panel), suggesting a threshold for observable parton scattering and fragmentation near 10 GeV. Fluctuation measurements based on Σpt ≃ √{∆σ²pt:n/(n̄ch p̄t²)} [22] appear to contradict the results described here, implying instead negligible energy dependence of pt fluctuations from SPS to RHIC. We observe that nuclear collisions at RHIC are dominated by local temperature/velocity structure from hard parton scattering. Σpt is a per-pair measure which averages the local pt correlation structure dominating RHIC collisions over the entire detector acceptance, resulting in an apparent reduction of correlations with increasing A-A centrality as 1/Nparticipant (per the central limit theorem) and consequent insensitivity to contributions from hard scattering. We want to study separately the changes in pt production (T) and in the correlation structure of that produced pt (δT) prior to hadronization. Σpt by construction estimates relative temperature fluctuations of the form δT/T. It thus divides the structure problem by the production problem, greatly decreasing sensitivity to each. Summary We have demonstrated that low-Q² partons, accessed here for the first time by novel analysis techniques including joint autocorrelations, serve as Brownian probes of A-A collisions, being the softest detectable dynamical objects which experience QCD interactions as color charges. Our analysis of p-p correlations provides an essential reference for A-A collisions. Inversion of the scale dependence of pt fluctuations provides the first access to pt autocorrelations, which reveal a complex parton dissipation process in A-A collisions relative to p-p collisions. We observe possible evidence for bulk-medium recoil in response to parton stopping. We also observe strong energy dependence of pt fluctuations, which is to be expected given the dominant role of scattered partons in driving those fluctuations. This work was supported in part by the Office of Science of the U.S. DoE under grant DE-FG03-97ER41020.
Vaccine hesitancy about the HPV vaccine among French young women and their parents: a telephone survey Background The human papillomavirus (HPV) vaccine reduces the burden of cervical and other cancers. In numerous countries, a slow uptake of this vaccine persists, calling for a better understanding of the structural factors leading to vaccine acceptance. We aimed to assess attitudes toward HPV vaccination among its intended public and to explore their specific characteristics. Methods A random cross-sectional telephone survey of the French general population provided data from a sample of 2426 respondents from the target public: parents of young women, and young women aged 15-25 themselves. We applied cluster analysis to identify contrasting attitudinal profiles, and logistic regressions with a model averaging method to investigate and rank the factors associated with these profiles. Results A third of the respondents had never heard of HPV. However, most of the respondents who had heard of it agreed that it is a severe (93.8%) and frequent (65.1%) infection. Overall, 72.3% of them considered the HPV vaccine to be effective, but 54% had concerns about its side effects. We identified four contrasting profiles based on their perceptions of this vaccine: informed supporters, objectors, uninformed supporters, and those who were uncertain. In multivariate analysis, these attitudinal clusters were the strongest predictors of HPV vaccine uptake, followed by attitudes toward vaccination in general. Conclusions Tailored information campaigns and programs should address the specific and contrasting concerns about HPV vaccination of both young women and their parents. Introduction The links between human papillomavirus (HPV) infections and some forms of cancer have been widely reported in the literature [1-4]. Several high-risk types of HPV are key risk factors for cancers in adults, including female genital cancers (i.e., cervical, vulvar and vaginal cancer), as well as anal cancers and head and neck cancers in men and women [1,3]. The primary HPV-related cancer is cervical cancer [5]. HPV types 16 and 18, which cause precancerous lesions and genital cancers, have been found in 71% of cervical cancers [5]. HPV vaccination is a strategic component of the battle to prevent cervical cancers caused by this virus. The World Health Organization (WHO) recommends HPV vaccination combined with screening and education strategies to reduce the impact of these infections on global public health [4-11]. Its implementation appeared to have promise as a means of reducing the burden of cancer. Unfortunately, as of 2019, most estimates showed that vaccination coverage including the last dose remained below 75% in most countries [11]. HPV vaccination rates continue to be suboptimal in many countries including France, where coverage is among the worst in Europe: only 40.7% of 15-year-old girls born in 2005 received a first dose of HPV vaccine [10-14]. To achieve optimal vaccination rates, continued efforts are needed to better understand the factors associated with attitudes toward this vaccination [4,5,9]. Over the past decade, HPV vaccination may have been affected by the rise of vaccine hesitancy (VH) in Europe [15]. In March 2012, the SAGE Working Group on this topic convened to reach a definition of the term, specifying that it is a delay in acceptance, or refusal, of some vaccines despite their availability [16].
According to the SAGE group, VH results from the combination of lack of confidence, complacency, and convenience issues [16]. VH is context- and vaccine-specific, rather than driven only by a general attitude toward vaccination [17]. Various approaches have been proposed to understand factors predictive of vaccination as well as to map the determinants of VH [11, 18-21]. Factors associated with VH such as lack of trust in health authorities, vaccine-hesitant doctors, and the perceived "newness" of vaccines also play a role in HPV vaccination [15,22,23]. The literature suggests the existence of different VH clusters and social differentiation between them, varying with the type of belief, vaccine, and country [24]. A considerable amount of literature has examined attitudes toward HPV infection and HPV vaccination, drawing mainly on surveys among parents [25-27] and young women [19,28]. Two complementary systematic literature reviews have summarized the factors influencing HPV knowledge and vaccine acceptance among young women and their parents and VH in Europe [15,29]. The literature on VH related to HPV vaccination discusses the most prevalent concerns linked to HPV vaccine uptake; these include, but are not limited to, insufficient and inadequate information about HPV vaccination, beliefs that the vaccine causes long-term side effects, perceptions of its effectiveness, and a perceived low risk of HPV/cervical cancer [15,26]. Here, we address hesitancy toward HPV vaccination by examining the case of France, where coverage of this vaccine is among the worst in Europe. Public health authorities in France have recommended HPV vaccination since 2007. Initially intended for girls aged 14-21, the recommendation was extended in 2013 to all girls aged 11-14, when the French High Council for Public Health issued new guidelines. The vaccination schedule requires two to three doses spread over six months, depending on the vaccine chosen (Gardasil® or Cervarix®) and the girl's age. Prescribed by a general practitioner, these vaccines are reimbursed at 65% by the national health insurance. Despite the communication efforts of the health authorities, a 2016 national survey found that more than half of French parents of adolescent girls had negative attitudes toward the HPV vaccine or were uncertain of its benefits [30]. The available research on HPV vaccination has identified several barriers to uptake but has tended to contrast those in favor of this vaccine with those opposed to it, instead of considering the various forms of reluctance. We explore this diversity and analyse whether there are socially differentiated clusters of VH toward HPV vaccination and how they influence vaccination behavior. Our aims were: 1) to explore hesitancy toward this vaccine among young women (the targeted group) and their parents (who often take the decision), by simultaneously considering four perceptions related, respectively, to the disease's severity and frequency and to the vaccine's efficacy and side effects, as well as their possible combinations; 2) to study the potential sociodemographic differences between the profiles associated with the different types of vaccine hesitancy; and 3) to test the extent to which these profiles predict self-reported vaccination behavior.
Study setting and participants' characteristics We used data from the 2016 Baromètre Santé, a national cross-sectional telephone survey addressing health issues in a representative population sample, conducted by the French Public Health Agency (Santé Publique France) [12,30]. Data collection used a computer-assisted telephone interview (CATI) survey that took place between January and July 2016 in mainland France. It used an overlapping dual-frame design of landline and mobile phone numbers, generated randomly from the prefixes allocated by the electronic communications regulatory authority. All households with at least one French-speaking individual aged 15-75 years were eligible. Among other health-related issues, the 2016 questionnaire dealt with HPV vaccination, and the corresponding section targeted two specific categories: on the one hand, parents of at least one girl aged 11-19 years, the intended age category for HPV vaccination in France at the time of the survey, and, on the other hand, young women aged 15-25, who in principle had had access to the vaccine since its introduction in France in 2007. One respondent from each household was selected at random for each landline phone or from eligible mobile phone users. The French national commission for computer data and individual freedom (CNIL) approved the survey. Measures Respondents were asked about their attitude toward vaccination in general (from "very favorable" to "not at all favorable"). To capture the different structural factors involved in attitudes toward the HPV vaccine, we measured reported knowledge and perceptions about this vaccine. Participants were asked whether or not they had ever heard of HPV vaccination and then to agree or disagree (from "Absolutely" to "Not at all") with four assertions related, respectively, to the severity and the frequency of HPV infections and to the effectiveness and potential side effects of the vaccines against it. These questions were also asked of participants who stated that they had not heard of it, to see the extent to which people may endorse attitudes toward an unknown vaccine driven by their attitude toward vaccination in general. The questionnaire also collected data on HPV vaccine uptake: parents reported their daughters' vaccination status, and young women reported their own. Thus, HPV vaccine uptake was evaluated by the answers "yes/no/don't know" of both young women and parents of teenage girls. Finally, the questionnaire collected information about participants' sociodemographic background: gender, age, educational level, and household income. The equivalized household income per month was computed taking into account household size and composition, to estimate participants' standard of living [31]. Statistical analysis Data were weighted so that the distribution of the main sociodemographic characteristics (gender, age, educational level, geographical region, and urbanization level) in the sample matched the national census. Weights were applied to all statistics. First, we analyzed perceptions of HPV infection and vaccination simultaneously, by conducting a cluster analysis to summarize the variety of perceptions reported by participants into contrasting attitudinal clusters toward HPV vaccination. Items measuring agreement were coded from 1 ("Absolutely") to 4 ("Not at all"). These scores were transformed into Z-scores before clustering with a standard agglomerative hierarchical procedure [32].
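As a concrete illustration of this clustering step, here is a minimal sketch using synthetic responses in place of the survey data. Ward linkage and the four-cluster cut are our assumptions (the text specifies only a standard agglomerative hierarchical procedure), and the sketch ignores the "don't know" answers and the survey weights handled in the real analysis.

```python
# Minimal sketch (synthetic data; Ward linkage assumed): Z-score the four
# agreement items (coded 1 "Absolutely" to 4 "Not at all"), cluster with an
# agglomerative hierarchical procedure, and cut the tree at four clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
# toy stand-in for the items: severity, frequency, efficacy, side effects
items = rng.integers(1, 5, size=(2168, 4)).astype(float)

Z = zscore(items, axis=0)                 # standardize each item
tree = linkage(Z, method="ward")          # agglomerative hierarchical clustering
clusters = fcluster(tree, t=4, criterion="maxclust")

for c in range(1, 5):                     # mean item scores per cluster
    print(c, np.round(items[clusters == c].mean(axis=0), 2))
```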
We also investigated the sociodemographic composition of the resulting clusters, as well as their association with attitudes toward vaccination in general, using χ² independence tests. Then we examined the factors associated with HPV vaccination status, including sociodemographic indicators, the clusters, and attitude toward vaccination in general. Using a logistic model, we applied a multimodel averaging approach based on the Akaike information criterion to rank the explanatory variables by their relative importance. This approach estimates all possible models, given the explanatory variables introduced, and computes the final model as the weighted average of all parameters and standard errors from all possible models [33]. We used partial Nagelkerke R² values [34] to quantify the partial contributions of each explanatory variable to the dependent variable [35], and relative weights (values between 0 and 1) to classify the explanatory factors according to the level of evidence of an actual relation to the dependent variable.
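A minimal sketch of this multimodel averaging step on synthetic data (predictor names are illustrative; statsmodels assumed): every predictor subset is fitted, AIC values are converted to Akaike weights, and each variable's relative importance is the summed weight of the models containing it.

```python
# Minimal sketch (synthetic data, assumed predictor names): AIC-based
# multimodel averaging for a logistic model, reporting relative importance
# weights in [0, 1] for each explanatory variable.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "cluster_informed": rng.integers(0, 2, n),
    "age_20_25": rng.integers(0, 2, n),
    "pro_vaccine": rng.integers(0, 2, n),
}).astype(float)
logit_p = -1 + 1.2 * df.cluster_informed + 0.8 * df.pro_vaccine
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

predictors = list(df.columns)
fits = []
for k in range(len(predictors) + 1):                 # all predictor subsets
    for subset in itertools.combinations(predictors, k):
        X = df[list(subset)].assign(const=1.0)       # intercept-only if empty
        res = sm.Logit(y, X).fit(disp=0)
        fits.append((set(subset), res.aic))

aic = np.array([a for _, a in fits])
w = np.exp(-(aic - aic.min()) / 2)
w /= w.sum()                                         # Akaike weights
for p in predictors:                                 # summed weight per variable
    print(p, round(sum(wi for (s, _), wi in zip(fits, w) if p in s), 3))
```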
Results The sample of the 2016 Baromètre Santé included 15,216 respondents with full interviews (participation rate: 50%). The questions about HPV vaccination concerned 2168 participants from two subgroups: young women (45%; mean age = 20 years, SD = 3) and parents of young women (55%; mean age = 45 years, SD = 6), whose attitudes and behavior toward HPV vaccination we sought to study. Among our participants, the overall self-reported HPV vaccine uptake rate was 35.2% (45.8% for young women aged 15-25 years, and 26.6% among parents who reported the vaccination status of a daughter aged 11-19 years). Clusters of attitudes toward HPV vaccination A third (35.1%) of respondents had not heard of HPV vaccines at the time of the survey (see Table 1). Most respondents nonetheless agreed that HPV infections are severe (93.8%) and frequent (65.1%). Furthermore, 72.3% of respondents considered the HPV vaccine to be effective, although half (54%) reported that it may also cause side effects. The cluster analysis produced four contrasting profiles. Before examining each cluster more closely, two general results must be emphasized. First, despite the gaps in knowledge about HPV vaccination (35% of participants had not heard of this vaccine, with awareness ranging from 0.5% in Cluster 3 to 100% in Cluster 1), there was consensus across the four clusters regarding the severity of HPV infections (between 77.6% and 95.9% of participants considered them absolutely or somewhat severe). Second, the four clusters displayed contrasting opinions about the potential side effects of the vaccines, but in each cluster at least one third of respondents believed that HPV vaccines could cause severe side effects. Participants in the different clusters agreed that HPV infections are serious and the vaccine is effective, but had divided views about the frequency of these infections and about the safety of the vaccine. Cluster 1 comprised 40.6% of participants. All of them had heard of HPV vaccines. Nearly all (95.5%) agreed that HPV infections are severe, and 79.3% perceived HPV as frequent. The vast majority considered HPV vaccines to be effective (97.7%), but more than a third were concerned about possible side effects (37%). We labeled this profile Informed supporters. Respondents in Cluster 2 (23.6% of the sample) were labeled Objectors. Most (88%) had heard of the HPV vaccine and agreed that HPV infections are severe (92.9%). Among them, only 46.1% agreed that HPV infections are frequent, and almost all were concerned about the vaccine's potential side effects (94.2%). We labeled Cluster 3 Uninformed supporters (29.2% of the sample) because 99.5% of respondents in this cluster reported they had not heard of the HPV vaccine. Most of them considered HPV infections to be serious (95.9%), and a large majority agreed that these infections are common (67.4%). According to 86.3% of the respondents in this cluster, HPV vaccines are effective, but 49.4% thought that they might have side effects. Finally, in Cluster 4, only half of the participants (who represented 6.6% of the whole sample) had heard of this vaccine. This cluster concentrated most of the "Don't know" answers and was labeled Uncertain. Among them, 77.6% agreed HPV infections are severe (17.7% did not know) and a third agreed they are frequent (39.5% did not know). The questions concerning their views of the effectiveness of the vaccines and their potential to cause side effects showed high levels of uncertainty (respectively, 53.7% and 54.3% did not know). Characterization of attitudinal clusters toward HPV vaccination Our results showed that parents were more frequently uncertain about HPV vaccination than young women (see Table 2). Fathers were overrepresented among uninformed supporters, while mothers, and especially those aged 25-45, were more frequently objectors. On the contrary, younger women (aged 15-19) were more supportive of this vaccination. Educational level was also strongly correlated with these attitudinal clusters. Objectors had an educational profile close to the average, while informed supporters were more educated (69.4% had completed high school vs 42.2% to 58.6% in the other clusters). Uninformed supporters and uncertain participants were the least educated. Results for household income were similar: objectors had an average profile for household income per consumption unit, while informed supporters were wealthier, and low-income households were overrepresented among the two other clusters. Finally, the majority of objectors were nonetheless favorable to vaccination in general (58.8%), versus 87.6% of informed supporters, 84.8% of uninformed supporters, and 73.2% of the uncertain. Factors associated with reported HPV vaccine uptake In the bivariate analyses, self-reported HPV vaccine uptake was significantly more frequent among informed supporters (52.7%, versus 17.6% to 29.2% for the other clusters) and young women aged 20-25 (52.9%) (see Table 3). This coverage was also lower among both the lowest and the highest educational level categories, while it was weakly correlated with household income level. Finally, HPV vaccination coverage was twice as high among participants who supported vaccination in general as among those who did not. The multimodel averaging approach showed that informed supporters, young women in the 20-25-year-old age range, and participants who were favorable to vaccination in general were the most likely to report HPV vaccination, and the corresponding three variables obtained the highest importance weights in our model (very strong) (Table 4). Once attitudinal profiles were controlled for, we found evidence of only a weak association between educational level and HPV vaccination status, and no evidence of a significant effect of household income. Main results In our study, 35.2% of participants reported HPV vaccine uptake. This result was reasonably close to the actual French national coverage [14].
Combining opinions on the frequency and severity of HPV infections and on HPV vaccination efficacy and side effects, we found four contrasting profiles of attitudes toward this vaccination (informed supporters, objectors, uninformed supporters, and uncertain) among young women and parents of young women. Each profile contained a substantial proportion of participants concerned about potential side effects of the vaccine. These profiles differ mainly according to reported knowledge and perceptions of the risk-benefit balance of vaccination. Informed supporters reported being informed about the HPV vaccine and considered it effective, even though some of them were unsure about the safety of the vaccines. In contrast, Objectors, although they reported being generally informed about the vaccine, considered the disease rather rare and the vaccination not necessarily effective or safe. The other two profiles are characterized by low reported knowledge of the HPV vaccine. However, the Uninformed supporters considered it effective, although they did not share a common perception of its safety. The last group identified, Uncertain, grouped respondents reporting uncertainty about their perceptions of the vaccine. These profiles were also significantly correlated with participants' sociodemographic background. In multivariate analysis, these attitudinal clusters were the strongest predictors of HPV vaccine uptake, but attitudes toward vaccination in general also predicted uptake strongly. Study limitations Before discussing our results, we must acknowledge several limitations of our study. First, this study shares the usual shortcomings of quantitative telephone surveys, including a moderate participation rate (50%). The announcement letter describing the survey and requesting participation did not give any details about the topics to be investigated: thus, there is no reason to suspect that respondents' answers regarding attitudes toward the HPV vaccine and vaccine uptake were correlated with non-participation. In addition, the data were weighted for various factors that are known to often be associated with survey participation. Second, like any data collection based on self-report, this survey is subject to social desirability and recall biases, especially regarding past vaccination of participants' daughters. Attitudes toward HPV vaccination It has been frequently claimed that contemporary VH has been fueled by the very success of vaccination in controlling and eliminating diseases: severe infections that were previously common have almost disappeared, and so people stopped worrying about them (e.g., André, 2003 [38]). In our study, however, young women and their parents were aware of the frequency and severity of HPV infections. This certainly does not mean there are no information problems, as more than a third of respondents reported that they had never heard of HPV vaccines. Information issues have already been identified as the main barrier to this vaccination [15,18,29] and consequently as a key lever for improving it [39]. The other major barrier to HPV vaccination highlighted by previous studies involves concerns about vaccine safety [18,29]. These worries were shared by half of respondents and were quite pervasive, being present in each of the four contrasting attitudinal profiles toward HPV vaccination, including informed supporters, among whom at least one third had such concerns.
Our multivariate analysis also echoed these results, as the objectors (nearly all of whom considered that the HPV vaccine might cause adverse side effects) and the uncertain group were less likely to report HPV vaccination. Many mothers face precisely this dilemma: they know that HPV is dangerous, but they remain uncertain about the vaccine's safety. HPV vaccination & vaccine hesitancy The contrasting attitudinal clusters, based on perceptions related to the specific risks and benefits of HPV vaccination, turned out to be slightly more predictive of vaccination status than attitudes toward vaccination in general. This reflects the specificity of contemporary VH, which is often not guided by a general attitude toward vaccination, but instead takes the specificities of each vaccine and each context into account [17,30]. Nevertheless, general attitudes toward vaccination still play a significant role as a determinant of HPV vaccine uptake. A recent study also supported this result, as previous vaccine refusal for a child, which is a good proxy of this general attitude, remained a significant factor in the decision about HPV vaccination, together with awareness of the vaccine's existence [26]. These general attitudes probably capture some aspects related to people's lack of trust in the health care system and health authorities, which is a systemic issue in contemporary societies and plays an important part in VH [17, 40-42]. Sociodemographic background and HPV vaccination Young women were more supportive of HPV vaccination than their parents. Moreover, among young women, the oldest (those aged 20-25 rather than 15-19 years) were more likely to report complete HPV vaccination: this may result from both the mechanical effect of age (older participants have had more opportunities to be vaccinated during their lifetime) and a more supportive attitude toward this vaccination (in line with Patel et al. 2016 [40]). Among parents, fathers were more frequently uncertain or uninformed supporters, which probably reflects the fact that they are usually much less engaged than mothers in vaccination decisions about their children [43], at least in western cultural contexts where taking care of children's health is considered a mother's duty [44]. Finally, mothers were more frequently objectors. This gender effect has already been observed for other recent vaccines in France (for the H1N1 vaccine, see [45]; for the COVID-19 vaccine, see [46]), and a number of studies conducted in other countries also mentioned higher hesitancy among women for the COVID-19 vaccine [47-49]. A wide range of explanations has been put forward, ranging from a higher tendency toward risk aversion and lower trust in medical institutions to a higher likelihood of encountering vaccine-critical information [44,46]. In the case of HPV vaccination, the campaigns have emphasized its effectiveness in preventing cervical cancer over other HPV-related conditions, leading to errors in the public's risk assessment. In addition, the arguable overlap of science, politics, economics, and beliefs about gender roles that led to the initial focus on women may have had a negative impact on women's confidence in the vaccine [50]. We can hypothesize that women, who often bear reproductive work that is heavily framed by preventive measures, are more likely to develop critical dispositions that allow them to express concerns about these vaccines and to assert their ability to make choices.
The relation between participants' socioeconomic status and their attitudes toward vaccination may depend on both the vaccine considered and the national context [22]. Previous studies have found that socioeconomic status, and especially educational level, is correlated with HPV knowledge [18] and HPV vaccine uptake [27]. In our study, similarly, the informed supporters of HPV vaccination had a higher educational level on average, as well as better household income. Nevertheless, it is worth mentioning that objectors had average profiles for educational attainment and living conditions, while the less educated and lower-income households were overrepresented among the less informed (uninformed supporters and uncertain). In other words, at least in France, although wealthier and more educated people are more likely to support HPV vaccination, objections to it do not result from 'poor people's fears' fueled by a lack of material, social or cognitive resources [43,45]. Conclusion Overall, in 2016, a majority of French people supported HPV vaccination. However, there is still great room for improvement regarding information issues, as a third had never heard of this vaccination, while half shared concerns about its safety. Tailored information campaigns and programs should consider young women and their parents as distinct targets who may have different concerns. Vaccination uptake strongly depends on specific attitudes toward HPV vaccination, which can be enhanced by information campaigns, but also on general attitudes toward vaccination, which involve trust issues.
The Distributional Impact of Inflation in Pakistan: A Case Study of a New Price Focused Microsimulation Framework, PRICES This paper develops a microsimulation model to simulate the distributional impact of price changes using Household Budget Survey data, income survey data and an Input-Output Model. The primary purpose is to describe the model components. The secondary purpose is to demonstrate one component of the model by assessing the distributional and welfare impact of recent price changes in Pakistan. Over the period of November 2020 to November 2022, headline inflation reached 41.5 percent, with food and transportation prices increasing most. The analysis shows that despite large increases in energy prices, the importance of energy prices for the welfare losses due to inflation is limited, because energy budget shares are small and energy inflation is relatively low. The overall distributional impact of recent price changes is mildly progressive, but household welfare is impacted significantly irrespective of households' position along the income distribution. The biggest driver of welfare losses at the bottom of the income distribution was food price inflation, while inflation in other goods and services was the biggest driver at the top. To compensate households for increased living costs, transfers would need to be on average 40 percent of pre-inflation expenditure, assuming constant incomes. Behavioural responses to price changes have a negligible impact on the overall welfare cost to households. Introduction With the resurgence of price inflation, a growing interest in the use of price-related environmental policy such as carbon taxation, and the interaction between these forces and indirect taxation, there is an increasing need to be able to evaluate the distributional impact of policy and economic changes. While there is a large and historic literature in these related fields, much of the work has been undertaken in a disjoint way. This paper describes the development of a framework to simulate the impact of price-related policies, taking Pakistan as a case study. Capéau et al. (2014) and O'Donoghue (2021) are reviews of the use of microsimulation for the simulation of price-related issues. The majority of the papers focus on the distributional impact of indirect taxation (Decoster et al., 2010; O'Donoghue et al., 2018; Harris et al., 2018; Maitino et al., 2017; Symons, 1991).
Modelling prices and price-related policy requires expenditure data, typically included in household budget surveys but not typically available in the income surveys often used in direct tax and transfer focused models such as EUROMOD (Sutherland & Figari, 2013). These models have incorporated price inflation to understand the impact of fiscal drag (Immervoll, 2005; Levy et al., 2010; Süssmuth & Wieschemeyer, 2022). In order to combine expenditure-related policies and income-related policies, statistical matching is often undertaken to link income and expenditure surveys (Decoster et al., 2020). While microsimulation frameworks have been developed for cross-sectional income-related policy, such as EUROMOD (Sutherland & Figari, 2013), and for inter-temporal analysis, such as the LIAM2 framework (De Menten et al., 2014), there is a gap in the sphere of price-related policy. The EUROMOD team developed an indirect tax tool that is able to assess distributional impacts of indirect taxation (Akoguz et al., 2022). This framework however does not link household expenditure information to input-output tables and includes limited behavioural responses. In this paper we describe the development of a modelling framework to simulate alternative price policies, including carbon pricing, and in particular to allow for analyses to be scaled to consider comparative perspectives. Environmental policy as an issue has increased dramatically in importance in recent decades. Transforming prices to reflect the social cost of environmental pollution, such as the social cost of carbon associated with greenhouse gas emissions, incorporates elements of indirect taxation and price changes. In a modelling environment, this means that models need to be able to estimate the level of pollution associated with the production and consumption of goods and to adjust the price of these goods accordingly. As environmental taxes aim to reduce environmental pollution, models assessing their distributional impacts should allow for the behavioural response to the policy to be measured (Hynes & O'Donoghue, 2014). Taxes on polluting goods can be similar to indirect taxes such as VAT or excise duties on fuels. However, pure environmental taxes are slightly different, as they are levied not in proportion to value or volume but in relation to the amount of pollution that is produced. "The aim of environmental taxation is to factor environmental damage, or negative externalities, into prices in order to steer production and consumption choices in a more eco-friendly direction." In summary, carbon taxes:
- are fiscal instruments aimed at incentivising both reduced fuel use and substitution from dirtier fuels to cleaner fuels;
- are levied in proportion to the amount of pollution that is produced;
- are a popular tool with governments in many countries.
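The distinction between the three tax bases can be made concrete with a stylised fuel purchase (all numbers are illustrative assumptions; the 2.3 kg CO2 per litre figure is typical for petrol):

```python
# Minimal sketch (illustrative numbers): three tax bases applied to one
# litre of fuel, ad valorem (VAT, share of value), volumetric excise
# (per litre), and a carbon tax (per kg of CO2 emitted on combustion).
pre_tax_price = 1.00          # per litre, in local currency (assumed)
vat_rate = 0.17               # ad valorem: proportional to value
excise_per_litre = 0.30       # volumetric: proportional to volume
co2_per_litre = 2.3           # kg CO2 per litre, typical for petrol
carbon_price = 0.05           # per kg CO2 (assumed)

vat = vat_rate * pre_tax_price
excise = excise_per_litre
carbon_tax = carbon_price * co2_per_litre     # proportional to pollution

print(f"consumer price: {pre_tax_price + vat + excise + carbon_tax:.2f}")
print(f"  of which carbon tax: {carbon_tax:.2f}")
```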
Achieving a decarbonised society requires changes in many areas, from transportation, through home energy use, industrial restructuring, electricity generation, dietary change and land use change, to alternative energy use. Many of these changes impose a cost or burden that is higher for some groups in society than others, resulting in policies that are environmentally positive but distributionally regressive. These differential costs at a personal level may result in lower uptake or implementation. At a societal level, they may result in negative political responses and may stall political momentum. Without addressing these issues, global decarbonisation goals may be delayed, and policies may be less ambitious, have lower impact, or have disproportionate or differential costs. A socially just transition towards low-carbon economies therefore requires comprehensive tools that can assess the extent to which the burden of policy is disproportionally distributed across individuals. This paper develops a data analytical tool to improve the design of policies that can deliver environmental goals whilst reducing the distributional impact, ensuring a just transition. Data analytic tools such as microsimulation models are well developed in OECD countries, and increasingly in developing countries, to facilitate the design of social policies. They have been used to improve policy's poverty effectiveness and work incentives, and form part of the day-to-day toolkit of policy makers. While tools have been developed to assess the distributional impact of some simple environmental policies, there has been limited effort to develop integrated policy analytic tools that can both design environmental policies and design sophisticated mitigating measures. Existing scalable microsimulation models generally focus on price changes due to indirect taxation (Amores et al., 2023) or on environmental taxation and revenue recycling (Feindt et al., 2021; Steckel et al., 2021), but they ignore the interplay between indirect taxation and environmental taxes, do not estimate behavioural responses, and are limited to simple revenue recycling schemes. A number of general equilibrium models have been developed to assess the joint distributional impact of carbon pricing and revenue recycling (Rausch et al., 2011; Antosiewicz et al., 2022; Vandyck et al., 2021). These models are however designed to incorporate adjustments made by producers, and generally provide less granular results regarding the differential impacts of price changes across households. The goal of this paper is to develop a novel and scalable analytical framework to assist policy makers in the design of better price-related policy and in particular environmental policy. Our aim is to apply this framework in different continents, at different stages of development and with different priorities in terms of mitigation measures. This paper describes in more detail the development of a model that can jointly simulate the distributional impacts of inflation, indirect taxation, carbon taxation, revenue recycling, and household behavioural responses. We describe the methodological context of environmental taxation microsimulation in section 2, and methodological issues in terms of modelling pollution and data in section 3. The distributional impact of price inflation is simulated in section 4. Section 5 concludes. Theoretical Framework There is a relatively extensive literature on modelling the distributive impact of indirect and environmental taxes.
Indirect taxation, because it is easier to collect than income taxes, often forms a higher share of taxation in developing countries. Atkinson and Bourguignon (1991) found that much of the redistribution in the existing Brazilian system in the 1980s relied on instruments that were less important in OECD countries, where indirect taxes, subsidies and the provision of targeted non-cash benefits (such as public education and subsidised school meals) were found to be more important. Given the important share of total tax revenue provided by indirect taxes and the availability of household budget survey data, the microsimulation modelling of indirect taxation is, and has been for a long time, a focus of developing and transition countries (Harris et al., 2018) such as Pakistan (Ahmad and Stern, 1991), Hungary (Newbery, 1995), Romania (Cuceu, 2016), Serbia (Arsić & Altiparmakov, 2013), Uruguay (Amarante et al., 2011), Guatemala (Castañón-Herrera & Romero, 2012) and Chile (Larrañaga et al., 2012).

While most papers have focused on single-country analyses, there is an increasing literature looking at indirect taxes in a comparative context (O'Donoghue et al., 2004; Decoster et al., 2009, 2010, 2011; Amores et al., 2023). Many of the papers in the literature focus on indirect taxes only, given that income data in household budget surveys is not always of sufficient quality to model direct taxes. In some cases, as in the UK, it is possible to simulate both direct and indirect taxation (Redmond et al., 1998), but more often than not there is a need to statistically match data from a budget survey into an income survey in order to model both direct and indirect taxes (Maitino et al., 2017; Akoguz et al., 2022). Picos-Sánchez and Thomas (2015) undertook research looking at joint direct and indirect tax reform in a comparative context.

Microsimulation analyses have also been used to undertake distributional assessments of other environmental policies such as tradable emissions permits (Waduda, 2008), taxes on methane emissions from cattle (Hynes et al., 2009) and taxes on nitrogen emissions (Berntsen et al., 2003). Cervigni et al. (2013) have analysed the distributional impact of wider low-carbon economic development policies. Other types of distributional impact analysis include the simulation of 'what if' scenarios, such as the impact of an action on emissions if consumption patterns are changed. For example, Alfredsson (2002) utilised a microsimulation model to undertake a life-cycle analysis of alternative 'greener' consumption patterns that incorporates energy use and carbon dioxide (CO2) emissions connected with the whole production process (up to the point of purchase).

2.1. Structure

In this paper, we describe a framework that can simulate:
- price inflation;
- indirect taxation;
- carbon prices and carbon taxation, together with revenue recycling.

For the purposes of this paper, we test the framework considering the distributional impact of price inflation, and plan to develop other analyses in due course in relation to indirect taxation and carbon taxation. The structure of the microsimulation modelling framework used in this model is described in Figure 1. At its core is the ability to incorporate mechanisms that influence prices in these three dimensions. It is similar to an indirect tax model, containing input expenditure data, a policy calculator and the consumption behavioural response.
At its core is the capacity to model price inflation through changes in the consumer price index. An indirect tax policy model is included containing VAT, excise duty and ad valorem tax rates.

Methodology and Data

This section describes the methodological approach and dataset used in the framework described above.

3.1. Data

The dataset is constructed from two main data sources, the World Input-Output Database (WIOD) and a Household Budget Survey (HBS). In this paper we utilise the Pakistan HBS for 2018 to undertake a simulation of the distributional analysis of various price-related issues. There is no income in this survey and so we rank households on total expenditure. The survey does not record alcohol purchases by households, which are very low in Pakistan. We use the WIOD data released in 2016 and its environmental extension reflecting industry-level CO2 emissions (Arto et al., 2020). The modelling framework allows for two approaches to computing the industry-level CO2 emissions. A first approach is to use an emission vector provided by Arto et al. (2020). This emission vector includes one non-negative entry for each industry in each country, and includes process-based and fugitive emissions. A second approach is to compute the CO2 emissions emitted by energy industries in each country and to allocate them across industries according to their energy use. This approach allows us to focus on energy-related emissions only, as is commonly done in practice. The WIOD maps monetary flows across 56 industries in 44 countries. The use of Multi-regional Input-Output (MRIO) models reflects the state of the art in the estimation of GHG emissions associated with household consumption (Feindt et al., 2021; Steckel et al., 2021).

We start by describing WIOD and the Input-Output analysis underpinning carbon intensity estimates and the value chain drivers of price inflation. Next, we describe how the WIOD is combined with HBS data to estimate household carbon tax burdens. We then describe the imputation of expenditure patterns into datasets containing more detailed information on household income sources and tax liabilities, such as EU-SILC. Last, we briefly describe the consumption model used to derive households' behavioural response.

3.2. Input-Output Model

Modelling the impact of energy price changes on households' cost of living incorporates both a direct impact on the price of energy consumed by households, and an indirect impact associated with price changes of inputs used in the production of other goods and services consumed by households.² A change in the price of inputs used in the production process impacts the producer price of goods and services. This price increase will be (partially) passed through to consumers. Here, we focus on energy price changes due to carbon pricing.

In order to capture the indirect effect of producer price changes and carbon taxes, the transmission of price changes through the economy to the household sector is modelled using an input-output (IO) table, developed initially by Leontief (1951). An extensive treatment of IO analysis is provided by Miller and Blair (2009). Similar analyses conducted internationally include O'Donoghue (1997) for Ireland, Gay and Proops (1993) in the UK, Casler and Rafiqui (1993) and Sager (2019) in the USA, and Feindt et al. (2021) for the EU.
² Indirect emissions can be divided into emissions produced by imports, which are typically not taxable, and emissions produced by domestically produced goods and services, which are likely to be taxable. Similarly, direct emissions can be divided into emissions from purchased energy and own-produced energy, such as harvested firewood.

An IO table contains information about the sectors of an economy, mapping the flows of inputs from one sector to another or to final demand (that consumed by households, NGOs, governments, or exported, etc.). Output in each sector has two possible uses: it can be used for final demand or as an intermediate input for other sectors. In an n-sector economy, final demand for sector i's produce is denoted by $f_i$ and the output of sector i by $x_i$. Intermediate input from sector i into sector j is defined as $a_{ij} x_j$, where the input coefficients $a_{ij}$ are fixed in value. In other words, $a_{ij}$ is the quantity of commodity i that is required as an input to produce a unit of output j. Output can therefore be seen as the sum of intermediate inputs and final demand as follows:

$$x_i = \sum_j a_{ij} x_j + f_i$$

or in matrix terminology:

$$x = Ax + f$$

Combining the input coefficients to produce an $(I - A)$ technology matrix and inverting, the Leontief inverse $(I - A)^{-1}$ is produced, which gives the direct and indirect inter-industry requirements for the economy:

$$x = (I - A)^{-1} f$$

This can be expanded to produce the following:

$$(I - A)^{-1} = I + A + A^2 + \dots + A^m + \dots$$

As A is a non-negative matrix with all elements less than 1, $A^m$ approaches the null matrix as m gets larger, enabling us to get a good approximation to the inverse matrix. It thus expands output per sector into its components of final demand: $f$, the inputs $Af$ needed to produce it, the inputs $A^2 f$ needed to produce those inputs, and so on, for each good.

If a tax at rate t is applied and is passed on in its entirety to consumers, then the tax on goods consumed in final demand is $tf$, the tax on the inputs to these goods is $tAf$, the tax on inputs to these is $tA^2 f$ and so on. Combining, total tax is

$$T = t(I + A + A^2 + \dots)f = t(I - A)^{-1} f$$

The original IO table contains information on three fuel sectors: Mining and quarrying; Manufacture of coke and refined petroleum products; and Electricity, gas, steam and air conditioning supply. Because of the focus on the differential effect of price changes on individual fuels such as petrol, diesel, gas and other fuels, this component of the IO table is decomposed into its constituent parts.

In this paper we utilise Multi-regional Input-Output (MRIO) tables from the WIOD. MRIO tables extend the Input-Output (IO) methodology introduced by Leontief (1951). MRIO datasets consist of a matrix mapping the monetary flows between m sectors and n regions, $Z \in \mathbb{R}^{(m \cdot n) \times (m \cdot n)}$, with single entries $z^{1,1}_{2,2}$ representing the monetary flows from sector 1 in region 1 into sector 2 in region 2, and a final demand vector $f \in \mathbb{R}^{(m \cdot n)}$. In an m-sector economy, the final demand for sector i in region 1 is denoted by $f_i^1$ and sector i's output in region 1 is $x_i^1$. Intermediate inputs from sector 1 in region 1 into sector 2 in region 2 are denoted by $z^{1,1}_{2,2}$, and $a^{1,1}_{2,2}$ is the input coefficient from sector 1 in region 1 into sector 2 in region 2, given by $a^{1,1}_{2,2} = z^{1,1}_{2,2} / x_2^2$. The technology matrix, $A \in \mathbb{R}^{(m \cdot n) \times (m \cdot n)}$, contains all input coefficients for all sectors in all regions and enables the calculation of the Leontief inverse matrix, $L = (I - A)^{-1}$, providing us with the economy-wide input requirements of output: $x = (I - A)^{-1} f$. In other words, A gives the input needed by a sector in one region from every other sector in all regions to produce one (monetary) unit of output. This can be written as $L = I + A + A^2 + \dots + A^m + \dots$ and gives the output per sector as a component of final demand f.³
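To make the mechanics above concrete, the following is a minimal sketch in Python of the Leontief logic; the three-sector flow matrix, output vector and tax rate are illustrative assumptions, not values from the WIOD.

```python
import numpy as np

# A minimal sketch of the Leontief price pass-through logic described above.
# Z, x and t are hypothetical placeholders, not WIOD data.
Z = np.array([[10.0,  4.0,  6.0],    # inter-industry monetary flows
              [ 5.0, 12.0,  3.0],
              [ 2.0,  6.0,  8.0]])
x = np.array([50.0, 40.0, 30.0])     # gross output per sector
f = x - Z.sum(axis=1)                # final demand = output - intermediate use

A = Z / x                            # input coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1

# Output needed, directly and indirectly, to satisfy final demand:
assert np.allclose(L @ f, x)

# A fully passed-through tax at rate t implies total revenue t * (I - A)^-1 f:
# tax on final goods plus tax cascading through each round of inputs.
t = 0.05
total_tax = t * (L @ f).sum()
print(f"total tax on final demand and inputs: {total_tax:.2f}")
```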
The WIOD includes an environmental extension in the form of a vector of carbon emissions associated with the production of each sector in each region (Arto et al., 2020), allowing the construction of an Environmentally Extended MRIO (EE-MRIO). EE-MRIO models link products to the indirect carbon emissions embedded in the production process. Kitzes (2013) provides a short introduction to environmentally extended Input-Output analysis. Let $F \in \mathbb{R}^{1 \times (m \cdot n)}$ denote the emissions vector, where $F_n$ refers to emissions produced in sector n. Dividing F entry-wise by the corresponding sector's output gives the level of CO2 emissions per monetary unit of each sector's output. This approach, however, does not allow differentiating between emissions due to energy use and other emissions, such as fugitive emissions or process emissions. We therefore compute the carbon intensity of energy industry inputs. Calculating the carbon intensity of the energy industry requires assumptions on the fuel mix used by domestic energy industries. In this version of the model, we approximate energy industries' fuel mix through the average fuel mix across EU energy industries, sourced from UNIDO MINSTAT.⁴ Additionally, to account for differential carbon prices faced by domestic and foreign industries, we differentiate between indirect carbon emissions embedded in domestically produced goods and indirect carbon emissions embedded in imported goods.

To get total carbon emissions per monetary unit of output, we add indirect emissions to direct emissions. Direct emissions are released through the consumption of motor and domestic fuels. As HBS data provides expenditure information only, we estimate the energy volumes consumed by households by dividing expenditure per fuel by its price. To compute the direct emissions, we multiply the quantity of fuel consumed by its carbon intensity factor, taken from the IPCC 2006 Guidelines for National Greenhouse Gas Inventories (Eggleston et al., 2006). For each household, we add direct and indirect emissions to get final CO2 emissions from household consumption.

3.3. Matching WIOD and HBS

To compute households' carbon footprints, we combine information from the MRIO and the HBS. The HBS reports expenditure across consumption purposes (COICOP). WIOD reports inter-industry flows and final consumption by industry classification (ISIC rev. 4 or NACE rev. 2). To translate between consumption purpose and industry classifications, we use bridging matrices (Mongelli et al., 2010). A bridging matrix maps the use of a product to satisfy a consumption purpose, so that the (i, j)th element of the matrix $B = [b_{ij}]$ represents the use share of industry product j for consumption purpose i. Industry products can then be translated into industry output. The integration of HBS data into multi-sectoral models is described in Mongelli et al. (2010) and Cazcarro et al. (2022). Our approach consists of four steps:
1) Transform from consumer product (COICOP) to industry product (CPA, Classification of Products by Activity) using the bridging matrix by Cai and Vandyck (2020).⁵
2) Match budget shares to CPA categories by aggregating COICOP expenditure categories and calculating the weighted sum of CPA contributions to expenditure categories.
3) Match CPA categories to WIOD using national supply tables to calculate CPA input per industry output, using the Fixed Product Sales Structure Assumption.⁶
4) Assign the relative contribution of each sector in the country to the appropriate budget shares.
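The footprint calculation described above can be sketched as follows. All inputs here are hypothetical stand-ins: the flow matrix and emissions vector for the WIOD data, the bridging matrix for Cai and Vandyck (2020), and the carbon intensity factor for the IPCC values used in the model.

```python
import numpy as np

# A sketch of the EE-MRIO household carbon footprint calculation with toy data.
Z = np.array([[10.0, 4.0], [5.0, 12.0]])   # inter-industry flows
x = np.array([50.0, 40.0])                 # sector output
F = np.array([8.0, 2.0])                   # CO2 emissions per sector

A = Z / x
L = np.linalg.inv(np.eye(2) - A)
e = F / x                                  # emissions per monetary unit of output

# Bridging matrix B: use share of each industry product per consumption purpose
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])

hh_spend = np.array([120.0, 80.0])         # household expenditure by purpose
demand_by_sector = B.T @ hh_spend          # translate purposes to industry demand
indirect = e @ (L @ demand_by_sector)      # embedded (indirect) emissions

fuel_litres = np.array([30.0])             # direct: expenditure / price
carbon_intensity = np.array([0.00231])     # t CO2 per litre (IPCC-style factor)
direct = float(fuel_litres @ carbon_intensity)

print(f"household footprint: {indirect + direct:.3f}")
```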
3.4. Imputation of expenditure patterns into income datasets

Estimating the net distributional impacts of price changes and mitigation measures requires information on households' consumption patterns, employment situation, incomes, and demographic characteristics. Household budget surveys contain information on household consumption patterns and demographic characteristics, but often lack detailed information on the employment situation, income sources and tax liabilities.

The PRICES model uses parametrically estimated Engel curves to impute expenditure patterns into datasets containing information on income sources and tax payments. We follow a three-step procedure. In a first step, we impute total expenditure as a function of disposable income and household characteristics. In a second step, we impute the likelihood of positive expenditures using a logit model, to account for the fact that not all commodities are consumed by all households (e.g. motor fuels). We collapse all commodities in the HBS into 19 expenditure categories. In a third step, we impute the conditional budget share of each category. Our approach broadly follows that described in De Agostini et al. (2017).

⁵ Pakistan is not included in the bridging matrices supplied by Cai and Vandyck (2020). We aggregate the bridging matrices so that the final result is a generalised bridging matrix, which we use as an approximation for a Pakistan bridging matrix, as no Pakistan bridging matrix was available.

⁶ WIOD does not supply a national supply table for Pakistan. We construct an EU-wide supply table by summing all single EU country supply tables. This also reduces issues relating to country specialisation along industry supply chains and ensures that the composition of a product reflects the majority or all of its inputs, rather than only those produced within a specific country.

First, we estimate total expenditure as a function of household disposable income and demographic characteristics available in both the HBS and the income dataset:

$$C_h = \beta_0 + \beta_1 Y_h + \beta_2' Z_h + \varepsilon_h$$

where $C_h$ is total consumption expenditure of household h, $Y_h$ is household disposable income, $Z_h$ is a vector of demographic characteristics and $\varepsilon_h$ is the error term. We generate a normally distributed error term, reproducing the mean and variance of the error term in the HBS.

Next, we estimate the likelihood of having positive expenditure for each expenditure category i:

$$\Pr(e_i^h > 0) = \Lambda(\delta_0 + \delta_1 Y_h + \delta_2' Z_h)$$

We then rank households according to this likelihood and assign positive expenditure to the highest-ranked households until the share of households with positive expenditure in the HBS is replicated. Next, we estimate Engel curves for each expenditure category, conditional on having positive expenditure, where $w_i^h = e_i^h / C_h$ is the budget share allocated to good i:

$$w_i^h = \theta_0 + \theta_1 \ln C_h + \theta_2' Z_h + u_i^h$$

For each equation, the estimated parameters are used to impute total expenditure, the presence of expenditure, and budget shares into the income dataset, using the disposable income and demographic characteristics contained in both datasets. Before estimating the equations described above, we calibrate disposable income in the HBS to reflect the mean and standard deviation of disposable income in the income dataset. The mean and standard deviation are calculated without extreme values, which are determined using Chauvenet's criterion. Finally, the sum of imputed budget shares is adjusted to equal 1.
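A sketch of the three-step procedure follows. The column names, the linear and logit functional forms, and the Working-Leser Engel curve are illustrative assumptions rather than the model's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of the three-step imputation, assuming an HBS DataFrame `hbs` with
# hypothetical columns: disp_income, hh_size, total_exp, and per-category
# expenditure columns such as exp_food.
def fit_imputation(hbs: pd.DataFrame, category: str):
    X = sm.add_constant(hbs[["disp_income", "hh_size"]])

    # Step 1: total expenditure as a function of income and demographics
    step1 = sm.OLS(hbs["total_exp"], X).fit()

    # Step 2: likelihood of positive expenditure in the category (logit)
    positive = (hbs[category] > 0).astype(int)
    step2 = sm.Logit(positive, X).fit(disp=0)

    # Step 3: conditional budget share Engel curve, estimated on consumers only
    consumers = hbs[hbs[category] > 0]
    Xc = sm.add_constant(np.column_stack(
        [np.log(consumers["total_exp"]), consumers["hh_size"]]))
    step3 = sm.OLS(consumers[category] / consumers["total_exp"], Xc).fit()
    return step1, step2, step3
```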
3.5. Behavioural Estimates

In order to model behaviour, a demand system is required that relates the consumption of a particular good to the price of the good, the prices of other goods, the income of the household and the characteristics of the household. See Deaton and Muellbauer (1980b) for an introduction to this field. The objective of a demand system is to model households' expenditure patterns on a group of related items, in order to obtain estimates of price and income elasticities and to estimate consumer welfare. This has been popular since Stone's (1954) linear expenditure system (LES). The dependent variable is typically the expenditure share. Two of the most popular methods are the translog system of Christensen et al. (1975) and the Deaton and Muellbauer (1980a) almost ideal demand system (AIDS), with the latter extended by Banks et al. (1997) to include a quadratic expenditure term (QUAIDS). In the QUAIDS model, expenditure share equations have the form

$$w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\left[\frac{x}{P(p)}\right] + \frac{\lambda_i}{b(p)}\left\{\ln\left[\frac{x}{P(p)}\right]\right\}^2$$

where $p_i$ is the price paid for good i, $q_i$ is the quantity of good i consumed, and $x$ is the total expenditure on all goods in the demand system. The sum of the budget shares $w_i = p_i q_i / x$ is constrained to be one. Where $p$ is the vector of all prices, $b(p)$ is defined as

$$b(p) = \prod_i p_i^{\beta_i}$$

and $\ln P(p)$ is a price index defined as

$$\ln P(p) = \alpha_0 + \sum_i \alpha_i \ln p_i + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j$$

There are a number of constraints imposed by economic theory, known as adding-up conditions, such as $\sum_i \alpha_i = 1$ and $\sum_i \beta_i = \sum_i \gamma_{ij} = \sum_i \lambda_i = 0$.⁷

Estimating a demand system such as QUAIDS requires sufficient price variability to be able to identify the parameters within the system. Frequently, however, there are not enough data, typically drawn from a number of different years of Household Budget Surveys, to be able to do this. Therefore, in this section a simpler method is described, drawing upon Stone's Linear Expenditure System. Creedy (1998) describes an approximate method for producing price elasticities. Rather than estimating a system of demand equations, it relies on a method due to Frisch (1959) that describes own- and cross-price elasticities in terms of total expenditure elasticities ($e_i$), budget shares ($w_i$) and the Frisch marginal utility of income parameter (ξ) for directly additive utility functions.⁸ This can be described as follows:

$$\eta_{ij} = -e_i w_j \left(1 + \frac{e_j}{\xi}\right) + \frac{e_i}{\xi}\,\delta_{ij}$$

where $\delta_{ij} = 1$ if $i = j$, and 0 otherwise. The total expenditure elasticity ($e_i$) can be defined as

$$e_i = \frac{\partial q_i}{\partial x}\frac{x}{q_i}$$

The Frisch parameter (ξ) can be defined as the elasticity, with respect to total per capita nominal consumption spending, of the marginal utility of the last dollar optimally spent (see Powell et al., 2002). In the absence of price and quantity data, it is not possible to directly estimate the Frisch parameter. For this, it is necessary to rely on extraneous information. Deaton (1974) provides a review of Frisch parameters. Lluch et al. (1977) empirically showed that $-\xi = 36 \cdot \{\text{real GNP per head in 1970 US dollars}\}^{-0.36}$. This model has been used to estimate Frisch parameters for multiple countries (Creedy, 2002; Clements et al., 2020; Clements et al., 2022). Lahiri et al. (2000) estimated a cross-country equation based on 1995 prices relating $-1/\xi = 0.485829 + 0.104019 \ln(\text{GDP pc})$. Estimates for the USA, Japan, the EU and Australia are respectively -1.53, -1.41, -1.61 and -1.71. A method due to Creedy (2001), adapted using the exchange rate parameter ER, elaborated on the Lluch et al. model; its three parameters (here respectively 9.2, 0.973 and 7000) are ad hoc parameters derived by trial and error. The maximum value of the Frisch parameter has been set at -1.3. Note that consumption in this case is expressed as consumption per capita per month.

⁷ See Creedy (1998) for more details.

⁸ Note that an additive utility function is utilised and does not allow for complements, so one must exert a degree of caution in interpreting the results.
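The Frisch approximation above lends itself to a compact implementation. The sketch below assumes toy expenditure elasticities, budget shares and a toy Frisch parameter; none are estimates from the paper.

```python
import numpy as np

# A sketch of the Frisch (1959) approximation: own- and cross-price
# elasticities from expenditure elasticities e, budget shares w and the
# Frisch parameter xi.
def frisch_elasticities(e: np.ndarray, w: np.ndarray, xi: float) -> np.ndarray:
    n = len(e)
    delta = np.eye(n)
    # eta[i, j] = -e_i * w_j * (1 + e_j / xi) + (e_i / xi) * delta_ij
    eta = -np.outer(e, w * (1.0 + e / xi)) + (e / xi)[:, None] * delta
    return eta

e = np.array([0.6, 1.0, 1.4])       # expenditure elasticities (toy values)
w = np.array([0.5, 0.3, 0.2])       # budget shares
xi = -1.5                           # Frisch parameter

eta = frisch_elasticities(e, w, xi)
print(np.round(eta, 3))             # diagonal holds own-price elasticities
```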
The LES has two limitations. Firstly, the LES is based on the Stone-Geary utility function, which assumes additive utility, i.e. that the utility derived from the consumption of one product is independent of the consumption of other products. This excludes complementary goods and inferior goods. Powell (1974) and Creedy and Van De Ven (1997) argue that when the LES is estimated on aggregated expenditure categories, complementary goods likely fall into the same expenditure category, making the lack of complements and inferior goods acceptable as a way to overcome data limitations. Secondly, the LES assumes proportionality between income and price elasticities. Clements (2019) finds empirical support for such proportionality.

In order to produce equivalent income, a utility function is required. As in the case of Creedy (2001), a Stone-Geary LES direct utility function is utilised:

$$U = \sum_i \beta_i \ln(x_i - \gamma_i)$$

where the $\gamma_i$ are LES parameters known as committed consumption for each good and $\sum_i \beta_i = 1$. For convenience we ignore the subscripts indicating that different parameters are estimated for different demographic (and income) groups. Maximising utility subject to the budget constraint $\sum_i p_i x_i = C$, the linear expenditure function for good i is:

$$p_i x_i = p_i \gamma_i + \beta_i \left(C - \sum_j p_j \gamma_j\right) \qquad (**)$$

Differentiating (**) w.r.t. C and multiplying by $C / (p_i x_i)$ produces the budget elasticity, from which the $\beta_i$ parameters can be derived:

$$e_i = \frac{\beta_i C}{p_i x_i}$$

Table 1 reports budget and price elasticities derived from the LES system using our data. For purchased goods, budget elasticities are lower for necessities (food, fuel, clothing) and for tobacco and recreation, as expenditure on these goods varies less with income compared to expenditure on other goods. Health and communications also have budget elasticities of less than 1, while most other categories have a budget elasticity of about 1. Private education expenditure and durables have a budget elasticity well above 1, indicating that these expenditures are disproportionally concentrated among households with the highest expenditure. Given the direct relationship between budget and price elasticities, imputed own-price elasticities have a high correlation with budget elasticities. Necessities and other goods with a low budget elasticity are relatively price-insensitive, while goods such as durables, education and household services are almost perfectly price-elastic. Cross-price elasticities are not reported, but are quite small relative to own-price elasticities.

Results I: Expenditure Patterns and Price Changes

In this section we describe the distributional impact of price changes in Pakistan over the period 2020 Q4 to 2022 Q4. The differential impact of inflation across the distribution is driven primarily by the good-specific price changes and the budget shares of these expenditures across the distribution.⁹ Figure 2 reports the average price change over the two-year period by COICOP expenditure category. Unsurprisingly, transport costs have the highest price growth rate at nearly 80%, followed by the food and drink sectors at about 50%. Domestic energy fuels experienced relatively low price growth of about 25%. Table 2 decomposes inflation into four high-level expenditure groups covering the main differential inflation rates: food, motor fuels, domestic energy & electricity, and other goods and services. In total, the energy budget share is relatively low in Pakistan at about 5%, with the food budget share at over 40% and the remainder allocated to other goods and services.

⁹ Behavioural responses to price and income changes will also influence the total impact across the income distribution. Real income changes are also relevant.
Whilst fuel price inflation is much higher than in other areas, the lower budget share means that the contribution made by fuels to the overall inflation rate is relatively low. Table 3 describes the budget share by equivalised expenditure quintile. The budget share for food in the bottom quintile is more than 50%, falling to 34% in the top quintile. Purchased domestic energy and electricity have a slightly higher share at the bottom of the distribution. Conversely, the budget shares for motor fuels and other goods and services rise over the distribution.

4.1. Distributional Statistic

The distributional characteristic (DC) of expenditure, $d_i$, is a measure of the impact of price changes on the economic welfare of different population groups. The measure is based upon a static analysis of the distribution of expenditure over the population and the welfare weights placed upon different groups. This methodology is based on a Social Welfare Function (SWF) $W = W(v_1, \dots, v_H)$, where $v_h = v_h(c_h, p)$ is the indirect utility function of household h for expenditure $c_h$ and prices $p$.¹⁰ We define the impact of a change in the price of good i (or of indirect taxation) as follows:

$$\frac{\partial W}{\partial p_i} = -\sum_h \beta^h x_i^h$$

where $\beta^h$ is the social marginal utility of total expenditure for household h and $x_i^h$ is the consumption of good i by household h.

The DC measures the impact of a potential marginal price change on social welfare. It can be defined as the ratio of the marginal change in welfare resulting from the price change, using different weights for different social groups, to the marginal change calculated using uniform weights (equal to the average social welfare weight):

$$d_i = \frac{\sum_h \beta^h x_i^h}{\bar{\beta} \sum_h x_i^h}$$

where $\bar{\beta}$ is the average social welfare weight. The greater the consumption of a good by households with higher social marginal utilities (social weight), the greater is the value of $d_i$. If, however, constant social welfare weights are applied (i.e. indifference between households of different income), then $d_i = 1$.

The highest distributional characteristic statistics, outlined in Table 4, are for necessities such as food, health, clothes and heating fuels, plus tobacco products. The lowest are for durables, household services, rents and education, consistent with the budget elasticities presented above.

¹⁰ For a more detailed description of this method, see Liberati (2001).

The distributional impact of inflation across equivalised expenditure quintiles is described in Table 5. The columns in Table 5 show the inflation rate on each expenditure group weighted by its budget share for each equivalised expenditure quintile. Reflecting their higher budget shares, the distributional pattern is driven by the relative budget shares of food and other goods and services. Food has a higher distributional statistic, reflecting a higher budget share, and makes a higher contribution to inflation at the bottom of the distribution, while other goods and services make the largest contribution at the top of the distribution. Overall, the average inflation rate is higher at the top of the distribution.
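The distributional characteristic defined above can be sketched as follows, assuming Atkinson-style welfare weights $\beta^h = c_h^{-\varepsilon}$; the weighting scheme and the data are illustrative, not the paper's.

```python
import numpy as np

# A sketch of the distributional characteristic d_i, using hypothetical
# consumption data and Atkinson-style welfare weights.
def distributional_characteristic(x_i: np.ndarray, c: np.ndarray,
                                  epsilon: float = 1.0) -> float:
    beta = c ** (-epsilon)              # social marginal utility of expenditure
    return (beta @ x_i) / (beta.mean() * x_i.sum())

c = np.array([100.0, 200.0, 400.0, 800.0])   # household total expenditure
food = np.array([55.0, 90.0, 140.0, 200.0])  # consumption of the good
print(f"d_food = {distributional_characteristic(food, c):.3f}")
# d_i > 1 indicates consumption concentrated among high-weight (low-expenditure)
# households; constant weights (epsilon = 0) give d_i = 1 exactly.
```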
To quantify these effects and to deepen our understanding of the distributional impact of inflation, Table 6 shows several distributional measures inspired by the taxation literature (see Lambert, 2001). The Reynolds-Smolensky index (RS) (column 5) confirms that inflation had a slightly progressive impact (higher at the top). This is largely driven by other goods and services; food inflation pushes in the opposite direction. The Kakwani index (column 6) shows a progressive inflation rate structure, with regressive drivers in food and home fuels & electricity. Decomposing in column 9, we see that other goods and services are as progressive as food is regressive. The average inflation rate is consistent with the average inflation rates calculated above.

We evaluate next how the cost of living was affected by the price increases, and the contribution of price changes to overall social welfare. Compensating variation (CV) measures the change in welfare attributable to changes in the cost of living due to price increases. It represents the monetary compensation that households should receive in order to maintain their initial wellbeing (utility) after the price increases. In Table 7, we express the compensating variation relative to total initial expenditure for households along quintiles of household equivalised expenditure, in order to approximate the percentage change in the cost of living for households with different means.

Whereas the relative increase in costs due to inflation captures the increase in expenditure that households face due to price increases given their current consumption pattern, relative CV (welfare losses) captures the relative increase in income that households would need in order to maintain their utility under the new prices. The difference between them represents the adjustment that households make in their consumption behaviour (due to changes in the relative prices of different commodity groups) in order to maintain their utility under the price increases. In other words, it is how much the price increase would cost households without a behavioural adjustment, minus how much it would cost taking into account that households can modify their behaviour. Overall, it appears that the behavioural response component has very limited effects on welfare.

The picture of welfare losses along the distribution of income follows the same distributional pattern as inflation above. The progressivity is slightly lower once the behavioural component of the compensating variation is accounted for. This is expected, given that the highest price changes are recorded for necessities (energy and food), leaving little space for households to adjust their consumption. In order to evaluate the change in welfare due to price changes in terms of its overall effect for the population as a whole, we use the social welfare function associated with the Atkinson index, based on the distribution of equivalent incomes before and after the price changes (Table 8). According to the Atkinson index, the rise in consumer prices reduces inequality, which corresponds well with the earlier findings based on the RS index. The decomposition of the welfare losses into their efficiency and equity components in Table 9 reveals that the main driver of the welfare loss was a decrease in efficiency (a decrease in mean equivalent income). The small changes in consumption inequality reveal that price increases affected all households with a similar relative impact.
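Under a Stone-Geary LES, compensating variation has a closed form via the expenditure function $E(p, u) = \sum_i p_i \gamma_i + u \prod_i (p_i/\beta_i)^{\beta_i}$. The sketch below assumes toy parameters and prices, not the paper's estimates.

```python
import numpy as np

# A sketch of compensating variation under the Stone-Geary LES with toy inputs.
def les_expenditure(p, u, beta, gamma):
    return p @ gamma + u * np.prod((p / beta) ** beta)

def les_utility(p, c, beta, gamma):
    # Indirect utility: supernumerary expenditure deflated by the price index
    return (c - p @ gamma) / np.prod((p / beta) ** beta)

beta = np.array([0.5, 0.3, 0.2])        # marginal budget shares (sum to 1)
gamma = np.array([20.0, 5.0, 2.0])      # committed consumption quantities
p0 = np.array([1.0, 1.0, 1.0])          # initial prices
p1 = np.array([1.5, 1.25, 1.1])         # post-inflation prices
c = 100.0                               # household expenditure

u0 = les_utility(p0, c, beta, gamma)
cv = les_expenditure(p1, u0, beta, gamma) - c   # compensation needed
print(f"CV as share of initial expenditure: {cv / c:.1%}")
```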
Conclusions

This paper developed a microsimulation model, PRICES (Prices, Revenue recycling, Indirect tax, Carbon, Expenditure micro Simulation model), to simulate the distributional impact of price changes and price-related policies, including indirect and carbon taxes. To demonstrate the framework, we have developed an analysis taking Pakistan as a case study. The framework provides a static incidence analysis of these issues, combining a model that incorporates price and income related behavioural responses with an Input-Output framework that captures the value chain transmission of price changes. As a pilot exercise, an analysis of the distributional impact of price changes during the cost of living crisis between 2020 and 2022 was undertaken for Pakistan.

The cost of living crisis was marginally progressive in nature, with slightly higher price increases at the top of the distribution than at the bottom. This reflects expenditure patterns across the distribution and the good-specific price changes. While energy prices increased more in Pakistan than in other countries, the relatively low budget share (particularly of purchased fuels) means that the net impact on welfare due to increases in energy prices is relatively low. The biggest driver of the welfare loss at the bottom was food price inflation, food comprising over half the budget share, while other goods and services were the biggest driver at the top of the distribution.

Models similar to the PRICES model provided valuable analysis in recent crises where people lost income sources, as during the COVID-19 pandemic lockdowns (O'Donoghue et al., 2020; Sologon et al., 2022; Lustig et al., 2021; Doorley et al., 2020; Li et al., 2022; Bruckmeier et al., 2021), or experienced rapid price growth, as during the 2021-2023 cost of living crisis (Menyhért, 2022; Curci et al., 2022). Traditionally, these models have been used to assess the distributional impacts of tax-benefit systems (Sutherland & Figari, 2013), including environmental taxes (Feindt et al., 2021; Cornwell & Creedy, 1996) and other indirect taxes (Decoster et al., 2010, 2011). The PRICES model expands on the available tools for assessing the distributional impacts of price changes by integrating multiple data sources into a unified scalable model.

A central contribution of the PRICES model is that it unifies multiple aspects important for the distributional impact of price changes in a single framework, using standardised datasets available across many countries. The PRICES model accounts for differences in consumption patterns along the income distribution and estimates income and price elasticities for different household types and income groups. It includes the calculation of VAT, excise and ad valorem taxes, allowing the model to compute producer prices. It includes an environmentally extended multi-regional input-output model to simulate the impact of input price changes due to policies (such as a carbon tax) on producer prices. The explicit modelling of indirect taxes allows the assessment of the joint impact of indirect taxes and carbon taxes on consumer prices. Lastly, the model includes a procedure to impute expenditure patterns into datasets with richer information on households' employment situation, income sources and tax liabilities. The resulting dataset can be used to assess the net distributional impact of price changes and complex mitigation measures, accounting for global supply chains, and price and income responses by households.
A.2. Fuel Prices

Price changes due to carbon pricing or carbon taxation require an additional component to model pollution. This encompasses both direct effects, in terms of the fuel consumption of households, and indirect effects, through the polluting activity of the value chains of the goods consumed, estimated through an Input-Output framework. Most models will incorporate a specific technological structure, as outlined in the Input-Output modelling structure. However, decarbonisation of, potentially, transport or electricity generation will require changed technologies, which means that the Input-Output structure will have to change. In this framework, this is achieved by changing the Input-Output table used. Price changes influence consumption behaviour via own- and cross-price effects, while the recycling of revenue generated from indirect taxes and carbon taxes influences consumption behaviour via income effects. In order to model these behavioural effects, the model incorporates a simple demand system.

Figure 1. Structure of a price-based microsimulation model

Figure 2. Expenditure category specific price growth, November 2020 to November 2022
2023-10-03T06:42:39.005Z
2023-09-30T00:00:00.000
{ "year": 2023, "sha1": "656a357147feba59730bff79e83fc37337fd6666", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a1b716a3294753ed5df3a7fbb98490172fdd5f5f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
152663300
pes2o/s2orc
v3-fos-license
Corporate entrepreneurship and financial performance: The role of management

It is hypothesised that a positive relationship exists between the financial performance of an organisation and the level of intrapreneurship within the organisation, with causation running from entrepreneurship to financial outcomes. Using a three-factor key intrapreneurship model developed by Goosen, De Coning and Smit (2002) and financial outcomes from a sample of companies listed in the industrial sector of the Johannesburg Stock Exchange, this proposition is put to the test. The results support the hypothesis that the key factors innovativeness, proactiveness and management's internal influence all significantly contribute to financial performance if regarded individually, but that the last factor dominates the first two external factors when used simultaneously. The conclusion underscores the importance of the impact of leadership on financial outcomes.

Introduction

Over recent years corporate intrapreneurship, or intrapreneurship, has been viewed as a means of invigorating corporate organisations. This view is based in part on the belief that intrapreneurial elements will assist the organisation to be more dynamic and more competitive. What is not known is what the quantitative effects of higher levels of intrapreneurship will be. A number of authors have alluded to the possibility that there could be a relationship between financial performance and intrapreneurship.

In the light of this, this study investigates the relationship between financial performance and intrapreneurship in the South African context, specifically in industrial organisations, utilising a composite index that represents financial performance and key factors representing intrapreneurship. The financial index is based on previous work done on the Industrial Sector of the Johannesburg Stock Exchange. The intrapreneurship key factors are based on the work of various authors. Two of the three key factors used in the study focus organisationally outwards, and one inwards. The two outward-focusing key factors are taken from the 'classical' model for intrapreneurship as represented by the ENTRESCALE (Knight, 1997). The key factor that focuses inwards is a new contribution, and it examines the effects of management on intrapreneurship (Goosen, De Coning & Smit, 2002).

It is shown that there is a relationship between financial performance and intrapreneurship as represented by the three key factors. The key factors that represent the classical model correlate moderately with the financial index. The key factor management, contributed by this study, is a significant predictor of financial success. Organisations with higher levels of intrapreneurship, as defined by this variable, are therefore more likely to be financially successful than those that have lower levels of intrapreneurship.

The article is structured as follows. Section 2 offers a literature review, followed by an explanation of the methodology in Section 3. Section 4 deals with the data analysis and Section 5 discusses the results and their implications. A brief conclusion is offered in Section 6.

A number of authors support the view that the creation and introduction of new products and technologies, which are usually associated with intrapreneurship, lead to higher levels of financial performance, for example Cheney, Devinney and Winer (1991) or Lengnick-Hall (1992).
The work of Morris and Sexton is of particular relevance. They found that "there is reason to believe that the level of entrepreneurial intensity may positively affect performance outcomes in a company" (Morris and Sexton, 1996: 8). Their findings lend specific support to similar research done by Covin and Slevin (1989) and Zahra and Covin (1995), in that a relationship exists between intrapreneurship (the degree and the amount of entrepreneurial behaviour in organisations) and financial performance.

Still, one should note that the intrapreneurship-performance relationship should preferably be viewed longitudinally. Morris and Sexton (1996: 11), Zahra (1995: 242) and Zahra and Covin (1995: 55) found that the relationship between corporate entrepreneurship and financial performance strengthens over time. (One of the factors that may cause short-term negative profits might be the investment made in research and development to produce new innovations.)

Van der Post (1997: 75) proposes that financial performance is a sound basis on which to make inferences about organisational effectiveness, as it encompasses the outcomes of all system dimensions of an organisation. It can be reasoned, with Cornwall and Perlman (1990: 15), that intrapreneurship is, in essence, a system for generating wealth, and as such the calculation of shareholders' wealth will be indicative of the measure of intrapreneurship found in organisations. Zahra and Covin (1995: 47) support this view. They state that there are at least two reasons for expecting a relationship between entrepreneurial activities and subsequent organisational performance. Firstly, innovativeness can be a source of competitive advantage for an organisation. Innovative companies develop strong, positive market reputations. They also adapt to market changes and exploit markets or opportunity gaps. Sustained innovation moreover distances intrapreneurial organisations from their industry rivals, and thus increases financial returns. Secondly, intrapreneurial organisations are, by definition, more proactive than traditional organisations. Their quick market response therefore gives them added competitive advantage. Zahra and Covin (1995) point out that Dess and Miller in 1993 and Lieberman and Montgomery in 1988 noted that quick market responses can be translated into superior organisational performance. However, the manner in which organisations are structured and managed could have a significant influence on performance. Organisational make-up should therefore be examined.
Organisational structure is the design of an organisation. It is the formal pattern according to which people and jobs are grouped. Business processes take place within organisations' structures. Cornwall and Perlman (1990: 106) hold that structures and communication are the factors that bind organisations together. Policies, practices and measurements make intrapreneurship and innovation possible (Drucker, 1993: 148). Once an organisation has decided on the core elements of its strategy, it should build structures that will support that strategy. Tropman and Morningstar (1989: 157) are emphatic that if this strategy includes innovation, then the organisation must create a structure that will support entrepreneurship. Ironically, this fact is well understood but not easily executed in existing organisations. The organisation has to devise relationships that centre on intrapreneurship. It has to ensure that its rewards and incentives, its compensation, and its personnel decisions and policies all reward the appropriate entrepreneurial behaviour.

In the comparison between entrepreneurial organisations and traditional organisations, the bureaucratic structure comes to mind. Power and decision-making are often centralised at the top in a bureaucracy. Bureaucracies are moreover characterised by excessive rules and procedures that restrict originality and freedom. Systems are mechanistic at their core. Cornwall and Perlman (1990: 107) propose, in stark contrast to this, that the entrepreneurial organisation be structured for empowerment through low centralisation, low formalisation and limited size. Self-managed teams should replace the bureaucratic functional unit, and jobs should steer away from high levels of specialisation. Essentially, the entrepreneurial structure should enhance co-operation and allow freedom that will facilitate innovation. Cornwall and Perlman (1990: 111) sound the warning that empowerment and delegation must not be equated with anarchy, and that entrepreneurial structures should be controlled.

Intrapreneurship

The 'key factor' intrapreneurship instrument developed by Goosen, De Coning and Smit (2002) was used in the study to measure corporate entrepreneurship because of its focus on the effect of management on organisations internally. The instrument consists of three factors, of which two focus externally and one internally. The instrument is based in part on the ENTRESCALE (Knight, 1997), which was initially developed by Khandwalla (1977). It was subsequently refined by Miller and Friesen (1984), and Covin and Slevin (1989). The remainder of the instrument focuses internally on organisations and represents management's influence on structures and processes, as well as relations within the organisation.

The ENTRESCALE contributed two factors to the instrument that was used. The first, Innovativeness, represents the dimensions Product lines, Product changes and R&D leadership. The second factor, Proactiveness, represents New techniques, Competitive posture, Risk-taking propensity, Environmental boldness and Decision-making style. The third key factor, management's internal influence, especially on structures and processes as well as relations, represents the dimensions Goals, Creativity systems, Rewards, Intracapital and Communications systems, Staff input, Intrapreneurial freedom, Problem solving culture, Intrapreneurial championing and Empowerment.
Financial performance

The literature considers several approaches to measuring financial performance. Some relate to financial dimensions and others to operational dimensions such as market share, market positioning or change (Murphy, Trailer & Hill, 1996). Examples are the views of Zahra and Covin (1995), Cron and Sobol (1983), Teo and King (1996), and Byrd and Marshall (1997). This study, however, uses the measure as proposed by Van der Post (1997), based on ease of access, simplicity and previous testing in a South African environment. Four measures were used, namely return on average assets (ROAA), return on average equity (ROAE), total asset growth (TAGR) and share return (SR).

The research model

Based on what has been stated above, a research model was formulated. It is depicted in Figure 1. In this model FP is financial performance, an index factorised from the measures return on average assets (ROAA), return on average equity (ROAE), total asset growth (TAGR) and share return (SR), whilst the key factors M_i, I_i and P_i represent intrapreneurship, I.

Financial parameters and organisations included in the study

Financial data is from the Bureau of Financial Analysis (a bureau within the Graduate School of Business of the University of Pretoria). The Industrial Sector of the Johannesburg Stock Exchange was examined. It was not possible to do a longitudinal study, nor to measure perceptions and then analyse financial data; perceptions were measured post hoc. Zahra and Covin (1995) suggest that financial measurements, in testing for a relationship with corporate entrepreneurship, should be taken over longer periods. This should be done in order to ensure that the results of entrepreneurship within the organisation have manifested in the financial performance. It is therefore preferable to measure financial results over periods as long as ten years. However, it can be debated whether this methodology is applicable when associated with post hoc measurements. In this study the relationship between intrapreneurship, as expressed through the views of executive management, and financial performance was examined. The views of management were probed during the years 2001 to early 2002. The post hoc views of management should therefore have bearing on the financial details. A period of ten years seemed inappropriate, and it was thus decided to use the published information over a shorter period.

It is generally accepted that planning in organisations falls into three categories: short-term, medium-term and long-term. Many organisations, including governmental institutions, follow a 'rolling' three or five year planning period for medium-term plans, in which planning is an annual but continuous process for three to five years. Mitchell (1978: 296) confirms this as the preference for corporate planning. It was thus decided to analyse the financial data for a period of three years, as close as possible to the measurement of management's perceptions.
A factor analysis confirmed that the four financial variables load on a single factor. The Corporate Financial Index was constructed for the 231 organisations as a weighted average using the four factor loadings. A further 12 organisations, which operated outside of South Africa, were eliminated from the study, as they were delisted or suspended from the Johannesburg Stock Exchange at the time of measurement. The final population for the study consisted of 219 organisations, of which only 109 organisations finally participated in the entrepreneurial survey. Of these responses, 19 proved not to be useful.

Data analysis

Data were summarised and the quality of the data determined. Measures of normality, location and variability were computed. The SPSS program (SPSS, 2001) used for the statistical analysis identified ten values as extreme or as 'outliers'. Four organisations were identified as falling outside of three standard deviations of the mean. This was confirmed by the fact that in the regression analysis there were four values with standardised residual values exceeding either +3.3 or -3.3, which can be categorised as 'outliers' (Tabachnick & Fidell, 1996: 139). To improve the usability of the data for this purpose, it was decided to remove the four data lines. This resulted in 86 valid data sets for use in the statistical analysis. This final sample size conforms to Tabachnick and Fidell's (1996: 132) recommendation for regression analysis that N > 50 + 8m, where m is the number of independent variables, thus 50 + 32, or 82.

The data detailing participating and non-participating organisations, with their respective financial indices, were also tested to establish whether a relationship could be found between the financial performance of organisations and their decision to participate or not. A non-parametric test indicated that there is not a significant relationship between financial performance and the choice to participate at the 5 percent level of significance.

The main research hypothesis states that there is no relationship between the financial performance index and the key intrapreneurship factors. The research question is therefore expressed as follows:

$$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon$$

where Y is the dependent variable, financial performance; $X_1$ = management; $X_2$ = innovativeness; $X_3$ = proactiveness; and α and $\beta_i$ are regression coefficients.

The stepwise regression's ANOVA table reports a significant F statistic (19.888). The coefficient summary is detailed in Table 1 below. This result indicates that there is only one major predictor of the dependent variable: the independent variable Management. The stepwise regression model explains 17,3% of the variation in the dependent variable Financial Performance. This is not unexpected, as financial performance is the result of a number of variables and not only management's influence on relations in an organisation. Khandwalla (1977: 665) alludes to this by suggesting that organisational performance consists of demographic variables, environmental variables, strategic variables, technological variables and structural variables, amongst others.

The results of the regression analysis lead to the rejection of the null hypothesis, as there is a relationship between the composite financial index and at least one of the key factors. The regression output was examined to ensure that the classical assumptions of regression analysis were valid.
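The regression above can be reproduced in outline as follows. Since the survey data are not public, the data frame below is synthetic and the column names are placeholders; only the model form matches the equation in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# A sketch of the regression Y = a + b1*X1 + b2*X2 + b3*X3 + e on synthetic
# data standing in for the 86 valid cases; all values are illustrative.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "management": rng.normal(4, 1, 86),
    "innovativeness": rng.normal(4, 1, 86),
    "proactiveness": rng.normal(4, 1, 86),
})
df["fin_index"] = 2.5 * df["management"] + rng.normal(0, 5.5, 86)

X = sm.add_constant(df[["management", "innovativeness", "proactiveness"]])
model = sm.OLS(df["fin_index"], X).fit()
print(model.summary())    # F statistic, R-squared and coefficient table
```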
The relationship between the independent variables was tested for multicollinearity using condition indices. Normality, linearity, homoscedasticity and the independence of residuals refer to the nature of the underlying relationships between variables. All these assumptions were investigated by examining the residual scatter plots. Residuals are the differences between the obtained and the predicted dependent variable scores. Residual scatter plots are used to investigate these assumptions.

Neither the histogram nor the P-P plot indicated a significant deviation from normality. This is confirmed by the residual statistics, in which the standardised residuals have a mean of 0 and a standard deviation of 0,976.

The data were also inspected for outliers using Mahalanobis distances. A Mahalanobis distance is the distance of a particular case from the centroid of the remaining cases, where the centroid is the point created by the means of all the variables (Pallant, 2001: 220). It is used to detect any case that has a strange pattern of scores across all the variables, four in the case of this study. Mahalanobis distances were inspected and two cases were found to exceed the critical values (Pallant, 2001: 144). However, given the size of the data file, and the fact that four data points had already been removed before the analysis, the data points and information were retained.

A standardised scatter plot of the standardised predicted dependent variable against the standardised residuals shows a random pattern across the range of the standardised predicted dependent variable, and as such indicates that the assumption of homoscedasticity is not materially violated.

Linearity of the data can be checked by inspection of the scatter plots. An inspection of the observed versus the predicted values (for the regression analysis) shows data points that are symmetrically distributed around a diagonal line, an indication of linearity. Similarly, the distribution around a horizontal line in the scatter plot of residuals versus predicted values confirms linearity. A further rule of thumb that can be used as an indicator is the comparison of the standard deviations of the dependent variable and the residuals: an indication of non-linearity is when the standard deviation of the residuals exceeds the standard deviation of the dependent variable (Garson, 2002). The data were inspected and indicated the following:
• Standard deviation of the dependent variable: 6,265
• Standard deviation of residuals: 5,5427
These confirm the assumption of linearity.

The independence of observations is normally tested by the Durbin-Watson coefficient. Independent observations will result in a Durbin-Watson statistic of between 1,5 and 2,5 (SPSS, 2001: 401). The analysis results in a Durbin-Watson statistic of 2,114, which indicates independence of observations.

Having determined the form of the relationship between the variables, the findings were confirmed by correlation analysis, which determines the strength and direction of the relationship between variables. All key factors had significant correlations with the composite financial index. The results of the correlation analysis are listed in Table 2.
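The diagnostics described above can be sketched as follows, again on placeholder arrays; the chi-square cut-off shown is the df = 3, p = 0.001 critical value commonly used for Mahalanobis screening, and `resid` stands in for the fitted model's residuals.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from statsmodels.stats.stattools import durbin_watson

# A sketch of the residual diagnostics on placeholder data.
rng = np.random.default_rng(1)
X = rng.normal(size=(86, 3))                 # predictor scores
resid = rng.normal(scale=5.5, size=86)       # regression residuals

# Durbin-Watson: values between about 1.5 and 2.5 suggest independence
print(f"Durbin-Watson: {durbin_watson(resid):.3f}")

# Mahalanobis distance of each case from the centroid of the predictors,
# used to flag multivariate outliers against a chi-square critical value
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
centroid = X.mean(axis=0)
d = np.array([mahalanobis(row, centroid, cov_inv) for row in X])
print("potential outliers:", np.where(d**2 > 16.27)[0])  # chi2(3), p = .001
```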
Discussion of the results

It was the main goal of this research to examine the relationship between the key intrapreneurship factors and a calculated financial index that would represent an organisation's performance. This goal originated from the belief that entrepreneurial activity could possibly result in positive increases in financial performance. The work done by Zahra (1986), and especially Covin and Slevin (1986), had to be examined in the South African context. They found a moderate correlation of r = 0,39 (p < 0,001) between entrepreneurial posture and a financial performance scale. When tested individually, the ENTRESCALE's intrapreneurship factors had significant (at the p < 0,01 level) correlations between the financial index and the key factors, with r = 0,344 for Innovativeness and r = 0,375 for Proactiveness. The contribution of this research adds to this in that the correlation for Management was r = 0,504. The individual dimensions that constitute the key factor are briefly discussed below to ascertain their individual contribution.

To assist in the interpretation of the key factor Management, a principal component factor analysis was done on the raw data that represent the key factor. The raw data set was examined for its suitability for factor analysis. The Kaiser-Meyer-Olkin measure of sampling adequacy is 0,841; the proximity to 1 indicates the suitability of the data for factor analysis. This is confirmed by Bartlett's test of sphericity, which is significant at 0,000. The resulting component matrix is detailed below in Table 3.

Goal setting loaded 0,891 on the key factor. Demanding management is sometimes seen as applying pressure. However, cognisance must be taken of the work of Faul (1986), which establishes the link between goal-orientated pressure and productivity. Intrapreneurial dimensions (such as innovative behaviour) should be included in the setting of goals.

Innovation and creativity systems loaded 0,714 on the key factor. The literature study has shown that intrapreneurial organisations manage innovation and creativity. Organisations should implement systems that allow the development and active support of creativity and innovation. These systems should furthermore allow for the prudent assessment and evaluation of new ideas.

Rewards loaded 0,934 on the key factor. This dimension points to the rewarding of appropriate innovative behaviour in intrapreneurial organisations.

Intracapital loaded 0,871 on the key factor. Intracapital denotes the specific and procedural management of capital expenditure for intrapreneurship projects or ventures. It takes cognisance of, and discounts, risk before the capital is expended.

Communication loaded 0,844 on the key factor. Intrapreneurial communication points to free and open communication, in which ideas are shared and information is freely exchanged.
Staff input loaded 0,737 on the key factor. Staff input into the organisation's and management's decisions, work methodology and views, to name but a few, could lead to richer, better-informed management decisions, and this could lead to more profitable results. An example is the inclusion of collective intelligence in business planning. Collective intelligence is the sum of the observations and contact of all personnel, rather than only a few analysts. In a hypothetical instance, a member of staff involved in marketing can add value to the planning processes with his or her observations at the 'coal face'. Similarly, engineering staff might propose a simple solution to a production problem which could otherwise be expensive to the organisation. As such, it is possible to conceptualise the correlation between staff input and financial performance.

Intrapreneurial freedom and empowerment loaded 0,702 and 0,722 respectively on the key factor. This dimension embodies the ability of staff to make certain decisions, to contribute to innovations, and to add to ideas and suggestions through their creativity. In some instances it can also imply involvement in venturing. This wider concept or dimension touches on virtually every area within the organisation (production, human resource management, etc.), and therefore it could have an effect on performance.

Problem solving culture loaded 0,706 on the key factor. It embodies an organisation's collective will to find answers to problems, and to contribute to solutions as individuals and as groups. It is the opposite of simply accepting circumstances; it is the search for optimisation and excellence. It points to a spirit of dynamism in the organisation. The findings of the study concur with Faul (1986) that a problem solving culture contributes to financial performance.

Executive championing of intrapreneurship is a very important dimension of the key factor Management. This dimension loaded 0,672 on the key factor. The dimension alludes to intrapreneurship in the wider context, and consequently explains a portion of the correlation between Management and organisations' financial performance. An executive cannot champion intrapreneurship by simply verbalising understanding and support. It includes the actions of the executive in his or her subscription to intrapreneurship. It is associated with the direct support of all the elements that constitute intrapreneurship, including the structuring of the organisation, systems and processes to facilitate intrapreneurship, and financial support. It will also set the tone for risk affinity or risk aversion, which in turn will influence innovative behaviour.

Earlier it was stated that even though an organisation might be intrapreneurial in terms of its posture, many opportunities would be lost if internal conditions were not conducive to intrapreneurship. A typical example of this could be when an organisation wants to compete aggressively in terms of its market share, but loses opportunities because of internal factors such as the potential of its employees remaining unharnessed, or because there is little communication between management and staff. The correlations found between financial performance and management's internal influence point to the fact that organisations could add to their financial performance by implementing the proposed model.

Conclusion

The topic of the influence of leaders on organisational outcomes is well researched. The work of Baum et al.
(1998), House, Spangler and Woycke (1991), Smith, Carson and Alexander (1984), House and Singh (1987), Day and Lord (1988), and Barling, Weber and Kelloway (1996) indicates that positive organisational outcomes are associated with higher levels of leadership. This study provides additional support for this, and contributes to current understanding by indicating the positive relationship between the intrapreneurship factors, specifically management's influence (viewed internally), and financial performance.

Table 1: Results of the stepwise regression model.

Note: Condition indices are computed as the square roots of the ratios of the largest eigenvalue to each successive eigenvalue. Values greater than 15 indicate possible problems, and values larger than 30 suggest a serious problem with multicollinearity (SPSS, 2001). No factor had an index greater than 15.
A study of serum zinc levels in febrile seizures

Febrile seizures are the most common cause of convulsions in children. However, the exact underlying etiology and the pathophysiological mechanisms are yet to be established. Various theories have been put forward regarding the role of trace elements as predisposing factors in causing the convulsions. Among them, zinc is the most interesting trace element; its role in diarrhea and pneumonia is well proven, and serum zinc levels have long been associated with febrile seizures. The exact cause of febrile seizures is still unknown. In this prospective study, we evaluated zinc levels in children with a first febrile seizure (FS) attack. Our findings can give clinicians a fair idea of serum zinc levels and inform the use of zinc supplements for preventing the recurrence of febrile seizures via regulation of some neurological functions.

Introduction

Febrile convulsion, now known as febrile seizure, is the most common seizure identified in children in daily paediatric practice [1,2]. Febrile seizures usually occur between six months and five years of age and rarely occur before or after that [3]. It is a genetically related, age-limited disorder, which occurs only with febrile illness [4]. It is important to exclude central nervous system infections and electrolyte imbalance before a febrile seizure is diagnosed. Also, patients should have no history of afebrile seizures [4,5]. FS is classified into two groups, simple and complex. A simple FS is generalized, lasts no longer than 10-15 minutes, and occurs once in 24 hours. Conversely, a complex FS is characterized by prolonged focal seizures, which occur more than once in 24 hours [5]. The main mechanism of FS pathophysiology is not yet clear [2,5]. Today, it is known that genetic factors play a major role in the occurrence of FS, although some environmental factors, such as trace elements (e.g., zinc), may be involved in the association of genetic changes with FS occurrence [5,6]. Generally, zinc is an important trace element, which contributes to growth and development, neurological function, nerve impulse transmission, and hormone release [7]. It also stimulates the activity of pyridoxal kinase, the enzyme modulating the level of gamma-aminobutyric acid (GABA) [8]. In this prospective study, we evaluated zinc levels in children with a first FS attack. Our findings can help clinicians make a decision about the use of zinc supplements for preventing the recurrence of febrile seizures via regulation of some neurological functions [6-11].

Aims and Objectives

The primary objective is to determine the serum levels of zinc in febrile convulsions.

Materials and Methods

This study was done in the Department of Paediatrics, Kanachur Institute of Medical Sciences, Mangalore, from June 2017 to June 2018. Thirty patients admitted to the Department of Paediatrics were selected. A careful history was taken, and blood was collected and sent to the Department of Biochemistry for estimation of serum zinc levels.

Inclusion criteria: confirmed cases of febrile seizures.

Exclusion criteria: CNS infections; patients on multivitamins and minerals.

Discussion

A limited number of studies have been conducted regarding the role of zinc in the occurrence of febrile seizures. Burhanoglu M et al. reported that the average level of serum zinc in children affected with febrile seizure was lower than in the control group [12]. Ehsani F et al.
carried out a study on 34 children with febrile seizure and 58 healthy children, which revealed that the serum zinc level in children with febrile seizure was lower than in the control group, and the difference was statistically significant [13]. Tütüncüoğlu S et al. reported that the serum zinc level among children with febrile seizure was considerably lower than in the control group [14]. In a study by Hamed SA et al., it was shown that trace elements such as zinc have a crucial role in the pathogenesis of seizures [15]. The study of Gündüz Z et al. on 102 children with febrile seizures indicated that the serum zinc level in the group affected with febrile seizures was significantly lower than in the control group [16]. In a very recent study by Mishra OP et al. on 20 children with febrile seizures and 48 children as a control group, it was reported that the serum zinc level in children affected with febrile seizure was lower than in the control group, and the difference was significant [17]. In contrast to our study, Kafadar I et al. found no significant difference in serum zinc concentration between children with febrile convulsion and the two control groups; this may be due to the smaller sample size in their study [18]. The reason for the reduction of serum zinc level in patients affected with febrile seizure is not clear. However, fever and acute infection may have some role in developing such a condition [19]. It is believed that the release of tumor necrosis factor (TNF) and interleukins (IL) during fever or tissue injury may result in a reduction of the serum zinc level [13]. Izumi Y et al. proposed that hypozincemia during fever triggers the NMDA receptor, a member of the glutamate family of receptors, which may play an important role in the initiation of epileptic discharge during febrile seizures [20].

Conclusion

The mean serum zinc levels have been reported in this study, and this study attempts to serve as a baseline for comparison in further studies.
Examining the Impact of @waste4change's Instagram Campaign on User Attitudes towards Waste Management

Waste management is a social problem that needs to be addressed to foster a positive attitude toward the need to develop waste management. Campaigns through social media need to be evaluated for their benefits. Only a few studies have evaluated the impact of exposure to, and the presentation of, information conveyed via social media. This study aims to examine the effect of exposure to and presentation of information from @waste4change, delivered through Instagram, on user attitudes. The population is social media users.

Introduction

Waste management needs attention not only from the government but also requires community involvement. The volume of waste generation in Indonesia in 2022 reached 19.45 million tons. By type, most of the waste generated is food waste, with a proportion of 41.55%, with plastic waste in second place at 18.55% (Putri, 2023). Modernization in the current era of globalization also contributes to the increase in waste generation each year. People live a consumptive lifestyle, which contributes to more waste generation. Alfitri et al. (2020) said that pollution is one of the biggest environmental threats. They further said that around 20%-50% of waste in developing countries cannot be collected due to the lack of waste collection services and inadequate waste management systems. Garbage can still be found piled up on the streets, which can cause disease. In general, developing countries have the potential to face greater waste problems than industrialized countries.

Poor waste collection services in Indonesia can be seen in the lack of final disposal sites (TPA), due to the limited available land and inadequate facilities and infrastructure (Dewanti et al., 2019). The Bantar Gebang Integrated Garbage Disposal Site (TPST) in Bekasi is almost full due to the ever-increasing piles of garbage; the Provincial Government of DKI Jakarta even plans to buy land from residents around the TPST to expand its area (Raharjo, 2021). Pangkalpinang in the Bangka Belitung Islands is experiencing a landfill crisis, with only two hectares of land remaining (Davina, 2021). Apart from that, Pematangsiantar, especially Tanjung Pinggir, is also experiencing a landfill crisis, where garbage piles up on the shoulders of the road and it is difficult for garbage trucks to enter the land (Manurung, 2021).

In addition, waste is not managed properly due to infrequent waste sorting activities; even though some places have separated waste by type, waste workers often mix the separated waste back together when it is brought to the waste disposal centre (Ismail, 2019).
The community needs to gain awareness of waste management in order to reduce waste piles in Indonesia; this shows that residents' awareness of waste management still needs to improve (Haswindy & Yuliana, 2017). It has also been found that attitude is a crucial element in waste management (Gnimadi, 2022). A social movement is needed to free Indonesia from waste problems; social movements are formed because of the results to be achieved (Haris et al., 2019). Social movements to solve environmental problems can be carried out online through social media. Social media and social movements are two important interrelated components, because people use social media as a source of information (Sitowin & Alfirdaus, 2019); social media provides a virtual space to form a movement with the community to fight for shared values. The use of social media as a space for education, information, and campaigns on environmental issues has been demonstrated. Nurislam (2020) shows how people use social media, in this case YouTube, to watch waste management skills content. The study by Budiarti et al. (2020) describes a campaign carried out via the Instagram account @lesswasteshift to increase public awareness of protecting the environment, especially in reducing waste. Maryam et al. (2021) show that an organization provides education through campaigns on social media; this activity is carried out to provide public knowledge and understanding about environmental damage and positive behaviour to overcome this problem. Kristanti & Marta (2021) showed that the content of a YouTube channel is persuasive and educative for users. This shows the role of social media in educating the public on waste management through informative content, which can then change their attitudes and behaviour.

The need for education about waste management in the community and the use of social media as a virtual campaign space make this study necessary. Exposure to information from social media is expected to create positive attitude changes in audiences. This refers to the Information Integration Theory of Martin Fishbein (1973), which states that information can potentially influence the formation of attitudes of recipients of information. Previous studies have shown that exposure to social media information affects changes in attitudes (Imanda, 2021; Ashari et al., 2016; Umniyati, 2017; Intyaswati, 2022). However, more studies are still needed on the relationship between exposure to and presentation of waste management information and the attitudes of recipients of that information. This study aims to examine the role of exposure to and presentation of information from the @waste4change account in changing audience attitudes.

Research Method

This study used a survey method with a purposive sampling technique. The sample criteria were followers of the Instagram account @waste4change who read, saw, or heard information related to waste management provided by the account. Researchers distributed questionnaires in the form of Google Forms via Instagram to the followers of @waste4change. Data collection was carried out in April 2022 and yielded 100 respondents.
The research instrument refers to a predetermined study. The independent variables are: 1) Presentation, with two indicators, namely valence and weight; and 2) Exposure, with three indicators, namely frequency, duration, and attention. The dependent variable is attitude, with three indicators: cognitive, affective, and conative. Table 1 summarises the operationalisation of the indicators:

Valence: the information provided by @waste4change adds to followers' confidence in waste management.
Weight: the accuracy of the information presented by @waste4change; the clarity of the information presented by @waste4change; the relevance of the information provided by @waste4change.
Frequency: how often followers follow the development of waste management information on the Instagram account @waste4change.
Duration: the time spent viewing, reading, or listening to the information provided by @waste4change.
Attention: the level of concentration or focus in viewing, reading, or hearing the information presented by @waste4change.

The validity of the questionnaire was tested using Exploratory Factor Analysis (EFA), and the question items met the standard criteria; the Measure of Sampling Adequacy produced a value that met the standard (> 0.50) for each item (Intyaswati, 2023). A reliability test was carried out using Cronbach's alpha, and the results met the standard value (> 0.70). Data analysis used multiple linear regression to determine whether or not there was a causal relationship between the variables, using the SPSS 24 program.
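As a rough non-SPSS equivalent of the reliability and regression steps described above, the following Python sketch computes Cronbach's alpha from item scores and fits the multiple linear regression; the file name and column names are placeholder assumptions, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder data: questionnaire responses, one row per respondent.
df = pd.read_csv("waste4change_survey.csv")

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from item variances and total-score variance."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Assumed item groupings for the two independent variables.
presentation = df[["valence", "weight"]]
exposure = df[["frequency", "duration", "attention"]]
print("alpha, presentation:", cronbach_alpha(presentation))  # target > 0.70
print("alpha, exposure:", cronbach_alpha(exposure))

# Multiple linear regression: attitude on presentation and exposure scores.
X = sm.add_constant(pd.DataFrame({
    "presentation": presentation.mean(axis=1),
    "exposure": exposure.mean(axis=1),
}))
model = sm.OLS(df["attitude"], X).fit()
print(model.summary())
```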
Determination of factors affecting relapse of vaginitis among reproductive-aged women: An experimental study

Introduction: Vaginitis is a common problem for women, especially women of reproductive age. It is a worldwide health problem with many side effects, but it can be prevented by a health-promoting lifestyle related to vaginal health. The aim of this study was to determine the factors affecting relapse of vaginitis.

Methods: In this experimental study, 350 reproductive-aged women with vaginitis were selected from 10 health centers in Kermanshah (Iran) during 2015 and were divided equally between the intervention and control groups. To collect data, a researcher-created questionnaire, which included sociodemographic and health-promoting lifestyle questions, was used. The educational intervention was performed over 20 sessions, each lasting 25-35 minutes. The intervention group was educated by face-to-face education, pamphlets, phone contacts, text messages, and social media. The other group continued the routine clinic education and treatment, without contact with the intervention group. Data were analyzed through chi-square tests and a logistic regression model using IBM-SPSS version 20.

Results: The results of the study indicated a significant relation between vaginitis and sociodemographic characteristics such as women's and their husbands' literacy, job, family size, income, area for each member of the family, tendency of pregnancy, body mass index (BMI), and caesarean experience (p<0.001). In addition, significant relationships between health-promoting lifestyle dimensions and prevention of vaginitis were identified. Relapse after the intervention was 27.7% in the intervention group and 72.3% in the control group. According to the logistic regression analysis, the chance of relapse of vaginitis in the group that did not receive the intervention was greater than in the intervention group (OR=5.14).

Conclusion: Health-promoting lifestyle intervention influences prevention of vaginitis. Health-promoting lifestyle education, literacy promotion, and prevention of caesareans and obesity are beneficial to improving the lifestyle dimensions associated with vaginal health, and could be implemented as a successful prevention method. Therefore, it seems that applying a health-promoting lifestyle is essential for a healthy vagina and prevention of vaginitis.

Background and study logic

Most women have had vaginitis at some point in their lives (1). Vaginitis is the most prevalent infection of the genital tract and is recognized among women in the primary health care sector and in gynecology clinics. It has been estimated that vaginal infections alone account for >10% of outpatient visits to providers of women's health (2). Vaginitis is a general term that refers to inflammation of the vaginal wall, generally caused by one of three disorders: yeast infections, bacterial vaginosis, and trichomoniasis (3,4). They affect women of all ages, but they are most common during the reproductive years (The American College of Obstetricians and Gynecologists, August 2011). A subpopulation of women (less than 10%) suffers from recurrent, often intractable episodes (5). Nearly 75% of all adult women have had at least one yeast infection in their lifetime, and 40 to 45 percent will have two or more (6-9). Vaginitis is a global health problem that affects men, women, families, and communities.
It may have severe consequences such as infertility, ectopic pregnancy, chronic pelvic pain, abortion, an increased risk of HIV transmission, and preterm birth or delivery of a low-birth-weight infant. Therefore, proper prevention and treatment of these diseases are of great importance (8,10-15). Some predisposing factors for vaginitis include obesity, physical inactivity, high intake of sugar, carbohydrates, cola and alcohol, low intake of dairy products, low vitamin C, stress, sleep disorders, and antibiotics; consequently, reducing simple carbohydrates and refined foods is advisable (4). Nutrient intake, for example of dietary fiber, fruits, and vegetables, may be associated with vaginitis (16). Too much sugar is also found to be bad for vaginal health and can make one more prone to yeast infections; therefore, a strict low-carb diet is recommended (15,20). If women want to enjoy a healthy vagina, they should stick with a balanced diet, avoid processed and sugar-rich foods, and eat plenty of fresh vegetables and fruits (19,20). Increased psychosocial stress is associated with greater bacterial vaginosis prevalence and incidence, and with recurrent candida vulvovaginitis, independent of other risk factors (7,17). Accordingly, lifestyle is associated with vaginitis. The term "lifestyle" is often used to refer to the way people live (21). Lifestyle includes behaviors such as food habits, sleeping and resting, physical activity and exercising, weight control, smoking and alcohol consumption, immunization against disease, coping with stress, and the ability to use family and society supports (19). The WHO defines health promotion as the process of enabling people to increase control over, and to improve, their health (WHO, 1986). Health-promoting lifestyles refer to individual actions to attain positive health outcomes (Pender, Murdaugh, & Parsons, 2011) [22]. Pender classified health-promoting lifestyles in six dimensions: nutrition, physical activity, stress management, interpersonal relationships, spiritual growth, and health responsibility (23,24). Maintaining health typically requires improvement in health-promoting lifestyles (25). Women are the primary health promoters all over the world. For their effective participation in health promotion, women require access to information, networks, and funds (26). Women's health is a fundamental component of development and improvement in the developing world (27). Regarding the increasing prevalence of genital tract infections in various communities, the WHO often emphasizes their prevention and control. The necessity of consultation and education for efficient preventive healthy behaviors is one of the hottest topics in sexual and reproductive health. Education of women of reproductive age toward infection prevention, the use of health services, and self-care methods aimed at reducing disease transmission and supporting treatment is a necessity in society (10,28).

Objectives

The main objective of this current research was to determine the factors affecting vaginitis and relapse of this disease among reproductive-aged women. The specific objectives were to determine the difference between the intervention group and the control group, to assess the effect of a health-promoting lifestyle on vaginitis. We tried to empower women's lifestyles by an educational intervention concentrated on three main dimensions: nutrition behaviors, physical activity, and mental health.
Research design and setting

This experimental study was conducted on 350 reproductive-aged women with diagnosed vaginitis in health centers affiliated to Kermanshah University of Medical Sciences (a city in west Iran) during 2015. Patients were divided into intervention and control groups. The intervention group (n=175) participated in a health-promoting lifestyle education designed on the basis of an adapted Pender's health promotion model, in addition to routine treatment; the control group (n=175) received routine treatment. The tool for gathering data was a researcher-made questionnaire administered to all 350 participants at the beginning of the study and six months after the intervention.

Sample size and sampling method

The sample size, with a 95% confidence level and a power of 80%, was calculated as 137.5 (~140) by considering OR=1.7, P = 0.45 (relapse of vaginitis according to the epidemiology of vaginitis), and L = ln(1.7) = 0.53. Allowing for 20% attrition, 175 patients were considered in each group. A stratified two-stage cluster sampling design was used. The health centers in Kermanshah (a total of 24 centers) were considered as clusters. The health centers were then stratified into five geographic areas (north, south, east, west, and center) of the city. In the first stage, two health centers were selected randomly from each stratum (10 centers); in the second stage, women with vaginitis were selected randomly from each selected health center.

Participant allocation

The 10 centers were named on 10 cards. These cards were shuffled, and each time a card was picked out, it was assigned as an intervention center, and the next card as a control center. Therefore, five health centers were allocated to the intervention group and five centers to the control group.

Experiment and data collection

To collect data, a researcher-created questionnaire was used that included sociodemographic questions (age, literacy, job, husband's job, husband's literacy, economic status, area for each member of the family, BMI, history of abortion, history of vaginitis, caesarean, and family size); the second part of the questionnaire included questions about lifestyle, designed in three dimensions (nutrition behaviors, physical activity, and mental health) with a multiple-choice Likert scale. Although some questionnaires exist, such as Walker's health-promoting lifestyle profile, there is no questionnaire about lifestyle related to vaginal health. Consequently, a questionnaire to measure lifestyle related to vaginal health was designed and validated via a psychometric process. After randomly selecting and dividing clinics into intervention and control groups, the control group received the usual treatment, and the intervention group received health-promoting lifestyle education emphasizing three dimensions (nutrition behaviors, physical activity, and mental health) in addition to routine treatment. The content of the intervention was adapted to women's needs in three aspects of lifestyle associated with vaginal health and prevention of vaginitis. The intervention was performed over 20 sessions of 20 to 30 minutes each. The intervention group was followed up through face-to-face education, pamphlets, phone contacts, text messages, and social (Internet) media.
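The stratified two-stage cluster sampling and alternating allocation described above can be illustrated with a short script; the strata and centre codes below are invented placeholders, not the actual Kermanshah centres.

```python
import random

# Hypothetical strata: five geographic areas covering 24 health centres.
strata = {
    "north": ["N1", "N2", "N3", "N4"],
    "south": ["S1", "S2", "S3", "S4", "S5"],
    "east": ["E1", "E2", "E3", "E4", "E5"],
    "west": ["W1", "W2", "W3", "W4", "W5"],
    "center": ["C1", "C2", "C3", "C4", "C5"],
}

random.seed(42)  # reproducibility of the illustration only

# Stage 1: randomly select two centres per stratum (10 in total).
selected = [c for centres in strata.values()
            for c in random.sample(centres, k=2)]

# Allocation: shuffle the 10 centres and alternate intervention/control,
# mirroring the card-drawing procedure described above.
random.shuffle(selected)
intervention_centres = selected[0::2]
control_centres = selected[1::2]
print("Intervention:", intervention_centres)
print("Control:", control_centres)

# Stage 2 (per centre): women with vaginitis would then be sampled
# randomly from each centre's patient list until 175 per arm is reached.
```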
Statistical analysis

To approach the aims of the research and answer the questions, data were analyzed with IBM SPSS Statistics version 20 (IBM Corp., Armonk, NY, USA) using the chi-square test and logistic regression; p < 0.05 was considered statistically significant, and the confidence interval was 95%.

Study variables, bias, and confounders

Health-promoting lifestyle was the intervention variable, and relapse of vaginitis was considered the outcome. Because the centers were randomized into two groups in this study, it is expected that most of the biases were controlled. The intervention group and control group had no contact with each other. To avoid bias, the Kermanshah health centers were requested not to allow any other similar intervention to be conducted during the time this research was carried out.

Research ethics

The research project received the confirmation of the Institutional Ethics Committee of Tehran University of Medical Sciences (number 9021108004-133603, dated 12/30/2015). All required arrangements were made. Participants were informed of the details of the study, asked to read and sign a consent form, and assured of confidentiality. The women volunteers were given the opportunity to leave the study if they became uncomfortable. The control group was given the opportunity to participate in the health-promoting educational program after the study. All questionnaires were completed in privacy, in a comfortable room in the health centers, after the patients had received routine treatment in the office. All 350 participants were assured that the information would remain private.

Results

The mean age of the women in this study was 31.03 ± 7.65 years. Most of the study population (93.1%) were housewives. The financial status of participants was moderate (88.9%). The mean years of literacy was 8.89 ± 3.65 in these women and 9.94 ± 3.54 in their husbands. The mean BMI was 26.84 ± 3.61. Most (68.3%) participants were living in families with three or four members. Demographic variables were not significantly different between the two groups. Other features of the women participating in the study, by intervention and control group, are shown in Table 1.

Eighty-eight percent of all participants had had vaginitis at least once in the six months before the study: 51.3% in the intervention group and 48.7% in the control group. Although there was no difference between the groups before the intervention, we noted a correlation between participants' lifestyle and relapse of vaginitis. Relapse of vaginitis was 17.7% in the intervention group and 46.3% in the control group. Chi-square tests show a significant (p<0.001) relation between the intervention and relapse of vaginitis (Table 2).

Chi-square test results in Table 3 indicate a relation between relapse of vaginitis and some socioeconomic characteristics such as women's and their husbands' literacy, job, family size, income, area allocated for each member of the family, tendency of pregnancy, BMI, caesarean history, and experience of health education about a health-promoting lifestyle associated with vaginitis (p<0.001). In addition, the results in Table 4 indicated a significant relationship between health-promoting lifestyle and vaginitis; these dimensions included nutrition behaviors, physical activity, mental health, genital health, social support and interpersonal relationships, responsibility for health, self-efficacy, benefits and barriers of a health-promoting lifestyle, and the role of health providers.
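A minimal Python sketch of the chi-square test and logistic regression described above follows, for readers without SPSS; the file name and the 0/1 coding of group and relapse are assumptions about how such data might be laid out, not the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Placeholder data: one row per woman; relapse coded 1/0,
# group coded 1 for control (no intervention) and 0 for intervention.
df = pd.read_csv("vaginitis_study.csv")

# Chi-square test of intervention vs relapse (as in Table 2).
table = pd.crosstab(df["group"], df["relapse"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")

# Logistic regression: odds of relapse for control vs intervention.
X = sm.add_constant(df[["group"]])
fit = sm.Logit(df["relapse"], X).fit(disp=False)
odds_ratio = np.exp(fit.params["group"])       # reported as 5.14 in the study
ci = np.exp(fit.conf_int().loc["group"])
print(f"OR={odds_ratio:.2f}, 95% CI: {ci[0]:.2f}-{ci[1]:.2f}")
```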
Chi-square tests showed a significant relationship between all health-promoting lifestyle dimensions and prevention of vaginitis. Independent variables were included in the logistic regression. In this model, odds ratios and confidence intervals are shown for the variables that resulted in significant P-values. Logistic regression shows the chance of vaginitis in different groups of participants. According to the logistic regression analysis, among factors positively associated with relapse of vaginitis, the chance of relapse in the group that did not receive the intervention was 5.14 times that in the intervention group (OR=5.14, 95% CI: 2.79-9.44). Women with a primary level of education (under seven years of education) and women with 7-12 years of education had, respectively, 1.77 and 2.33 times the chance of women with a university education. Their husbands' literacy was significantly related to vaginitis as well. Women whose husbands had under seven years of education had 3.25 times the chance of women whose husbands had a university education, and relapse of vaginitis in women whose husbands had 7-12 years of education was 1.71 times that for high-level education (more than 12 years of education). Findings showed that the chance for those in a bad economic situation was 2.17 times (OR=2.17, 95% CI: 0.52-8.89), and for women in a moderate economic status 1.39 times (OR=1.39, 95% CI: 0.706-2.74), the chance for women in a good economic situation. With regard to family size, the chance for families with more than six members was 2.12 times (OR=2.12, 95% CI: 0.74-5.99), and for families with five members 1.74 times (OR=1.74, 95% CI: 0.704-4.315), the chance for families with three members. The chance for women with more than one caesarean in their history was 1.53 times (OR=1.53, 95% CI: 0.629-3.77) that of women without a caesarean. Change in BMI was associated with relapse of vaginitis: women whose weight increased over six months, or who kept their weight stable, had a greater chance of vaginitis. The chance for women with increasing BMI was 12.85 times (OR=12.85, 95% CI: 5.21-31.69) that of women who decreased their weight. Women who had no change in their weight and remained overweight, as six months earlier, had 3.45 times (OR=3.45, 95% CI: 1.71-6.99) the chance of women who decreased their weight. …26-5.96) times the same chance as in women with health-promoting physical activities. The chance of relapse of vaginitis in the group with a weak level of genital health was (OR=0.11, 95% CI: 0.76-158.62) times, and in women with a moderate level of genital health (OR=2.46, 95% CI: 0.25-23.60) times, the same chance as in women with good genital health. Social support and good interpersonal relationships with others were factors associated with vaginitis; logistic regression showed that the chance of vaginitis in women with weak social support was (OR=1.82, 95% CI: 0.122-27) times the same chance as in women in the reference group. Relapse of vaginitis in women with a moderate level of social support was three times the chance in women with a good level of social support and interpersonal relationships. Women who received weak services from health providers had 1.53 times the chance of vaginitis of the reference group.
Women at a moderate level of health provider services had (OR=2.03, 95% CI: 0.31-13.02) times the same chance as women who had received good services from health providers. The chance of vaginitis in the group with weak responsibility for health was (OR=9.02, 95% CI: 1.60-50.83) times the same chance as in women with good responsibility for health. Women with a weak level of perceived benefits of a health-promoting lifestyle had a greater chance of relapse of vaginitis; the chance of vaginitis in this group was (OR=23.33, 95% CI: 2.21-24.60) times the same chance as in women who had a good perception of the benefits of a health-promoting lifestyle. Also, the chance of vaginitis in the moderate group was (OR=7.67, 95% CI: 0.87-67.23) times the same chance as in the reference group. Women with a moderate level of perceived barriers to a health-promoting lifestyle had a greater chance of relapse of vaginitis; the chance of vaginitis in this group was (OR=1.68, 95% CI: 0.26-10.99) times the same chance as in women who had a good perception of barriers to a health-promoting lifestyle. Women with a weak level of self-efficacy had a greater chance of vaginitis, (OR=11.25, 95% CI: 1.17-10.79) times the same chance as in women with good self-efficacy.

Discussion

This study found a significant relationship between prevention or relapse of vaginitis and a health-promoting lifestyle that included proper nutrition behaviors, physical activity, mental health, genital health, social support and interpersonal relationships, responsibility for health, self-efficacy, perceived benefits and barriers of a health-promoting lifestyle, and the role of health providers. The findings of this study also indicated a significant relation between relapse of vaginitis and socioeconomic characteristics (women's and their husbands' literacy, job, family size, financial status, area for each member of the family, tendency of pregnancy, BMI, and caesarean experience). A review of the literature shows that lifestyle is associated with vaginitis: for example, nutrition behaviors (sugar, dairy, vegetable, and fruit intake), physical activity leading to weight control, BMI, stress, and mental health (including relationships with others, problem-solving skills, perception of counseling, and acceptance of responsibility for health) are all related to vaginitis. Therefore, three aspects of lifestyle (nutrition behaviors, physical activity, and mental health) were the most important subjects in our intervention. The results of this study indicate the positive effect of a health-promoting intervention on the health of reproductive-aged women and are concordant with the results of the study conducted by Abbas Rahimi Foroushani et al. (29). There have been surveys in Kermanshah about vaginitis and the need for the implementation of an educational intervention (30). Thus, the design of the intervention was adjusted to an understanding of the problem and the context (31). Given the rising scores on the lifestyle dimensions for patients who received the intervention, quality of life improved in these women. This indicates that intervention is necessary for women; as Sneha Barot has said, investment in sexual and reproductive health is necessary to reach ideal global health (32). Other studies showed that treatment and behavioral intervention programs are effective methods of treatment, but the behavioral intervention program is superior and cost-effective, with few side effects (33).
Lifestyle factors are related to many diseases that could be prevented through exercise, good nutrition, avoidance of tobacco and alcohol, psychological well-being and healthy interpersonal relationships, relaxation, stress management, suitable sleep, spiritual involvement, spending time in nature, and service to others, say Dr. Roger Walsh of the University of California, Irvine's College of Medicine, and Michael O'Donnell (34). In the field of nutrition, routine intake of dairy products and fresh vegetables and fruits reduces the chance of getting vaginitis, whereas sweets intake increases the probability of vaginitis (35). In Watson and Calabretto's study, as in our study, vaginal inflammation in the group that used yoghurt was decreased by up to three times. Keeping a balanced diet and attending to nutritional considerations are vital for women. In addition, inadequate physical activity leads to obesity, which is a susceptibility factor for vaginitis. Implementing simple 30-minute exercises and walking three to five times per week enabled women to achieve a normal BMI and prevent obesity. A study conducted by Baheiraei et al. on women of reproductive age in Tehran reveals that physical activity makes positive changes in health. These studies have reported the relationship between self-efficacy and the subdomains of health-related lifestyles, such as nutrition, physical activity, health responsibility, and other subdomains (36). Also, in our study, we tried to raise self-efficacy in simple ways. Patients learned to keep to their routine activities even in dusty and warm weather. They were able to engage in simple physical activities at home, and they also arranged their diet in simple and cost-effective ways. In this study, we were able to improve women's scores in mental health settings, because we believe this is an important aspect of lifestyle related to vaginal health, as noted by Fiocco (37). Further, Mojgan Mirghafourvand et al., in a study of health-promoting behaviors and their predictors in Iranian women of reproductive age (36), and Jiang et al. (2007) reported similar findings in their interventions (38). Women in this study learned how to overcome stress by pursuing relaxing activities such as meditation, breathing slowly and deeply, doing regular exercises, listening to music, and maintaining communication with others, especially relatives, as different ways to manage stress. This was in line with the results of the Dale et al. study. Adequate sleep is a major component of women's health. Women also learned that determining regular hours for sleeping and having a daily walking program could promote the quality and quantity of sleep, in accordance with Abedi et al. (39,40). The results showed that the health-promoting educational intervention largely reduces relapse of vaginitis and promoted the lifestyle of the women. We recommend, as future action, the implementation of further surveys on reproductive health, especially on vaginitis and behavioral factors such as sexual behaviors. According to our findings, the intervention could be delivered with fidelity and was acceptable to patients (41). There is no doubt that it will be beneficial, in which case it should be implemented. Finally, the effects of the intervention are sufficiently promising.
In that case, a researcher who understands the underlying problem, has developed a credible intervention, and has considered the key points in evaluation will be in a strong position to conduct a worthwhile, rigorous, and achievable definitive trial, as Campbell et al. noted in 2007 (42).

Conclusions

This study helped us to determine important factors associated with vaginal health, prevention of vaginitis, and relapse of this disease. The findings of this study showed a significant relationship between lifestyle and vaginitis. In addition, there is a significant relationship between some sociodemographic characteristics (women's and their husbands' literacy, job, family size, financial status, tendency of pregnancy, and caesarean experience) and vaginitis. The importance of these outputs lies in applying health-promoting lifestyle interventions adjusted to the context and concentrated on target groups, especially housewives and couples with low levels of literacy and financial status, together with prevention of caesareans, as an innovative protocol for the prevention of vaginitis and its relapse in primary health care services.
Infrared laser therapy decreases systemic oxidative stress and inflammation in hypercholesterolemic mice with periodontitis

Background: Near-infrared irradiation photobiomodulation (NIR-PBM) has been successfully used in periodontal treatment as an adjuvant tool to locally improve cell function and regeneration. Although the relationship between periodontitis and systemic disease constitutes an important aspect of periodontal clinical research, the systemic effects of NIR-PBM in periodontitis are not well known. In this study, we aimed to investigate the effects of NIR-PBM on systemic oxidative stress and inflammation in an apolipoprotein E (ApoE) knockout mouse model of periodontal disease (PD).

Methods: We evaluated alveolar bone loss by measuring the distance from the cementoenamel junction (CEJ) to the alveolar bone crest (ABC), reactive oxygen species (ROS) production in blood cells, inflammatory activity, plasma cholesterol levels, and lipid peroxidation levels in three experimental groups: (1) ApoEC, control group without intervention; (2) ApoEP, first molar ligation-induced periodontitis for 4 weeks; and (3) ApoEP + PBM, exposed to 808 nm continuous wave, ø ~ 3 mm², 100 mW, 60 s of NIR-PBM for 7 consecutive days after 4 weeks of periodontitis. At the end of the experimental protocols, ApoEP mice presented significantly increased alveolar bone loss, ROS production, inflammatory activity, plasma cholesterol, and lipid peroxidation levels compared to the ApoEC group (P < 0.05). NIR-PBM for 7 days in the ApoEP + PBM mice significantly decreased systemic ROS production, inflammatory response, plasma cholesterol, and lipid peroxidation levels to values similar to those found in the ApoEC group (P > 0.05). However, it was not capable of preventing alveolar bone loss (P > 0.05 compared to ApoEP mice).

Conclusion: A 7-day treatment with NIR-PBM effectively reduces systemic oxidative stress and inflammatory parameters in hypercholesterolemic mice with PD. However, more studies with longer evaluation times are needed to confirm the systemic effects of locally applied NIR-PBM on PD associated with hypercholesterolemia.

Introduction

Photobiomodulation (PBM) using near-infrared irradiation (NIR) is based on the theory that low-level light can modify and enhance cellular function [1]. The local reduction in edema and the decrease in oxidative stress markers and proinflammatory cytokines with PBM treatment are well established. However, some systemic effects, whereby light delivered to the body can positively impact distant tissues and organs, have also been reported [2]. Cellular effects attributed to NIR-PBM include an increase in adenosine triphosphate (ATP) production, a reduction in reactive oxygen species (ROS) production, protection against toxins, enhanced cell proliferation, and reduced apoptosis [3].

Excessive ROS production can lead to increased oxidative stress, resulting in tissue damage, lipid peroxidation, damage to deoxyribonucleic acid (DNA), protein damage, and the oxidation of important enzymes. However, ROS can also function as signaling molecules or mediators of inflammation [4].
Inflammatory mediators encompass a variety of soluble and diffusible molecules that act locally at the site of infection and at more distant locations. These mediators may be of endogenous origin (such as lipopolysaccharides from gram-negative bacteria) or exogenous (related to toxins and bacterial products) [5]. Increased oxidative stress plays a significant role in many human diseases, including periodontitis [6], hypercholesterolemia, atherosclerosis, chronic obstructive pulmonary disease, Alzheimer's disease, and cancer [7].

Hypercholesterolemia is the most important modifiable risk factor for cardiovascular disease; its reduction significantly decreases the risk of this type of disease in the population [8]. Moreover, high plasma cholesterol levels in hypercholesterolemic individuals reduce antioxidant activity and increase oxidative stress, through decreased superoxide dismutase (SOD) activity and increased malondialdehyde (MDA) levels, whereas the reverse is observed in normal individuals [9]. A study by Katz et al. [10] suggested that hypercholesterolemia could potentially serve as a link between chronic periodontal inflammation and atherosclerosis.

The relationship between periodontitis and systemic diseases constitutes an important area of clinical periodontal research. Periodontal disease (PD) may play a role in the development of a systemic inflammatory state by sharing inflammatory risk factors. However, systemic changes also affect oral health [11].

Periodontics has embraced laser technology in both surgical and nonsurgical treatments of periodontal tissues, either as a standalone treatment or as an adjunct, with many successful outcomes. Different lasers of high and low power have been utilized in periodontal treatments, offering benefits such as improved coagulation, antibacterial effects, root surface detoxification, removal of the smear layer, and enhanced bone recontouring [12].

While laser therapy has been employed in numerous studies related to periodontal disease, its impact on systemic oxidative stress levels and inflammation in a hypercholesterolemia model of PD has not yet been explored. Therefore, our study aims to assess systemic levels of oxidative stress and inflammation in a hypercholesterolemic model, specifically apolipoprotein E knockout (ApoE−/−) mice with PD, subjected to the effects of NIR-PBM.

Animals and experimental groups

The handling and care of the mice were conducted in accordance with the ethical principles outlined in the national and institutional guidelines for the care and use of laboratory animals, with approval from the ethics committee of Vila Velha University (No. 586-2021). ApoE−/− mice, aged 16 weeks and weighing 25-30 g, were provided with a standard chow diet and had access to water ad libitum. They were individually housed in plastic cages under controlled conditions of temperature (22-23 °C), humidity (60%), and a 12-hour light/dark cycle. The mice were divided into three experimental groups: ApoEC (n = 6-8), which received no intervention; ApoEP (n = 6-8), in which periodontitis was induced for 4 weeks; and ApoEP + PBM (n = 6-8), which underwent PBM treatment for 7 consecutive days after 4 weeks of periodontitis induction (Fig. 1).
ApoE knockout mouse model

The ApoE knockout mouse model was developed by two laboratories in 1992 with the aim of creating better animal models for studying lipoprotein disorders and atherosclerosis and identifying genes that may modify atherogenesis and lesion progression [13,14]. The increase in plasma cholesterol in ApoE knockout animals occurs due to the inhibition of the expression of the gene that encodes apolipoprotein E. Apolipoprotein E is a glycoprotein with a molecular weight of approximately 34 kDa that functions by binding to LDL (low-density lipoprotein) receptors to remove cholesterol from the circulation in the liver. It is present in VLDL (very-low-density lipoprotein), IDL (intermediate-density lipoprotein), and HDL (high-density lipoprotein). Its synthesis primarily occurs in the liver, but it is also produced in the brain and by macrophages [15].

To inhibit its expression, mouse embryonic stem cells are genetically modified by the insertion of two plasmids containing the neomycin resistance gene, which replaces part of the ApoE gene. These plasmids are inserted into blastomeres of wild-type mice (C57Bl/6), generating homozygous and heterozygous offspring. Crossing the homozygous animals results in ApoE knockout mice that present increased levels of VLDL in the plasma [13,14].

Compared to other animal models of atherosclerosis, ApoE knockout mice have several advantages. They develop atherosclerosis spontaneously, without the need for a high-cholesterol diet [16,17]. The development of atherosclerosis involves the activation of proinflammatory signaling, which includes the expression of cytokines and chemokines and promotes increased oxidative stress. Oxidative stress plays a crucial role in inflammatory responses, apoptosis, cell growth, changes in vascular tone, and LDL oxidation [18].

Induction of periodontitis

Following the protocol for induction in mice suggested by Pereira et al. [19], with modifications, the mice were anesthetized with ketamine and xylazine (91 + 9.1 mg/kg) via intraperitoneal injection and positioned on an adapted surgical table that allowed the opening of their oral cavity. Endodontic digital spacers #20 and #25 (Dentsply Maillefer, Ballaigues, Switzerland) with 3-mm bent tips were used to create space between the 1st and 2nd lower molars on the right side of the mice's oral cavity. First, the #25 spacer was inserted, and after its removal, the #20 spacer was placed, with a 6.0 suture (Ethicon, USA) tied to its stem. Spacer #20 was then carefully removed to allow the suture thread to pass between the two teeth. Once the thread was inserted into the interproximal space, two knots were tied in the mesial area of the 1st molar to secure the ligature for plaque retention. The ligature was maintained in this region for a period of four weeks. After 4 weeks, the induction of periodontitis was confirmed by the presence of visible plaque at the molar ligature suture and by bleeding upon ligature suture removal and supragingival scraping to remove bacterial plaque [20].
Laser irradiation

A gallium aluminum arsenide (GaAlAs) diode laser, the Laser Duo (MMOptics Ltda, São Carlos, Brazil), was used for the PBM treatment. During the treatment, the mice were gently immobilized on their backs, and the laser was positioned in the extraoral region at the angle of the mandible, allowing the light beam to penetrate the entire intraoral region of the right lower molars of the animals. Laser treatment for the mice in the ApoEP + PBM group was conducted at an infrared wavelength with a total energy of 6 J per session (808 nm, continuous wave, ø ~ 3 mm², 100 mW) for a duration of 60 s. The treatment was administered for seven days with a 24-hour interval between sessions, totaling seven sessions. The ApoEP group received the same handling, with the exception that the laser remained inactive. The ApoEC group received no intervention and served as the control. The mice were anesthetized 24 h after the conclusion of the laser treatment, and blood was collected through an intracardiac puncture. Subsequently, the mice were perfused with 10 ml of phosphate-buffered saline (PBS), and their mandibles were dissected [21].

Mandible scanning electron microscopy and morphometric analysis

To confirm the establishment of PD and assess the effects of NIR-PBM, the mandibles of the mice were extracted and dissected. The organic tissue was removed from the samples by soaking them in 3% sodium hypochlorite for four weeks. After this period, the mandibles were rinsed in running distilled water for one minute. Subsequently, they were dried in an oven at 37 °C for seven days and stored in a humidity-free environment. To obtain scanning electron microscopy (SEM) images, the samples were kept for 48 h at 50 °C and then coated with pure gold in a vacuum coater (Desk V, Denton Vacuum). The samples were then analyzed in direct mode using a scanning electron microscope (Jeol, JEM-6610 LV) [22].

After obtaining SEM images of the mandible, alveolar bone loss was assessed by measuring, in the first mandibular molar, the linear distance in micrometers (μm) from the cementoenamel junction (CEJ) to the alveolar bone crest (ABC) of the mesial root, following the long axis of the tooth, using ImageJ, a public-domain image analysis software. Bone loss was expressed in micrometers (μm) [23].

ROS production was assessed in blood cells by measuring intracellular superoxide anion (•O2−) and hydrogen peroxide (H2O2) through changes in the median fluorescence intensity (MFI) emitted by dihydroethidine and dichlorofluorescein (DHE and DCF, Sigma-Aldrich, USA, respectively). Briefly, 10⁶ cells were incubated with 160 mmol L−1 DHE and 20 mmol L−1 DCF at 37 °C for 30 min in the dark. The data were acquired using the FACSCanto II, and overlay histograms were analyzed using FACSDiva software by determining the average fluorescence intensity of 10,000 cells. Data were expressed as the median fluorescence intensity (MFI) [24].
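As a quick check on the dose arithmetic, the sketch below computes the energy delivered per session and, under our assumption that "ø ~ 3 mm²" denotes a spot of roughly 3 mm diameter, an approximate fluence; the spot geometry is our reading, not stated unambiguously in the source.

```python
import math

power_w = 0.100   # 100 mW laser output
time_s = 60       # 60 s per session
energy_j = power_w * time_s
print(f"Energy per session: {energy_j:.1f} J")  # 6.0 J, matching the protocol

# Assumption: a circular spot of ~3 mm diameter.
diameter_cm = 0.3
area_cm2 = math.pi * (diameter_cm / 2) ** 2     # ~0.071 cm^2
fluence = energy_j / area_cm2
print(f"Approximate fluence: {fluence:.0f} J/cm^2")  # ~85 J/cm^2 under this assumption
```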
Biochemical analysis Plasma cholesterol levels were assessed using a commercial colorimetric kit (Cholesterol Liquicolor – In Vitro, Itabira/MG, Brazil). This kit utilizes reagents for the quantitative determination of cholesterol in plasma. The assay involves an enzymatic reagent – RGT (phosphate buffer, pH 6.5, 30 mmol/L; 4-aminoantipyrine, 0.3 mmol/L; phenol, 5 mmol/L; peroxidase > 5 KU/L; cholesterol esterase > 150 U/L; cholesterol oxidase > 100 U/L; sodium azide, 0.05%) and a standard – STD (cholesterol, 200 mg/dL; sodium azide, 0.095%). Ten microliters (10 μL) of the plasma sample was mixed with 1000 μL of RGT, and 10 μL of STD was mixed with 1000 μL of RGT. They were then incubated for 5 min at 37 °C. Absorbances were measured using a spectrophotometer at a wavelength of 546 nm. To determine cholesterol values, the absorbance of the sample was divided by the absorbance of the STD and multiplied by 200. The plasma cholesterol values are expressed in mg/dL. Inflammatory activity was assessed by measuring myeloperoxidase (MPO) activity. In this assay, hydrogen peroxide (H2O2) is cleaved by MPO, and the resulting oxygen radical reacts with O-dianisidine dihydrochloride, leading to the formation of a colored compound. Plasma samples (12 μL) were transferred to a flat-bottom microplate, and the biochemical reaction was initiated by adding 236 μL of O-dianisidine solution (comprising 16.7 mg O-dianisidine dihydrochloride, 90 ml distilled water, 10 ml potassium phosphate buffer, and 50 μL of 1% H2O2). Absorbance was measured using an iMark® Absorbance ELISA microplate reader (Bio-Rad, Washington, USA) at a wavelength of 460 nm, with data recorded at 15-second intervals over a period of 10 min. The results were expressed as arbitrary units of myeloperoxidase activity (a.u. myeloperoxidase) as a function of time [25]. To determine oxidative stress, plasma lipid peroxidation was assessed using the TBARS assay. The generation of free radicals and lipid peroxidation are rapid processes, measured by their products, with thiobarbituric acid-reactive substances (TBARS), particularly malondialdehyde (MDA), being the primary indicator. To measure metabolites reactive to TBA, 43 μL of plasma was placed in a microtube with 7% perchloric acid and mixed using a vortex for 60 s. After this step, the samples were centrifuged at 7400 rpm for 10 min, resulting in the formation of a white pellet at the bottom of the microtube. Subsequently, 47 μL of the supernatant was transferred to two labeled microtubes, and an additional 53 μL of 0.6% thiobarbituric acid was added. The tubes were then placed in a thermocycler at 95 °C for 1 h. Following this incubation, the samples were centrifuged again and read in a spectrophotometer at 532 nm using a 96-well plate and the iMark® Absorbance ELISA microplate reader (Bio-Rad, Washington, USA). The malondialdehyde (MDA) level was expressed in μmol of MDA per milligram of protein [26]. Statistical analyses The results are expressed as the mean ± SEM. Normal distribution of the variables was assessed using the Shapiro-Wilk test. When the results passed the normality test, the means of the values were statistically analyzed for comparisons among different groups using one-way ANOVA followed by Tukey's post hoc test, conducted with GraphPad Prism software, version 8.02 (GraphPad, Inc., San Diego, CA, USA). Differences were considered significant when P < 0.05 [27].
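The cholesterol calculation described above (sample absorbance divided by standard absorbance, multiplied by the 200 mg/dL standard concentration) reduces to a one-line formula; a minimal Python sketch with made-up absorbance values is given below.

# Cholesterol quantification as described above:
# concentration = (A_sample / A_standard) * 200 mg/dL, absorbances at 546 nm.

STD_CONC_MG_DL = 200.0  # STD concentration supplied with the kit

def plasma_cholesterol(a_sample, a_standard):
    """Return plasma cholesterol in mg/dL from 546 nm absorbances."""
    return (a_sample / a_standard) * STD_CONC_MG_DL

# Hypothetical absorbances, for illustration only:
print(plasma_cholesterol(a_sample=0.35, a_standard=0.28))  # 250.0 mg/dL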
Analysis of myeloperoxidase (MPO) activity The quantification of myeloperoxidase enzymes to assess inflammatory activity showed that the animals in the ApoEP group had higher levels of MPO (0.02153 ± 0.002214 a.u., P < 0.05) when compared to those in the ApoEC group (0.01218 ± 0.002232 a.u.) and the ApoEP + PBM group (0.007571 ± 0.00263 a.u.). Moreover, mice in the ApoEP + PBM group had significantly decreased levels of MPO activity, similar to those found in the ApoEC mice, demonstrating a pronounced effect of PBM in controlling inflammatory activity in PD (Fig. 4). Superoxide and hydrogen peroxide levels The levels of superoxide anion and hydrogen peroxide, measured using the fluorescence markers DHE and DCF, respectively, showed that periodontitis significantly increased ROS production in ApoEP mice (147.5 ± 21.50 a.u.; 989.5 ± 35.50 a.u.; P < 0.05) compared to the ApoEC group (98.57 ± 5.331 a.u.; 491.7 ± 53.41 a.u.). NIR-PBM treatment restored DHE levels (83.75 ± 7.74 a.u.) to levels similar to those found in ApoEC mice. However, DCF levels in the ApoEP + PBM group decreased (651.3 ± 17.75 a.u.) compared to those in the ApoEP group but did not reach the levels observed in the ApoEC group (Fig. 5). Evaluation of lipid peroxidation levels Animals from the ApoEP group showed higher lipid peroxidation levels (0.02738 ± 0.004885 μmol MDA/mg protein, P < 0.05) than those from the ApoEC group (0.008580 ± 0.001743 μmol MDA/mg protein). Moreover, the animals from the ApoEP + PBM group that received PBM treatment showed lower lipid peroxidation levels (0.01280 ± 0.0001581 μmol MDA/mg protein), similar to those of the ApoEC animals. These results indicate that periodontitis increases oxidative stress levels, but treatment with NIR-PBM is highly effective in reducing it (Fig. 6). Discussion Our study evaluated the effect of photobiomodulation (PBM) on experimental ligature-induced periodontitis in hypercholesterolemic (ApoE knockout) mice. Various types of lasers are available for periodontal treatment. PBM is a complex treatment, offering a wide range of combinations due to the extensive variation in the parameters that can be utilized: wavelength, source power, energy density, power density, irradiation time, and total applied energy [28]. These variations have led to an increase in the number of published negative trials and have generated controversy, despite the large number of positive clinical results [29]. The photobiomodulatory effects of laser irradiation vary among different cell types, and this aspect has often been overlooked as a potential explanation for the conflicting results reported in the literature following treatment [30]. The PBM parameters used in our study were similar to those presented by Santos et al. [31], who employed a laser with a wavelength of 808 nm in a rat model of critical bone defect. Theodoro et al. [32] utilized the GaAlAs diode laser as monotherapy and as an adjuvant to mechanical treatment, using the same wavelength and power as in our study. However, in their work, they administered the treatment over a shorter timeframe and within a single session in rats with periodontitis. They observed a notable influence on the healing processes, tissue repair, and greater efficacy in modulating the inflammatory response in animals treated with laser, both in monotherapy and as an adjuvant treatment.
Several studies have shown that the application of diode lasers has bactericidal and detoxifying effects, and this technique demonstrates clinical benefits as an adjuvant to nonsurgical periodontal therapy [33,34]. Low-intensity laser therapy promotes healing through collagen synthesis and angiogenesis [35]. In our study, after the removal of the ligature, supragingival scaling was performed to clean the tooth surfaces by removing plaque, and we administered 60 s of infrared laser treatment for seven consecutive days. It is worth noting that the literature often lacks comprehensive reports on all the parameters used, the number of sessions and the intervals between them [28]. The literature has shown that the combination of low-intensity laser therapy and root planing increases the effectiveness of periodontal disease treatment [20,31,36]. Our ligature-induced periodontitis model proved to be effective, according to our analyses of alveolar bone loss, which showed a decrease in bone level in animals that had induced periodontitis compared to those without periodontitis, as is well established in the literature [19,37]. Treatment with PBM, however, was ineffective in promoting bone neoformation, with no difference between the ApoEP and ApoEP + PBM groups in our study. According to Tuner [28], PBM doses are cumulative, and several sessions in a short period can lead to inhibitory effects. In addition, Hamblin et al. [2] stated that increasing PBM doses produce a maximal cellular response; if the dose exceeds this maximum, the therapeutic effects of PBM decrease and disappear, giving way to negative or inhibitory effects. We chose the ApoE knockout mouse model, an animal model that presents high plasma cholesterol levels and develops atheromatous lesions very similar to those in humans, to evaluate the systemic effects of periodontitis and PBM. We observed higher levels of plasma cholesterol in ApoE mice with untreated periodontitis than in those that received PBM treatment. The literature reports that periodontitis can lead to a greater reservoir of cholesterol esters within macrophages and poses a significant risk for systemic implications, such as atherosclerosis [38,39]. Moreover, bacterial products, cytokines, and chemokines resulting from the infectious and inflammatory periodontal process enter the bloodstream and may stimulate the upregulation of endothelial cell surface receptors, as well as adhesion molecule expression on the vascular endothelium. This, in turn, leads to circulating monocytes adhering to the blood vessel endothelium. These monocytes migrate to the subendothelial space and differentiate into macrophages, which can take up oxidized low-density lipoprotein (LDL) and transform into foam cells, eventually leading to the apoptosis of LDL-laden macrophages. This process results in the accumulation of lipids in the subendothelial space, contributing to the formation of atheromatous plaques [40]. From this perspective, periodontitis may be considered a risk factor for cardiovascular diseases such as atherosclerosis [38]. Regarding the influence of the laser on cholesterol levels, the literature provides conflicting results, with some studies suggesting an increase [41], while others report a decrease [42].
Myeloperoxidase (MPO), a heme peroxidase found in large quantities in the azurophilic granules of neutrophils, serves multiple functions, including antimicrobial activity and participation in the biochemical pathway of ROS production, and it is an important indicator for assessing neutrophil infiltration into tissues, oxidative stress, and tissue damage [4], as well as a marker of inflammatory activity. Wei et al. [43] found that MPO levels were significantly increased in the periodontitis patient group in their periodontal clinical evaluation, emphasizing the pivotal role of ROS in periodontal tissue destruction. This finding aligns with the results of our study. Another study, conducted by Uslu et al. [44], evaluated the effect of diode laser treatment applied as an adjunct to root scaling and planing in an experimental model of periodontitis. They concluded that diode laser treatment reduces inflammation and oxidative stress in periodontal tissues, a result similar to what we observed in our study, where we found a significant decrease in inflammatory activity in PBM-treated mice. In our work, we assessed oxidative stress levels, which are positively associated with periodontitis [6]. During periodontitis, neutrophils release ROS in response to invading microorganisms. ROS are responsible for oxidative stress and contribute to much of the tissue damage during infection [45]. In our study, we quantified superoxide anion (O2−) and hydrogen peroxide (H2O2) levels by flow cytometry using DHE and DCF, respectively, as markers of ROS. We observed a significant increase in O2− and H2O2 levels in the untreated periodontitis group. This increase was prevented in the NIR-PBM mice. However, previous research has indicated that low-intensity laser therapy can accelerate electron transfer (respiratory chain) and initiate ROS production, specifically increasing the production of superoxide anion, which can lead to cell damage [46]. Hamblin et al. [2] raise the question of whether the types of ROS generated by PBM are identical to those naturally induced or not, with their benefits or harm depending on the rate at which they are produced. If superoxide is generated in the mitochondria at a rate that allows superoxide dismutase (SOD) to detoxify it into hydrogen peroxide, then H2O2 can diffuse out of the mitochondria to activate beneficial signaling pathways. However, if superoxide is generated at a rate or at levels beyond the capacity of SOD to handle, the accumulated superoxide can damage the mitochondria [2]. We also utilized the TBARS assay to evaluate the relationship between oxidative stress (lipid peroxidation), periodontitis, and PBM. Our study revealed a significant increase in lipid peroxidation levels in the ApoEP group when compared to the ApoEC group, while lipid peroxidation decreased in the ApoEP + PBM animals. The test detected malondialdehyde (MDA) formation resulting from the oxidation of lipid substrates [47]. During this process, ROS bind to polyunsaturated fatty acids, generating byproducts that can damage the membrane system, DNA, and cell proteins. Our results are consistent with a study by Almerich-Silla et al.
[48], which observed significantly higher levels of MDA in periodontitis patients than in healthy controls. It has also been demonstrated that elevated serum and salivary MDA levels without changes in antioxidant status can lead to systemic and local complications in patients with periodontitis [49]. Increased lipid peroxidation has been found in the gingival fluid, plasma, and saliva of individuals with periodontitis [50,51]. Others have also observed an association between PBM and reduced lipid peroxidation leading to a significant reduction in oxidative stress in cells and tissues, consistent with our findings [2,52,53]. Study strengths and limitations Our study is the first to demonstrate that NIR-PBM, as an adjunctive treatment for periodontal disease, has a significant beneficial effect in reducing systemic levels of cholesterol, inflammation, reactive oxygen species, and oxidative stress in a mouse model of periodontitis and hypercholesterolemia. These findings reinforce and support the importance of standardizing photobiomodulation therapy and including its use in dental clinical practice, especially in situations where there is an associated chronic inflammatory systemic disease. However, some limitations of our study should be acknowledged. The wide range of NIR-PBM parameters in the existing literature can lead to contradictory results and could explain the absence of bone neoformation in the ApoEP + PBM group in our study. Additionally, more studies are needed to evaluate the long-lasting systemic effects of photobiomodulation. Finally, it is worth noting that the ApoE knockout mouse model used in this study may not fully capture all the nuances of a hypercholesterolemic individual with periodontitis. Conclusion In hypercholesterolemic mice with periodontitis, seven days of NIR-PBM treatment effectively reduced ROS production, plasma cholesterol, lipid peroxidation, and inflammatory activity. However, the observed benefits did not extend to bone formation, likely due to the treatment duration and/or PBM dose. Our findings suggest that NIR-PBM has the potential to mitigate systemic factors in periodontal disease progression under hypercholesterolemic conditions. Future research with longer evaluation periods and varying doses is necessary to fully understand PBM's impact on hypercholesterolemia-related periodontitis. Fig. 2 Typical scanning electron microscopy microphotographs showing the results of alveolar bone loss, measured in micrometers as the distance between the cementoenamel junction (CEJ) and the alveolar bone crest (ABC) in the experimental groups (40x objective, scale bar: 500 μm). (A) ApoEC (control, without periodontitis induction), (B) ApoEP (with 4 weeks of periodontitis), (C) ApoEP + PBM (with photobiomodulation for 7 consecutive days after 4 weeks of periodontitis), and (D) representative bar graph of alveolar bone loss in the experimental groups. Values are represented as the mean ± SEM (one-way ANOVA, Tukey's post hoc test, n = 6-8 animals per group). *P < 0.05 vs. ApoEC mice
2023-10-11T13:04:01.864Z
2023-10-10T00:00:00.000
{ "year": 2023, "sha1": "615dfd540a5b59c8b010957eba79ab54982d4691", "oa_license": "CCBY", "oa_url": "https://lipidworld.biomedcentral.com/counter/pdf/10.1186/s12944-023-01934-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9022ccd31c244f5df92abe9f6d75848a9463391a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
255944091
pes2o/s2orc
v3-fos-license
Carotenoid metabolism: New insights and synthetic approaches Carotenoids are well-known isoprenoid pigments naturally produced by plants, algae, photosynthetic bacteria as well as by several heterotrophic microorganisms. In plants, they are synthesized in plastids where they play essential roles in light-harvesting and in protecting the photosynthetic apparatus from reactive oxygen species (ROS). Carotenoids are also precursors of bioactive metabolites called apocarotenoids, including vitamin A and the phytohormones abscisic acid (ABA) and strigolactones (SLs). Genetic engineering of carotenogenesis made possible the enhancement of the nutritional value of many crops. New metabolic engineering approaches have recently been developed to modulate carotenoid content, including the employment of CRISPR technologies for single-base editing and the integration of exogenous genes into specific "safe harbors" in the genome. In addition, recent studies revealed the option of synthetic conversion of leaf chloroplasts into chromoplasts, thus increasing carotenoid storage capacity and boosting the nutritional value of green plant tissues. Moreover, transient gene expression through viral vectors allowed the accumulation of carotenoids outside the plastid. Furthermore, the utilization of engineered microorganisms allowed efficient mass production of carotenoids, making it convenient for industrial practices. Interestingly, manipulation of carotenoid biosynthesis can also influence plant architecture, and positively impact growth and yield, making it an important target for crop improvements beyond biofortification. Here, we briefly describe carotenoid biosynthesis and highlight the latest advances and discoveries related to synthetic carotenoid metabolism in plants and microorganisms. Having a highly unsaturated hydrocarbon backbone makes carotenoids prone to oxidation (Ahrazem et al., 2016; Hou et al., 2016; Schaub et al., 2018; Wang et al., 2019; Koschmieder et al., 2021). This process can occur non-enzymatically when the conjugated double bonds are attacked by ROS (Moreno et al., 2021a), or it can be catalyzed by the highly specific CAROTENOID CLEAVAGE DIOXYGENASES (CCDs) and 9-CIS-EPOXY-CAROTENOID DIOXYGENASES (NCEDs) (Giuliano et al., 2003; Ahrazem et al., 2016; Hou et al., 2016). Oxidative cleavage of carotenoids yields bioactive apocarotenoids, including the precursors of the plant hormones ABA and SL, which are produced by NCEDs and by CCD7 and CCD8, respectively (Sorefan et al., 2003; Booker et al., 2004; Johnson et al., 2006; Cutler et al., 2010; Felemban et al., 2019). Apocarotenoids exert a series of further biological activities, such as the regulation of growth, biotic and abiotic stress responses, retrograde signaling, and photoacclimation, and they include pigments and volatiles that play a role in plant-animal communication (Wang et al., 2019; Jia et al., 2019; Moreno et al., 2020; Moreno et al., 2021a). For instance, it has been recently shown that the apocarotenoid zaxinone is involved in the modulation of plant growth and in regulating SL levels and arbuscular-mycorrhizal (AM) colonization in rice (Figure 1), and SL and ABA levels in Arabidopsis (Wang et al., 2019; Ablazov et al., 2020; Votta et al., 2022). Moreover, the volatile apocarotenoid b-ionone, usually released by leaves, contributes to the scent of flowers in many plants and plays an interesting role as a herbivore repellent in plant-insect interaction (Figure 1) (Ômura et al., 2000; Gruber et al., 2009; Moreno et al., 2021a).
Furthermore, b-cyclocitral (b-cc) is another volatile and bioactive apocarotenoid. b-cc mediates 1O2 signaling and tolerance against abiotic stresses, and its oxidation gives rise to b-cyclocitric acid, which increases salt and drought tolerance (Figure 1) (Ramel et al., 2012; D'Alessandro et al., 2019; Moreno et al., 2021a). In addition, it has been recently shown that b-cc is also a conserved root regulator. A deep understanding of the carotenoid pathway in plants and microorganisms can provide new tools and open up new options for establishing synthetic metabolism of carotenoids and enriching them and their derivatives in different organisms. In this mini-review, we summarize recent findings and the latest approaches to engineer carotenoid synthesis in plants and microorganisms for biofortification and beyond. Carotenoid biofortification in plants Generating biofortified crops is a long-term and worthwhile biotechnological goal to enhance the nutritional value of crops. Indeed, micronutrient malnutrition is still a significant public health problem that affects about one-third of the world's population (Thompson and Amoroso, 2014). Therefore, several staple crops have been biofortified to accumulate various micronutrients, including iron, zinc and provitamin A (McGuire, 2015; Cakmak and Kutman, 2017; Wakeel et al., 2018; Zheng et al., 2020; Rehman et al., 2021). Vitamin A deficiency (VAD) is the major reason for childhood blindness and mortality, particularly impacting preschool children (West and Darnton-Hill, 2008; Greiner, 2013). To combat VAD and compensate for the scarcity of vitamin A in animal-derived products, several provitamin A biofortified crops, golden crops, have been generated by using metabolic engineering, including Golden Rice as the best-known example (Ye et al., 2000; Giuliano, 2017; Zheng et al., 2020). Carotenoid biosynthetic pathway in plants and microorganisms. In plastids, the condensation of isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP), derived from the MEP pathway, is catalyzed by GGPPS and gives rise to geranylgeranyl pyrophosphate (GGPP). The latter is a precursor of several important plastid isoprenoids, such as tocopherols, phylloquinones, plastoquinones, chlorophylls and gibberellins. Two GGPP molecules are then condensed into 15-cis-phytoene by PHYTOENE SYNTHASE (PSY) in plants and by crtB in bacteria and fungi. Sequentially, the enzymes PHYTOENE DESATURASE (PDS), z-CAROTENE ISOMERASE (Z-ISO), z-CAROTENE DESATURASE (ZDS) and CAROTENOID ISOMERASE (CRTISO) catalyze a series of desaturations and isomerizations, producing all-trans-lycopene from 15-cis-phytoene. In most fungi and bacteria, this conversion is carried out by a single enzyme, PHYTOENE DESATURASE (crtI). At this point, all-trans-lycopene undergoes a cyclization step, which is performed by LYCOPENE ε-CYCLASE (LCYE) and LYCOPENE b-CYCLASE (LCYB), leading to a-carotene and b-carotene, respectively. In fungi and non-photosynthetic bacteria, b-carotene is formed by crtY. In the a-branch, the cytochrome P450 enzymes CYP97A and CYP97C convert a-carotene into lutein, while the non-heme diiron oxidase (HYD)/CYP97A in the b-branch transforms b-carotene into zeaxanthin. In microorganisms, the enzyme crtZ produces zeaxanthin, which can be further converted by crtW into astaxanthin. In plants, ZEAXANTHIN EPOXIDASE (ZEP) is responsible for the conversion of zeaxanthin into violaxanthin, which can be converted back into zeaxanthin through the action of VIOLAXANTHIN DE-EPOXIDASE (VDE).
Violaxanthin is then transformed into neoxanthin by NEOXANTHIN SYNTHASE (NSY) as the last step of the pathway. Oxidative cleavage of carotenoids gives rise to apocarotenoids. The cleavage of b-carotene performed by carotenoid cleavage dioxygenases (CCDs) generates carlactone (not shown), the precursor of strigolactones (SLs), whereas 9-cis-violaxanthin is cleaved into the abscisic acid (ABA) precursor, xanthoxin (not shown), by nine-cis-epoxy-carotenoid dioxygenases (NCEDs). Plant and microbial enzymes are colored in blue and red, respectively. The sun image represents photoisomerization. The figure was designed in BioRender. One of the main strategies that has been pursued to increase carotenoid content is the "push" approach, which relies on enhancing the carotenoid metabolic flux by over-expressing one or more biosynthetic enzymes (Zheng et al., 2020). Since PSY catalyzes a rate-limiting step, it has been a major target for genetic engineering in many plants, including rice, tomato and cassava (Paine et al., 2005; Fraser et al., 2007; Paul et al., 2017; Yao et al., 2018; Strobbe et al., 2018). PSY was constitutively over-expressed for the first time in tomato (Solanum lycopersicum), resulting in dwarf plants, likely due to a depletion of the precursor GGPP that also feeds gibberellin biosynthesis (Fray et al., 1995). To bypass unwanted side effects, constitutive-expression strategies were replaced by the use of tissue-specific promoters. In canola (Brassica napus), the bacterial PHYTOENE SYNTHASE (crtB) was overexpressed under the control of a seed-specific promoter, generating orange embryos that reached up to a 50-fold increase in carotenoid content (Shewmaker et al., 1999). Co-expression of PSY from daffodil (Narcissus pseudonarcissus) under an endosperm-specific promoter and crtI from Erwinia uredovora (now Pantoea ananatis) under the 35S promoter in rice (Oryza sativa) led to Golden Rice, which accumulated b- and a-carotene, zeaxanthin and lutein, the pigments responsible for the yellow color of the grain (Ye et al., 2000; Datta et al., 2003; Strobbe et al., 2018). Later, "Golden Rice 2 (GR2)" was generated by replacing the daffodil-derived PSY with the more efficient ortholog from maize, increasing carotenoid content up to 23-fold compared to the former "Golden Rice" (Figure 2A) (Paine et al., 2005). Additionally, PSY was introduced in other crops, including potato, kiwi, and cassava, hence generating multiple golden crops (Al-Babili and Beyer, 2005; Diretto et al., 2007; Ampomah-Dwamena et al., 2009; Welsch et al., 2010). More recently, new strategies have been developed to increase carotenoid content in several crops (Table 1). Carotenoid biofortification has been successfully achieved using CRISPR-mediated genome editing by specific gene/locus targeting (Zheng et al., 2021). A CRISPR-Cas9-based method was developed to generate a biofortified, marker-free rice line (Dong et al., 2020). First, a mutant screen analysis facilitated the identification of a specific "safe harbor" in the rice genome, in which the introduction of DNA is expected not to cause any side effects. The "safe harbor" locus was then used for introducing the GR2 carotenoid-biosynthesis cassette. The CRISPR cassette was then segregated out by back-crossing, leading to a marker-free line that shows the golden phenotype. In another example, CRISPR-Cas9 was used to knock out LCYE in Cavendish banana by targeting its fifth exon (Kaur et al., 2020).
Here, the presence of several indels within the LCYE gene led to an up to 6-fold increase in b-carotene content (Figure 2A). Transient expression allows genes to be expressed without integration into the genome, generating results faster than stable transgenic plants (Page et al., 2019). Transient expression systems are a promising tool for enriching carotenoids, including provitamin A, in green tissues at a specific developmental stage without interfering with normal plant growth and development (Rodríguez-Concepción and Daròs, 2022). In lettuce (Lactuca sativa) and zucchini (Cucurbita pepo), the virus-mediated expression of crtB induced the re-organization of internal plastid structures, which resulted in the differentiation of chloroplasts into chromoplasts and conferred a yellow color to the fruits (Figure 2A) (Llorente et al., 2020). In tobacco (Nicotiana tabacum), the high phytoene levels triggered by the transient expression of crtB interfered with chloroplast functions by lowering their photosynthetic efficiency and activating an endogenous developmental program, enabling a complete chloroplast-chromoplast switch and increasing carotenoid storage capacity. In addition, the overexpression of this gene stimulated the accumulation of b-carotene and lutein in the agroinfiltrated leaves, conferring a yellow leaf phenotype. ORANGE (OR) proteins have been shown to post-transcriptionally regulate PSY activity during carotenoid biosynthesis and promote chromoplast biogenesis (Lopez et al., 2008; Chayut et al., 2017; Osorio, 2019). The expression of the OR gene enhanced carotenoid accumulation in several crops by triggering the formation of chromoplasts containing carotenoid-sequestering structures (Lopez et al., 2008; Park et al., 2015; Yazdani et al., 2019). However, only one to two big chromoplasts were found in the cells of the orange cauliflower Or mutant (Paolillo et al., 2004). A recent study demonstrated that the OR His variant, which contains a single histidine substitution (the "golden SNP"), interacts with two proteins involved in plastid division (PARC6 and ARC3), thus limiting the chromoplast number (Sun et al., 2020). The overexpression of ARC3 in a mutated OR His Arabidopsis line increased carotenoid accumulation by up to 85% compared to the control. One of the main challenges of reaching high carotenoid accumulation in plants is the tight pathway regulation in plastids, which is achieved by metabolic feedback and feedforward signaling (Cazzonelli and Pogson, 2010). Hence, enabling carotenoid production based on cytosolic mevalonate-derived isoprenoid precursors is a very interesting option that has been recently explored (Andersen et al., 2021). A viral vector that co-expresses the crtE (GGPP synthase), crtB, and crtI genes was inoculated into tobacco leaves, successfully stimulating lycopene accumulation outside the chloroplast and turning the leaves yellow (Majer et al., 2017). This strategy was then optimized by the additional delivery, via Agrobacterium tumefaciens, of a truncated version of the enzyme hydroxymethylglutaryl-CoA reductase (HMGR), which boosted the mevalonate (MVA) pathway in the cytosol and provided more GGPP precursors (Andersen et al., 2021). While cytosolic phytoene remained more bioaccessible, lycopene was stored in less accessible cytosolic crystalloids. Interestingly, the content of extra-plastidial carotenoids was similar to that of endogenous chloroplast carotenoids, which explains the orange coloration of the infected leaves (Andersen et al., 2021).
The employment of new technologies, such as CRISPR-based gene-editing tools and viral vectors, is expected to open new prospects in carotenoid biofortification in the near future. Indeed, genome editing tools may enable precise manipulation of regulatory genes governing the carotenoid biosynthetic pathway. As shown by Dong et al. (2020), CRISPR technologies also make it possible to accommodate transgenes at specific loci. However, this approach is limited by the low efficiency of donor delivery and plant transformation, which makes the biofortification of some crops, e.g. pearl millet, a difficult task. A further novel approach is the generation of non-plastid sinks to redirect carotenoid biosynthesis and boost both the production and storage of carotenoids in green vegetables. Given the promising results obtained with installing transient carotenoid biosynthesis in the cytosol, it can be expected that generating corresponding, stably transformed plants may make significant contributions to biofortification (Morelli and Rodriguez-Concepcion, 2022). Carotenoid metabolic engineering beyond biofortification Climate change and extreme weather events directly impact agriculture and crop production (Raza et al., 2019; Pareek et al., 2020). According to food demand predictions, the current increase in crop yields is insufficient to compensate for the losses due to global warming (Kromdijk et al., 2016). The emerging combination of several abiotic stresses, such as increasing drought, extreme temperatures, and high UV irradiation, has directed researchers to better understand stress-resistance processes towards developing stress-tolerant crops (Pereira, 2016). Carotenoid biosynthesis and accumulation seem to positively impact the resistance of plants to different types of environmental stress, such as high light, increased temperature and drought (Uarrota et al., 2018; Kim et al., 2018; Swapnil et al., 2021). Accordingly, mutants of photosynthetic organisms with reduced carotenoid content are more susceptible to photo-oxidation (Ramel et al., 2013). Therefore, modifying carotenoid biosynthesis represents a promising option for developing resilient crops. The xanthophyll cycle plays a major role in protecting the photosynthetic apparatus from photo-oxidative stress (Latowski et al., 2011). In the Arabidopsis lut2 npq2 double mutant, the xanthophylls neoxanthin, violaxanthin, antheraxanthin, and lutein were all replaced by zeaxanthin (Havaux et al., 2004). This conversion resulted in an enhanced tolerance against photo-oxidation and in a phenotype similar to that of high-light-acclimated leaves. Moreover, a different study showed that doubling the size of the xanthophyll pool led to increased resistance to high-light and high-temperature conditions (Johnson et al., 2007). This result is probably due to a reduction of ROS-induced lipid peroxidation in the presence of enhanced zeaxanthin content. In recent years, metabolic engineering of carotenoids has succeeded in enhancing crop yield and fitness (Table 1). For instance, the expression of the Arabidopsis VDE, ZEP, and the PSII subunit S (PsbS) in tobacco leaves enabled an accelerated response to fluctuating light, thus enhancing the efficiency of CO2 assimilation in the shade by 14% and increasing plant dry-weight biomass by up to 15% (Kromdijk et al., 2016). Interestingly, opposite results were observed when this strategy was applied in Arabidopsis (Garcia-Molina and Leister, 2020).
The same construct was recently introduced in soybean (Glycine max), leading to a ~33% increase in yield (De Souza et al., 2022). Thus, the species-specificity of the impact of foreign gene expression in crops and model plants requires further analysis. Recently, genetic manipulation of the LCYB gene has been shown to be a promising strategy for crop improvement beyond biofortification (Moreno and Al-Babili, 2022). A single-gene strategy has been applied in carrot (Daucus carota) and in tobacco (Nicotiana tabacum cv. Xanthi), where the expression of the carrot LCYB1 gene led to changes in plant growth, architecture, and development (Moreno et al., 2013; Moreno et al., 2016; Moreno et al., 2020). In tobacco, the alteration in carotenoid and phytohormone composition triggered several phenotypes, including longer internodes, early flowering, accelerated development, and increased biomass, yield, and photosynthetic efficiency. Interestingly, the transgenic tobacco lines also showed enhanced abiotic stress tolerance (Figure 2B) (Moreno et al., 2021b). Moreover, RNA interference (RNAi) NtLCYB tobacco lines showed impaired growth and photosynthesis, reduced pigment content, and plant variegation (Kössler et al., 2021). A similar approach was applied to different tomato cultivars, where the overexpression of LCYB, of plant or bacterial origin, modulated carotenoid, apocarotenoid and phytohormone patterns, resulting in several phenotypes, including altered growth and biomass partitioning, as well as improvements in fruit yield, shelf-life, and abiotic stress tolerance (Figure 2C) (Mi et al., 2022). These promising results proved that the alteration of the carotenoid pathway, via LCYB genetic manipulation, can directly impact several interconnected metabolic networks, including the biosynthesis of phytohormones and signaling molecules affecting several plant traits, such as growth, yield, and stress tolerance, which are key for crop improvement. It would be interesting to investigate how the manipulation of well-characterized carotenoid metabolic genes can influence the phenotype and other desirable traits in cereals, such as rice and pearl millet, beyond biofortification. Future research should also focus on the impact of modifying carotenoid content on rhizospheric interactions and root symbiotic associations, i.e. AM fungi, and explore the possibility of improving beneficial interactions to generate crops with better performance. Metabolic engineering of carotenoid biosynthesis in microorganisms In recent years, the demand for natural carotenoids has continuously increased with the rapidly growing food, pharmaceutical and cosmetic industries, thus creating a need for natural sources for their mass production (Clugston, 2020; Zerres and Stahl, 2020; López et al., 2021). The production of carotenoids from bacteria has received wide attention due to their short life cycle and high productivity (Sajjad et al., 2020). Carotenoids extracted from bacteria are as safe for humans as those obtained from traditional sources such as plants or chemical synthesis (Numan et al., 2018). Indeed, there is a wide range of applications for bacterial carotenoids, including the use of Brevibacterium linens in the fermentation of Limburger and Port-du-Salut cheeses, which is responsible for the characteristic color of these dairy products (Guyomarc'h et al., 2000).
In addition, astaxanthin produced in Mycobacterium lacticola is used for fish feeding, due to its antioxidant activity and to obtain the red color that attracts consumers (Kirti et al., 2014). A recent study showed that engineered cyanobacteria can produce valuable carotenoids such as astaxanthin and lutein, which exert beneficial biological activities, serving as antioxidants and important colorants (Honda, 2022). In particular, the model cyanobacterium Synechocystis sp. PCC 6803 is able to divert 50% of its carbon flux to the synthesis of carbon-containing compounds (Angermayr et al., 2016). To produce high levels of astaxanthin from CO2 in this cyanobacterium, the key enzymes crtW and crtZ were co-expressed, and the carbon flux was redirected towards the endogenous MEP pathway by increasing precursor availability, leading to the accumulation of up to 29.6 mg/g (dry weight) of astaxanthin (Figure 2D) (Diao et al., 2020). Utilizing fungal organisms is considered one of the most advantageous ways for mass production of carotenoids (Wang et al., 2021b). Baker's yeast, Saccharomyces cerevisiae, has a large cell size, can tolerate distinct growth conditions, e.g. low temperature, and possesses segmented organelles, thus making it a promising host in which to install carotenoid production (Madhavan et al., 2022). To improve the production of lycopene in S. cerevisiae, key genes related to fatty acid synthesis and triacylglycerol (TAG) production were overexpressed together with a fatty acid desaturase (OLE1) that forms unsaturated fatty acids. In addition, a gene (FLD1) encoding Seipin, which regulates lipid-droplet size, was deleted. The resulting S. cerevisiae strain showed a 25% increase in lycopene accumulation compared to the original high-yield strain (Figure 2D) (Ma et al., 2019). Overexpressing three lipase-coding genes (LIP2, LIP7 and LIP8) from Yarrowia lipolytica together with PHYTOENE SYNTHASE/LYCOPENE CYCLASE (crtYB), crtI and GERANYLGERANYL DIPHOSPHATE SYNTHASE (crtE) cloned from the red yeast Xanthophyllomyces dendrorhous is a further promising strategy, which resulted in 46.5 mg/g (dry weight) of b-carotene accumulation, i.e. 12-fold higher than the analogous strain lacking lipase expression (Figure 2D) (Fathi et al., 2021). The model yeast Y. lipolytica is one of the most widely used species in the food industry. Y. lipolytica possesses a high concentration of acetyl-CoA, which is essential to enhance the production of b-carotene (Gao et al., 2017; Zhang et al., 2018). A novel study described two different effective approaches in Y. lipolytica to overcome LCYB substrate inhibition, an undesired regulatory mechanism triggered by high substrate concentration (Ma et al., 2022). First, structure-guided protein design was used to generate the single-residue variant Y27R, which completely removed the substrate inhibition without reducing the enzymatic activity. Then, a GGPPS-mediated restrictor was constructed, which regulates the lycopene formation rate, thus limiting the carbon flux through the carotenoid biosynthesis pathway and, consequently, alleviating substrate inhibition. The final engineered strain produced 39.5 g/L of b-carotene in Y. lipolytica. Microalgae are a diverse group of photosynthetic organisms found in aquatic habitats, which provide new options for enhanced production of carotenoids, due to their low cultivation costs, simplicity and rapid growth rate.
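To illustrate the substrate-inhibition mechanism addressed in the Y. lipolytica work above, the following toy Python model uses the common substrate-inhibition rate law v = Vmax·S/(Km + S·(1 + S/Ki)); all kinetic constants are hypothetical and are not taken from Ma et al. (2022). Setting Ki to infinity mimics a variant, such as Y27R, in which inhibition has been removed.

# Toy Michaelis-Menten model with substrate inhibition. Constants are
# hypothetical; the point is the qualitative behavior at high substrate.

def rate(s, vmax=1.0, km=5.0, ki=20.0):
    """v = Vmax*S / (Km + S*(1 + S/Ki)); ki=float('inf') removes inhibition."""
    return vmax * s / (km + s * (1.0 + s / ki))

for s in (1, 10, 50, 200):  # arbitrary substrate (lycopene) levels
    inhibited = rate(s)                      # wild-type-like enzyme
    relieved = rate(s, ki=float("inf"))      # Y27R-like, inhibition removed
    print(f"S={s:>3}: inhibited v={inhibited:.3f}, relieved v={relieved:.3f}")

At low substrate the two curves nearly coincide, whereas at high substrate the inhibited rate collapses while the relieved variant approaches Vmax, which is the behavior the protein-design and flux-restrictor strategies both exploit.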
They are a common host used by the pharmaceutical industry for the production of natural coloring pigments, besides being a source for biofuels (Varela et al., 2015; Khan et al., 2018; Novoveská et al., 2019; Sahoo et al., 2020). Dunaliella salina, Haematococcus pluvialis, and Chlorella vulgaris are examples of microalgae rich in b-carotene, astaxanthin, lycopene, lutein, and zeaxanthin (Xu et al., 2018; Velmurugan and Muthukaliannan, 2022). Chlamydomonas reinhardtii is one of the fastest-growing microalgae and has been used for the production of carotenoids, including b-carotene and lutein (Rathod et al., 2020). In a recent study, it was shown that the expression of the bifunctional PHYTOENE-b-CAROTENE SYNTHASE (crtYB) from X. dendrorhous in C. reinhardtii resulted in a 72% and 83% increase in b-carotene and lutein content, respectively, when exposed to "short-duration high-light" (SD-HL) (Figure 2D) (Rathod et al., 2020). In a different study, overexpression of a codon-optimized native b-CAROTENE KETOLASE (BKT) in C. reinhardtii pushed the conversion of more than 50% of total carotenoids into astaxanthin (Figure 2D) (Perozeni et al., 2020). In addition, it was shown that overexpressing the native LCYE in C. reinhardtii enhanced lutein accumulation by up to 2.6-fold through increasing the conversion of lycopene into a-branch carotenoids, which eventually increased lutein production (1.6 µg/ml of culture) (Figure 2D) (Tokunaga et al., 2021). Additionally, a recent study demonstrated the impact of overexpressing mutated and wild-type ORANGE (OR) genes in C. reinhardtii under the control of a strong light-inducible promoter (Yazdani et al., 2021). The mutated CrOR His strain contained up to 1.6-fold and 3.2-fold higher total carotenoid content than the wild-type CrOR-overexpressing line and the mock line, respectively. In Dunaliella salina, a CRISPR-Cas technique was used to target the 1st and 3rd exons of the b-CAROTENE HYDROXYLASE (Dschyb) gene, which significantly enhanced the b-carotene level, reaching about 1.4 g/ml (Hu et al., 2021) (Figure 2D). Taken together, microorganisms are promising sources for enhanced production of carotenoids for research and industrial use. Identifying new genes and enzymes from different microbial sources will enlarge our toolkit for carotenoid mass production, needed to expand the variety of products and to meet the increasing demand of the growing food, cosmetics and pharmaceutical industries. Concluding remarks The carotenoid metabolic pathway has been studied extensively and manipulated over the years to generate crops with improved carotenoid content and productivity. The advent of new gene-editing techniques, such as CRISPR, allows precise and targeted editing of carotenoid-related genes, thus avoiding the side effects conferred by the random insertion of a transgenic cassette. However, although CRISPR-based tools have proved to be highly efficient in generating precise deletions and single-nucleotide substitutions, gene knock-ins are still very difficult to achieve in many crops. In fact, donor insertions still have a very low efficiency and several limitations, such as the high number of off-targets and the limited donor length. Further efforts are needed to develop new CRISPR-based strategies to efficiently obtain cisgenic plants with native gene knock-ins. For instance, gene duplication approaches can be used to obtain gene overexpression without the introduction of a foreign donor.
An alternative approach could be the deregulation of carotenoid key genes by swapping or disturbing their native promoters (Lu et al., 2021). However, the employment of "classical" transgenic approaches is expected to remain indispensable where carotenoid biofortification requires the introduction of phytoene synthesis and its multi-step desaturation to drive carotenoid biosynthesis in carotene-free tissues, such as rice endosperm, or cellular compartments, i.e. the cytoplasm. With respect to our knowledge of carotenoid and apocarotenoid metabolism, the deployment of stress-, light-, and chemically inducible promoters regulating the expression of carotenoid biosynthetic and catabolic genes, together with suitable analytical tools, could provide new insights into carotenoid metabolism. Developing such new strategies, supported by innovative and versatile analytical techniques and methodologies, is crucial for better elucidating carotenoid and apocarotenoid metabolism, signaling and regulation and, hence, for developing the crops of the future with higher yield and better adaptation. Author contributions AS: wrote the abstract, sections Carotenoid biosynthetic pathway in plants and microorganisms, Carotenoid biofortification in plants, Carotenoid metabolic engineering beyond biofortification, part of Metabolic engineering of carotenoid biosynthesis in microorganisms, and prepared Figures 1, 2 (with LA and YA). LA: wrote part of section 4. JM, YA and SA-B: extensively edited the provided manuscript and provided supervision. All authors contributed to the article and approved the submitted version. Funding This work was supported by baseline funding and the Competitive Research Grant 2020 (CRG 2020), both given by King Abdullah University of Science and Technology (KAUST) to Salim Al-Babili. Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-01-18T14:31:47.095Z
2023-01-18T00:00:00.000
{ "year": 2022, "sha1": "35b1f6022f41e675c568d12b9f1db93fa6beaf98", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "35b1f6022f41e675c568d12b9f1db93fa6beaf98", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
238424744
pes2o/s2orc
v3-fos-license
Genome-Wide Characterization and Analysis of bHLH Transcription Factors Related to Anthocyanin Biosynthesis in Fig (Ficus carica L.) The basic helix–loop–helix (bHLH) transcription factor family is the second largest transcription factor family in plants, and participates in various plant growth and development processes. A total of 118 bHLH genes were identified from fig (Ficus carica L.) by whole-genome database search. Phylogenetic analysis with Arabidopsis homologs divided them into 25 subfamilies. Most of the bHLHs in each subfamily shared a similar gene structure and conserved motifs. Seventy-two bHLHs were found expressed at fragments per kilobase per million mapped (FPKM) > 10 in the fig fruit; among them, 15 bHLHs from eight subfamilies had FPKM > 100 in at least one sample. bHLH subfamilies had different expression patterns in the female flower tissue and peel during fig fruit development. Comparing green and purple peel mutants, 13 bHLH genes had a significantly different (≥ 2-fold) expression. Light deprivation resulted in 68 significantly upregulated and 22 downregulated bHLH genes in the peel of the fruit. Sixteen bHLH genes in subfamily III were selected by three sets of transcriptomic data as candidate genes related to anthocyanin synthesis. Interaction network prediction and yeast two-hybrid screening verified the interaction between FcbHLH42 and anthocyanin synthesis-related genes. The transient expression of FcbHLH42 in tobacco led to an apparent anthocyanin accumulation. Our results confirm the first fig bHLH gene involved in fruit color development, laying the foundation for an in-depth functional study on other FcbHLH genes in fig fruit quality formation, and contributing to our understanding of the evolution of bHLH genes in other horticulturally important Ficus species. INTRODUCTION Transcription factors are key regulatory elements in life processes (Yamasaki et al., 2013; Guo and Wang, 2017). To date, more than 60 transcription factor families have been found in plants.
According to the number of lysine and arginine residues in the DNA-binding domain, transcription factors are divided into four categories: zinc finger (ZF) type, helix-turn-helix (HTH), basic helix-loop-helix (bHLH), and basic leucine zipper (bZIP). The most commonly found transcription factors in higher plants are members of the WD40, MYB, WRKY, bHLH, and bZIP families (Kosugi and Ohashi, 2002). The bHLH transcription factors, also called MYCs, form the second largest family of transcription factors in plants (Feller et al., 2011), with 162, 95, 167, and 152 bHLH genes identified in Arabidopsis, grape (Wang et al., 2018), rice (Li et al., 2006), and tomato (Wang et al., 2015), respectively. The bHLH domain is approximately 60 amino acids long, containing a basic region and an HLH region. The basic region, located next to the N-terminus, recognizes the DNA cis-acting elements E-box (5'-CANNTG-3') and G-box (5'-CACGTG-3') that regulate gene expression, whereas the HLH region consists of two amphipathic α-helices linked by a loop that serve as the dimerization domain to promote protein interactions, producing homodimers or heterodimers (Massari and Murre, 2000). bHLHs can act as either repressors or activators of gene transcription and play important roles in various physiological processes, such as sexual maturation, metabolism, and development (Feller et al., 2011). According to evolutionary relationships, the specificity of DNA binding, and the conservation of specific amino acids or domains (besides the bHLH domain), members of the bHLH superfamily have been assigned to subfamilies, or subgroups, by different researchers. The bHLH transcription factors in the Arabidopsis, poplar, rice, moss, and algal genomes have been divided into 32 subfamilies (Carretero-Paulet et al., 2010). Members of subfamilies 9 and 27 are essential for the growth and development of terrestrial plants, and subfamilies 7, 18, 19, and 20 are unique to angiosperms. Members of subfamily 5 regulate flavonoid/anthocyanin metabolism, epidermal cell development, and trichome initiation. According to the classification of subgroups, the 166 bHLH genes of the Arabidopsis thaliana genome are divided into 13 major subgroups (I-XIII). Genes in a particular major group contain a similar number of introns at conserved positions, the encoded proteins have similar predicted lengths, and the bHLH domain is in a similar position in the protein. Genes within each major group can be further divided into a total of 26 subgroups (Pires and Dolan, 2010). The 95 bHLH genes in grape (Wang et al., 2018), 167 bHLH genes in rice (Li et al., 2006), and 152 bHLH genes in tomato (Wang et al., 2015) are divided into second-level subgroups by this method. Anthocyanin biosynthesis is achieved by structural genes in the anthocyanin-biosynthesis pathway (Allan et al., 2008). At the transcriptional level, it is mainly regulated by a series of transcription factors, especially members of the R2R3-MYB gene family. In Arabidopsis, the R2R3-MYBs PAP1 and PAP2 regulate anthocyanin biosynthesis (Zimmermann et al., 2004; Gonzalez et al., 2008). In flavonoid biosynthesis, bHLH proteins serve as cofactors of R2R3-MYB, together with WD40, making up the MYB-bHLH-WD40 (MBW) complex (Hichri et al., 2011a; Xie et al., 2012; Wang et al., 2020). Most bHLHs that are involved in anthocyanin biosynthesis belong to subgroup III, which is functionally conserved and has been shown to regulate plant defense and development (Heim et al., 2003).
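Because the E-box (5'-CANNTG-3') and G-box (5'-CACGTG-3') elements described above are short, degenerate motifs, scanning a promoter for them is a simple pattern match; a minimal Python sketch with a made-up input sequence follows.

# Scan a promoter sequence for the bHLH-bound cis-elements described above.
import re

E_BOX = re.compile(r"CA[ACGT]{2}TG")   # 5'-CANNTG-3'
G_BOX = re.compile(r"CACGTG")          # 5'-CACGTG-3', a special E-box

def find_boxes(seq):
    """Return 0-based positions and sequences of E-box and G-box hits."""
    seq = seq.upper()
    return {
        "e_box": [(m.start(), m.group()) for m in E_BOX.finditer(seq)],
        "g_box": [(m.start(), m.group()) for m in G_BOX.finditer(seq)],
    }

promoter = "TTGACACGTGAAATCAGCTGCCT"   # hypothetical promoter fragment
print(find_boxes(promoter))
# E-box hits: CACGTG at position 4 and CAGCTG at position 14;
# G-box hit: CACGTG at position 4 only.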
In Arabidopsis, subgroup IIIf bHLHs are involved in both flavonoid biosynthesis and trichome formation. The members share a conserved amino acid, arginine, which is involved in protein interactions and normal functions (Ludwig et al., 1989; Zhao et al., 2012). Since the identification of the bHLH transcription factor Lc (leaf color) in corn (Ludwig et al., 1989), TT8, GL3, and EGL3 of subgroup IIIf have been found to interact with TTG1 (WD40 protein family) and MYB (Heim et al., 2003) to form protein complexes that regulate flavonoid biosynthesis. Most plants have at least two bHLHs belonging to two distinct clades within subgroup IIIf (Feller et al., 2011). They are described as bHLH-1 (represented by ZmR/ZmLc, AtGL3, AtEGL3, AtMYC1, PhJAF13, and AmDel) and bHLH-2 (represented by ZmIn, AtTT8, PhAN1, and VvMYC1) (Albert et al., 2014). The bHLH-2 genes are essential for anthocyanin biosynthesis. The overexpression of the R2R3-MYB PAP1 resulted in elevated transcript levels of TT8 in Arabidopsis (Gonzalez et al., 2008). In petunia, PhAN2 requires the bHLH cofactor PhAN1 or PhJAF13 to enhance the promoter activity of dihydroflavonol 4-reductase (DFR) (Spelt et al., 2000). The co-expression of VvMYC1 and VvMYBA1 in grape suspension cells led to anthocyanin accumulation (Hichri et al., 2011a). Previous transcriptome analysis of fig has suggested that FcMYB114, FcCPC, FcMYB21, and FcMYB123 regulate anthocyanin biosynthesis (Wang et al., 2019; Li et al., 2020a), but no anthocyanin-related fig bHLH has been identified. In addition, genes from bHLH subgroup IIId+e have been shown to regulate the jasmonic acid (JA) signaling pathway, thereby enhancing plant defense capabilities and promoting anthocyanin biosynthesis (Xie et al., 2012). Low temperature promoted the expression of MdbHLH3, which increased anthocyanin accumulation and fruit coloring in apple (Xie et al., 2012; Yang et al., 2017). The fig (Ficus carica L.), which originated from the Mediterranean coastal region, is one of the earliest cultivated fruit trees in the world. The fig fruit (syconium) demonstrates a typical double sigmoid growth curve with a rapid growth phase, a lag phase, and another rapid growth phase (Flaishman et al., 2008). Ripe figs with dark color and red flesh have a high anthocyanin content, are beneficial to health, and have great market potential. In a previous study, we confirmed that the coloring of cv. Purple-Peel was due to anthocyanin accumulation. However, the bHLH transcription factors underlying anthocyanin biosynthesis in fig have not yet been identified. Plant Materials The common fig cv. Purple-Peel from a commercial orchard in Weihai city, Shandong province, China (37°25′N, 122°17′E) was used. The fig trees were 7 years old with 3 m × 3 m spacing and standard cultivation. "Purple-Peel" is a bud mutation of "Green Peel," a main fig cultivar in China. Six stages of the main crop fruit were sampled for gene-expression analysis based on the characteristics of fruit development. The fruit samples were marked as stages 1-6: stage 1 represented phase I (the first rapid growth period), stages 2, 3, and 4 were the early, middle, and late stages of phase II (slow growth period), and stages 5 and 6 represented phase III (the second rapid growth period). In this study, following Wang et al. (2019), stages 4 and 5 fruits were termed young and mature, respectively. Sixty fruits were randomly selected at each stage, and every 20 were used as one biological replicate. The peel and female flower tissue were separated onsite at the time of sampling.
Fresh samples were flash-frozen in liquid nitrogen and stored at −80°C for subsequent experiments.

Phylogeny and Multiple-Sequence Alignment of FcbHLH Genes
ClustalX version 2.0 with default parameters was used to perform multiple-sequence alignments of the predicted bHLHs of fig and Arabidopsis (Larkin et al., 2007; Guo and Wang, 2017). A phylogenetic tree of the bHLHs was constructed with MEGA 6.0, using the neighbor-joining (NJ) method with parameters set as follows: model "p-distance," gap setting "Complete Deletion," and calibration test parameter "Bootstrap = 1000" (Tamura et al., 2011).

Gene Structure and Protein Sequence Motif Analyses
The intron/exon structure map of the fig bHLHs was generated online using the Gene Structure Display Server (GSDS: http://gsds.gao-lab.org/). The conserved motifs were analyzed online using MEME 4.11.2 (https://meme-suite.org/meme/tools/meme), with parameters set to: number of repetitions "any," highest motif number "20," motif length "6-200," and default values for the other parameters. The results were visualized with TBtools.

Chromosomal Location and Collinearity of bHLH Genes
The positions of FcbHLHs on the 13 fig chromosomes were determined by mapping bHLH gene sequences to fig chromosome survey sequences using BLAST programs. MapChart v2.2 software was used to display the precise gene-location results. The genome data of Ficus hispida and Ficus microcarpa were downloaded from the database of the National Genomics Data Center (https://bigd.big.ac.cn/search/?dbId=gwh&q=PRJCA002187&page=1). The grape genome (Vitis vinifera) was also downloaded (https://data.jgi.doe.gov/refine-download/phytozome?organism=Vvinifera). An interspecies collinearity analysis of bHLHs between fig and F. hispida, F. microcarpa, Arabidopsis, and grape was performed using MCScanX and TBtools (Tang et al., 2008; Chen et al., 2020). The final map was generated with Circos version 0.63 (http://circos.ca/). The nonsynonymous (Ka) and synonymous (Ks) substitution rates of the duplicated gene pairs were calculated using KaKs_Calculator 2.0 (Wang et al., 2010), and selection pressure was inferred from the Ka/Ks ratio.

Functional Verification of bHLH Proteins
The interaction network of the 118 FcbHLH proteins was analyzed using the STRING protein interaction database (http://string-db.org/), with Arabidopsis selected as the reference species and the E-value cutoff set to 1e-4.

Gene-Expression Analysis
Three fig fruit RNA-seq libraries established by our laboratory were re-mined. The first library contained data of the "Purple-Peel" fig fruit during development (NCBI Accession No. PRJNA723733). Briefly, syconium peel and the internal female flower tissue were collected at six stages of fruit development. The second library contained data of young and ripe "Purple-Peel" fruit and the peel of its bud-mutation parent cv. Green Peel (NCBI Accession No. SRP114533). The third library contained data of bagged and naturally grown "Purple-Peel" fruit (NCBI Accession No. PRJNA494945) (Wang et al., 2019). TBtools was used to analyze the expression patterns of FcbHLHs in each library, and significant differential expression was determined by p < 0.05 and |log2(fold change)| ≥ 1. A weighted gene co-expression network analysis (WGCNA) was performed to identify modules of co-expressed genes (Langfelder and Horvath, 2008).
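As a concrete reading of the differential-expression rule above, a minimal Python sketch of the filter (the FPKM values and p-value are hypothetical, and the pseudocount of 1 used to stabilize the ratio is an assumption, not part of the published pipeline):

    import math

    def is_differentially_expressed(fpkm_a, fpkm_b, p_value, pseudocount=1.0):
        """Apply the thresholds stated above: p < 0.05 and |log2(fold change)| >= 1."""
        log2_fc = math.log2((fpkm_b + pseudocount) / (fpkm_a + pseudocount))
        return p_value < 0.05 and abs(log2_fc) >= 1.0

    # Hypothetical example: FPKM 12 in young peel vs 55 in mature peel, p = 0.003
    print(is_differentially_expressed(12.0, 55.0, 0.003))  # True (log2FC ~ 2.1)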
Correlations of the co-expression relationships between FcbHLH42 and other transcription factors were calculated according to their FPKM changes over the six stages of "Purple-Peel" fig development. The co-expression modules of FcbHLH42 were visualized with Cytoscape 3.8.2. The thresholds for co-expression were set as correlation coefficient > 0.5 and p < 0.001. The relative expression levels of FcbHLH42, strawberry (Fa)MYB10, Nicotiana benthamiana (Nb)F3H, NbDFR, NbANS, and NbUFGT in control and transient transgenic tobacco (Nicotiana tabacum) leaves were determined by quantitative reverse-transcription PCR (RT-qPCR). The primer sequences are detailed in Supplementary Table 8. RNA extraction, DNA elimination, RNA quality checks, and reverse transcription were carried out using the standard protocols of our laboratory. The RT-qPCR was carried out with an ABI QuantStudio 6 Flex Real-Time PCR System (ABI, Waltham, MA, United States) using SYBR-Green Master Mix (Vazyme, Nanjing, China). The reaction program was: pre-denaturation at 94°C for 1 min, then 40 cycles of denaturation at 94°C for 15 s, annealing at 60°C for 30 s, and extension at 72°C for 1 min. A relative quantification analysis with three replicates for each sample was performed as described in Zhai et al. (2021). Significance was analyzed with the SPSS 26.0 software.

Cloning of FcbHLH42 and Transient Expression
Strawberry MYB10 was shown to act synergistically with sweet cherry bHLH to promote anthocyanin synthesis, but neither was able to promote anthocyanin synthesis alone (Wang et al., 2019). We therefore took FaMYB10 as bait to verify the function of FcbHLHs. FaMYB10 was obtained from the strawberry cDNA library, and FcbHLH42, FcbHLH3, FcMYC2, and FcbHLH14 from the "Purple-Peel" fig cDNA library. The primers are shown in Supplementary Table 8. FcbHLH42 was transiently expressed using the HyperTrans vector system (Albert et al., 2021). The constructs were transformed into Agrobacterium tumefaciens strain GV3101; a fresh single colony was picked, cultured overnight at 28°C, and then centrifuged at 4,200 × g for 15 min. The bacteria were resuspended in 15 ml of agroinfiltration solution (10 mM MgCl2, 10 mM MES, pH 5.6) supplemented with 200 µM acetosyringone. N. benthamiana plants were grown in the greenhouse. The positive and negative controls, together with the FaMYB10-expressing and FcbHLH42-expressing suspensions, were infiltrated into the abaxial side of the leaves of 5-week-old tobacco (N. tabacum) plants (Sparkes et al., 2006). The leaves were photographed, and total anthocyanin content and gene expression were determined 7 days after infiltration. Leaf tissue color was measured following Wang et al. (2019). Three biological replicates were used for the tests.

Color Measurement
For treatments and controls, 1 g of tobacco leaf tissue was collected and added to 10 ml of color extraction solution (methanol:water 1:1, pH 2). After ultrasonic extraction with oscillation at 200 rev/min at 25°C for 10 min, the sample was centrifuged at 10,000 × g at 4°C for 10 min, and the supernatant was collected. The leaf tissue was washed twice, and the supernatants were combined. After filtration through a 0.45-µm membrane, anthocyanin content was determined by the pH differential method. Three biological replicates were used. Significance was analyzed with the SPSS 26.0 software.
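The pH differential calculation itself is not spelled out above; the following sketch is written under common assumptions (results expressed as cyanidin-3-glucoside equivalents with molar mass 449.2 g/mol, molar absorptivity 26,900 L mol-1 cm-1, and a 1-cm path length; the absorbance readings are hypothetical):

    def anthocyanin_mg_per_l(a520_ph1, a700_ph1, a520_ph45, a700_ph45,
                             dilution_factor, mw=449.2, eps=26900.0, path_cm=1.0):
        """Total monomeric anthocyanin (mg/L) by the pH differential method,
        as cyanidin-3-glucoside equivalents (assumed standard constants)."""
        a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
        return a * mw * dilution_factor * 1000.0 / (eps * path_cm)

    # Hypothetical readings for an extract diluted 5-fold
    print(round(anthocyanin_mg_per_l(0.82, 0.03, 0.15, 0.02, 5), 1))  # ~55.1 mg/L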
In total, 118 bHLH genes were identified in the fig genome (Supplementary Table 1). The predicted number of amino acids encoded by FcbHLHs ranged from 91 (FcbHLH5) to 892 (FcbHLH56), with an average of 356 amino acids per gene. The molecular masses of these proteins ranged from 10.25 kD (FcbHLH5) to 97.2 kD (FcbHLH56), and the isoelectric points ranged from 4.59 (FcbHLH28) to 11.53 (FcbHLH111), with 62.71% of them lower than 7, as predicted by ExPASy. This was similar to the isoelectric point pattern reported for the bHLH families of Arabidopsis and rice (Li et al., 2006). The hydrophilicity of the proteins ranged from −1.014 (FcbHLH17) to −0.014 (FcbHLH53), indicating that all FcbHLHs are hydrophilic. The instability index (II) ranged from 24.6 to 76.72, with only three proteins classified as stable (II < 40). The aliphatic index was between 50.08 and 102.86. Nuclear localization was predicted for most of the FcbHLHs, while cytoplasmic, chloroplastic, and mitochondrial matrix localization was predicted for a few of them. No signal peptide was found for any of the FcbHLHs by SignalP, demonstrating that they are non-secretory proteins.

Genome-Wide Identification and Phylogenetic Analysis
To understand the evolutionary relationships of FcbHLH genes, a phylogenetic tree was constructed (Figure 1A). FcbHLHs were present in 25 of the 26 Arabidopsis bHLH subgroups; they were absent only from subgroup II. Subgroups IX and X, both with 14 members, were the largest subgroups of FcbHLHs, while subgroups IIIf and IVd were the smallest, each with only one member. Fig and Arabidopsis member numbers were similar in subgroups Vb, VIIIb + c, IX, and XI. The biggest numerical difference was found in subgroup Ia, with FcbHLH members numbering less than half of their Arabidopsis counterparts. FcbHLH had more members than Arabidopsis in subgroups IIIa, IVb, VIIIa, and X.

Sequence and Structure Analyses
Conserved motifs of FcbHLHs are shown in Figure 1B. Although the lengths of the FcbHLHs of different subfamilies varied greatly, the lengths and positions of the conserved motifs were very similar. Motifs 1-10 are shown in different colors (Figure 1B), and the details of the 10 conserved motifs are given in Table 1 and Supplementary Table 2. The number of introns in the FcbHLHs ranged from 0 to 19 (Supplementary Figure 1). In some subfamilies, the structural pattern of all members was similar. For example, members of subgroup VIIIb had no introns, whereas members of subgroup Ia had two introns, and the corresponding positions of the introns were conserved. The promoter region was obtained by searching the 2-kb sequence upstream of the translation initiation site of FcbHLHs in the fig genome (Mori et al., 2017; Usai et al., 2020). At least 16 cis-regulatory elements were predicted (Supplementary Figure 1). These elements participate in responses to abiotic stresses (light deprivation, drought, low temperature, anaerobic conditions, defense, and stress), hormone responses (salicylic acid, gibberellin, methyl jasmonate, abscisic acid, and auxin), circadian rhythm regulation, and nutrition and development (meristem expression and endosperm expression) across the FcbHLH genes. Light-responsive and anaerobic-induction regulatory elements, as well as MYB-binding sites, were found in five bHLH (FcbHLH8, FcbHLH24, FcbHLH83, FcbHLH46, and FcbHLH21) promoters.

Expansion Patterns and Collinear Correlation
Large-fragment chromosomal duplication and tandem repeats are key means of gene-family expansion. In our study, the 118 FcbHLHs were unevenly distributed among the chromosomes, with a maximum of 23 on chromosome 5 and a minimum of 3 on chromosome 13 (Figure 2A).
It is generally believed that tandem duplication occurs when the distance between genes is <100 kb, and 15 pairs of FcbHLHs were in that range (Supplementary Table 3). An intraspecific collinearity analysis showed that 11 pairs of FcbHLHs originated from large-fragment duplication (Figure 2B and Supplementary Table 4). These results demonstrate that tandem duplication and large-fragment duplication were important events in the expansion of the FcbHLH gene family.

Expression Pattern of FcbHLHs in Fruit
Among the 118 FcbHLH genes, 72 showed FPKM > 10 in at least one sample of the peel or female flower tissue at different stages of fig fruit development (Figure 3). Fifteen FcbHLHs from eight subfamilies demonstrated FPKM > 100, with subgroup XII leading the list with three members (FcbHLH4, FcbHLH31, and FcbHLH75). All three of these genes were upregulated along female flower development; in the peel, their expression continued to increase until the fruit started ripening, at which point it decreased. Members of the same subgroup could have different expression patterns: in subgroup VIIIb + c, for example, FcbHLH54 and FcbHLH83 were expressed in the late stage of female flower and peel development, whereas FcbHLH56, FcbHLH91, FcbHLH81, and FcbHLH116 were specifically expressed in the early stage of peel development, and FcbHLH40 was highly expressed in the early and middle stages of female flower development. Members of subfamily IIId + e, such as FcMYC2, FcbHLH96, FcbHLH31, and FcbHLH4, which are closely related to the JA signaling pathway, were clearly repressed during female flower and peel development.

"Purple-Peel" and "Green Peel" are a pair of bud-mutant cultivars with different peel colors at fruit ripening (Figure 4A). The expression pattern of FcbHLHs during fruit development was largely consistent in the two cultivars: 64 members showed the same expression trend, with 20 upregulated and 44 downregulated FcbHLHs during fruit development (Figure 4B). In the peel of the ripe fruit, 51 FcbHLHs were highly expressed in "Green Peel," but only 35 members were highly expressed in "Purple-Peel." FcbHLH17 of subgroup IIId + e and FcbHLH35 of subgroup IIIc were upregulated in "Purple-Peel" and downregulated in "Green Peel" during fruit ripening, with the expression level of FcbHLH35 in ripening-stage "Purple-Peel" fruit peel being significantly higher (2.11-fold) than that in its "Green Peel" counterpart. The differential expression of FcbHLH family members in this pair of cultivars is consistent with their different secondary metabolite contents.

Anthocyanin synthesis in fig peel is light-dependent, whereas in the female flower tissue it is not (Figure 4C). For the bagged fruit, 68 and 22 FcbHLHs demonstrated upregulation and downregulation in the peel of the mature fruit, respectively, of which FcbHLH98 of subgroup Ib was downregulated 1.24-fold in the peel but upregulated in the female flower. FcbHLH42 of subgroup IIIf was downregulated 0.82-fold after bagging and might be involved in the light-dependent anthocyanin-synthesis pathway in the peel (Figure 4D).

Interaction Network Prediction of bHLHs Involved in Anthocyanin Biosynthesis
Among the 118 FcbHLHs, 16 were assigned to subfamily III, and their phylogenetic distances are shown in Figure 5A. Among the 16 genes, only FcbHLH42, encoding a bHLH-2 protein, was clustered in the IIIf subgroup, which also includes ZmLC1, AtbHLH42 (TT8), AtMYC1, and other bHLHs that have been confirmed to be related to anthocyanin synthesis.
FcbHLH42 was most closely related to apple (Md)bHLH3 and grape VvMYC1. FcbHLH42 expression was upregulated 1.68- and 1.62-fold at stages F3 and F4, when female flower color is developing, and upregulated 1.99-fold at stage P6 of peel coloring (Figure 3). "Purple-Peel" fruit bagging led to a 1.06-fold repression of FcbHLH42 compared with the control fruit. A relative expression analysis showed high synchronicity between FcbHLH42 expression and the corresponding anthocyanin content in the female flower and peel of fig fruit. Moreover, in the peel of the ripening fruit, FcbHLH42 was repressed in both "Purple-Peel" and "Green Peel" after bagging (Figure 5B). Proteins with predicted interaction scores higher than 0.7 with FcbHLH42 are shown in Figure 5C. The red circles represent transcription factors, including the R2R3-MYBs PAP1 (MYB75), TT2, MYB5, MYB113, MYB114, MYB90, and MYBL2, and TTG1 (WD40). The green circles represent key enzymes in the anthocyanin biosynthesis pathway, predicting that FcbHLH42 may be involved in anthocyanin biosynthesis.

FcbHLH3, FcMYC2, and FcbHLH14 are clustered in subgroup IIId + e (Figure 5A). MYC2 is the core element of the COI1-JAZ-MYC2 complex that serves an important role in the JA signaling pathway in plants. The bHLHs in the IIId + e subgroup conservatively participate in the regulation of genes related to stress response and JA signaling. In our transcriptome data, FcMYC2 was upregulated 1.8-fold in the late-stage peel of the naturally grown fruit, whereas it was downregulated in the bagged fruit.

FcbHLH42 Promotes Anthocyanin Accumulation in Transgenic Tobacco by Interaction With MYB
A positive interaction between FaMYB10 and FcbHLH42 was shown by yeast two-hybrid (Y2H) assay, along with weak interactions of FaMYB10 with FcbHLH3 and FcMYC2 (Figure 6A). The role of FcbHLH42 in anthocyanin biosynthesis was analyzed further by transient expression in tobacco leaves. Leaves with combined overexpression of FcbHLH42 and FaMYB10 were purple, whereas those infiltrated with the control Agrobacterium line containing empty vector, or with a single injection of FaMYB10 or FcbHLH42, were not (Figure 6B). The anthocyanin content following FcbHLH42 + FaMYB10 overexpression was also significantly higher than that of the control and the single transcription factor injections (Figure 6C). The transcription levels of four genes related to anthocyanin synthesis, i.e., NbF3H, NbDFR, NbANS, and NbUFGT, were all significantly increased in the FcbHLH42 + FaMYB10 combination. In the control group without color change, the expression levels of the four genes were low to undetectable (Figures 6D-I).

Gene Structure Provides Information on FcbHLH Evolutionary Relationships
The bHLH transcription factor family is the second largest transcription factor family in plants and participates in various regulatory metabolic activities. Following the taxonomy of the Arabidopsis bHLH family, the 118 FcbHLH genes were divided into 25 subgroups in this study, with most members of a particular subgroup bearing the same intron pattern and conserved motifs, suggesting the regulation of similar biofunctions (Figure 1 and Supplementary Table 1). The phylogenetic topology revealed 10 highly conserved amino acid motifs in the 118 FcbHLHs. Signature motifs 1 and 2 were found in almost all FcbHLH proteins and were always adjacent to each other, constituting the bHLH domain (Figure 1B). Most of the conserved motifs in a particular subgroup were similar, supporting the evolutionary classification of the FcbHLH gene family.
The uniqueness and conservation of motifs in each subgroup indicate that the functions of the encoded bHLHs in that subgroup are stable, and that the subgroup-specific motifs are pivotal in the implementation of the corresponding regulatory function.

(Figure legend, in part) Expression of FcbHLH genes of subgroup III. After log2 conversion, log2FC > 0 indicates upregulation, and log2FC < 0 indicates downregulation. F1-F6 and P1-P6 represent the six stages of female flower and fruit peel development, respectively. GPY and GPM, peel of young and mature "Green Peel" fruits, respectively. PPY and PPM, peel of young and mature "Purple-Peel" fruits, respectively. PFM, female flower of mature "Purple-Peel" fruit. BPPM and BPFM, peel and female flower of mature "Purple-Peel" fruits that were bagged at the young stage. (C) Interaction network of FcbHLH42 from the perspective of Arabidopsis thaliana homologous genes. Red represents transcription factors, and green represents key synthase genes.

The expansion of a gene family is mainly driven by gene duplication and subsequent diversification; tandem repeats and large-fragment duplication are two major means of gene expansion (Vision et al., 2000). Tandem repeats refer to two adjacent genes on the same chromosome, whereas large-fragment duplication events involve different chromosomes (McGowan et al., 2020). Chromosome localization indicated that the FcbHLH genes are unevenly distributed (Figure 2A). We inferred that 29 of the 118 FcbHLH genes had undergone tandem repeat events, similar to the ratios reported for the potato (20 of 124) and tomato (14 of 159) bHLH families (Sun et al., 2015). The sequences of tandem duplicates were very similar in the conserved region, and their genetic relationship in the evolutionary tree was also very close; similar functions are therefore expected.

bHLHs Expressed in Fig Fruit
The bHLH family plays a number of important regulatory roles in fruit-related growth and development, such as carpel, anther, and epidermal cell development, phytochrome signaling, flavonoid biosynthesis, and hormone signaling (Feller et al., 2011; Vanstraelen and Benkova, 2012). Among the 159 tomato bHLH genes, 11 displayed a tendency toward fruit-specific expression, defined as >2-fold higher expression in fruit than in other tissues. The bHLHs further showed divergent expression during fruit development and ripening, and ethylene-responsive elements were found in the promoters of seven bHLH genes (Sun et al., 2015). Three highly expressed bHLHs in the fig fruit, FcbHLH4, FcbHLH31, and FcbHLH75, all belonging to subgroup XII, were upregulated at fruit ripening. Subgroup XII has been shown to regulate brassinosteroid signaling, flower initiation, and cell elongation in other plants (Niu et al., 2017). The roles of these highly expressed FcbHLHs need to be further elucidated.

bHLHs that are suggested to regulate anthocyanin and proanthocyanidin biosynthesis are often nominated according to their expression pattern and phylogenetic clustering. The positively correlated co-expression network of FcbHLH42 of subgroup IIIf contained key structural genes of anthocyanin synthesis (FcANS, FcUFGT, etc.) and fig MYBs (FcMYB114, FcMYB5, etc.). The expression of grape VvMYC1 has been reported to correlate with the synthesis of anthocyanins and proanthocyanidins in skin and seeds during berry development, suggesting that VvMYC1 is involved in the regulation of anthocyanin and proanthocyanidin synthesis in grapes.
Similarly, the transient expression of VvMYC1 and VvMYBA1 induced anthocyanin synthesis in grapevine suspension cells (Hichri et al., 2010, 2011b). In blueberry, seven bHLH genes had differential expression patterns during fruit development (Zhao et al., 2019). Three jujube candidate bHLH genes, ZjGL3a, ZjGL3b, and ZjTT8, were suggested to be involved in anthocyanin biosynthesis and classified into subgroup III (Shi et al., 2019). Functional validation is required to confirm the specific roles of these bHLHs.

FcbHLH Involvement in Fig Fruit Anthocyanin Biosynthesis
Only a few bHLH transcription factor genes, such as VvMYC1, FvbHLH9, MdbHLH3, and MdbHLH33, have been identified as being associated with anthocyanin biosynthesis in fleshy fruit (Espley et al., 2007; Hichri et al., 2011a; Li et al., 2020b). Our study revealed the first bHLH involved in fig fruit anthocyanin biosynthesis. Previous studies have shown that bHLH genes of subgroup IIIf directly regulate anthocyanin biosynthesis. Previous reports have elucidated two functionally redundant bHLHs, AmInc I and AmDel, which directly regulate anthocyanin biosynthesis in Antirrhinum majus (Albert et al., 2021). In apple, MdbHLH3 and MdbHLH33 have been characterized in relation to anthocyanin biosynthesis (Xie et al., 2012). In fig, only FcbHLH42 was assigned to subgroup IIIf. The selection of FcbHLH42 for further study of its involvement in anthocyanin synthesis was also supported by its homologous clustering with the confirmed anthocyanin biosynthesis regulators VvMYC1 and MdbHLH3, and by the positive results of the protein interaction and co-expression analyses. Although the function of FcbHLH42 was confirmed by a series of experiments in this study, including transient expression, further investigation could reveal other FcbHLHs that regulate anthocyanin biosynthesis in the fig fruit. The FPKM values of FcbHLH42 were 15 and 26 in the female flower tissue and peel, respectively, during the stage of rapid anthocyanin increase; these values were not very high compared with those of the highly expressed FcbHLHs or of the color development-regulating FcMYBs.

FcbHLH42 is a bHLH-2 gene. In addition to bHLH-2, bHLH-1 proteins can act in controlling anthocyanin biosynthesis. bHLH-1 and bHLH-2 transcription factors are suggested to function via distinct mechanisms (Pesch et al., 2015). A previous transcriptome study revealed that FcMYB114, FcCPC, FcMYB21, and FcMYB123 might regulate anthocyanin biosynthesis in the fig fruit (Wang et al., 2019; Li et al., 2020a). WD40 and other MYB transcription factors were shown to be positively correlated with FcbHLH42 in this study, which provides a basis for better analysis of the various regulatory models of anthocyanin synthesis in fig. Moreover, our experiments showed that FcbHLH3, FcMYC2, and FcbHLH14 are closely related to JAZ family members and have a predicted interaction with TT2/TTG1/MYB75 (Supplementary Figure 3 and Supplementary Table 7). Their role in anthocyanin synthesis warrants further study.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Gasdermin E suppresses tumor growth by activating anti-tumor immunity

Cleavage of the gasdermins to produce a pore-forming N-terminal fragment causes inflammatory death (pyroptosis)1. Caspase-3 cleaves gasdermin E (GSDME, also known as DFNA5), mutated in familial aging-related hearing loss2, which converts noninflammatory apoptosis to pyroptosis in GSDME-expressing cells3-5. GSDME expression is suppressed in many cancers, and reduced GSDME is associated with decreased breast cancer survival2,6, suggesting that GSDME might be a tumor suppressor. Here we show reduced GSDME function for 20 of 22 tested cancer-associated mutations. Gsdme knockout in GSDME-expressing tumors enhances, while ectopic expression in Gsdme-repressed tumors inhibits, tumor growth. Tumor suppression is mediated by cytotoxic lymphocyte killing, since it is abrogated in perforin-deficient or killer lymphocyte-depleted mice. GSDME expression enhances tumor-associated macrophage phagocytosis and the number and functions of tumor-infiltrating NK and CD8+ T lymphocytes. Killer cell granzyme B also activates caspase-independent pyroptosis in target cells by directly cleaving GSDME at the same site as caspase-3. Non-cleavable or pore-defective GSDME is not tumor suppressive. Thus, tumor GSDME acts as a tumor suppressor by activating pyroptosis, which enhances anti-tumor immunity.

Evidence that GSDME might act as a tumour suppressor (including epigenetic GSDME inactivation by promoter DNA methylation in many cancer lines and primary cancers2,6; suppression by GSDME of colony formation and cell proliferation in gastric cancer, melanoma and colorectal cancer and of breast-cancer invasiveness; and worse five-year survival and increased metastases from breast cancers that poorly express GSDME2,6) prompted us to probe whether and how GSDME might function as a tumour suppressor. Induction of inflammatory cell death in GSDME-expressing cancers subjected to intrinsic stresses (hypoxia or endoplasmic-reticulum stress) or extrinsic challenges (chemotherapy, radiation or attack by cytotoxic lymphocytes) that activate caspase 3 could have a marked effect on the tumour microenvironment, immune-cell recruitment and function, and tumour growth. Here we show that GSDME in tumours suppresses tumour growth by increasing the anti-tumour functions of tumour-infiltrating natural-killer (NK) and CD8+ T killer lymphocytes.

GSDME converts apoptosis to pyroptosis
We first found that Gsdme messenger RNA and/or protein in seven mouse tumour cell lines was within the range seen in primary breast cancers and colorectal cancer in The Cancer Genome Atlas (TCGA) database (https://www.cancer.gov/tcga) (Extended Data Fig. 1a-c). Gsdme was knocked out in two highly expressing mouse lines, namely EMT6 triple negative breast cancer and CT26 colorectal cancer (Extended Data Fig. 1d-g), and one human neuroblastoma line, namely SH-SY5Y (Extended Data Fig. 1h). Moreover, Gsdme or GSDME was stably expressed in two poorly expressing mouse cancer cell lines, B16-F10 melanoma (hereafter referred to as B16) and 4T1E triple negative breast cancer (Extended Data Fig. 1i, j), and in a human cervical carcinoma cell line (HeLa) (Extended Data Fig. 1k). To examine the effect of GSDME on cell death, we treated B16 cells that were expressing empty vector or overexpressing mouse GSDME (mGSDME) with raptinal, a rapid caspase 3 activator7. We found that mGSDME did not alter cell-death kinetics and extent, as measured by annexin V/propidium iodide staining (Extended Data Fig. 2a).
Raptinal did not trigger pyroptosis in B16 cells expressing empty vector (with pyroptosis being assessed by uptake of SYTOX green and release of lactate dehydrogenase (LDH)), but did so in mGSDME-overexpressing cells after roughly 40 min (Extended Data Fig. 2b, c). Pyroptotic ballooning cell membranes and SYTOX green uptake were detected by time-lapse fluorescence microscopy in mGSDME-overexpressing cells only (Extended Data Fig. 2d and Supplementary Videos 1, 2). Empty-vector cells instead became apoptotic (detached and shrunken, with membrane blebbing). After adding raptinal, caspase-3-mediated cleavage was detected independently of mGSDME overexpression, beginning within 20 min, and mGSDME cleavage was detected coincidently if mGSDME was expressed (Extended Data Fig. 2e). Although raptinal did not appreciably change cellular levels of the nuclear protein HMGB1, only mGSDME-overexpressing B16 cells released HMGB1. Similarly, raptinal or tumour-necrosis-factor (TNF)-related apoptosis-inducing ligand (TRAIL) converted apoptosis to pyroptosis only in HeLa cells that overexpressed human GSDME (hGSDME) (Extended Data Fig. 2f-i and Supplementary Videos 3, 4). Thus, GSDME is cleaved rapidly after caspase 3 activation and then permeabilizes the cell membrane, converting apoptosis to pyroptosis, as previously reported3,4.

Loss-of-function GSDME mutations in cancer
If GSDME is a tumour suppressor, then some GSDME-expressing cancers might have loss-of-function (LOF) mutations. We examined the TCGA database of single-nucleotide polymorphisms in primary cancers for GSDM mutations (Extended Data Fig. 3a). GSDME and GSDMC had the most mutations, and there were more GSDME mutations around the caspase 3 cleavage site. GSDME single-nucleotide polymorphisms in the N terminus were mapped onto the GSDME N-terminal pore, modelled on the basis of the mouse GSDMA3 N terminus8 (Extended Data Fig. 3b, e). We expressed 18 GSDME N-terminal conserved-site mutants in HEK293T cells and tested the cells for pyroptosis. All of the mutant proteins were well expressed (Extended Data Fig. 3c, f), and 16 of 18 cancer-associated single-nucleotide polymorphisms substantially reduced LDH release compared with the wild-type GSDME N terminus (Extended Data Fig. 3d, g), suggesting that some cancer-associated GSDME N-terminal mutations cause LOF. Mutations in the globular domain close to the oligomerization and cell-membrane-binding sites had the largest effect. Four premature stop mutants (GSDME 1-46, 1-210, 1-451 or 1-491) also did not cause pyroptosis in HEK293T cells, although some of the mutated proteins (1-451 and 1-491) may have been unstable (Extended Data Fig. 3h-l). We verified a previously described3 LOF F2A mutation (single-letter amino-acid code) (Extended Data Fig. 3m, n). F2, D18 and P212 are conserved between human and mouse GSDME. Expression of mGSDME N termini with an F2A, D18V or P212L mutation in HEK293T cells, or of F2A or P212L full-length mGSDME mutants in 4T1E cells, markedly reduced spontaneous or raptinal-induced pyroptosis, respectively, compared with unmutated mGSDME (Extended Data Fig. 3o, r). Thus, 20 of 22 (91%) studied cancer-related GSDME mutations cause LOF.

GSDME suppresses tumour growth
EMT6 (Extended Data Fig. 4a-e) and CT26 (Extended Data Fig. 4f-j) tumours knocked out for Gsdme grew much faster in immunocompetent mice than did tumours expressing endogenous Gsdme (Extended Data Fig. 4a, f).
The tumour microenvironment of Gsdme−/− EMT6 tumours had fewer CD8+ T and NK cells and a trend towards fewer tumour-associated macrophages (Extended Data Fig. 4b). Tumour-infiltrating lymphocytes (TILs: CD8+ T and NK cells) from Gsdme−/− EMT6 and CT26 tumours also expressed less granzyme B (GzmB) and/or perforin (PFN) (Extended Data Fig. 4c, d, g, h), and produced less interferon-γ (IFNγ) and TNF after stimulation with phorbol 12-myristate 13-acetate (PMA) and ionomycin (Extended Data Fig. 4e, i, j). Thus, endogenous GSDME suppresses tumour growth and promotes TIL function.

Fig. 1 | Ectopic expression of pore-forming, but not inactive, GSDME reduces tumour growth and enhances tumour immune responses. a-d, Orthotopically implanted 4T1E cells stably expressing wild-type mGSDME (n = 6 mice per group), inactive F2A mGSDME (n = 6 mice per group) or empty vector (EV; n = 7 mice per group) were analysed for tumour growth (a) and TIL function (b-d). b-d, Percentages of CD8+ (b) and NK (c) TILs expressing GzmB or PFN, and of CD8+ TILs producing IFNγ or TNF induced by PMA and ionomycin (d). e-g, Anti-tumour immunity after orthotopic implantation of 4T1E cells stably expressing eGFP and then stably transduced to express mGSDME or empty vector (n = 7 mice per group). e, Mean numbers of tumour-specific GFP-tetramer-positive (tet+) CD8+ TILs per milligram of tumour. f, Percentage of CD8+ TILs activated by eGFP peptide to produce IFNγ or TNF. g, Percentage of GFP+ tumour-associated macrophages (TAMs) that phagocytosed tumour cells. a, The area under the curve of tumour growth curves was compared by one-way analysis of variance (ANOVA) with Holm-Sidak correction for type I error. b-d, Comparisons were calculated by one-way ANOVA using the Holm-Sidak method for multiple comparisons. e-g, Comparisons were calculated by two-tailed Student's t-test. Data shown are mean + s.e.m. and are representative of two independent experiments. Each dot represents data from an individual mouse.

Ectopic mGSDME expression in 4T1E tumours producing enhanced green fluorescent protein (eGFP) (Fig. 1e-g) also reduced tumour growth (Extended Data Fig. 5d). In this model, tumour-specific CD8+ TILs could be examined using GFP tetramers. GSDME markedly increased the number of CD8+ TILs that stained with GFP tetramers (Fig. 1e). Tetramer-positive TILs in GSDME-overexpressing tumours expressed substantially more PFN (Extended Data Fig. 5e), and also produced more cytokines after stimulation with GFP peptide (Fig. 1f). Tumour-associated macrophages in mGSDME-expressing tumours were twice as likely to be GFP-positive, indicating increased in vivo phagocytosis of tumour cells (Fig. 1g).

Killer lymphocytes mediate tumour suppression
Enhanced immune function in mGSDME-expressing tumours suggests that tumour inhibition might be immune-mediated. To investigate this hypothesis, we compared GSDME-expressing and Gsdme−/− EMT6 tumours in wild-type BALB/c mice and in NSG mice (for 'non-obese diabetic (NOD), severe combined immunodeficient (SCID), interleukin-2-receptor-γ null'), the latter of which lack mature lymphocytes (Fig. 2a). Both Gsdme+/+ and Gsdme−/− EMT6 tumours grew more rapidly in NSG than in wild-type mice, indicating immune protection in the wild-type mice even in the absence of GSDME. Gsdme deficiency did not markedly affect tumour growth in NSG mice, whereas Gsdme−/− EMT6 tumours grew faster in immunocompetent mice.
A requirement for lymphocytes in the anti-tumour effect of GSDME was also observed by comparing empty-vector and mGSDME-overexpressing B16 tumours (Fig. 2c). To determine whether killer cells are important for GSDME-mediated tumour inhibition, we implanted Gsdme+/+ and Gsdme−/− EMT6 tumours in mice lacking CD8+ T cells and/or NK cells and in mice treated with control antibodies (Fig. 2b and Extended Data Fig. 7). Depletion of either CD8+ T or NK cells modestly but noticeably reduced GSDME-mediated tumour inhibition, but GSDME expression did not alter tumour growth substantially in mice lacking both CD8+ T and NK cells, indicating that both types of killer cell are responsible for tumour suppression (Fig. 2b). Similarly, depletion of either killer-cell subset markedly increased the growth of only GSDME-expressing, and not empty-vector, B16 cells (Extended Data Fig. 8). Thus, both CD8+ T and NK cells mediate the tumour-suppressive effects of GSDME.

This killer-cell dependence of protection suggests that pyroptosis causes immunogenic cell death (ICD)9. The gold-standard criterion of ICD is protection from secondary-tumour challenge after vaccination with tumour cells undergoing ICD. To determine whether pyroptosis causes ICD, we vaccinated mice with either wild-type or GSDME-overexpressing B16 cells subcutaneously in the left flank, and challenged the mice ten days later with wild-type B16 on the right flank (Fig. 2d-f). At the vaccination site, GSDME and caspase 3 cleavage were easily detected by immunoblot only in GSDME-overexpressing tumours (Fig. 2d), indicating that pyroptosis occurred spontaneously in vivo and that cell death increased greatly in GSDME-overexpressing tumours. Moreover, vaccination with GSDME-overexpressing, compared with wild-type, B16 cells substantially reduced the growth of challenge wild-type B16 tumours (Fig. 2e) and improved tumour-free survival (Fig. 2f). Five of eight mice vaccinated with wild-type B16 cells developed palpable tumours, whereas only one of eight mice vaccinated with GSDME-overexpressing B16 cells did (P = 0.039, χ2 test). Thus, pyroptosis is a form of ICD that occurs spontaneously in GSDME-overexpressing tumours.

Recognition by CD8+ T and NK cells triggers both cytokine secretion and cytotoxic granule-mediated, PFN-dependent cell killing, but the latter is generally considered key to anti-tumour immunity. To determine whether killing mediates tumour suppression, we compared the growth of empty-vector and mGSDME-overexpressing 4T1E tumours in wild-type and Prf1−/− (PFN-deficient) mice (Fig. 2g). Both tumours grew much faster in PFN-deficient than in wild-type mice. Although GSDME markedly reduced 4T1E growth in wild-type mice, GSDME conferred no noticeable advantage in PFN-deficient mice, indicating that granule-dependent cytotoxicity was responsible for GSDME's tumour suppression. Given that GSDME had no substantial effect on tumour growth in mice lacking killer lymphocytes or PFN, GSDME's tumour suppression must be primarily mediated by killer lymphocytes.
Fig. 2 | (legend, in part) Antibody depletion (Extended Data Fig. 7a) was verified on day 3 after tumour challenge and on day 11 at necropsy (Extended Data Fig. 7b, c). c, Growth of empty vector or mGSDME-overexpressing B16 cells in C57BL/6 mice (left; empty vector, n = 5 mice per group; mGSDME, n = 8 mice per group) and NSG mice (right; n = 6 mice per group). d-f, B16 vaccination model. C57BL/6 mice were vaccinated in the left flank with empty vector or GSDME-positive B16 cells and challenged 10 days later with empty vector B16 cells in the right flank (n = 8 mice per group). Shown are immunoblots of lysates of representative left-flank tumours at necropsy probed for caspase 3, GSDME and actin loading control (d), right-flank tumour growth (e) and tumour-free survival (f). g, Comparison of growth of orthotopic empty vector and mGSDME-positive 4T1E tumours in wild-type (WT) (n = 7 mice per group) and Prf1−/− (empty vector, n = 6 mice per group; mGSDME, n = 7 mice per group) BALB/c mice. The area under the growth curves was compared by two-tailed Student's t-test. A log-rank test was used for survival analysis. Data are mean + s.e.m. and are representative of two independent experiments.

GSDME-negative tumours tended to be larger than GSDME-positive tumours at later time points even in NSG and lymphocyte-depleted mice, suggesting that cell-intrinsic mechanisms of GSDME-mediated tumour suppression may exist.

Killer cells activate pyroptosis
The strong dependence of GSDME-mediated tumour suppression on cytotoxicity suggested that killer lymphocytes might cause pyroptosis in GSDME-expressing targets. To determine whether killer cells induce pyroptosis, we incubated the human NK line YT with empty-vector and hGSDME-overexpressing HeLa cells. Although cell death was comparable (Fig. 3a), pyroptosis occurred only in GSDME-expressing HeLa cells and increased with more NK cells (Fig. 3b, c). Using time-lapse microscopy, we found that both empty-vector and GSDME-positive HeLa cells began to detach about an hour after adding YT cells (Fig. 3d and Supplementary Videos 5, 6). Empty-vector HeLa cells showed progressive apoptotic morphology and did not take up SYTOX green over 160 min, whereas GSDME-positive HeLa cells began to take up SYTOX green after 15-20 min and underwent increasing pyroptotic membrane ballooning. Caspase 3 was cleaved in YT-cocultured empty-vector and GSDME-expressing HeLa cells collected 4 h after adding YT cells, but GSDME was cleaved only in GSDME-positive cells, which released much more HMGB1 (Fig. 3e). Treatment of GSDME-positive HeLa cells with YT cells or another human NK line, NK-92, or with raptinal or TRAIL produced a GSDME fragment of the same size (Fig. 4a), suggesting that NK cells cleaved GSDME at the caspase 3 site. To determine whether NK-cell-induced pyroptosis depends on cytotoxic granule release, we measured pyroptosis in the presence or absence of the Ca2+ chelator EGTA, which inhibits cytotoxic granule release and PFN (Fig. 4b). EGTA completely blocked pyroptosis, suggesting that degranulation was required. The caspase 3 inhibitor zDEVD-fmk and the pan-caspase inhibitor zVAD-fmk only partially blocked YT-induced pyroptosis. To confirm caspase-3-independent pyroptosis, we compared YT-mediated killing of CASP3+/+ and CASP3−/− hGSDME-overexpressing HeLa cells (Fig. 4c, d). We found that CASP3 deficiency only partially reduced YT-induced pyroptosis, suggesting that NK cells activated both caspase-dependent and caspase-independent pyroptosis. To test whether necroptosis or ferroptosis (other caspase-independent inflammatory death pathways) contribute, we added a necroptosis inhibitor (necrostatin-1s; ref. 10) or three ferroptosis inhibitors (ferrostatin-1, α-tocopherol or desferoxamine 11) to YT cocultures with empty-vector or GSDME-overexpressing HeLa cells (Extended Data Fig. 9a).
None of these inhibitors suppressed YT-induced pyroptosis of GSDME-overexpressing HeLa cells, suggesting that necroptosis and ferroptosis are not involved. Moreover, expression of receptor-interacting serine/threonine kinase 3 (RIPK3), a protein that is needed to activate necroptosis, was not detected by quantitative reverse transcription (qRT)-polymerase chain reaction (PCR) in any of the cell lines studied (Extended Data Fig. 9b-e). Thus, cytotoxic granule release induces partly caspase-independent pyroptosis in GSDME-positive tumours.

D270 cleavage mediates tumour suppression
Because both GzmB and caspase 3 use cleavage at D270 to activate GSDME, mutation of this residue in tumours should abrogate tumour suppression. To test this hypothesis, we compared the growth of B16 (Extended Data Fig. 11a-d) and 4T1E (Extended Data Fig. 11e-h) cells stably overexpressing wild-type or D270A mGSDME or empty vector. Although wild-type GSDME reduced tumour growth as before (Fig. 1 and Extended Data Fig. 6), tumours overexpressing D270A GSDME or expressing empty vector grew indistinguishably (Extended Data Fig. 11a, e). Overexpression of wild-type GSDME enhanced CD8+ and NK TIL functionality, but overexpression of D270A GSDME did not alter GzmB, PFN or cytokine expression in CD8+ or NK TILs (Extended Data Fig. 11b-d, f-h). When Gsdme−/− EMT6 cells were knocked in to express empty vector or wild-type, F2A nonfunctional or D270A noncleavable GSDME at levels comparable to that of endogenous GSDME in wild-type EMT6, only wild-type GSDME substantially reduced tumour growth (Extended Data Fig. 11i, j), providing additional evidence that GSDME cleavage at D270 and pore formation are required for tumour suppression.

Discussion
Here we have shown in melanoma, triple negative breast cancer and colorectal cancer tumours that GSDME expression acts as a tumour suppressor by inducing pyroptosis, which enhances anti-tumour killer-cell cytotoxicity. The anti-tumour activity of GSDME was abrogated in mice lacking killer lymphocytes or PFN. Mutations abolishing GSDME pore formation or cleavage by GzmB/caspase 3 also blocked tumour suppression. Tumour suppression occurred without any extrinsic treatment. What, then, initially triggers pyroptosis in vivo? It may be initiated by spontaneous apoptosis of hypoxic or stressed regions of the tumour or by immune-mediated killing. Our working model is that GSDME expression in spontaneously dying tumour cells provides inflammatory danger signals that recruit immune cells to the tumour microenvironment and promote their functionality. GSDME expression increased not only the number and function of TILs, but also macrophage-mediated phagocytosis, which is predicted to enhance anti-tumour adaptive immunity. Some tumours evade immunity by resisting phagocytosis13,14; tumour GSDME may help to overcome this immune evasion strategy.

Killer-cell-mediated death was previously thought of as noninflammatory. Here we have shown, however, that killer lymphocytes activate pyroptosis when GzmB cleaves GSDME at the caspase 3 site. Caspase-resistant cancer cells should be susceptible to killer-cell-mediated apoptosis (because Gzm-activated death is mostly caspase-independent) as well as pyroptosis, provided that the cancer cells express GSDME. Pyroptosis may augment killer-cell immunity by providing adjuvant-like danger signals. Our results indicate that pyroptosis, similar to necroptosis15, is a form of ICD9.
Implanted GSDME-positive B16 melanoma cells, but not GSDME-negative cells, spontaneously underwent pyroptosis and protected mice from challenge with wild-type B16 cells. Protection by such vaccination, the gold standard for ICD9, did not require chemotherapy or radiation, suggesting that GSDME-expressing tumours, even otherwise 'immunologically cold' tumours such as B16, are spontaneously undergoing pyroptotic ICD in vivo. It is worth noting that the GSDME-positive tumours studied here do not release interleukin (IL)-1β or require it for immune protection, because Il1b was expressed by only one of the mouse cell lines studied (Extended Data Fig. 9f, g) and IL-1β was not detected in sera of GSDME-positive tumour-bearing mice (data not shown).

Direct Gzm-mediated induction of pyroptosis provides a simple mechanism for triggering inflammatory death, much simpler than canonical inflammasome activation, which requires at least four molecules (a sensor, an adaptor, an inflammatory caspase and GSDMD), or even the noncanonical pathway, which requires an inflammatory caspase and GSDMD. GSDME and other GSDMs may sense other mislocalized cytosolic proteases as danger signals. The GSDM linker region is an unstructured loop, making it a good protease substrate. Consistent with this, neutrophil serine proteases, which are homologous to Gzms, can cleave GSDMD to induce neutrophil netosis16,17.

Cancer cells have developed two strategies, epigenetic suppression of GSDME expression and LOF mutations, to avoid GSDME-mediated tumour suppression. Epigenetic suppression of GSDME is more common than GSDME mutation2,4,6. We have shown here that many cancer-related GSDME mutations reduce pyroptosis, and that mutations of D270, the shared GzmB/caspase 3 cleavage site and a prominent cancer mutation, have enabled tumours to evade tumour suppression by GSDME. Therapeutic strategies to induce GSDME, such as use of the DNA methylation inhibitor decitabine4 (an approved leukaemia and myelodysplasia drug), are worth exploring.

Fig. 4 | GzmB directly cleaves GSDME to cause pyroptosis. a, Immunoblot of hGSDME-positive HeLa cells after no treatment or treatment with YT or NK-92 cells or with TRAIL for 4 h or raptinal for 1 h. b, Effect of EGTA, zVAD-fmk and zDEVD-fmk on YT-induced SYTOX green uptake in empty vector and hGSDME-positive HeLa cells. c, d, Expression of GSDME and caspase 3 in hGSDME-positive (control) and hGSDME-positive CASP3−/− HeLa cells (c) and YT-induced SYTOX green uptake (d) in empty vector and hGSDME-positive and hGSDME-negative CASP3−/− HeLa cells. e, f, Immunoblots, probed for Flag-GSDMD (e, left) or Flag-GSDME (e, right; f), of cell lysates of HEK293T cells expressing Flag-tagged wild-type or D270A (f, right lanes) hGSDME after 1 h incubation with phosphate-buffered saline (PBS) or recombinant GzmA or GzmB (800 nM). g, Immunoblots, probed for GSDME, of cell lysates of hGSDME-positive HeLa cells, knocked out or not (control) for CASP3, after 1 h incubation with GzmB. h, Coomassie-stained SDS-PAGE gel of in vitro reaction of recombinant GzmB incubated with recombinant hGSDME for 1 h. *NT and *CT, N-terminal and C-terminal GSDME cleavage products; FL, full-length GSDME.
i, SH-SY5Y cells treated with PFN plus or minus GzmB or with medium ('Untreated'), for 2 h or for the indicated times, were analysed by immunoblot of cell lysates probed for caspase 3, GSDME or actin. j, k, Effects of GSDME knockout (KO) on the cell death and pyroptosis of SH-SY5Y cells treated with PFN plus or minus GzmB, assessed after 1 h by CellTiter-Glo (j) or SYTOX green uptake (k). Differences among multiple groups in b, d, j, k were analysed by one-way ANOVA using the Holm-Sidak method for multiple comparisons. Data are mean ± s.d. of biological triplicates and are representative of three (a-h) or two (i-k) independent experiments. ***P < 0.0001.

Online content
Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information, details of author contributions and competing interests, and statements of data and code availability are available at https://doi.org/10.1038/s41586-020-2071-9.

Methods
Data reporting
No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to outcome assessment.

Plasmids
Full-length human GSDMD and GSDME and mouse Gsdme were cloned into pFlag-CMV4 plasmids. GSDME and Gsdme point-mutation plasmids were generated by QuikChange PCR (Stratagene). GSDME truncations were amplified by PCR from pFlag-CMV4-GSDME. pLVX-Puro empty vector was a gift from C. Bowman-Colin. Full-length wild-type GSDME and mutant GSDMEs were cloned into pLVX-Puro using a HiFi one-step kit (Gibson Assembly). LentiCRISPR-v2 puro and LentiCRISPR-v2 hygro vectors were obtained from Addgene, and guide RNAs were cloned into the vectors as previously described19,20. All plasmids were verified by sequencing.

Stable cell lines
To generate lentiviruses, the pLVX-puro Gsdme plasmid was transfected into HEK293T cells with pSPAX2 and pCMV-VSV-G at a 1/1/2 ratio. Supernatants collected 2 days later were used to transduce B16 and 4T1E cells for 48 h. Puromycin (Sigma, 3 μg ml−1) was then added to select GSDME-expressing cells. pLVX-puro GSDME was used to generate HeLa cells stably expressing GSDME, and pLVX-puro empty vector was used to generate control cells. zVAD-fmk and zDEVD-fmk were from BD Biosciences. SYTOX green was from Invitrogen. Vybrant DiD dye was from ThermoFisher Scientific. A CellTiter 96 kit (Promega) was used to measure cell proliferation.

Gene-expression assays
RNA was extracted using TRIzol reagent according to the manufacturer's instructions and was subjected to reverse transcription using the SuperScript III system (Invitrogen). Gsdme expression was assayed by qRT-PCR using SsoFast Supermix (Bio-Rad). Breast cancer (BRCA) and colon cancer (COAD) RNA-sequencing expression data were obtained from TCGA using the University of California Santa Cruz (UCSC) Xena bioinformatic tool21. The log2 difference between GSDME and GAPDH expression was calculated for both tumour and normal tissue and plotted using Prism software.

Cell-death assays
To measure membrane lysis, culture medium was collected and LDH release was measured using the CytoTox 96 cytotoxicity assay (Promega) according to the manufacturer's instructions. To assess pyroptosis induced by the GSDME N terminus in HEK293T cells, we carried out HEK293T transfection using the calcium-phosphate method and measured LDH release 20 h after transfection. Pyroptotic cells were also imaged using an Olympus IX70 inverted microscope, and protein expression was analysed by immunoblot.
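The percentage calculation behind the LDH readout is not given above; a minimal sketch of the normalization commonly used with the CytoTox 96 assay (the well layout and absorbance values are hypothetical; the kit manual defines the exact correction wells):

    def percent_ldh_release(sample_od, spontaneous_od, max_lysis_od):
        """Percent LDH release relative to full (detergent) lysis, after
        subtracting spontaneous release from untreated target cells."""
        return 100.0 * (sample_od - spontaneous_od) / (max_lysis_od - spontaneous_od)

    # Hypothetical absorbance readings at 490 nm
    print(round(percent_ldh_release(0.95, 0.20, 1.80), 1))  # 46.9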
For raptinal-induced pyroptosis in B16 GSDME cells, 10 μM raptinal was used to treat B16 cells for 2 h and LDH release was measured at the indicated time points. For TRAIL-induced pyroptosis in HeLa GSDME cells, LDH release was measured 16 h after cells were treated with 100 ng ml−1 TRAIL. For YT-induced pyroptosis in HeLa GSDME cells, LDH release was measured 4 h after treatment (E/T ratio = 2/1). For PFN/GzmB-induced pyroptosis in SH-SY5Y cells, LDH was measured 1 h after treatment in GSDME-knockout cells or 2 h after treatment in the presence of caspase inhibitors. Overall cell death due to pyroptosis or apoptosis was measured at the same time as LDH release. To measure overall cell death in raptinal-treated B16 cells, we stained samples with allophycocyanin (APC)-conjugated annexin V (Invitrogen) and propidium iodide (PI) (Sigma) according to the manufacturer's instructions and analysed the results on a BD FACSCanto II (BD Biosciences) using FlowJo v.10 (TreeStar) software. Cell death was determined by counting annexin V- and/or PI-positive cells. To measure overall cell death in HeLa or SH-SY5Y cells, we assessed cell viability by measuring ATP levels using a CellTiter-Glo kit (Promega). The untreated cells were considered as a 100% viability control, and cell death was inferred as a reduction in the number of viable cells.

SYTOX green uptake and time-lapse microscopy
To assess raptinal-induced cell death, we seeded cells in 96-well plates overnight and treated them with 10 μM raptinal for 2 h in the presence of 2.5 μM SYTOX green. Fluorescence at 528 nm after excitation at 485 nm was continually recorded every 10 min using a BioTek Synergy plate reader. For time-lapse microscopy, cells seeded in glass-bottom 35-mm dishes (MatTek) overnight were treated with 10 μM raptinal in complete RPMI medium containing 2.5 μM SYTOX green and imaged using a Zeiss 880 laser scanning confocal microscope within an environmental chamber maintained at 37 °C and 5% CO2. For YT-induced cell death, HeLa cells were seeded in 96-well plates overnight and pretreated with EGTA (2 mM), zVAD-fmk (30 μM), zDEVD-fmk (30 μM), Nec-1s (20 μM), α-tocopherol (vitamin E, 100 μM), Fer-1 (2 μM) or DFO (100 μM) as indicated for 0.5 h, before YT cells at the indicated E/T ratios and 2.5 μM SYTOX green were added. Fluorescence at 528 nm after excitation at 485 nm was continually recorded every 30 min using a BioTek Synergy plate reader. Readings were normalized to control wells containing only YT cells. For time-lapse microscopy, HeLa cells were seeded in glass-bottom 35-mm dishes overnight. YT cells, stained with Vybrant DiD dye according to the manufacturer's instructions, were added at an E/T ratio of 2/1 together with 2.5 μM SYTOX green. Beginning 1 h later, cells were visualized over 90 min in an environmental chamber using a Zeiss 880 laser scanning confocal microscope.
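A sketch of the plate-reader normalization described above for YT cocultures, in which each treated-well SYTOX green trace is corrected by the matching effector-only control at every time point (all fluorescence values are hypothetical):

    def normalized_sytox(treated, yt_only):
        """Subtract the YT-cells-only background from each time point of the
        treated-well SYTOX green fluorescence trace (readings every 30 min)."""
        return [t - b for t, b in zip(treated, yt_only)]

    # Hypothetical readings (arbitrary fluorescence units)
    print(normalized_sytox([210, 480, 930, 1500], [200, 220, 250, 270]))
    # [10, 260, 680, 1230]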
Mouse studies
All procedures were conducted in compliance with all the relevant ethical regulations and were approved by the Harvard Medical School Institutional Animal Care and Use Committee. Female C57BL/6, BALB/c and NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ (NSG) mice (6-8 weeks old) were purchased from Jackson Laboratories. Prf1−/− mice (Jackson Laboratories) in the BALB/c background were bred on site. All mice were housed in the Harvard Medical School Animal Facility. For tumour-challenge experiments, B16 EV, B16 mGSDME and B16 mGSDME F2A cells (approximately 1.5 × 10^5 cells per mouse) or CT26 control and Gsdme−/− cells (approximately 2 × 10^6 cells per mouse) were injected subcutaneously into the right flank of C57BL/6 and BALB/c mice, respectively. 4T1E (empty vector, mGSDME, mGSDME F2A, mGSDME P212L or mGSDME D270A) cells (approximately 5 × 10^4 cells per mouse), 4T1E-eGFP empty vector and GSDME-overexpressing cells (approximately 1.5 × 10^5 cells per mouse), or EMT6 (control, Gsdme−/−, or Gsdme−/− knocked in to express empty vector, GSDME F2A, GSDME D270A or wild-type GSDME) cells (approximately 3 × 10^5 cells per mouse) were injected into the fourth mammary fat pad of BALB/c mice. For the vaccine/challenge experiment, C57BL/6 mice were vaccinated with 1.5 × 10^5 B16 empty vector or B16 mGSDME cells in the left flank and challenged 10 days later with 2 × 10^5 B16 empty vector cells in the right flank. Tumour growth was monitored by measuring the perpendicular diameters of the tumours every other day. When control tumours grew to roughly 3-4 mm in diameter (10-20 days after implantation), all mice in an experiment were killed and tumours were collected for analysis. For cell depletion in the B16 tumour model, CD8 antibody (clone 2.43), NK1.1 antibody (clone PK136) or the isotype control antibody (all from BioXCell) was injected intraperitoneally (300 μg per mouse) starting on day 2 after tumour challenge for three consecutive days, and every five days thereafter. For cell depletion in the EMT6 tumour model, NK cells were depleted with anti-asialo-GM1 antibody (30 μl per mouse, clone poly21460, BioLegend) on days −1, +1 and +6 of tumour challenge. Specific cell depletion was verified by staining for CD4, CD8, CD49b and NKp46, and by flow cytometry of peripheral blood mononuclear cells obtained on day 3 or 7 after tumour challenge and of tumour-infiltrating lymphocytes obtained at the time of necropsy (Extended Data Fig. 7a).

Isolation of tumour-infiltrating immune cells
Tumours were collected, cut into small pieces and treated with 2 mg ml−1 collagenase D, 100 μg ml−1 DNase I (both from Sigma) and 2% FBS in RPMI with agitation for 30 min. Tumour fragments were homogenized and filtered through 70-μm strainers, and immune cells were purified by Percoll-gradient centrifugation and washed with Leibovitz's L-15 medium.

Protein expression and purification
The full-length coding sequence of human GSDME was cloned into the pDB.His.MBP vector to generate a recombinant construct with an N-terminal polyhistidine-maltose-binding protein (His6-MBP) tag followed by a tobacco etch virus (TEV) protease cleavage site. The plasmid was verified by DNA sequencing and transformed into Escherichia coli BL21 (DE3) cells. Successful transformants were selected on an LB plate supplemented with 50 μg ml−1 kanamycin, transferred to LB medium with the same antibiotic, and grown at 37 °C with vigorous shaking until the optical density reached 1.0. Protein expression was then induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG) at 26 °C overnight. Cells were collected by centrifugation and frozen in liquid nitrogen for long-term storage at −80 °C. To purify human GSDME, we resuspended thawed E. coli pellets in buffer A (50 mM Tris-HCl at pH 8.0, 150 mM NaCl) and sonicated the cells to lyse them. The recombinant protein was captured on Ni-NTA beads (Qiagen) using a gravity-flow column and eluted with buffer A supplemented with 500 mM imidazole.
The His6-MBP tag was removed by overnight incubation with TEV at 4 °C followed by Ni-NTA affinity chromatography. The flow-through containing GSDME was concentrated and further fractionated using a Superdex 200 gel filtration column (GE Healthcare Life Sciences) equilibrated with buffer A. Monomer fractions of GSDME were pooled and frozen at −80 °C for further use. Recombinant GzmA and GzmB were purified from HEK293T cells and PFN was purified from YT-Indy NK cells as previously described (refs. 23, 24).

PFN/GzmB killing assay
The PFN/GzmB-mediated killing assay in SH-SY5Y cells was performed as previously described (ref. 24). In brief, 500 nM GzmB and/or sublytic PFN in buffer P (10 mM HEPES pH 7.5 in Hanks' balanced salt solution (HBSS)) was added to SH-SY5Y cells in buffer C (10 mM HEPES pH 7.5, 4 mM CaCl2, 0.4% bovine serum albumin (BSA) in HBSS). The sublytic concentration of PFN was determined as a concentration that caused 5-15% cytolysis of the target cells on its own. After 2 h of incubation, LDH release, cell death, SYTOX green uptake and GSDME cleavage were assayed as described above.

In vitro cleavage assay
For in vitro cleavage in cell lysates, HEK293T cells transiently expressing Flag-hGSDMD or Flag-hGSDME for 48 h were lysed in lysis buffer containing 50 mM Tris-HCl pH 7.4, 150 mM NaCl and 1% Triton X-100 (2 × 10⁶ cells per millilitre), and cell lysates (20 μl) were incubated with GzmA or GzmB at 37 °C for 1 h. Cleavage products were detected by anti-Flag immunoblot. Cleavage of recombinant GSDME protein by recombinant GzmB was analysed by SDS-PAGE and Coomassie staining after incubation in buffer A at 37 °C for 1 h.

Statistics
A Student's t-test (two-tailed) or Mann-Whitney test was used to determine differences between two groups. Multiple comparisons between two groups were performed by multiple t-tests with type I error correction. One- or two-way ANOVA was used to calculate differences among multiple populations. Differences between tumour growth curves and SYTOX green uptake curves were compared by first calculating the area-under-curve values for each sample and then comparing different groups using the Student's t-test or one-way ANOVA. Type I errors were corrected by the Holm-Sidak method. P values of less than 0.05 were considered significant.

Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
Software and code
Data collection: SWISS-MODEL was used to model the GSDME-NT structure. Breast cancer and colon cancer RNA-seq expression data were obtained from The Cancer Genome Atlas (TCGA) using the University of California Santa Cruz (UCSC) Xena bioinformatic tool.
Data analysis: Pymol was used to analyse the modelled GSDME-NT structure. Graph design and statistical analysis were performed using Prism v.6.0. Protein and DNA sequence analyses were performed with BLAST. Images and videos were processed and analysed with ImageJ Fiji. Flow cytometry data were analysed with FlowJo v.10.

Data availability
All data generated or analysed during this study are included in this manuscript and its supplementary information. Source data and uncropped blot images are provided with the paper.
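As a rough illustration of the AUC-based curve comparison described under Statistics, the sketch below reduces each sample's time course to a single area-under-curve value by trapezoidal integration and compares groups with a two-tailed t-test. This is only a minimal Python analogue with made-up numbers, not the analysis code used in the study.

```python
# Sketch of the AUC-based curve comparison: reduce each sample's time
# course to one area-under-curve value, then compare groups by t-test.
# The numbers below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

timepoints = np.array([0, 2, 4, 6, 8, 10], dtype=float)  # e.g. days

# Each row is one sample's curve (e.g. tumour volume or SYTOX fluorescence).
group_control = np.array([[0, 5, 12, 30, 55, 90],
                          [0, 4, 10, 28, 50, 85],
                          [0, 6, 14, 33, 60, 95]], dtype=float)
group_treated = np.array([[0, 3, 6, 10, 18, 30],
                          [0, 2, 5, 9, 15, 26],
                          [0, 4, 7, 12, 20, 33]], dtype=float)

# Trapezoidal area under each sample's curve.
auc_control = np.trapz(group_control, timepoints, axis=1)
auc_treated = np.trapz(group_treated, timepoints, axis=1)

# Two-tailed Student's t-test on the per-sample AUC values.
t_stat, p_value = stats.ttest_ind(auc_control, auc_treated)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```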
"Your click decides your fate": Leveraging clickstream patterns from MOOC videos to infer students' information processing&attrition behavior With an expansive and ubiquitously available gold mine of educational data, Massive Open Online courses (MOOCs) have become the an important foci of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index, that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students' engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss about prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs be benefited, if course instructors were to better comprehend factors that lead to student attrition. LIST OF TABLES Title Page : Fuzzy string similarity weights for the sample behavioral action P("PlSfPaSf"). Weight(P, A/B) represents the similarity of the pattern P w.r.t string A/B. LIST OF FIGURES Title Page Fig 1 : Q-gram based cosine distance measure. v(s; q) is a nonnegative integer vector whose coefficients represent the number of occurrences of every possible q-gram in s. ABSTRACT With an expansive and ubiquitously available gold mine of educational data, Massive Open Online courses (MOOCs) have become the an important foci of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. 
As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students' engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss the prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs might benefit if course instructors were to better comprehend the factors that lead to student attrition. Implications for research and practice are discussed.

Massive Open Online Courses (MOOCs)
Mushrooming as a scalable lifelong learning paradigm, Massive Open Online Courses (MOOCs) have enjoyed significant limelight in recent years, both in industry and academia (Haggard et al., 2013). The rationale behind the design of these MOOCs is the underlying theory of connectivism (Kop and Adrian, 2008), which stresses more on interaction with other participants and on the student-information relationship. As MOOCs continue to proliferate in the realm of online education, we expect such a form of lifelong learning, holding tremendous potential, to provide students with cognitive surplus beyond traditional forms of tutelage. The euphoria is about the transformative potential of MOOCs to revolutionize online education (North et al., 2014), by connecting and fostering interaction among millions of learners who otherwise would never have met, and providing autonomy to these learners to grapple with the course instruction at their own pace of understanding. However, despite this expediency, there is also considerable skepticism in the learning analytics research community about MOOC productiveness (Nawrot and Antoine, 2014), primarily because of unsatisfactory learning outcomes that plague these educational platforms and induce a funnel of participation (Clow, 2013). Because of the option of free and open registration, a publicly shared curriculum and open-ended outcomes, very often it has been observed that there is massive enrollment in these digitalized MOOC courses. However, participation in a MOOC is "emergent, fragmented, diffuse, and diverse" (McAulay et al., 2010). The extremely high rates of attrition that have been reported for this first generation of MOOCs are of great concern.

Motivation
With the "one size fits all" approach that MOOCs follow, scaled-up class sizes, and lack of face-to-face interaction coupled with such high student-teacher ratios (Guo and Katharina, 2014), students' motivation to follow the course oscillates (Davis et al., 2014). This is comprehensibly reflected in escalating attrition rates in MOOCs, ever since they have started maturing (Belanger and Jessica, 2013; Schmidt and Zach, 2013; Yang et al., 2013). Supporting the participation of these struggling students may be the first low-hanging fruit for increasing the success rate of courses.
Because it is not feasible for MOOC instructors to manually provide individualized attention that caters to different backgrounds, diverse skill levels, learning goals and preferences of students, there is an increasing need to make directed efforts towards automatically providing better personalized content in e-learning (Lie et al., 2014; Sinha, 2014a). The provision of guidance with regard to the organization of the study and regulation of learning is a domain that also needs to be addressed. A prerequisite for such an undertaking is that we, as MOOC researchers, understand how diverse ecologies of participation develop as students interact with the course material (Fischer, 2011), and how learners distribute their attention across multiple forms of computer mediated inputs in MOOCs. This would help to better their experience of participation along the way as they struggle and then ultimately drop out, for example by examining participation rates of collaborations through group mirrors and metacognitive tools that dynamically display students' progress and help in interaction regulation (Jermann and Dillenbourg, 2008).

Our Current Research Overview
Video lectures form a primary and extremely crucial part of MOOC instruction design. They serve as gateways to draw students into the course. Concept discussions, demos and tutorials that are held within these short video lectures not only guide learners to complete course assignments, but also encourage them to discuss the taught syllabus on MOOC discussion forums. Prior work has investigated how video production style (slides, code, classroom, Khan Academy style etc.) relates to students' engagement, and examined what features of the video lecture and instruction delivery, such as slide transitions (change in visual content), the instructor changing topic (topic modeling and n-gram analysis) or variations in the instructor's acoustic stream (volume, pitch, speaking rate), lead to peaks in viewership activity (Kim et al., 2014). There has been increasing focus on analyzing raw click-level interactions resulting from student activities within individual MOOC videos (Guo et al., 2014b). However, to the best of our knowledge, we present the first study that describes usage of such detailed clickstream information to form cognitive video watching states that summarize a student's clickstream. Instead of using summative features that express student engagement, we leverage recurring click behaviors of students interacting with MOOC video lectures to construct their video watching profiles. To an extent, clickstreams simplify computing student retention, since a large variety of interactions could potentially indicate continued interest in a course. Based on these richly logged interactions of students, we develop computational methods that answer critical questions such as a) how long will students grapple with the course material and what will their engagement trajectory look like, b) what future click interactions will characterize their behavior, and c) whether students are ultimately going to survive through the end of the video. As an effort to improve the second generation of MOOC offerings, we perform a hierarchical three-level clickstream analysis, deeply rooted in foundations of cognitive psychology. Specifically, we explore at a micro level whether, and how, cognitive mind states govern the formation and occurrence of micro-level click patterns.
Towards this end, we also develop a quantitative information processing index and monitor its variations among different student partitions that we define for the MOOC. Such an operationalization can help course instructors to reason how students' navigational style reflects cognitive resource allocation for meaning processing and retention of concepts taught in the MOOC. Furthermore, we delineate a methodology to group students and unveil distinct patterns of video lecture viewing.

Study Context
The data for our current study in this thesis comes from an introductory-level MOOC course.

Level 2 (Behavioral Actions)
Existing literature on web usage mining says that representing clicks using higher-level categories/concepts, instead of raw clicks, better exposes the browsing pattern of users. This might be because high-level categories have better noise tolerance than naive clickstream logs. The results obtained from grouping clickstream sequences at per-click resolution are often difficult to interpret, as such a fine resolution leads to a wide variety of sequences, many of which are semantically equivalent. To tackle this problem and get more insights into student behavior in MOOCs, the clicks can first be grouped into categories based on suitable metadata information, and then the sequences can be formed from the concept category of the click events present in the sequences. Doing this would reduce the sequence length, making it more easily interpretable. There is some existing literature (Banerjee and Ghosh, 2000; Wang et al., 2013) that just considers a click as a binary event (yes/no) and discusses formation of concept-based categories based on the area/sub-area of the stimulus where the click was made. However, in our MOOC data, because of the absence of metadata about the clicks, it is more meaningful to form such behavioral categories from the click categories themselves, which are encoded at very fine granularity. Therefore, to summarize a student's clickstream, we obtain the n-grams with maximum frequency from the clickstream sequence (a contiguous sequence of 'n' click actions). Such a simple n-gram representation convincingly captures the most frequently occurring click actions that students make in conjunction with each other (n=4 was empirically determined as a good limit on clickstream subsequence length, avoiding over-specificity). Then, we construct seven semantically meaningful behavioral categories using these n-grams, selecting representative click groups that occur within the top 'k' most frequent n-grams (k=100). Each behavioral category acts like a latent variable, which is difficult to measure from data directly. We exclude the n-gram sequences having only 'play' and 'pause' click actions in the clickstream. In an attempt to quantify the importance of each behavioral action in characterizing the clickstream, we adopt a fuzzy string matching approach. The advantage of such an approach over a simple "vector of proportions" representation is that a weight (based on similarity of the click groups present in each behavioral category with the full clickstream sequence) is assigned to each of the grouped behavioral patterns for a given student's video watching state sequence. The fuzzy string method (Van, 2014) is justified because it caters to the noise that might be present in raw clickstream logs of students, in six different ways, as mentioned in Table 1. After identifying these cases, and after meticulous experimental evaluation, we apply the following distance metrics and tuning parameters: the cosine similarity metric (1 − cosine distance; figure 1) between the vectors of counts of n-gram (n=4) occurrences for Cases 1 and 2, and the Levenshtein similarity metric (1 − Levenshtein distance; figure 2) for Case 3 (weight for deletion=0, weight for insertion and substitution=1) and Cases 4, 5 and 6 (weight for deletion=0.1, weight for insertion and substitution=1); together these capture all six intuitions. As a next step, all subcategories of click groups that lie within each behavioral category are aggregated by summing up the individual fuzzy string similarity weights. Then, we perform a discretization of these summed-up weights, for each behavioral category, by equal frequency (High/Low). The concern of adding up two distance metrics that do not lie in the same range is thus alleviated, because the dichotomization automatically places highly negative values in the "Low" category and positive values closer to 0 in the "High" category. This results in a clickstream vector, where every element of the vector tells us the weight (importance) of a behavioral category for characterizing the clickstream. Thus, the output from Level 2 is such a summarized clickstream vector, for example: (Skipping=High, Fast Watching=High, Checkback Reference=Low, Rewatch=Low, ...).
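The sketch below illustrates the core of this Level 2 summarization in Python: counting the most frequent 4-grams, scoring hypothetical category patterns against the full clickstream with a q-gram cosine similarity (the Levenshtein variants used for Cases 3-6 are omitted for brevity), and median-splitting the summed weights into High/Low. The click codes and category patterns are illustrative stand-ins, not the actual ones mined from the course data.

```python
# Illustrative sketch of Level 2: raw clickstream -> weighted behavioral
# categories. Click codes and category patterns below are hypothetical.
from collections import Counter
import math

def top_ngrams(s, n=4, k=5):
    """Most frequent contiguous n-grams of a click string."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1)).most_common(k)

def qgrams(s, q=2):
    return [s[i:i + q] for i in range(len(s) - q + 1)]

def qgram_cosine(a, b, q=2):
    """Cosine similarity between q-gram count vectors (cf. figure 1)."""
    va, vb = Counter(qgrams(a, q)), Counter(qgrams(b, q))
    dot = sum(va[g] * vb[g] for g in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Single-letter click coding (hypothetical): P=play, A=pause,
# F=seek forward, B=seek backward, R=rate change.
students = {"s1": "PAPFFPBPAPRFFPBB",
            "s2": "PAPAPAPBPBPAPAPA",
            "s3": "PFFFPFFPAFFPFFPA"}

# Representative click groups per behavioral category (illustrative only).
categories = {"Skipping": ["PFFP", "FFPA"], "Rewatch": ["PBPB", "BPAP"]}

# Weight of a category = summed similarity of its click groups to the stream.
weights = {sid: {cat: sum(qgram_cosine(p, cs) for p in pats)
                 for cat, pats in categories.items()}
           for sid, cs in students.items()}

# Equal-frequency (median-split) discretization per category, across students.
for cat in categories:
    med = sorted(weights[sid][cat] for sid in students)[len(students) // 2]
    for sid in students:
        weights[sid][cat] = "High" if weights[sid][cat] >= med else "Low"

print(top_ngrams(students["s1"]))
print(weights)   # e.g. {'s1': {'Skipping': 'High', 'Rewatch': 'High'}, ...}
```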
Level 3 (Information Processing)
Watching MOOC videos is an interaction between the student and the medium, and therefore the conceptualization of higher-order thinking eventually leading to knowledge acquisition (Chi, 2000) is under the control of both the student (who decides what video segment to watch, when/in what order to watch it, and how hard an effort to make to try and understand a specific video segment) and the medium/video lecture (the content/features of which decide what capacity allocation is required by the student to fully process the information contained). Research has consistently found that the level of cognitive engagement is an important aspect of student participation (Carini et al., 2006). Cognitive processing is influenced by the appetitive (approach) and aversive (avoidance) motivational systems of a student, which activate in response to motivationally relevant stimuli in the environment (Cacioppo and Gardner, 1999). In the context of MOOCs, the appetitive system's goal is in-depth exploration and information intake, while the aversive system primarily serves as a motivator for not attending to certain MOOC video segments. Thus, click behaviors representative of the appetitive motivational system are rewatch/clear concept/slow watching, while click behaviors representative of the aversive motivational system are skipping/fast watching. In this work, we construct a student's information processing index based on the "Limited Capacity Information Processing Approach" (Basil, 1994; Lang et al., 1996; Lang, 2000), which asserts that people independently allocate a limited amount of cognitive resources to tasks from a shared pool. Before explaining the dynamic process of human cognition through the lens of this model, we must be aware of the following two assumptions: A) People are limited-capacity information processors; in case of cognitive overload, processing suffers. B) The sub-processes involved in the information processing pipeline occur constantly, continuously and simultaneously.
When students build a mental representation of the information presented in a MOOC video lecture segment, it is not precise. To what extent different subprocesses in the pipeline share information, or which subprocesses make the largest resource demands, depends on a student's prior knowledge/skill level, motivations for joining the course and outcomes sought. Because students choose bits of information (specific content) to process and encode, they navigate the videos in a non-linear fashion. Moreover, students in MOOCs can adjust the speed of information processing (by pausing, seeking forward/backward, and rate-change clicks). Therefore, time-sensitive subprocesses in the pipeline (depicted in figure 3) seem compatible with this notion. Video watching in MOOCs requires students to recall facts that they already know, so as to follow and comprehend the concept being currently taught. So, depending on a) the expertise level, which decides how available past knowledge is and how hard it is to retrieve previously known facts, b) the perception of the video lecture as difficult or simple to understand, and c) the motivation to learn or just have a look at the video lecture, cognitive resource allocation would vary among these various subprocesses. This, in turn, would be reflected by the underlying nature of clicks students make, which serve as responses to the stimuli. Consider the example of students who watch the MOOC lecture primarily for reasons such as gaining familiarity with the topic. Such students would purposely not allocate their processing resources to the "memory" part of the information processing pipeline (encode, store, retrieve). Additionally, they will decode and process the minimal information that is required to follow the story. On the contrary, students who watch the MOOC lecture with the aim of scoring well in post-tests (MOOC quizzes and assignments) would allocate high cognitive processing to understand, learn and retain information from the lecture. Thus, such students would process information more fully and thoroughly, despite a possibility of cognitive overload. In order to relate our behavioral actions constructed from the raw clickstream with this rich and informative stream of literature, we buttress our "Information Processing Index (IPI)" development on the following arguments. Figure 3 summarizes the clarifications described below:
• When students perceive a certain video lecture segment as difficult, they allocate more capacity (cognitive resources) to repeatedly decode, process and store information. Students such as these, who "rewatch" or try to "clear their concept", are more likely to go through the pipeline stages sequentially, rather than simultaneously (high information processing involved).
• When students perceive a certain video lecture segment as easy/boring/uninteresting, they allocate very minimal/no capacity to process, decode and store information in memory. Such "skipping" behavior involves low information processing. Students switch back and forth between the stages while processing information in MOOC videos.
• When students perceive a certain video lecture segment as simple to understand (perhaps because they are already familiar with the concept being taught), they allocate comparatively less capacity than normal/regular watching, and comparatively more capacity than completely skipping the video segment, to process, decode, encode and store the video segment information.
Such students, who exhibit "fast watching" in their clickstream, are likely to do low information processing overall.
• When students perceive a certain video lecture segment as difficult to understand (perhaps because some tough concept is being taught), they need to allocate comparatively higher capacity (processing resources) than normal watching to process, decode, encode and store the video segment information. Such students, who exhibit "slow watching" in their clickstream, are likely to do high information processing overall.
• Students might check back for reference in the following two cases in MOOCs. For both these cases, the "meaning processing" (Stage 1) part of the pipeline is likely to be processed normally. Thus, the problem is more probable to occur in the "memory" (Stage 2) part of the pipeline (i.e., not in the information processing, but in the outcome of the information processing). So, cognitive resource allocation should be comparatively higher than for skipping/fast watching (because Stage 1 of information processing has been successfully done), but because information processing is still low in Stage 2, this action should be weighted negative overall. Such students, who exhibit "checking back for reference" in their clickstream, are likely to do low information processing.
  o A) If a previously taught concept is referred to, and the student had not paid sufficient attention previously but is aware of such a concept being mentioned earlier, he has to refer back (problem in the encoding/recognition stage, fewer resources allocated to this step, therefore poor memory for detail).
  o B) If a previously taught concept is referred to which happens to be, for example, some complex formula, it is not expected of a student to exactly remember the formula. Therefore, even though he might have paid high attention to encode the information earlier, storage would have been cut short at that time (shared resource pool). Therefore, the student might not be able to concurrently retrieve the information now and has to refer back (problem in the storage and retrieval stages, fewer resources allocated to these steps, therefore information poorly stored) (Lang and Basil, 1998).
• Students can adjust and get to their comfort level of video watching speed while watching video lectures in MOOCs. Though some amount of cognitive processing is involved to determine the pace at which the MOOC instruction and the student's understanding will be coherent, this group of click behaviors is not directly related to the actual processing of information content. A "playrate transition" just determines the speed at which a student wants to process information. So, such a behavioral action can be considered neutral.
Using these arguments, we create a taxonomy of behavioral actions exhibited in the clickstream to construct a quantitative "Information Processing Index (IPI)". The above established hierarchy of information processing is summarized in Figure 4. Negative weights are necessary to distinguish between the "high" and "low" weights for each behavioral action. For example, if skipping=high is weighted −3, skipping=low will be weighted +3 on the information processing index. Using these linear weight assignments, we define a student's information processing index as follows:

IPI = Σ_{i=1}^{7} (−1)^j · WeightAssign(Behavioral Action i),   j = 1, 2 depending on whether the behavioral action is weighted low or high.
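To make the operationalization concrete, here is a minimal Python sketch of the IPI computation. Only the sign hierarchy and the skipping = ±3 example come from the text; the remaining magnitudes are illustrative stand-ins for the Figure 4 weights.

```python
# Minimal sketch of the IPI computation. Signs follow the hierarchy above:
# rewatch / clear concept / slow watching raise IPI; skipping / fast
# watching / checkback reference lower it; playrate transition is neutral.
# Magnitudes are illustrative stand-ins for the Figure 4 weights (only the
# skipping = +/-3 example is given explicitly in the text).
WEIGHTS = {
    "rewatch": 3,
    "clear_concept": 3,
    "slow_watching": 2,
    "skipping": -3,
    "fast_watching": -2,
    "checkback_reference": -1,
    "playrate_transition": 0,
}

def ipi(summary):
    """summary maps each behavioral action to 'High' or 'Low'.
    'High' contributes the signed weight; 'Low' flips the sign."""
    return sum(WEIGHTS[a] if level == "High" else -WEIGHTS[a]
               for a, level in summary.items())

student = {"skipping": "High", "fast_watching": "High",
           "checkback_reference": "Low", "rewatch": "Low",
           "clear_concept": "Low", "slow_watching": "Low",
           "playrate_transition": "High"}
print(ipi(student))  # -12: negative => low information processing overall
```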
One of the focal utilities of developing such a quantitative index is that meaningful interventions could be provided in real time to students, as they steadily build up their video watching profiles while interacting with MOOC video lectures. When IPI > 0, it can be inferred that high information processing is being done by students. Therefore, MOOC instructors need to check for coherency between the pace of instruction delivery and students' understanding. This might also hint towards redesigning specific video lecture segments and simplifying them so that they become easier to follow. On the contrary, when IPI < 0, low information processing is being done by students. Therefore, MOOC instructors need to help students better engage with the course, by providing them additional interesting reading/assignment material, or fixing video lecture content such that it captures students' attention. The neutral case of IPI = 0 occurs when a student's locally exhibited high and low information processing needs in their evolving clickstream sequence counterbalance each other. So, interventions need to be made depending on the video lecture segment where IPI was >0 or <0.

CHAPTER 3 VALIDATION EXPERIMENTS 1: MACHINE LEARNING
We use machine learning to validate the methodology developed in Sections 2.1 and 2.2 for summarizing students' clickstreams. The motivation behind setting up these experiments is to automatically measure students' length of interaction with MOOC video lectures, understand how they develop their video watching profiles and discern what viewing profile of students leads to in-video dropouts. Furthermore, we validate the methodology developed in Section 3.3 by statistically analyzing variations of IPI and testing its sensitivity to student attrition using survival models.

Preliminaries on Machine Learning
Machine learning, a branch of artificial intelligence, concerns the construction and study of systems that can learn from data. The core of machine learning deals with representation and generalization. Representation of data instances and functions evaluated on these instances are part of all machine learning systems. Generalization is the property that the system will perform well on unseen data instances; the conditions under which this can be guaranteed are a key object of study in the subfield of computational learning theory. A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow the algorithm to correctly determine the class labels for unseen instances. One of the popular supervised learning algorithms that has been found to work well with text data is Logistic Regression (figure 5). It is a discriminative and probabilistic classification model. By discriminative, we mean that the algorithm assumes some functional form for P(Y|X) or for the decision boundary, and estimates the parameters of P(Y|X) directly from the training data.
This is unlike the generative Naive Bayes model, which assumes some functional form for P(X|Y) and P(Y), estimates the parameters of P(X|Y) and P(Y) directly from the training data, and uses Bayes' rule to calculate P(Y|X). High coefficient weights in Logistic Regression may lead to overfitting. So, we also apply L2 regularization in our approach. Regularization works by adding a penalty associated with high coefficient values. L1 regularization usually corresponds to setting a Laplacean prior on the regression coefficients and picking a maximum a posteriori hypothesis; L2 similarly corresponds to a Gaussian prior. L2 regularization is expected to do better for our case because it is directly related to minimizing the VC dimension of the learned classifier (the capacity/complexity of a classifier: the number of points that can be shattered). As students progress through the video, they slowly build up their video watching profile by interacting with the stimulus in different proportions, which in turn depend on their click action sequences. This motivates our next machine learning experiment, which seeks to derive utility from the first two experiments. Navigating away from the video without completing it fully is an outcome of low student engagement. A student is more likely to watch till the end of a video lecture if the presentation activates his thinking. Thus, it is interesting to see whether the nature of students' interaction provides us a hint about such in-video dropouts. Prior work has made a preliminary study on how in-video dropout is correlated with the length of the video, and how in-video dropout varies among first-time watchers and rewatchers (Guo et al., 2014b). However, we consider video interaction features at a much finer granularity, representative of how students progress through the video. In doing so, we use detailed clickstream information, including 'Seekfw', 'Seekbw' and 'RateChange' behavior, in addition to merely play/pause information. In the changed setup, we consider summarized behavioral category vectors (output from Level 2) as column features.

CHAPTER 4 VALIDATION EXPERIMENTS 2: IPI VARIATIONS
To see how IPI fluctuates among different student partitions, and to validate whether our operationalization produces meaningful results, we do extensive statistical analysis: specifically z-tests (to test the significance of the difference between the means of 2 samples drawn from the same population {population standard deviation known}), computing one-way ANOVA measures (to test the significance of the difference between more than 2 sample means) and performing post-hoc tests. The z statistic is

z = (x̄1 − x̄2) / (σ · √(1/n1 + 1/n2)),

where x̄1 is the mean of sample 1, x̄2 is the mean of sample 2, σ is the population standard deviation, while n1 and n2 are the sizes of samples 1 and 2. For the post-hoc comparisons, the honestly significant difference is

HSD = q · √(MS_within / n),

where MS_within is the mean square output from the computed ANOVA, n is the total number of data points for a particular group, and q is the studentized range statistic. Figure 6 depicts the variation of IPI among high versus low engagers and in-video dropouts versus non-dropouts, in the same video lecture 4-6 from the course that we have been performing our experiments on. Similar findings were also confirmed with other randomly chosen course videos. Figure 7 shows the frequency distribution of IPI. These figures concur with our intuitions. The average IPI is significantly higher for students with "High" engagement (|z|=8.296, p<0.01) and "Non-dropouts" (|z|=22.54, p<0.01). This is also reflected in the histogram, which clearly shows that many non-dropouts have positive IPI, which pushes up the average.
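For reference, the two-sample z-test used above can be computed as in the short Python sketch below; the sample means, σ and group sizes are hypothetical placeholders, while the reported |z| values were computed on the actual course data.

```python
# Two-sample z-test for a difference of means, per the formula above
# (population standard deviation sigma assumed known).
import math
from scipy.stats import norm

def two_sample_z(mean1, mean2, sigma, n1, n2):
    """Returns (z, two-tailed p) for z = (x1 - x2)/(sigma*sqrt(1/n1 + 1/n2))."""
    z = (mean1 - mean2) / (sigma * math.sqrt(1.0 / n1 + 1.0 / n2))
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return z, p

# Hypothetical IPI summary statistics for two student partitions.
z, p = two_sample_z(mean1=2.1, mean2=-0.4, sigma=3.0, n1=500, n2=480)
print(f"|z| = {abs(z):.3f}, p = {p:.4g}")
```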
In order to generalize these findings, we also look at the variations of IPI among some other student partitions that we made for the whole course. "Viewers" are students who have watched or interacted with some video lecture but have not done the exercises; the "Active" students additionally turn in homework. Among the "Active" participants, some students get the "Statement" of accomplishment. Students who achieve a grade (weighted sum of quizzes and assignments completed) of 80% or more achieve a "Distinction"; students who achieve a grade from 60-80% are classified into the "Normal" category; students having a grade less than 60% fall into the "None" achievement category. MOOC dropouts are those students who cease to actively participate in the MOOC (we are concerned with video lecture viewing only) before the last week, i.e., students who do not finish the course. An important observation in figure 8 is that IPI is clearly able to distinguish between Non-dropouts and Dropouts (|z|=9.06, p<0.01). This is also reflected in the histogram in figure 9, which verifies that a greater proportion of "Non-dropouts" have positive IPI. The more information processing done by students, the greater the video lecture involvement, and the higher the chances to derive true utility from the video lecture and remain excited and motivated to stay in the course. In addition, we also obtain striking differences between "Active" versus "Viewers" (|z|=10.45, p<0.01). Intuitively too, we expect "Viewers" to have higher IPI than the "Active" class, because, as their primary MOOC activity, "Viewers" grapple more with the video lecture. In figure 8, we observe that students who do not achieve a statement have significantly higher IPI than the ones who get a statement (|z|=4.58, p<0.01). However, achieving a statement requires students to compulsorily complete course quizzes and assignments, in addition to watching MOOC video lectures (which of course is not compulsory and carries no credit). Therefore, IPI alone is not a very good measure to distinguish the "Statement" versus "No Statement" student groups. A similar argument holds for the three classes of "Achievement". Though the differences in mean values for the "Distinction", "None" and "Normal" groups are significant (F(2,19525)=11.16, p<0.0001), we must be careful in interpretation, because the definitions for this partition are based fully on course grades (which of course will be partly affected by video lecture viewing). While DeBoer et al. (2013) have studied how diversity in MOOC students' demographics and behaviors is correlated to course performance and success, explicit background data on students was collected via an exit survey, rather than developing an implicit metric to measure performance.

A) Markov Chains: Based on the sequence of events, a transition probability matrix is formed, and the distribution over the next state can be predicted from the current state using the formula

s_{t+1} = s_t · P,

where 'P' is the transition matrix derived from fitting the Markov chain of a particular order. For example, figure 11 represents the 1st-order transition matrix (P) for some student's clickstream sequence. The AIC or BIC for a model is usually written in the form [−2 log L + kp], where L is the likelihood function, p is the number of parameters in the model, and k is 2 for AIC and log(n) for BIC.

B) K-Means Clustering: Clustering is an unsupervised machine learning problem where the objective is to find hidden structure in unlabeled data. K-means clustering is a popular centroid-based clustering algorithm. Concretely, given a set of observations (x1, x2, …, xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets (k ≤ n), S = {S1, S2, …, Sk}, so as to minimize the within-cluster sum of squares (WCSS). This is represented in figure 12. The algorithm is described in figure 13.
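The sketch below shows this pipeline in Python with scikit-learn: fit a first-order transition matrix per student, flatten it, and cluster the matrices with k-means. The click codes and toy streams are hypothetical, and k is shrunk to fit the toy data (k=8 was optimal on the full dataset).

```python
# Sketch: fit a first-order Markov transition matrix per student and
# cluster the flattened matrices with k-means. Click codes hypothetical.
import numpy as np
from sklearn.cluster import KMeans

CLICKS = ["Pl", "Pa", "Sf", "Sb", "Rc"]   # play, pause, seek fw/bw, ratechange
IDX = {c: i for i, c in enumerate(CLICKS)}

def transition_matrix(stream):
    """Row-normalized count matrix of click -> next click (first order)."""
    m = np.zeros((len(CLICKS), len(CLICKS)))
    for a, b in zip(stream, stream[1:]):
        m[IDX[a], IDX[b]] += 1
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

streams = [["Pl", "Pa", "Pl", "Sf", "Pl", "Pa"],
           ["Pl", "Sb", "Pl", "Sb", "Pa", "Pl"],
           ["Pl", "Sf", "Sf", "Rc", "Pl", "Pa"],
           ["Pl", "Pa", "Pl", "Pa", "Pl", "Pa"]]

X = np.array([transition_matrix(s).ravel() for s in streams])

# k=2 here only because the toy set is tiny (k=8 on the full data).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)

# Next-state distribution from a current state s_t: s_{t+1} = s_t . P
P = transition_matrix(streams[0])
s_t = np.eye(len(CLICKS))[IDX["Pl"]]      # currently at a 'play' click
print(s_t @ P)                            # distribution over the next click
```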
We fit Markov chains of orders 1 to 5 to each student's clickstream sequence; the first-order chain has the maximum log likelihood and the minimum value of AIC and BIC (Dziak et al., 2012) when compared to the Markov chains from order 2 to 5. The output is a transition probability matrix for each clickstream sequence. In the next step, we present these Markov matrices as input to the K-means clustering algorithm. The motivating intuition is to group similar matrices, having a lot of click overlap (accounting for order and number) and similar transition probabilities. On varying 'k' from 4 to 9, k=8 gives the minimum within-cluster sum of squares and the maximum between-cluster sum of squares. The proportion of clicks belonging to each raw click category is presented in figure 14. Cluster attributes such as the time spent on seek forward, seek backward and pause are depicted in figure 15. C1 and C2 represent normal watchers who primarily play and pause without doing many other activities. However, the average clickstream sequence length for C1 is four times that of C2, and that is why these two clusters are differentiated. Cluster C3 represents watchers with a low proportion of seek/scroll forward and seek/scroll backward clicks, while cluster C7 represents watchers with a high proportion of such clicks.

Preliminaries for Approach 2
Statistics provide useful tools for summarizing large amounts of social network information, and for treating observations as stochastic, rather than deterministic, outcomes of social processes. When we use statistics to describe network data, we are describing properties of the distribution of relations, or ties, among actors, rather than properties of the distribution of attributes across actors. Standard statistical tools for the analysis of variables cannot be directly applied to inferential questions, hypothesis or significance tests, because the individuals embedded in a network are not independent observations drawn at random from some large population. The "bootstrapping" approach (estimating the variation of estimates of the parameter of interest from large numbers of random sub-samples of actors) is used to get more correct estimates of the reliability and stability of estimates (i.e., standard errors). Concretely, to examine relations between 2 types of ties in a network, we essentially have two adjacency matrices, one for Type X ties and one for Type Y ties, and we would like to correlate them. We cannot do this using a standard statistical package, for two reasons. First, statistical packages are set up to correlate vectors, not matrices. This is not a very serious problem, however, because we could just reshape the matrices so that all the values in each matrix were lined up in a single column with NxN values. We could then correlate the columns corresponding to each matrix. Second, the significance test in a standard statistical package makes a number of assumptions about the data which are violated by network data. For example, standard inferential tests assume that the data observations are statistically independent, which, in the case of matrices, they are not. To see this, consider that all the values along one row of an adjacency matrix pertain to a single node.
If that node has a special quality, such as being very anti-social, it will affect all of their relations with others, introducing a lack of independence among all those cells in the matrix. Another typical assumption of classical tests is that variables are drawn from a population with a particular distribution, such as a normal distribution. Oftentimes in network data, the distribution of the population variables is not normal or is simply unknown. Moreover, the data is probably not a random sample, or even a sample at all; all we have is a population. The QAP correlation/regression technique correlates two or more adjacency matrices by effectively reshaping them into two long columns and calculating an ordinary measure of statistical association such as Pearson's r. We call this the observed correlation. To calculate the significance of the observed correlation, the method compares the observed correlation to the correlations between thousands of pairs of matrices that are just like the data matrices, but are known to be independent of each other. To construct a p-value, it simply counts the proportion of these correlations among independent matrices that were as large as the observed correlation. As usual, we typically consider a p-value of less than 5%/1% to be significant (i.e., supporting the hypothesis that the two matrices are related). QAP Regression allows us to model the values of a dependent variable (such as Type X ties) using multiple independent variables (such as Type Y ties and some other relations such as Type Z ties). The randomly generated pairs of adjacency matrices for each permutation are produced by randomly rearranging rows and columns (therefore independent), rather than changing individual matrix entries. This has 2 advantages: old and new matrices have the same properties, such as mean and standard deviation (s.d.), and the more subtle auto-correlational properties of the matrices are preserved, so when we compare the observed correlation against our distribution of correlations, we can be sure we are comparing apples with apples.
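A bare-bones Python sketch of the QAP correlation test described above follows; the tie matrices are random toys, and rows and columns are permuted together, exactly as described.

```python
# Sketch of QAP correlation: correlate two adjacency matrices, then build a
# null distribution by jointly permuting rows and columns of one matrix.
import numpy as np

rng = np.random.default_rng(0)

def offdiag(m):
    """Flatten the off-diagonal entries (self-ties are not meaningful)."""
    return m[~np.eye(m.shape[0], dtype=bool)]

def qap_correlation(x, y, n_perm=5000):
    obs = np.corrcoef(offdiag(x), offdiag(y))[0, 1]
    count = 0
    n = x.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        xp = x[p][:, p]                 # permute rows and columns together
        r = np.corrcoef(offdiag(xp), offdiag(y))[0, 1]
        if abs(r) >= abs(obs):
            count += 1
    return obs, count / n_perm          # observed r and permutation p-value

# Toy symmetric tie matrices (e.g. 'similar VWSS' and 'similar engagement').
n = 12
a = np.triu(rng.integers(0, 2, (n, n)), 1); a = a + a.T
b = np.triu(rng.integers(0, 2, (n, n)), 1); b = b + b.T

r, p = qap_correlation(a, b)
print(f"observed r = {r:.3f}, QAP p = {p:.3f}")
```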
Approach 2: Social Network Analysis based modeling
In order to gain better visibility into how students in MOOCs are informally connected through a common pattern of clickstream interaction, we now present a social network analysis based student modeling. Specific questions that guide our work going forward include a study of the significance of influencing relations (similarity in the proportion of video watched, engagement with the video, average playing rate or difficulty rating for the video) and video interaction attributes (number of seeks/pauses, time spent on pause/seek) that affect the relationship between students having similar clickstream sequences. The data for this social network based student modeling comes from video lecture 4-6 (6th lecture in the 4th week of the course). Our steps to set up this analysis are as follows:
1) Firstly, we discretize various video interaction features:
• Engagement attribute = (summation of time spent on pause, seekFw and seekBw) * average play rate. This is discretized by equal frequency into 2 bins: 1 (Low, or <=1112 secs), 2 (High, or >1112 secs).
• Video played proportion attribute = (played length/total video length) * average play rate * 100. This is discretized by equal width into 4 bins: 1 (<51.105%), 2 (51.105%, 100.737%), 3 (100.737%, 150.369%), 4 (>150.369%).
• Average play rate attribute. This is discretized by equal frequency into 2 bins: 1 (Low, or <=1), 2 (High, or >=1).
2) Then, we form 4 different kinds of networks from the clickstream data for our experimentation purposes:
• 1st network (VWSS): 2 students are connected if their video watching state sequences (VWSS) are similar/belong to the same cluster (density: 0.45).
• 2nd network (VPP): 2 students are connected if their video play proportion is similar/belongs to the same cluster (density: 0.49).
• 3rd network (ET): 2 students are connected if their engagement with the video is similar/belongs to the same cluster (density: 0.49).
• 4th network (APR): 2 students are connected if their average playing rate is similar/belongs to the same cluster (density: 0.56).
{To quantitatively define similarity of VWSS, we represent each VWSS using 8 numeric metrics, such as the proportion of Pl/Pa/Sf/Sb/Rc clicks and timeonPause/SeekFw/SeekBw. Then, K-means clustering is applied to find similar VWSS (distance metric: Euclidean, scoring metric: distance to centroids). After optimization, we group VWSS into 4 clusters (k=4). So, two students will be connected if their VWSS belong to the same cluster.}
3) To motivate our Dyadic Hypothesis, we first combine the individual networks into multiplex relations, examining the overall density and the density within groups. We form the multiplex relation quantitatively using boolean combinations. For example, if there was a link between student A and student B because of having similar VWSS (1), AND there was also a link between student A and student B because of having similar ET (1), then the multiplex relation adjacency matrix would also have a 1 in the (ij)th entry corresponding to (student A, student B). All 3 combinations in table 4 below were constructed similarly. The results, summarized in Figure 16, conform with the motivation presented earlier.

Variation of dropouts in the MOOC
We may expect that when students find the course too tough to follow, uninteresting or boring, they will not engage with future videos; or, when students seem very interested in understanding the video and exhibit a lot of rewatching behavior, we might expect them to stay on through the course-end video lecture. Therefore, students who do not stay till the last week of the course (exhibit any video lecture viewing) are considered complete course dropouts. We plot figures 18 and 19 to depict the proportion of course dropouts by week and by the different student groups discussed in previous chapters. Figure 18 concurs with our intuitions. The student groups "Active", "Statement" and "Distinction" have a significantly lower dropout proportion. If we analyze figure 18 together with figure 8 (variation of average information processing indices), we can observe that the average IPI for "Viewers" is much higher than for the "Active" class, despite a higher dropout proportion. This indicates that though "Viewers" put in higher effort and more cognitive processing to follow the video lectures, there is insufficiency in understanding the course instruction, as well as in getting in sync with the instruction delivery method and its pace. A similar correspondence can be seen between the average IPI for the "No Statement" class and their dropout proportion, as compared to the "Statement" class of students. The peaks (local maxima) in figure 19 highlight video lectures that are "not easy to follow" or are "unable to hold students' attention", because we lose comparatively more students after these lectures.
In this figure, we also notice that a very high number of students drop out after the 1st week (introductory set of lectures). One possible explanation for this might be that such students register for the course just to see what the course is about, without having any actual intention to follow the course. Information about such students is very helpful for a course instructor to design motivating interventions to help them follow the course. One principal utility of detecting dropouts early is the recommendation of selected future video lectures for students to watch (for example, where an interesting concept/case study/application is going to be discussed).

Preliminaries for Dropout prediction
Survival analysis (Miller, 2011) is a statistical modeling technique used to model the effect of one or more indicator variables at a time point on the probability of an event occurring at the next time point. In our case, we are modeling the effect of certain video interaction attributes (such as summarized clickstream behavior, information processing index, average playing rate etc.) on the probability that a student drops out of video lecture participation at the next time point. Survival models are a form of proportional odds logistic regression, and they are known to provide less biased estimates than simpler techniques (e.g., standard least squares linear regression) that do not take into account the potentially truncated nature of time-to-event data (e.g., students who had not yet ceased their participation at the time of the analysis but might at some point subsequently). In a survival model, a prediction about the probability of an event occurring is made at each time point based on the presence of some set of predictors. The estimated weights on the predictors are referred to as hazard ratios. The hazard ratio of a predictor indicates how the relative likelihood of the failure (in our case, student dropout) occurring increases or decreases with an increase or decrease in the associated predictor. A hazard ratio of 1 means the factor has no effect. If the hazard ratio is a fraction, then the factor decreases the probability of the event. For example, if the hazard ratio was a number n of value 0.4, it would mean that for every standard deviation greater than average the predictor variable is, the event is 60% less likely to occur (i.e., 1 − n). If the hazard ratio is instead greater than 1, that would mean that the factor has a positive effect on the probability of the event. In particular, if the hazard ratio is 1.25, then for every standard deviation greater than average the predictor variable is, the event is 25% more likely to occur (i.e., n − 1).

Machine Learning Approach
Having seen how dropout proportion varies across weeks and across different student partitions in the MOOC, we now seek to understand more about how the participation trajectories of complete course dropouts differ from those of non-dropouts. Therefore, it is interesting to investigate the extent to which engagement, video play proportion and IPI trajectories influence attrition behavior. Trajectories develop as sequences of discretized symbols; for example, an IPI trajectory might read "VH L VL H ..." (very high, low, very low, high). Regularized Logistic Regression is used as the training algorithm (with 5-fold cross-validation annotated by student-id and a rare feature extraction threshold of 5). The dependent variable is the binary variable, complete course dropout (0/1). The dropout variable is 1 on the student's last week of active participation, and is 0 for all other weeks. If a student's final participation week is the last course week, the dropout variable will remain 0 for that student for all weeks (the student is a non-dropout). To extract the interaction footprint of a student before he drops out of the course, we extract the following features: A) transition features from the "Engagement trajectory", "Video Play Proportion trajectory" and "IPI trajectory" of students for the videos watched (n-grams of length 4, 5 and string length) from the 0th to the (n−1)th instant; B) Engagement, Video Play Proportion and IPI trajectory values for the nth instant (attributes for the last video lecture watched before dropping out); C) proportions of the different symbol representations in the trajectories (for example, in a trajectory such as HLLHH, proportion(H)=60%, proportion(L)=40%). Results: We achieve an accuracy of 0.80 and a kappa of 0.57 (random baseline performance is 0.5). The false negative rate is 0.143.
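A simplified, self-contained sketch of this prediction setup is shown below (Python with scikit-learn). The toy trajectories and labels are invented, the n-gram range is shortened, and plain cross-validation replaces the student-annotated folds and rare-feature threshold used in the actual experiments.

```python
# Sketch: n-gram features from discretized trajectories + L2-regularized
# logistic regression for complete-course dropout. Toy data, illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Each string is one student's IPI trajectory over the videos watched
# (VH=very high, H=high, L=low, VL=very low); label 1 = complete dropout.
trajectories = ["VH H H VH", "L VL L L", "H H L VH", "VL L VL VL",
                "VH VH H H", "L L VL L", "H VH H L", "VL VL L VL"]
dropped_out = [0, 1, 0, 1, 0, 1, 0, 1]

# Token n-grams of the trajectory symbols act as transition features.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+"),
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)
scores = cross_val_score(model, trajectories, dropped_out, cv=2)
print("accuracy per fold:", scores)
```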
Survival Analysis
Using the statistical programming language R, we perform survival analysis on our MOOC dataset. The variables we use are our quantitative IPI index, discretized engagement (high/low), discretized video play proportion (low/medium/high/very high), jumped length forward (in secs), jumped length backward (in secs), the summarized and discretized clickstream action vectors (rewatch, skipping, playrate transition, clear concept, fast watching, slow watching, checkback reference) and actual engagement (in secs). As an input, we standardize all the numeric variables (by computing z-scores). We transform the representations of "low" and "high" engagement to the binary values 0 and 1 to provide as input to the survival model. Also, we transform the "low", "medium", "high" and "very high" video play proportion categories into 0, 1, 2, 3. We remove all correlated variables, keeping only variables having less than 0.5 correlation for our analysis, to prevent multicollinearity problems. The results are summarized in Table 7. Effects are reported in terms of the hazard ratio (HR), which is the effect of an explanatory variable on the risk or probability of a participant dropping out of the course, based on video lecture participation. Because all the explanatory variables except engagement/video play proportion have been standardized, the hazard rate here is the predicted change in the probability of dropout from the course for a unit increase in the predictor variable (i.e., engagement changing from 0 to 1, or video play proportion changing by 1 unit (for example, from 0 to 1, 2, 3), or a continuous variable increasing by a standard deviation when all the other variables are at their mean levels).
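Although the analysis above was run in R, a rough Python analogue using the lifelines library is sketched below with toy weekly records; the variable set is truncated and the numbers are invented, so the fitted hazard ratios will not match Table 7.

```python
# Rough lifelines analogue of the R survival model: Cox proportional
# hazards on per-student records. All numbers are invented toy data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "weeks_active": [3, 8, 5, 8, 2, 7, 8, 4],   # week of last participation
    "dropout":      [1, 0, 1, 0, 1, 1, 0, 1],   # 1 = dropped out (event)
    "ipi_z":        [-1.2, 1.4, -0.3, 0.2, -1.6, 0.8, 1.1, -0.5],
    "engagement":   [0, 1, 1, 1, 0, 1, 0, 0],   # discretized low/high
})

# A small ridge penalty keeps the fit stable on this tiny toy sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="weeks_active", event_col="dropout")

# exp(coef) is the hazard ratio: e.g. an HR of 0.63 on ipi_z would mean a
# one-s.d. increase in IPI makes dropout 37% less likely, as reported below.
print(cph.summary[["coef", "exp(coef)", "p"]])
```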
Survival Analysis

Using the statistical programming language R, we perform survival analysis on our MOOC dataset. The variables we use are our quantitative IPI index, discretized engagement (high/low), discretized video play proportion (low/medium/high/very high), length jumped forward (in secs), length jumped backward (in secs), summarized and discretized clickstream action vectors (rewatch, skipping, playratetransition, clearconcept, fastwatching, slowwatching, checkbackreference), and actual engagement (in secs). As input, we standardize all the numeric variables (by computing z-scores). We transform the "low" and "high" engagement representations into the binary values 0 and 1 for input to the survival model, and likewise transform the "low", "medium", "high", and "very high" video play proportion categories into 0, 1, 2, 3. To prevent multicollinearity problems, we remove correlated variables, keeping only variables with pairwise correlation below 0.5. The results are summarized in Table 7. Effects are reported in terms of the hazard ratio (HR), which is the effect of an explanatory variable on the risk or probability of a participant dropping out of the course, based on video lecture participation. Because all the explanatory variables except engagement and video play proportion have been standardized, the hazard ratio here is the predicted change in the probability of dropout for a unit increase in the predictor variable (i.e., engagement changing from 0 to 1; video play proportion changing by 1 unit, for example from 0 to 1; or a continuous variable increasing by one standard deviation while all other variables are at their mean levels).

The hazard ratio for IPI means that a student's dropout from the MOOC is 37% (100% - (100% x 0.63)) less likely if the student's IPI is one standard deviation greater than average. Such students grapple more with the course material (as reflected by their video lecture participation). Because video play proportion is a categorical variable, its hazard ratio tells us that increasing the video play proportion by 1 unit decreases the likelihood of student dropout by 37% (100% - (100% x 0.63)). As students watch a greater proportion of each video, this indicates their interest; as a result, they are less likely to drop out of the MOOC. Among the other interesting results are the hazard ratios for the rewatch and playrate-transition behavioral actions. If a student's rewatching behavior changes by 1 unit (from low to high), the student is 33% (100% - (100% x 0.67)) less likely to drop out. If a student's playrate-transition behavior changes by 1 unit (from low to high), the student is 35% ((100% x 1.35) - 100%) more likely to drop out. This indicates that such students have severe problems coping with the instruction pace, and that there is a definite mismatch between instruction pace and understanding.

In contrast to regular courses, where students engage with class materials in a structured and monitored way and instructors directly observe student behavior and provide feedback, in MOOCs it is important to target the limited instructor attention to the students who need it most (Ramesh et al., 2013). By identifying students who are likely to end up not completing the class before it is too late, we can perform targeted interventions (e.g., sending encouraging emails, posting reminders, allocating limited tutoring resources) to try to improve the engagement of these students. For example, our prediction model could be used to better target limited instructor attention to users who are motivated in general but are experiencing a temporary lack of motivation that might threaten their continued participation, in particular those who have shown serious intention of finishing the course by interacting with a number of video lectures.

CONCLUSION AND FUTURE WORK

In this thesis work, we have begun to lay a foundation for research investigating students' information processing behavior while interacting with MOOC video lectures. The cognitive video watching model that we applied to develop a simple yet potent IPI using linear weight assignments can be used effectively as an operationalization for making predictions regarding critical learner behavior. As a next step, we plan to construct a gradient function that captures the information processing hierarchy in a more robust manner. An additional challenge is to fuse video clickstreams with page-view clickstreams gathered from the MOOC, to better understand students' interests during their interaction. In our work going forward, we seek to understand how students' perceived difficulty (gathered in the form of a rating via an explicit questionnaire) is reflected in their engagement with the videos and how it relates to high and low overall MOOC performance. Highlighting video lectures that are "not easy to follow" or are "unable to hold students' attention" would help a course instructor design motivating interventions for students to follow the course. Another interesting enhancement to our work would be a comparative analysis of the currently studied introductory-level MOOC with intermediate- and advanced-level courses, to contrast and generalize the findings. We will draw from work integrating statistical approaches such as survival models and social network analysis techniques, in order to form combined representations of video lecture and page-view clickstream behavior as well as the discussion forum footprint. This will help us gain better visibility into how students participate in these MOOCs as a whole. Combining such student inputs with more granular behaviors such as eye tracking (Schneider et al., 2013) would help us investigate deeply the factors that influence students' interaction.
Impact of the use of sexual material and online sexual activity during preventive social isolation due to COVID-19

Introduction. Preventive social isolation due to coronavirus disease (COVID-19) has represented one of the greatest health challenges of the last decades worldwide. As a result of social isolation, the consumption of information in digital media, such as the use of online sexual material, has increased, leading to risky sexual behavior in young people. Objective. To quantify the impact on the use and type of online sexual material and to determine the predictors of online sexual activity in people in preventive social isolation due to COVID-19. Method. Multivariate cross-sectional study; 385 participants were studied and contacted through an online survey. Results. Internet pages and social networks are the main platforms for the use of online sexual material, and its consumption was more frequent in those who had more days of preventive social isolation. Predictors of sexual activity were cybersex (β = .38), excitation (β = .36), masturbation (β = .34), and adventure (β = .33), all statistically significant (p < .001). Discussion and conclusion. Privacy plays an important role in the use of online sexual material and activities, and greater consumption is found in intimate settings. It is important to be alert to the effects of the pandemic on sexual risk behavior, and further research is needed.

INTRODUCTION

The world has changed because of the coronavirus disease, COVID-19. The pandemic changed the dynamics of the lives of millions of people as a result of the preventive social isolation enforced as a universal action to prevent the rapid spread of the virus (World Health Organization [WHO], 2019), and the internet has become an ally in reducing geographical and economic gaps. With preventive social isolation, social networks such as YouTube, Facebook, and WhatsApp, audiovisual content platforms, and online pages have been the main showcases for people's entertainment during the COVID-19 confinement, especially in the 15-49 age group (Subía-Arellano, Muñoz, & Navarrete, 2020). Due to this overexposure to digital media, people may today be in greater contact with online sexual material, which is defined as interaction with various digital materials (internet pages, email, chat, online video channels, video calls, social networks, online forums) for the purpose of sexual activity, mainly masturbation, stimulation, excitation, seeking sexual adventures, gathering people for sexual encounters, sharing images with sexual content, and the practice of cybersex (Valdez Montero, 2011). It is striking that anyone in front of a device with an internet connection can reach explicit sexual content such as pornography with just a few clicks; what is really worrying is that some viewers, mostly young people, take what they see on a screen as reality and put it into practice (Hernández Torres, Benavides Torres, González y González, & Onofre Rodríguez, 2019). In this atypical situation, the use of online sexual material may escape controlled measures.
On the one hand, people facing the risk of contagion by COVID-19 take shelter at home to prevent the disease; on the other, they are exposed to online content that may pose a risk to their sexual health (Folch, Álvarez, Casabona, Brotons, & Castellsagué, 2015; Shallo & Mengesha, 2019; Valdez Montero, Benavides Torres, González y González, Onofre Rodríguez, & Castillo Arcos, 2015). New studies have found that during the COVID-19 pandemic the use of sexual material and online sexual activities increased due to confinement. Pornhub, the world's leading pornography website, reported a meteoric rise during the pandemic: the site receives 37 million visits per year, with 47% more online traffic than normal. Likewise, sexual activities such as masturbation, excitation, and sharing images with sexual content were frequent activities of internet users during preventive social isolation due to COVID-19 (Ibarra et al., 2020). Variables positively related to these behaviors have also been identified, such as masturbation, excitation, and cybersex (Subía-Arellano et al., 2020; Uzieblo & Prescott, 2020). The consumption of online sexual material and the practice of online sexual activities may encourage risky sexual behaviors in users, because the materials they watch and the high predisposition to their use during the COVID-19 pandemic suggest a risk to people's sexual health (Valdez Montero et al., 2015). It is important to quantify this use in order to identify risk behaviors in the population in a timely manner and thereby protect people's sexual health. In this sense, the World Health Organization defines sexual health as a state of physical, mental, and social well-being in relation to sexuality; it is not merely the absence of disease, dysfunction, or infirmity. Sexual health requires a positive and respectful approach to sexuality and sexual relationships, as well as the possibility of having pleasurable and safe sexual experiences, free of coercion, discrimination, and violence (Fernández Velasco, 2018).

Part of the theoretical explanation of the risk of using online sexual material comes from Fishbein and Ajzen, who postulate in their theory of reasoned action that if a person perceives a benefit in engaging in a certain observed behavior, he or she is more likely to carry it out (Fishbein & Ajzen, 2011; Yzer, 2012). Along these lines, Albert Bandura's social cognitive model holds that sexual behavior is acquired and learned through observation, and that observation has a significant impact on learning and on the modeling of behavior (Bandura, 2011; Glanz, Rimer, & Viswanath, 2008). In turn, Prochaska and DiClemente's transtheoretical model holds that when people have an addictive or problematic behavior, the first stage can be denial or even unawareness of having the problem, because to change a habit a human being first needs to be conscious of having a problem; such a person may therefore be in a pre-contemplation phase (Glanz et al., 2008; Prochaska et al., 1994).

Social networks such as YouTube, Facebook, WhatsApp, and others spread and promote the use of online sexual material in video, audio, or text format. Video calls, e-mails, and online forums are among the most common means by which people come into contact with online sexual material. Pornography, nudity, sexual conversations, cybersex, and similar content are all stimuli that may lead people to engage in risky sexual behavior and thereby risk acquiring an STI or even HIV.
Similarly, adventure is an activity carried out online for the purpose of having sex, usually with casual partners (Castillo-Arcos, Benavides-Torres, & López-Rosales, 2012; García-Vega, Menéndez, Fernández, & Cuesta, 2012; Hernández Torres et al., 2018; Valdez Montero et al., 2015). The Organización de Estados Iberoamericanos (OEI, Organization of Ibero-American States) recently conducted a census in Mexico which found that nine out of ten Mexicans said they had consulted pornography of various types on the internet in search of movies or images. Similarly, more than 40% of the participants reported maintaining erotic contact by chat, and 35% by webcam, with strangers (Organización de Estados Iberoamericanos [OEI], 2016). In addition, the Asociación Mexicana de Internet (AMIPCI, Mexican Association of Internet) reports that the number of internet users is growing: there are more than 58 million cybernauts, and on average each person spends more than eight hours a day surfing the internet, most of them from a smartphone, which offers greater ease of access and privacy (AMIPCI, 2016). Therefore, the purpose of the present study was to quantify the impact on the use and type of online sexual material and to determine the predictors of online sexual activity in people in preventive social isolation due to COVID-19.

Design of the study

A multivariate cross-sectional study was carried out between March and June 2020 in the city of Torreón, Coahuila, Mexico. The sample of 385 participants was calculated with the statistical program nQuery Advisor 7.0 for Windows, with a 95% confidence level, an acceptable error of 5%, a design effect of 1.0, and a power of 90% (Cohen, 2013; Grove, Burns, & Gray, 2013). Participants had to meet the following inclusion criteria: be over 18 years old, navigate the internet with a mobile or fixed device, and be in preventive social isolation at home due to COVID-19 at the time of data collection. No participants were eliminated from the study. The following research hypothesis was posed: young people would use social networks as the main type of online sexual material, and masturbation and excitation would be the main predictors of online sexual activities during preventive social isolation due to COVID-19.

Measurements

A data sheet was used to verify the inclusion criteria, with some additional questions to capture the general demographics of the participants. The use of online sexual material and sexual activity were measured with an automated, web-linked instrument composed of 43 items, which measured the type of sexual material, coercive use, problematic use, and online sexual activities. For the purposes of this study, only two dimensions of the instrument were used. The first, the type of online sexual material, has eight items; an example question is: Do you think that watching sexual material online is not as bad as doing it live? The response options for this dimension ranged from completely disagree = 1 to completely agree = 5.
It is worth mentioning that in this dimension participants were able to select more than one type of online sexual material; that is, each participant could choose more than one response option, which is reflected in the frequencies and percentages reported for the type of online sexual material. The online sexual activities dimension has 16 items; an example question is: Have you masturbated using internet pages with sexual material? The response options for this dimension ranged from never = 0 to always = 4, and both dimensions have an ordinal measurement level. This instrument has been previously validated in the Mexican population, with a Cronbach's alpha of .72 (Benavides, Valdez Moreno, González, & Onofre Rodríguez, 2012; Valdez Montero, 2011).

Procedure

The SurveyMonkey tool was used to administer the instrument through a web link starting in March 2020. As a data collection strategy, and in accordance with the COVID-19 recommendations, the general population was invited to participate in the study through the website of the Universidad Autónoma de Coahuila's newsletter; social networks were also used to extend the invitation to potential participants. These platforms displayed the web link that redirected them to the survey on the SurveyMonkey platform. The present study complied with the General Health Law of the Secretaría de Salud (SSA, Secretary of Health) in the area of Health Research, in keeping with Chapter I on the ethical aspects of research on human beings, and specifically with articles 13 and 14 on the anonymity of participants and informed consent (Secretaría de Salud [SSA], 1987).

Statistical analysis

Once the data were collected through the SurveyMonkey platform, they were exported to the Statistical Package for the Social Sciences (SPSS), version 23. The Kolmogorov-Smirnov test was performed to determine whether the distribution of the data followed a normal pattern. Cronbach's alpha was also calculated to assess the reliability of the applied instrument. For data management, measures of central tendency, percentages, and frequencies were used to quantify the impact on the use and type of online sexual material. It should be noted that in the dimension measuring this variable the participant could choose more than one answer option, which was consistent with the objective of quantifying the use and type of online sexual material. For this reason, the multiple responses were not regrouped into a single variable; the final counts of frequencies and percentages therefore exceed the sample size. To determine the predictors of online sexual activities, multiple linear regression models were fitted with the stepwise method; as credibility criteria, the assumption of independence of the residuals was checked with the Durbin-Watson statistic (values close to 2) and the assumption of normality of the residuals was checked for each of the online sexual activities. In addition, for external validity, cross-validation of the model was performed by calculating the mean square error, and the criterion of sample size per input variable was applied (Cohen, 2013; Miles, Huberman, & Saldana, 2013).
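As an illustration of the reliability and residual-independence checks described above, the following is a minimal sketch in Python (the authors used SPSS); the data array, variable names, and coefficients used as toy inputs are hypothetical.

```python
# Minimal sketch of two checks described above: Cronbach's alpha for the
# scale and the Durbin-Watson statistic for residual independence.
# (The study used SPSS; this Python version is only illustrative, and the
# toy data below are hypothetical.)
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of ordinal responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(385, 16))        # 16-item 0..4 scale
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))

# Regression of an online sexual activity score on standardized predictors,
# with the Durbin-Watson check on the residuals (values near 2 are desired).
X = sm.add_constant(rng.standard_normal((385, 3)))    # e.g. cybersex, excitation, adventure
y = X @ np.array([1.0, 0.38, 0.36, 0.34]) + rng.standard_normal(385)
fit = sm.OLS(y, X).fit()
print("R^2:", round(fit.rsquared, 2),
      "Durbin-Watson:", round(durbin_watson(fit.resid), 2))
```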
Ethical considerations

This research was carried out in accordance with the ethical standards of the Universidad Autónoma de Coahuila, Escuela de Licenciatura en Enfermería - Unidad Torreón, under institutional review protocol 20CEI024201141127, Mexico City.

RESULTS

The average age of the participants was 27 years (SD = 8.31; min = 18, max = 58); 68.3% were female and 31.7% were male; 41.6% were single, while 23.1% were married. Regarding the days of preventive social isolation due to COVID-19, participants reported an average of 15.5 days (SD = 1.80; min = 1, max = 40); the lowest percentage corresponded to those who had spent less than ten days in preventive social isolation (7.5%), and the largest group had spent more than 20 days in quarantine (22.6%). Marital and relationship status are summarized in Table 1. It is worth mentioning that 41% of participants were alone at the time of accessing online sexual material during preventive social isolation. The Kolmogorov-Smirnov normality test was carried out in the construction of the indices of the final scale, and normality was also tested in the multiple linear regression models; the statistic yielded p < .005, showing that the data were not normally distributed. The reliability of the online sexual material instrument was also calculated: the global scale presented an acceptable Cronbach's alpha coefficient in this study (α = .82), which indicates a good fit.

Table 2 shows the frequencies and percentages of the use and type of online sexual material by days of quarantine in two periods. In the first period, from 1 to 14 days, the type of online sexual material used by the Mexicans studied was mainly internet pages with 47.5% use, followed by mobile chat applications with 41.5% and social networks with 23.2%. Video calls, video portals, online sex forums, and email accounted for smaller percentages. In the second period, of 15 days or more, there was a considerable increase in consumption of every type of online sexual material: internet pages reached 83.8% use, mobile chat applications increased by 121.8%, and social networks reached 56.7%, while video calls, video portals, online sex forums, and email remained less used. Table 3 shows the online sexual activities that Mexicans carried out in the period of 1 to 14 days of preventive social isolation due to COVID-19: masturbation was the main one with 43.1%, followed by excitation with 34.7% and stimulation with 19.7%. The sexual activities with the lowest percentages in the studied sample were meeting people, adventure, images, and cybersex. After 15 days or more of quarantine, the most frequent sexual activities were masturbation with 87.2%, excitation with 75.8%, and stimulation with 47.2%. All activities increased with the days of social isolation; the least frequent online sexual activities remained meeting people, adventure, images, and cybersex.
In the multiple linear regression models fitted with the stepwise method, the variables for each model were selected automatically according to their significance values; this method guarantees that the exploratory introduction and removal of variables in the regression models follows a purely mathematical logic, which is why it was considered the most suitable technique for the present study. Table 4 shows the results of the multiple linear regression models, which made it possible to determine the relationships between the variables and their predictive power, in addition to the standardized coefficients. The first multiple linear regression model showed a predictive relationship between the variables that can be written, in standardized form, as

$$\hat{Y}_{\text{masturbation}} = \beta_1 X_{\text{excitation}} + \beta_2 X_{\text{adventure}} + \beta_3 X_{\text{cybersex}},$$

with $X_1$ excitation (β = .35, p < .001), $X_2$ adventure (β = .25, p < .001), and $X_3$ cybersex (β = .21, p < .001). The coefficient of determination was .55 and the mean square error was 8.99.

DISCUSSION AND CONCLUSION

Before discussing the findings, some limitations of the present study must be pointed out. The results should be considered an approximation to the reality of internet users during preventive isolation; the condition of preventive isolation would perhaps require a longitudinal design to quantify the use of online sexual material and make the pertinent comparisons. Another limitation is that, although sexual risk derives from the consumption of online sexual material, online sexual activities are only an approximation of sexual risk and do not measure actual behavior. It is therefore necessary for future research to scrutinize the sexual behavior facilitated by the use of online sexual material and activities. Finally, future research should use systematic random sampling to reduce selection bias. Applying an online survey may not guarantee control of participants; however, on issues of sexuality, where sensitive information is handled, online surveys are particularly recommended so that participants can answer without any risk of being identified after the study (Grove et al., 2013).

In this study, most of the participants were women, most were single, and the average age was 27. Those with more than 20 days of preventive social isolation represented the largest group, with 22.6% of the sample. It is important to highlight that 41% of the participants were alone when interacting with online sexual material and activities. This can be explained by the greater privacy users found for their activities and by the fact that, owing to the general recommendations of preventive social isolation due to COVID-19, people had more time to spend on online activities. The suspension of non-essential in-person activities, including school activities, may also explain why most of the participants in this study were young people (WHO, 2019).
It is worth repeating that most participants had more than 20 days of social confinement, a condition associated with high internet consumption and use of online sexual material and activities, and that young people were the main participants in this study. A similar result, for the 15-49 age group, was found in organizational studies that examined internet habits combined with the use of online sexual material and activities (AMIPCI, 2016; OEI, 2016). Privacy plays an important role in the use of online sexual material and activities; greater consumption is found in intimate settings, as was also reported by Valdez Montero (2011).

With respect to the objective of the present study, which was to quantify the impact on the use and type of online sexual material and to determine the predictors of online sexual activities in people in preventive social isolation due to COVID-19, at 14 days the type of sexual material was mainly internet pages, mobile chat applications, and social networks. This has a simple explanation: internet pages and social networks enjoy a particular advantage in the dissemination of online sexual material, since these platforms provide simple, and in some cases anonymous, access to information, which generates a sense of privacy in users; they are also very popular, especially among the young population. The opposite is observed for the less frequent types of sexual material, such as video calls, video portals, online sex forums, and email, where the identity of users is mostly exposed, which would explain their less frequent use. This is consistent with Uzieblo and Prescott (2020), who found growth in the use of websites to access online sexual material during social isolation due to COVID-19. A similar pattern was found in users with 15 days or more of preventive social isolation, with even higher consumption of the same types of online sexual material (internet pages, mobile chat applications, and social networks) and the same least frequent types (video calls, video portals, online sex forums, and email). This result can be explained by the fact that on internet pages and in social networks such as WhatsApp it is relatively easy to find sexually explicit audiovisual material shared among users, so the potential reach of online sexual material is very high: with just a few clicks this information reaches anyone connected to one of these platforms (Koletić et al., 2019). This is even more pronounced under preventive social isolation, when people spend more hours navigating social networks (AMIPCI, 2016). On the social network Facebook, for example, it is common to find videos of explicit sex, even in combination with alcohol consumption; one can also find groups in which users are invited to receive pornographic material through private messages or links. These practices pose a risk because the viewer may take what he or she sees in videos or images as a model of sexual behavior to be adopted in personal life, and could therefore engage in risky sexual behavior, as has been shown in other studies (Bontempi, Mugno, Bulmer, Danvers, & Vancour, 2009; Shallo & Mengesha, 2019).
Finally, the predictors of online sexual activities with the greatest statistical weight, represented by standardized beta values equal to or greater than β = .33, were, in order of predictive ability, cybersex, excitation, masturbation, and adventure, all with a statistical significance of p < .001. This can be explained by the fact that some of these variables correspond to the main sexual activities and types of sexual material that participants engaged with online during the COVID-19 health contingency; these predictors can be understood as high sexual risk factors that users may enact in real life, consistent with other studies (Fernández Velasco, 2018; Ibarra et al., 2020; Subía-Arellano et al., 2020). These studies agree that the use of online sexual material and online sexual activities are a projection of, and encourage the replication of, observed behaviors as a means of learning. For future research, it will be important to take up these predictive variables to explain sexual risk behavior in cybernauts.

The findings of this study bear on the dynamics of an affected society in an atypical context, so further research will be needed on the effects of the pandemic on sexual behavior, including psychological variables such as stress, the post-traumatic effect of the pandemic, and its impact on the sexual behavior of individuals. An important aspect of preventing sexual risk behavior during preventive social isolation is ultimately to reduce the number of hours spent surfing the internet; engaging in activities not related to technology or internet use will be a protective factor in reducing the effects of the pandemic on sexual behavior. The COVID-19 pandemic has modified the dynamics of the lives of millions of people worldwide, and as a result internet use is now part of everyday life. The use of online sexual material is particularly common among young people: through internet pages and social networks such as Facebook, WhatsApp, and Periscope, among other platforms, it is feasible to access explicit sexual content, and this could pose a risk by modeling the sexual behavior of viewers. Online sexual practices such as masturbation, excitation, and stimulation were frequent among Mexicans isolated by the pandemic, and the variables that predicted these practices were cybersex, excitation, masturbation, and adventure. It is important to be alert to the effects of the pandemic on sexual risk behaviors. The magnitude of the pandemic's effect on the sexual behavior of Mexicans is still unknown; the results presented are generalizable to populations with characteristics similar to those of the present study, and further research on the effects of the pandemic on the sexual health of Mexicans is required.
Circulation of pantropic canine coronavirus in autochthonous and imported dogs, Italy

Abstract. Canine coronavirus (CCoV) strains with the ability to spread to internal organs, also known as pantropic CCoVs (pCCoVs), have been detected in domestic dogs and wild carnivores. Our study focused on the detection and molecular characterization of pCCoV strains circulating in Italy during the period 2014-2017 in autochthonous dogs, in dogs imported from eastern Europe, and in dogs illegally imported from an unknown country. Samples from the gut and internal organs of 352 dogs were screened for CCoV; putative pCCoV strains, belonging to subtype CCoV-IIa, were identified in the internal organs of 35 of the examined dogs. Fifteen pCCoV strains were subjected to sequence and phylogenetic analyses, showing that three strains (98960-1/2016, 98960-3/2016, 98960-4/2016) did not cluster either with Italian or with European CCoVs, being more closely related to alphacoronaviruses circulating in Asia, with which they displayed 94%-96% nucleotide identity in partial spike protein gene sequences. The pCCoV-positive samples were also tested for other canine viruses, showing co-infections mainly with canine parvovirus.

A CCoV variant able to spread to extraintestinal tissues has been associated with a fatal disease of dogs, characterized by leukopenia, gastroenteritis, and severe lesions in the major organs (Chen et al., 2019; Pinto et al., 2014), and subsequent studies have proved its impact on the canine immune response (Marinaro et al., 2010). Taking into account the scarce information on the actual circulation of pCCoV in the dog population, the aims of our study were: (a) to conduct an epidemiological survey for this virus in autochthonous and imported dogs in Italy during 2014-2017; (b) to investigate the genetic relatedness of the detected pCCoV strains to extant coronaviruses; and (c) to evaluate the presence of co-infections in pCCoV-positive and -negative samples.

Samples collection

During the period 2014-2017, 2,112 necropsy samples collected from different tissues (brain, heart, intestine, liver, spleen, lungs, kidney) of 352 dogs were submitted to molecular analysis to investigate possible viral causes of disease. The sampled animals included 141 client-owned, 151 stray, and 60 imported dogs. Two hundred ninety-two of these dogs were from Italy, a further 56 animals had been imported from Hungary, and 4 dogs had been illegally imported from an unknown country. None of these dogs had undergone euthanasia; their deaths were caused by illness or accident, but the clinical signs occurring intra vitam were not reported. At post-mortem examination, the analysed dogs showed catarrhal or hemorrhagic enteritis (n = 137), enlargement of the mesenteric lymph nodes (n = 79), pneumonia or other pulmonary lesions (n = 70), and meningeal and/or encephalic hyperaemia (n = 49). For 55 animals, post-mortem findings were not reported or did not fit those observed in pCCoV-infected dogs.

Nucleic acid extraction

Samples collected for molecular investigations were homogenized with phosphate-buffered saline (PBS); RNA/DNA extraction was subsequently performed using the automated extractor QIAsymphony (Qiagen) and the QIAsymphony DSP Virus/Pathogen Kit (Qiagen), following the manufacturer's instructions.
CCoV genotyping and subtyping

The detected CCoV strains were characterized by means of two distinct real-time RT-PCR assays, specific for the genotypes CCoV-I and CCoV-II, targeting a fragment of the S gene (Decaro et al., 2013). Samples that tested positive for CCoV-II were subjected to subtype-specific CCoV-IIa and CCoV-IIb gel-based RT-PCR assays targeting the S gene (Table 1) (Decaro et al., 2013). The PCR products were detected using the TapeStation 2200 (Agilent Technologies) according to the manufacturer's protocol.

Molecular characterization of pCCoV and CPV

Lung samples from the pCCoV-infected dogs were used for the molecular characterization of pCCoV. The spike protein gene (ORF2) of the putative pCCoV strains was sequenced and analysed using the protocol reported by Alfano et al. (2019). The sequences were analysed using the BioEdit software package and the NCBI and EMBL analysis tools. Samples that tested positive for CPV were further characterized by type-specific minor groove binder probe assays (Decaro et al., 2007) and sequence analysis of the partial VP2 gene (Buonavoglia et al., 2001).

Sequence analysis and phylogeny

CCoV sequences were manually edited and analysed as described above. For the construction of phylogenetic trees, a multiple alignment of all target sequences was performed using MAFFT Multiple Sequence Alignment software version 7 (Katoh & Standley, 2013) and Geneious software, with trees built by the neighbour-joining method with the p-distance model, 1,000 bootstrap replicates, and otherwise the default parameters in Geneious (version 10.1.3).

Nucleotide sequence accession number

The nucleotide sequences of the analysed pCCoV strains were deposited in GenBank under accession numbers MN086803-MN086817 (see the Data Availability Statement).

Data analysis

The comparison between pCCoV-positive and pCCoV-negative dogs was carried out with a chi-squared test, with values of p < .05 considered statistically significant, using the IBM SPSS Statistics 25 software. The 95% confidence interval (CI) was calculated from the prevalence estimate using Excel.
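For illustration, the comparison and confidence-interval calculations described above (performed by the authors in SPSS and Excel) could be reproduced as in the following sketch; the 2x2 co-infection counts are hypothetical, while 35/352 is the pCCoV detection figure reported in this study.

```python
# Sketch of the statistics described above: chi-squared comparison between
# pCCoV-positive and pCCoV-negative dogs, and a 95% CI for prevalence.
# (The authors used SPSS and Excel; the 2x2 counts below are hypothetical,
# while 35/352 is the pCCoV detection figure of this study.)
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = pCCoV status (+/-), cols = CPV status (+/-)
table = np.array([[20, 15],
                  [60, 257]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")

# Wald 95% confidence interval for the pCCoV prevalence estimate
positive, n = 35, 352
p_hat = positive / n
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
print(f"prevalence = {p_hat:.3f}, 95% CI = "
      f"({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")
```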
RESULTS

The results of the screening are summarized in Table 2.

Molecular characterization of putative pCCoV strains

Fifteen pCCoV strains were sequenced: 6 from dogs of Italy and 9 from animals imported from other countries (6 from Hungary and 3 from an unknown country). The sequenced pCCoV strains presented neither the deletion in the genes of the accessory 3abc proteins nor the D125N mutation that had been suggested as potential markers for the pantropic behaviour (Decaro et al., 2012, 2013). The phylogenetic tree, generated from partial ORF2 gene sequences with the neighbour-joining method, is shown in Figure 1 (table abbreviations: J, juvenile, 6- to 12-month-old; UKN, unknown; Y, young, 0- to 6-month-old).

DISCUSSION

To date, pCCoV has been detected only sporadically in Italy and other countries (Chen et al., 2019; Decaro et al., 2012, 2013; Ntafis et al., 2012; Pinto et al., 2014; Zicola et al., 2012). (Figure 1: Phylogenetic tree generated with the neighbour-joining method from partial spike protein gene (ORF2) sequences of the putative pantropic canine coronavirus strains and reference carnivore alphacoronaviruses.) In fact, previous studies have demonstrated that this virus is frequently associated with subclinical infections and impairment of lymphocyte counts, rather than with severe clinical signs and death of the infected dogs (Marinaro et al., 2010). Most of the pCCoV-positive animals also displayed post-mortem findings indicating systemic involvement, but the extent to which those lesions were induced by pCCoV or by other co-pathogens infecting the same dogs, including the highly pathogenic CPV, CAdV-1, and CDV, could not be assessed. (Table 3: Prevalence of viral co-pathogens in dogs with and without pCCoV infection.) Remarkably, most of the pCCoV-infected dogs had been recently imported from Hungary, which may indicate a wider circulation of this virus in eastern Europe. This finding contrasts with those of a previous study assessing pCCoV circulation in Europe, which reported similar prevalences in Italy and Hungary, with detection rates of 8.69% and 9.33%, respectively (Decaro et al., 2013). Currently, no test is available to differentiate pantropic from enteric CCoV strains, since no specific genetic markers have been identified in pCCoV so far. As a consequence, only the detection of a CCoV-IIa strain in extraintestinal tissues points to a possible pCCoV infection in dogs. This situation is similar to that observed for calicivirus infections in cats, where markers of pathogenicity have not yet been detected in highly virulent strains, so that systemic calicivirosis is diagnosed when the virus is detected in extrarespiratory tissues (Caringella et al., 2019). Unlike coronavirus infections in dogs, in cats potential genetic signatures were recently detected that are able to discriminate between feline infectious peritonitis and feline enteric coronavirus strains (Felten et al., 2017). In accordance with previous observations (Decaro et al., 2013), the three divergent strains were most closely related to CCoVs circulating in Asia (Wang, Ma, Lu, & Wen, 2006). These strains were from dogs illegally imported from an unknown country, which highlights the role of the illegal dog trade in the introduction of pathogens into Italy (Decaro et al., 2007; Mira et al., 2018). An additional finding of the present study is the high frequency of co-infections of pCCoV with other viruses. Enteric CCoV infections in dogs are very frequent (Pratelli et al., 2003; Priestnall, Pratelli, Brownlie, & Erles, 2007). Co-infection by CPV and CCoV in dogs is known to enhance the severity of clinical signs (Evermann, Abbott, & Han, 2005; Pratelli, 2006), with fatal outcomes being frequently reported in pups. A high frequency of co-infections of pCCoV with other pathogens has been reported previously (Alfano et al., 2019; Decaro et al., 2013; Ntafis et al., 2012; Pinto et al., 2014; Zicola et al., 2012). In the present study, we found a significant association of pCCoV with CPV and CAdV-2 infections. Pantropic CCoV is able to affect lymphocyte counts, causing prolonged lymphopenia, and it has therefore been postulated that pCCoV infection may predispose to an increase in the virulence of other pathogens by inducing immunosuppression in infected dogs (Marinaro et al., 2010). The large population of unvaccinated free-ranging dogs present in Italy (Corrain et al., 2007; Verardi, Lucchini, & Randi, 2006) considerably increases the density of susceptible hosts and may thus strongly affect the spread and maintenance of canine pathogens in the environment. Therefore, it is strongly recommended to vaccinate not only privately owned dogs but also stray dogs whenever possible.
In addition, the epidemiological risk related to the legal and illegal trade of carnivores from Asian countries must be taken into account, since this trade may represent a source of several emerging pathogens in domestic and wild canids (Mira et al., 2018, 2019).

CONCLUSION

The present study demonstrates an increasing circulation of pCCoV in Italy, which reinforces the need for intensive and continuous surveillance of the importation and illegal trade of animals, and the need for increased controls on both autochthonous and imported dogs.

ACKNOWLEDGEMENTS

We thank Dr Gianvito Lanave for his help in depositing the sequences in the GenBank database, and Dr Lorena Cardillo, Dr Lucia Vangone, and Dr Antonella De Angelis for their assistance with part of the experimental work. We are also grateful to Dr Loredana Baldi for her excellent support in the epidemiological and statistical investigation.

ETHICAL APPROVAL

An ethical statement is not applicable, since samples were collected from dead animals submitted for diagnostic investigation at the request of the owners or public authorities.

CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are openly available in the GenBank database at https://www.ncbi.nlm.nih.gov/nucleotide/ under accession numbers MN086803-MN086817.
Generation of entanglement in a quantum dot molecule in the presence of phonon effects in a voltage-controlled junction

We investigate the generation of entanglement in a quantum dot molecule under the influence of vibrational phonon modes in a bias-voltage junction. The molecular quantum dot system is realized by coupled quantum dots inside a suspended carbon nanotube. We consider the dynamical entanglement as a function of bias voltage and temperature, taking into account the electron-phonon interaction. In order to generate robust entanglement between the quantum dots and preserve it so that the maximal achievable amount is reached steadily, we introduce an asymmetric coupling protocol and apply an easily tunable bias-voltage driving field. For an oscillating bias voltage, the time-varying entanglement periodically reaches maximal revival. In the thermal entanglement dynamics, the phenomena of thermal entanglement degradation and thermal entanglement revival are observed, both strongly affected by the strength of the phonon decoherence; the revival of entanglement is larger for higher phonon coupling.

Introduction

Quantum entanglement is a crucial resource in quantum information processing, exploiting non-local correlations without any classical analog [1,2,3,4]. Generation and control of entanglement have attracted a great deal of attention in different fields of physics, such as atomic structures [5,6], photonic systems [7,8], and semiconductor quantum dots [9,10,11]. Quantum dots (QDs), as artificial atoms, play a prominent role in quantum information studies, since their discrete energy levels can easily be tuned by applying gate voltages [12,13]. Entanglement generation of an electron was studied using a single-level quantum dot connected to one input and two output leads [14]; in that system, the formation of electron entanglement was obtained in analogy with the entanglement of photons produced by a parametric amplifier. Quantum dot molecules (QDMs), formed by coupled QDs, are also considered promising candidates for investigating and generating quantum entanglement [15,16,17]. The process of entanglement generation in double quantum dots (DQDs) has been studied under the influence of the environment and controlled by an external driving field [9]. In particular, the dynamical formation of entangled states in DQDs has been explored through an external potential difference, demonstrating how such a system can be controlled by an applied electric field [10]. For realizing DQD systems, carbon nanotubes (CNTs) provide a promising setting [18,19]: a carbon nanotube bridges one reservoir to another and allows electrons to be confined by applying gate voltages, a procedure that produces a DQD. A hybrid quantum system including a carbon nanotube double quantum dot and two nitrogen-vacancy (NV) centers has been proposed for the generation of entanglement [20]; to obtain steady-state entanglement, the carbon nanotube double quantum dot is treated as the environment of the NV centers. Furthermore, a suspended CNT, which can oscillate freely, can be employed for constructing quantum dots [21,22]. In this case, the suspension of the CNT gives rise to vibrational phonons; these phonons play the role of the environment and have a profound effect on the QDs [23]. Real quantum systems inevitably suffer from decoherence due to contact with their surrounding environments.
Such interactions tend to destroy quantum correlations; for quantum entanglement, loss of coherence is a major obstacle leading to entanglement degradation [24]. Applying control techniques allows a system to suppress decoherence [25,26]. On the other hand, there are situations in which the interaction between a quantum system and its environment can produce quantum correlations, such as entanglement revival [27,28]; in other words, environments may operate as control elements in these cases [29]. The effect of the environment plays a significant role in the generation and maximization of entanglement. For example, it has been shown that entanglement can be created between two qubits interacting with a common heat bath even without any direct interaction between them [30]. Among the various approaches to systems with an electronic environment, the electron-phonon coupling can be described through the quantum master equation in both the Markovian [31,9,10] and non-Markovian [32,33] regimes. Moreover, studies of entanglement dynamics in noisy environments show that the coupling of a qubit to a non-Markovian environment can induce damping that results in entanglement revival [29].

In the present contribution, we propose a molecular quantum dot system inside a suspended carbon nanotube, with oscillatory phonon modes, coupled to two metallic contacts. We study this structure to investigate the generation of steady entanglement and its preservation through a voltage-biased junction. The dynamics of entanglement in the proposed QDM, treated as an open quantum system, is studied via a Markovian master equation under the effect of phonon decoherence. To engineer the influence of the electron-phonon interaction on the entanglement evolution, we employ a strategy of asymmetric coupling for the coupled quantum dots. In addition, to control this correlation, the external bias voltage is applied as an easily tunable knob, both as a constant field and as a periodic time-varying field. We first calculate the density matrix of our QDM system and then obtain the dynamical entanglement between the quantum dots including the phonon influence. To characterize the entanglement of the QDs, we calculate the concurrence and explore its dependence on several physical parameters. In particular, we study the dependence of the concurrence on the external bias voltage and analyze the influence of temperature. Through the time evolution of the concurrence under an applied oscillating bias voltage, entanglement revival appears periodically, and the behavior of the concurrence in response to varying temperature gives rise to the phenomena of thermal entanglement degradation and revival.

The outline of this paper is as follows. In Sec. 2, we introduce the model describing the quantum dot molecule with phonon modes in a bias-voltage junction and describe the relevant polaron transformation. In Sec. 3, we derive the Markovian master equation and obtain the concurrence using the definition of the asymmetric factor for the present QDM setup. In Sec. 4, we present and discuss the results for the dynamics of the concurrence against bias voltage and temperature changes. Finally, we briefly conclude in Sec. 5.

Model

The physical system under study is a suspended carbon nanotube double quantum dot with vibrational phonon modes, treated as an open quantum system in contact with voltage-biased reservoirs, as shown schematically in Fig. (1).
In actual experiments, the energy states of a CNT of finite length are quantized; a finite-length CNT therefore has discrete energy levels like a quantum dot [34,35]. Here, a finite-length CNT is considered and connected to two metallic electrodes L and R. To create a double quantum dot (DQD) within the carbon nanotube, a local gate is applied in the middle of the CNT to build a center barrier. This tunnel barrier provides an inter-dot coupling $t_{AB}$ that separates the two quantum dots QD$_A$ and QD$_B$ (shown in Fig. (1)). In addition, the energy level of each quantum dot can be controlled by the individual gates $G_A$ and $G_B$. To have vibrational phonon modes in the QDs, a suspended CNT is required: the center barrier of the CNT is kept fixed while its two lateral sides are allowed to oscillate freely by means of an etching technique [36]. In this case, the suspended lateral parts of the CNT, located between the central tunnel barrier and the metallic contacts, form quantum dots with vibrational phonons [21,22]. These characteristics of the quantum dot molecule are captured by the Hamiltonian $H_{QDM}$. In Fig. (1), the normal-metal reservoirs L and R are held at chemical potentials $\mu_L$ and $\mu_R$ ($\mu_L > \mu_R$), providing the bias voltage $V = \mu_L - \mu_R$. The total Hamiltonian for the proposed system is given by

$$H = H_{QDM} + H_{res} + H_{int}. \qquad (1)$$

To define the Hamiltonian of the quantum dot molecule, $H_{QDM}$, we use the Anderson-Holstein model [37,38]:

$$H_{QDM} = \sum_{\alpha=A,B} \varepsilon_\alpha \hat{n}_\alpha + \left(t_{AB}\, d^\dagger_A d_B + \mathrm{H.c.}\right) + \sum_{\alpha=A,B} \hbar\omega_{ph}\, b^\dagger_\alpha b_\alpha + \sum_{\alpha=A,B} t_{ph}\, \hat{n}_\alpha \left(b^\dagger_\alpha + b_\alpha\right). \qquad (2)$$

Figure 1: Carbon nanotube double quantum dot molecule. A QDM system consisting of quantum dots A and B located in a suspended CNT. Each QD is coupled to local phonon modes of frequency $\omega_{ph}$ and is connected to the reservoirs L and R, with tunneling amplitudes $t_{AL}$, $t_{AR}$, $t_{BR}$, $t_{BL}$ between QD $\alpha$ and reservoir $\nu$. The two quantum dots are coupled together with the inter-dot tunneling amplitude $t_{AB}$. The energy levels of the QDs, $\varepsilon_A$ and $\varepsilon_B$, are tuned by gate voltages $G_A$ and $G_B$, respectively. The reservoirs are held at the potential difference $V$.

Here, $H_{QDM}$ consists of the Hamiltonian of the quantum dots, $H_{dots} = \sum_\alpha \varepsilon_\alpha \hat{n}_\alpha + (t_{AB}\, d^\dagger_A d_B + \mathrm{H.c.})$, the Hamiltonian of the phonons, $H_{ph} = \sum_{\alpha=A,B} \hbar\omega_{ph}\, b^\dagger_\alpha b_\alpha$, and the electron-phonon interaction Hamiltonian $H_{el-ph} = \sum_{\alpha=A,B} t_{ph}\, \hat{n}_\alpha (b^\dagger_\alpha + b_\alpha)$. In Eq. (2), $\varepsilon_\alpha$ denotes the electronic energy level of quantum dot $\alpha$, $\omega_{ph}$ is the local phonon frequency, $t_{ph}$ is the strength of the electron-phonon coupling, and $t_{AB}$ is the inter-dot hopping amplitude, which with no loss of generality is taken to be real. Moreover, $d^\dagger_\alpha$ ($b^\dagger_\alpha$) is the electron (phonon) creation operator, and $\hat{n}_\alpha = d^\dagger_\alpha d_\alpha$ is the occupation operator for quantum dot $\alpha = A, B$. Note that the fermionic operators $d_\alpha$ obey fermionic anti-commutation relations, whereas $b_\alpha$ is bosonic in nature. The Hamiltonian of the reservoirs can be written as

$$H_{res} = \sum_{k,\nu} \epsilon_{k\nu}\, c^\dagger_{k\nu} c_{k\nu}, \qquad (3)$$

which describes non-interacting electrons, where $\epsilon_{k\nu}$ denotes the energy levels of reservoir $\nu = L, R$ and $c^\dagger_{k\nu}$ creates an electron with momentum $k$ in lead $\nu$. The reservoirs are assumed to be completely spin-polarized, so that the spin of the electrons need not be distinguished. The interaction Hamiltonian in Eq. (1) corresponds to tunneling between the QDs and the electrodes and is described by

$$H_{int} = \sum_{k,\alpha,\nu} \left(t_{\alpha\nu}\, d^\dagger_\alpha c_{k\nu} + \mathrm{H.c.}\right), \qquad (4)$$

where $t_{\alpha\nu}$ denotes the tunneling amplitude between QD $\alpha$ and reservoir $\nu$, assumed energy- and momentum-independent. The tunneling rate to reservoir $\nu$ is characterized by $\Gamma_{\alpha\nu} = 2\pi N^0_\nu |t_{\alpha\nu}|^2$.
This rate can be defined in the wide-band limit (WBL) [39], in which both the density of states of lead $\nu$, $N^0_\nu$, and $t_{\alpha\nu}$ are assumed constant, without energy dependence. This assumption makes the tunneling rates energy-independent. We take the total tunnel-coupling strength of our QDM system to be $\Gamma = \sum_{\alpha,\nu} \Gamma_{\alpha\nu}$. Moreover, for weak tunneling to the electronic reservoirs, the tunneling rates are assumed to be the smallest energy scale in the system, $\Gamma_{\alpha\nu} \ll k_B T$. This condition restricts the description to the lowest order in the tunnel coupling and also allows the reservoirs to stay in thermal equilibrium.

In order to proceed, $H_{QDM}$ in Eq. (2) can be diagonalized by applying the polaron (Lang-Firsov) transformation [38,40],

$$\tilde{H} = e^{S} H e^{-S}, \qquad (5)$$

$$S = g_{ph} \sum_{\alpha=A,B} \hat{n}_\alpha \left(b^\dagger_\alpha - b_\alpha\right), \qquad (6)$$

where $g_{ph} = t_{ph}/\omega_{ph}$ denotes the dimensionless coupling to the local phonon modes of energy $\hbar\omega_{ph}$. Since phonons with a small energy scale are considered acoustic [41], the phonons in our QDM system, with energies in the range of a few meV, are treated as acoustic modes. It has also been reported that in semiconducting systems the longitudinal modes dominate among the acoustic phonons [42]; therefore, in the present suspended CNT-DQD system, we deal with phonons of the longitudinal acoustic type. The applied transformation eliminates the electron-phonon coupling term in $H_{QDM}$ (Eq. (2)), and the transformed QDM Hamiltonian is obtained as

$$\tilde{H}_{QDM} = \sum_{\alpha} \tilde{\varepsilon}_\alpha \hat{n}_\alpha + \sum_{\alpha} \hbar\omega_{ph}\, b^\dagger_\alpha b_\alpha + \left(t_{AB}\, X^\dagger_A X_B\, d^\dagger_A d_B + \mathrm{H.c.}\right), \qquad (7)$$

where H.c. denotes the Hermitian conjugate, $X_\alpha = e^{g_{ph}(b_\alpha - b^\dagger_\alpha)}$ is the polaron operator, and $\tilde{\varepsilon}_\alpha = \varepsilon_\alpha - g^2_{ph}\hbar\omega_{ph}$ are the renormalized QD energy levels. As a consequence of the transformation introduced in Eq. (6), the Hamiltonian of the uncoupled reservoirs remains unchanged, while the tunneling Hamiltonian transforms as

$$\tilde{H}_{int} = \sum_{k,\alpha,\nu} \left(t_{\alpha\nu}\, X^\dagger_\alpha\, d^\dagger_\alpha c_{k\nu} + \mathrm{H.c.}\right).$$

To study the dynamics governed by the transformed Hamiltonian, in the next section we first derive the quantum master equation (QME) and then investigate the dynamics of the concurrence, as a measure of entanglement, using the definition of the asymmetric factor.

Dynamics

To generate steady entanglement in the present molecular quantum dot system and investigate the dynamics of this quantum correlation, we start from the Liouville-von Neumann equation of the whole system in the interaction picture [43]. The complete system consists of the central double quantum dot, the electronic reservoirs, and the oscillating phonon baths, with Hilbert spaces $\mathcal{H}_{dots}$, $\mathcal{H}_{res}$, and $\mathcal{H}_{ph}$, respectively; consequently, the total Hilbert space of the whole system is $\mathcal{H}_{tot} = \mathcal{H}_{dots} \otimes \mathcal{H}_{res} \otimes \mathcal{H}_{ph}$. For suspended carbon nanotubes, the electron-phonon interaction is reported to be strong [44]; we therefore assume a sufficiently high phonon frequency that the electron-phonon coupling is strong. Under the assumption that the quantum dots are weakly coupled to the reservoirs, we can apply the Born-Markov approximation, so that the density matrix of the total system is approximately $\rho_{tot}(t) \approx \rho_{dots}(t) \otimes \rho_{res}(t) \otimes \rho_{ph}(t)$ for baths in thermal equilibrium. Tracing out the bath degrees of freedom of both the electronic reservoirs and the phonon baths yields the quantum master equation of the central QDM system,

$$\dot{\rho}_{dots}(t) = -\frac{i}{\hbar}\left[\tilde{H}_{QDM},\, \rho_{dots}(t)\right] + \mathcal{D}\left[\rho_{dots}(t)\right]. \qquad (8)$$

The first term corresponds to the coherent evolution of the system, while the second arises from the different sources of dissipation.
We neglect the first term under the condition $t_{AB} \le t_{\alpha\nu} \le t_{ph}$, which means that the electron-phonon coupling is larger than the coupling of the dots to the reservoirs, so the coherent dynamics of the dots is negligible in the presence of the phonon interaction. Substituting the transformed Hamiltonians $\tilde{H}_{QDM}$ and $\tilde{H}_{int}$ into Eq. (8), the reduced density matrix of the QDM subsystem obeys Eq. (9), in which the tunneling rates $M^\pm_{\alpha\nu}$ are given by Eqs. (10) and (11). In those expressions, the correlation function of the reservoirs is defined as $\langle c_\nu c^\dagger_\nu \rangle = \mathrm{Tr}_{res}[\rho_{res}\, c_\nu c^\dagger_\nu]$, which yields the Fermi distribution function of lead $\nu$, $f_\nu(\omega + \epsilon_\nu) = [e^{\beta_e(\omega + \epsilon_\nu)} + 1]^{-1}$. The electron temperature is taken to be the same in both reservoirs and is denoted $T_e$, with $\beta_e = 1/(k_B T_e)$. The phonon-assisted correlation functions are denoted $G^\pm_\alpha(\omega)$. The negative correlation function is given by $G^-_\alpha(t) = \mathrm{Tr}_{ph}[\rho_{ph}\, X_\alpha(t) X^\dagger_\alpha]$ [38], and its Fourier transform, $G^-_\alpha(\omega) = \int_0^\infty dt\, e^{i\omega t}\, G^-_\alpha(t)$, can be calculated in closed form (Eq. (12)) in terms of the Bose-Einstein distribution of the phonons, $N_{ph} = [e^{\beta_{ph}\hbar\omega_{ph}} - 1]^{-1}$ with $\beta_{ph} = 1/(k_B T_{ph})$, and the modified Bessel functions of the first kind, $I_l(z)$, of order $l$. Since the phonons inside the CNT are close to the reservoirs, the phonon temperature is assumed equal to the electron temperature of the electrodes, $T_{ph} = T_e = T$. The positive and negative correlation functions are related by $G^+_\alpha(\omega) = G^-_\alpha(-\omega)$. With the phonon-assisted correlation functions of the system at hand, we can proceed to calculate the concurrence as an entanglement measure for our QDM system.

Concurrence

To quantify the entanglement between two qubits, Wootters proposed the concurrence measure for both pure and mixed states [45,46]. In condensed matter systems, the entanglement of fermions is evaluated by the fermionic concurrence [47,48,49,50,51], in analogy with Wootters' formula. Therefore, the entanglement between quantum dots containing electrons, as indistinguishable fermions, can be calculated in the formalism of fermionic concurrence, as discussed in our previous paper [51]. The concurrence of quantum dot qubits has attracted considerable attention [52], and this measure has been determined in particular for double quantum dot systems [9,10,51]. Here, for our carbon nanotube quantum dot molecule, we define the concurrence as

$$C(\rho) = \max\!\left\{0,\ \sqrt{\lambda_1} - \sqrt{\lambda_2} - \sqrt{\lambda_3} - \sqrt{\lambda_4}\right\},$$

where $\lambda_i$ ($i = 1, 2, 3, 4$) are the non-negative eigenvalues, in decreasing order, of the matrix $R = \rho\tilde{\rho}$, with $\rho$ the density matrix of the system and $\tilde{\rho} = (\sigma_y \otimes \sigma_y)\, \rho^*\, (\sigma_y \otimes \sigma_y)$. Here $\rho^*$ is the complex conjugate of the density matrix and $\sigma_y$ is the $y$ Pauli matrix. The concurrence ranges from zero for separable states to one for maximally entangled states. The basis states of the present QDM system, with quantum dots A and B (Fig. (1)), can be chosen as $|\psi_{AB}\rangle = |\Phi_A\rangle \otimes |\Phi_B\rangle$, where $|\Phi_A\rangle$ and $|\Phi_B\rangle$ denote the states of quantum dots A and B, respectively. We suppose that each dot, having a single energy level, admits an unoccupied state $|0_\alpha\rangle$ and an occupied state $|1_\alpha\rangle$, with energies $0$ and $\varepsilon_\alpha$ ($\alpha = A, B$), respectively; in other words, double occupancy is forbidden. The total occupation basis can therefore be written as $\{|0_A\rangle, |1_A\rangle\} \otimes \{|0_B\rangle, |1_B\rangle\}$.
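As a concrete illustration of this definition, the following is a minimal numerical sketch of the Wootters concurrence for an arbitrary 4 x 4 two-qubit density matrix; it is an independent check of the formula above, not the authors' code.

```python
# Minimal sketch: Wootters concurrence of a two-qubit density matrix,
# C(rho) = max{0, sqrt(l1) - sqrt(l2) - sqrt(l3) - sqrt(l4)},
# where l1 >= ... >= l4 are the eigenvalues of R = rho * rho_tilde.
import numpy as np

def concurrence(rho: np.ndarray) -> float:
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)                     # sigma_y tensor sigma_y
    rho_tilde = yy @ rho.conj() @ yy
    eigs = np.linalg.eigvals(rho @ rho_tilde).real
    roots = np.sqrt(np.clip(sorted(eigs, reverse=True), 0, None))
    return max(0.0, roots[0] - roots[1] - roots[2] - roots[3])

# Sanity checks in the occupation basis {|00>, |01>, |10>, |11>}:
bell = np.zeros((4, 4), dtype=complex)       # |psi> = (|01> + |10>)/sqrt(2)
bell[1, 1] = bell[2, 2] = bell[1, 2] = bell[2, 1] = 0.5
print(concurrence(bell))                     # -> 1.0 (maximally entangled)
print(concurrence(np.eye(4) / 4))            # -> 0.0 (separable mixture)
```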
Here, to obtain the concurrence of the present QDM system, we assume that the QDs are initially unentangled and take the initial state of the two coupled QDs to be an unentangled product state of the dots (a brief numerical check of these limiting cases is sketched below). In the following, we define an asymmetric factor that quantifies the asymmetry of the QD couplings and leads to the maximum achievable entanglement.

Asymmetric Factor

Phonon decoherence in quantum systems can destroy quantum correlations and cause entanglement degradation. One technique for reducing environmental dissipation in a quantum dot system is to couple the quantum dots to the reservoirs asymmetrically [9]. Recently, we showed that using quantum dots with asymmetric coupling coefficients yields robust entanglement for a QDM system in a Josephson junction [51]. Here, to generate entanglement with a remarkable value and preserve it steadily, we define an asymmetric factor in which the QD-metal lead coupling coefficients are unequal. In this case, we assume that each quantum dot is coupled to the left and right reservoirs with different strengths, which can be tuned by applying gate voltages. In this proposed structure, one quantum dot lies in the middle of a quantum dot-reservoir coupling path. To produce such setups, partial capacitors have been used in a parallel double quantum dot system [53]. In coupled quantum dot systems, partial capacitors mean that the tunneling barriers defining the dots are not built with the same heights, so the relevant capacitors are not located exactly in the parallel plane; in this case, the interaction between one dot and a reservoir is assisted by the intermediate dot. This shows that, by arranging partial capacitors, the structure of our proposed system can be fabricated. Therefore, we suppose that the QDs can be connected to both the near and far reservoirs with different non-zero couplings. The tunnel-coupling strength for each reservoir can then be written as $T_\nu = (T_{A\nu} + T_{B\nu})/2$, where $T_{A\nu}$ ($T_{B\nu}$) is the coupling strength of QD A (QD B) with reservoir $\nu$. Our system thus involves the four tunnel-coupling coefficients $T_{AL}$, $T_{BL}$, $T_{AR}$, and $T_{BR}$ shown in Fig. (1). In terms of these coefficients we introduce the asymmetric factor $\kappa$ as in [51]. The asymmetric factor ranges from zero, for symmetric coupling coefficients, to one, for the completely asymmetric situation. The symmetric structure is defined when each QD is coupled to the left and right leads with the same coupling coefficient, $T_{\alpha L} = T_{\alpha R}$; this situation gives the minimum value of the asymmetric factor, $\kappa = 0$. The asymmetric structure refers to different left-right coupling coefficients, $T_{\alpha L} \neq T_{\alpha R}$, with $0 < \kappa \le 1$. A completely asymmetric configuration, with $\kappa \simeq 1$, can be realized for specific physical parameters when each dot's coupling to its near lead is much larger than its coupling to the far lead. In the next section, with the calculated density matrix and the required asymmetry factor, we present results for the behavior of the entanglement dynamics.

Results

In this section, we present the results of our investigation of the dynamical entanglement of the QDM system under variations of the bias voltage and of the temperature, in the Concurrence-Voltage and Concurrence-Temperature parts, respectively. For simplicity, and without loss of generality, we assume the phonon frequency is the same for each QD, $\omega_{ph}$.
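Before turning to the numerical results, the limiting values quoted above (zero for separable states, one for maximally entangled states) can be checked quickly with the concurrence sketch from the previous section; the two test states below are illustrative choices, not the paper's actual initial state:

```python
import numpy as np  # assumes concurrence() from the previous sketch is in scope

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Unentangled product state |1>_A (x) |0>_B: expect C = 0.
psi_prod = np.kron(ket1, ket0)
print(concurrence(np.outer(psi_prod, psi_prod.conj())))  # ~0.0

# Maximally entangled (|1>_A|0>_B + |0>_A|1>_B)/sqrt(2): expect C = 1.
psi_bell = (np.kron(ket1, ket0) + np.kron(ket0, ket1)) / np.sqrt(2)
print(concurrence(np.outer(psi_bell, psi_bell.conj())))  # ~1.0
```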
In all results, energies are given in meV and temperatures in mK.

Concurrence-Voltage

For this QDM system, the effect of the bias voltage on the concurrence originates from the reservoir distribution functions in the dissipation term $\Gamma$. When the reservoirs are driven asymmetrically by an external bias voltage, their chemical potentials shift as $\mu_L = \mu_0 + eV$ and $\mu_R = \mu_0$; the concurrence is therefore affected by the bias voltage indirectly. To explore the behavior of the concurrence as a function of bias voltage, we study the concurrence-voltage (C-V) characteristic for both constant (dc) and time-dependent (ac) bias voltages.

Concurrence-dc Voltage

To observe the dependence of the concurrence on a constant bias voltage, we plot the C-V characteristic for asymmetrically coupled quantum dots with asymmetric factor $\kappa = 0.55$ in Fig. (2). Panel (a) of this figure shows the influence of the electron-phonon coupling strength on the concurrence. At very low bias voltage ($eV/\hbar\omega_{ph} < 1$), the quantum dots are not fully excited, so the entanglement measure is very small. With increasing voltage, the concurrence grows until $eV/\hbar\omega_{ph} = 2$. Referring to the energy levels of quantum dots A and B (taken as $\varepsilon_A/\hbar\omega_{ph} = 2$ and $\varepsilon_B/\hbar\omega_{ph} = 5$, respectively), the concurrence changes in a steplike manner whenever the bias comes into resonance with a dot energy level. All curves in Fig. (2)a reflect these resonances with the dot levels; the main difference between the curves is the phonon coupling. For the solid line, with no phonon coupling, the resonance steps are sharp, while for the dashed and dot-dashed lines, with phonon coupling, the steps become smoother. On further increasing the bias voltage, the concurrence becomes steady, yielding steady-state entanglement at high bias voltage. In panel (b) of Fig. (2), we plot the C-V curve for a fixed phonon coupling $g_{ph} = 0.1$ at several temperatures. Although the phonon coupling is the same for all curves, the concurrence decreases more at higher temperatures: increasing the temperature enhances thermal decoherence, which causes a stronger decline of the concurrence. The two panels of Fig. (2) show that increasing the electron-phonon coupling strength (panel a) and raising the temperature (panel b) degrade the entanglement through phonon and thermal decoherence, respectively. An interesting point is that, despite these two dissipative features, the concurrence is maintained at a significant, steady value; the main reason the system preserves the entanglement is the asymmetric choice of coupling coefficients, with non-zero asymmetric factor.

Concurrence-ac Voltage

To further investigate the parameters that affect the entanglement of the QDM setup, we apply a time-dependent voltage. The time evolution of entanglement under periodically driven voltage is usually studied for structures with multiple subsystems [54,55]. Here, we study the entanglement dynamics of our bipartite system when it is driven periodically by an ac voltage of the harmonic form

$V(t) = V_{dc} + V^0_{ac}\cos(\omega_{ac} t)$,

where $V_{dc}$ is the amplitude of the constant voltage, while $V^0_{ac}$ and $\omega_{ac}$ denote the amplitude and frequency of the oscillating voltage, respectively.
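A minimal numerical sketch of this drive, and of the left lead's shifted chemical potential $\mu_L(t) = \mu_0 + eV(t)$ used above, illustrates how time-averaged quantities such as the averaged concurrence of Fig. (3)b are formed (assumed units $e = \hbar = k_B = 1$; grid sizes and parameter values are illustrative):

```python
import numpy as np

def bias_voltage(t, v_dc, v_ac0, w_ac):
    """Harmonic drive V(t) = V_dc + V_ac^0 * cos(w_ac * t)."""
    return v_dc + v_ac0 * np.cos(w_ac * t)

def fermi(omega, mu, beta):
    """Fermi distribution of a lead at inverse temperature beta."""
    return 1.0 / (np.exp(beta * (omega - mu)) + 1.0)

w_ac = 1.0
t = np.linspace(0.0, 2 * np.pi / w_ac, 1000, endpoint=False)    # one period
mu_L = 0.0 + bias_voltage(t, v_dc=1.0, v_ac0=4.0, w_ac=w_ac)    # V_ac^0/V_dc = 4
f_avg = fermi(0.5, mu_L, beta=10.0).mean()  # period-average at fixed omega
```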
Applying a time-dependent voltage to the system makes several parameters time-dependent, such as the QD energy levels, the electrochemical potentials of the reservoirs, and the tunnel-coupling coefficients. As mentioned before, we work in the WBL regime, so the tunnel-coupling parameters are taken to be constant and energy-independent. In Fig. (3) we show the dynamics of the concurrence for a fixed phonon coupling $g_{ph} = 0.1$ under the harmonic bias voltage, in panels (a) and (b). In panel (a), the time evolution of the concurrence is periodic: the oscillating pattern grows over successive driving cycles, reviving each time with a higher magnitude, and as enough time elapses ($t \to \infty$) it settles into a stationary periodic behavior. The periodic revival of the concurrence depends strongly on the bias voltage amplitude: for higher amplitudes, the shape of the revival in each cycle deviates from the cosine oscillation, but the revived concurrence reaches higher values owing to the larger driving amplitude. This leads to maximal entanglement, with oscillatory behavior, for $V^0_{ac}/V_{dc} = 4$. To characterize the C-V behavior under time-dependent voltage, we show the time-averaged concurrence as a function of bias voltage in Fig. (3)b. The average concurrence increases more quickly, and to higher maximum values, for larger driving amplitudes $V^0_{ac}/V_{dc}$. This behavior originates from the response of the electrons to the periodic bias voltage: as the voltage increases, the electrons oscillate faster, which raises the magnitude of the average entanglement. Furthermore, for larger time-dependent driving fields, the average concurrence reaches a stationary state with a robust value. Overall, Fig. (3) demonstrates that the introduced QDM system can protect the entanglement between the quantum dots under the control of a time-varying bias voltage, despite the presence of phonon decoherence.

Concurrence-Temperature

To gain further insight into the evolution of entanglement in the present system, we evaluate the behavior of the concurrence with respect to temperature. The temperature dependence of the concurrence originates from the temperature dependence of the phonon Bose-Einstein distribution $N_{ph}$ and of the fermionic distribution functions $f_\nu$ of the leads. To observe the behavior of the entanglement as the temperature varies, we plot the concurrence-temperature (C-T) characteristic shown in Fig. (4). At the beginning, owing to the constant driving bias voltage, the unentangled electrons in the initial ground states are excited to higher levels; as the temperature increases, the electrons then find more opportunities to become entangled, and the entanglement grows toward its maximum achievable value. Increasing the temperature further separates the entangled electrons, which appears as a decrease of the concurrence. This decline continues over a finite temperature interval until the concurrence reaches zero; at this point the entanglement disappears completely, meaning that all electrons are unentangled. We call this complete disappearance of entanglement with temperature thermal entanglement degradation (TED).
In the TED parts of Fig. (4), it is evident that for larger phonon coupling strengths the concurrence reaches a lower maximum value and then collapses over a shorter TED interval. This kind of entanglement degradation has been reported as a significant property of quantum dot systems [56]. Raising the temperature further allows the electrons to move faster and consequently find more opportunities to become entangled again, and Fig. (4) shows that the entanglement can reappear after an interval of absence. We name this phenomenon of entanglement rebirth after complete disappearance versus temperature thermal entanglement revival (TER). In the TER parts of Fig. (4), the concurrence revives with a larger magnitude for a stronger phonon coupling coefficient; after the revival, the entanglement rises to a robust value and continues steadily as the temperature increases. In this figure, for the C-T curve without phonon coupling, $g_{ph} = 0$ (solid line), the entanglement cannot revive after its disappearance. This behavior reveals that the key origin of the revival phenomenon in the present system is the decoherence by the surrounding phonons. In semiconductor systems, the electron-phonon interaction is considered dominant over the other relevant correlations [57], and research shows that the phonon interaction is strongly affected by temperature [58]; by increasing the phonon strength, the absorption from the thermal bath is enhanced. Therefore, we believe that in our QDM system the electron-phonon effect plays a crucial role in generating the thermal entanglement revival, and its influence is strong enough that a steady-state TER with a robust value is obtained at higher temperatures. To demonstrate the importance of the phonon effect on the behavior of the entanglement at larger temperatures, we next investigate the temperature dependence of the concurrence in the high-temperature (HT) approximation.

High-Temperature Approximation

In the high-temperature limit ($k_BT \gg \hbar\omega_{ph}$), the phonon distribution function is approximated as $N_{ph} \simeq k_BT/(\hbar\omega_{ph})$. In this case, the modified Bessel function for large argument behaves as $I_l(z) \simeq e^z/\sqrt{2\pi z}$. Applying these expressions to Eqs. (10) and (11) gives the high-temperature form of the tunneling rates, from which we obtain the concurrence in the high-temperature limit (a brief numerical cross-check of these limits is sketched below). We present the result as a comparison between the HT approximation and the exact calculation in Fig. (5). This comparison identifies a characteristic temperature, depending on the phonon frequency: for temperatures at least about 40 times larger than the phonon energy, the concurrence in the HT approximation completely reproduces the exact entanglement result. This analysis confirms that the entanglement in our proposed QDM model is strongly influenced by the phonon effects, which in turn depend on the surrounding temperature. The main advantage of our proposed QDM setup is that steady-state entanglement can be generated in a simple tunable junction; moreover, entanglement with a robust value can easily be preserved by engineering the QD-lead couplings asymmetrically and manipulating the driving bias voltage. In the future, it will be useful to investigate the stability of entanglement and entanglement revival in arrays of quantum dot molecules under temperature-induced dissipation, as a step toward building quantum computers.
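As referenced above, both limits can be checked numerically; a minimal sketch in units $\hbar = k_B = 1$ (SciPy's iv is the modified Bessel function $I_l$; the factor of 40 is the threshold read off Fig. 5):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_l(z)

w_ph = 1.0
for T in (10.0, 40.0, 100.0):
    n_exact = 1.0 / np.expm1(w_ph / T)   # Bose-Einstein N_ph
    n_ht = T / w_ph                      # high-temperature approximation
    print(T, n_exact, n_ht)              # relative error ~ w_ph / (2 T)

z = 50.0  # large-argument asymptotic, fixed order l
print(iv(2, z), np.exp(z) / np.sqrt(2 * np.pi * z))  # agree to leading order
```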
Figure 5: Concurrence-temperature behavior, comparing the exact result with the high-temperature approximation for $eV/\hbar\omega_{ph} = 0.1$ and $\kappa = 0.63$; the two agree above the characteristic temperature $k_BT \ge 40\,\hbar\omega_{ph}$.

Conclusions

We studied the dynamics of entanglement formation in a carbon nanotube quantum dot molecule including phonon interactions in a biased junction. We showed that the steady entanglement generated between the coupled quantum dots can be controlled and preserved at a robust value. This was achieved by implementing an asymmetric coupling strategy and by applying a tunable external bias voltage. In response to an applied time-dependent bias voltage, the time evolution of the entanglement exhibited periodic revivals that could reach the maximum magnitude. We characterized the thermal entanglement degradation and revival phenomena through the temperature evolution; in this case, the strength of the phonon coupling strongly influenced the rebirth of thermal entanglement, such that the revival was stronger for larger phonon coupling.
Pharmacological Treatments and Natural Biocompounds in Weight Management

The obesity pandemic is one of society's most urgent public health concerns. One-third of the global adult population may be obese or overweight by 2025, suggesting a rising demand for medical care and exorbitant healthcare expenditure in the coming years. Generally, the treatment strategy for obese patients is largely patient-centric and may involve dietary, behavioral, pharmacological, and sometimes even surgical interventions. Given that obesity cases are rising in adults and children and that lifestyle modifications have failed to produce the desired results, medical therapy adjunct to lifestyle modifications is vital for better managing obesity. Most existing or past drugs for obesity treatment target satiety or monoamine pathways and induce a feeling of fullness in patients, while drugs such as orlistat are directed against intestinal lipases. However, many medications targeted against neurotransmitters caused adverse events in patients and were therefore withdrawn from the market. Alternatively, combinations of some drugs have been tested successfully in obesity management. Nevertheless, the demand for novel, safer, and more efficacious pharmaceutical medicines for weight management persists. The present review elucidates the current understanding of the available anti-obesity medicines of synthetic and natural origin, their main mechanisms of action, and the shortcomings associated with current weight management drugs.

Introduction

Obesity, a metabolic complication, was initially considered a disease of positive energy balance due to overeating and a sedentary lifestyle. This perception led to the belief that dietary and short-term pharmacological interventions could easily control the obesity pandemic [1]. However, in 1985 the scientific community recognized obesity as a chronic disorder. The first approved class of drugs for obesity was the amphetamines, which were subsequently removed from the market owing to addiction and adverse events associated with their long-term use. This highlighted the need to design and develop safer alternatives for long-term use in obesity management [2]. The treatment of obesity is highly patient-centric, and various pharmacological and surgical approaches may be required depending on the affected individual. Effective strategies for treating obesity involve lifestyle intervention, complementary medicine, and alternative therapy, including drug treatment or bariatric surgery [1]. Thus, acupuncture is a good example of an alternative therapy used effectively in coping with obesity, owing to its positive effect on hypothalamic function [3]. Considering the chronic nature of obesity, a three-step weight management practice is recommended. The first stage involves dietary intervention, alteration of lifestyle and behavior, and increased physical activity [1]. For instance, a low-fat, high-protein diet of the Mediterranean type could help achieve weight loss while preventing muscle loss and osteoporosis [4]. Maintaining a balanced diet with sufficient consumption of vitamins and trace elements and limited consumption of high-calorie foods, along with proper intake of drinking water and physical activity, can prevent the development of metabolic syndrome and adiposity [5,6]. In the second stage, pharmacological treatment options are recommended alongside lifestyle modifications; drug therapy is considered the most effective way to lose weight [1].
In the final step, surgical procedures such as bariatric surgery are recommended for obese patients who are unresponsive to non-surgical intervention and have uncontrolled, morbid obesity [7]. Pharmacological treatment of obesity becomes necessary given that the latest data from the World Obesity Federation suggest that 2.7 billion adults will fall into the obese category by 2025; thus, a huge demand for medical care and therapeutic interventions will arise in the near future. The rising global epidemic will result in huge medical costs, pegged at USD 1.2 trillion per annum, unless various interventions bring it under control. Several anti-obesity drugs have been approved in the past; for instance, phentermine, diethylpropion, rimonabant, taranabant, sibutramine, orlistat, lorcaserin, and tesofensine are some of the anti-obesity medications used for weight management [8]. Several anti-obesity drugs target monoamine neurotransmitter pathways, producing a feeling of fullness [9]. However, several drugs targeted at neurotransmitters were withdrawn because of adverse psychiatric side effects and thus should not be prescribed for obese patients with psychological disorders [10]. Importantly, very limited behavioral data are available for existing anti-obesity drugs, making it essential to collect behavioral data during the early stages of drug development. Moreover, a lack of understanding of the molecular, cellular, and physiological targets of anti-obesity drugs may hamper the progress of drug development and consequently delay the discovery of novel and effective anti-obesity molecules [11]. The present review elucidates the current understanding of the available anti-obesity drugs of synthetic and natural origin, their main mechanisms of action, and the shortcomings associated with current weight management medicines.

Drug-Induced Obesity

Weight gain is one of the most commonly observed side effects of drugs used in the treatment and management of several disorders. Some medicines can cause weight gain as high as 10% of total body weight and thus increase the health risks of individuals on medication. Since weight gain is a common underlying cause of several metabolic disorders, strict and regular monitoring of patients for any adverse outcomes associated with drug treatment is required [12]. For instance, glucocorticoids and anti-HIV (Human Immunodeficiency Virus) drugs are associated with weight gain and lipodystrophy. Similarly, antisense apo-B oligonucleotides and MTTP (microsomal triglyceride transfer protein) inhibitors employed in managing lipid disorders can cause ectopic deposition of fat, especially in the liver, and change the fat distribution in the body [12]. It is well established that antidepressants, atypical antipsychotics, and mood-stabilizing agents induce weight gain by increasing appetite [13]. A UK study of 136,762 men and 157,957 women demonstrated that the use of antidepressants was associated with weight gain at the population level and suggested that the risk of weight gain must be considered during antidepressant treatment [14]. Olanzapine, one of the most efficacious drugs for schizophrenia, is associated with significant weight gain and metabolic dysfunctions such as insulin insensitivity. A recent study has shown that ALKS 3831, a combination of olanzapine and samidorphan (an opioid receptor antagonist), can reduce the weight gain and metabolic complications associated with olanzapine [15].
In another observation, the weight gain induced by second-generation antipsychotics, which include aripiprazole, olanzapine, risperidone, and clozapine, can be controlled by metformin [16]. Pisano et al. [17] demonstrated that youths taking antipsychotics show significantly higher levels of C-peptide, glucose-dependent insulinotropic polypeptide, and adipsin, indicating β-cell stress and a higher risk of insulin resistance in youths showing antipsychotic-induced weight gain. Thus, the use of medications that cause weight gain requires extensive monitoring to avoid adverse metabolic complications. It is pertinent to mention that the fat reserve (subcutaneous or visceral) affected by a drug must also be taken into account when considering its side effects. For example, the oral antidiabetic thiazolidinediones (TZDs) are insulin sensitizers commonly prescribed for glycemic control. Interestingly, TZD-induced weight gain is limited to subcutaneous adipose tissue only, and TZDs do not alter fat deposition in visceral tissues. This suggests that TZD-induced weight gain is less likely to induce insulin resistance and the other metabolic complications associated with increased visceral fat mass [18]. A recent study [19] showed that the active use of antidepressants continues and even grows in England, despite controversy about the effectiveness of these drugs and the potential harm to the patient if use is discontinued; attention is drawn to the fact that one of the side effects of antidepressant therapy is weight gain [19]. Selective serotonin reuptake inhibitors are often used to treat depression; however, research shows that using these drugs leads to weight gain [20]. Patients taking antipsychotic drugs, in particular olanzapine, face the problem of metabolic disorders and significant weight gain. In this regard, a study of the effect of miricorilant, a selective glucocorticoid receptor modulator, on olanzapine-associated weight gain suggested that miricorilant could potentially be an option to mitigate the harmful effects of olanzapine, although research needs to continue [21]. New research confirms that glucocorticoid therapy can cause several negative effects, including weight gain, dyslipidemia, and hyperglycemia [22]. Long-term studies of obese patients have shown that their cortisol values lie in a high physiological range, and elevated levels of glucocorticoids are directly related to weight gain. It is therefore advisable to analyze the mechanisms of this relationship in order to develop a therapeutic correction that achieves sustainable weight loss [23].

Novel Drugs in Obesity Treatment

Obesity is a complex metabolic disorder affecting several body organs and tissues. It severely disturbs the metabolic processes of the system and causes hyperglycemia, impaired glucose tolerance, dyslipidemia, and gastrointestinal abnormalities [24]. Currently, obesity is managed with the help of anti-obesity or weight loss medications that help reduce body weight gain. The US Food and Drug Administration (FDA) has currently approved five drugs for the treatment of obesity: orlistat (Xenical and Alli), lorcaserin (Belviq), phentermine-topiramate (Qsymia), naltrexone-bupropion (Contrave), and liraglutide (Saxenda). As per the FDA guidelines, an obese individual can take anti-obesity drugs as long as they provide benefits without unpleasant side effects.
Importantly, anti-obesity medications must not be taken by pregnant women or women who are planning a pregnancy, as they may harm the fetus. Moreover, anti-obesity medications can also cause side effects in obese subjects, such as oily stool, incontinence, headaches, gastrointestinal upset, acute pancreatitis, nausea, and fecal urgency [25]. The present section explores some commonly prescribed anti-obesity drugs and the clinical evidence supporting their usefulness in managing obesity. Although researchers point to significant progress in the treatment of several pathologies caused by being overweight, the treatment of obesity itself currently remains problematic: anti-obesity drugs often fail to produce the expected effect and have an insufficient level of safety. Recent advances in the analysis of the molecular connection between the brain and the intestine should help create a new generation of anti-obesity drugs that can provide stable weight loss for the patient [26]. The older weight control medications contributed only minor additional weight loss. The modern drug semaglutide makes possible a much greater loss of body weight, an average 15% weight loss in 1 year. This efficacy of semaglutide can potentially increase the use of pharmacotherapy by clinicians to correct patient weight, and a positive result from a cardiovascular outcome study testing the effectiveness of an anti-obesity drug would further underline the importance of weight management in controlling cardiometabolic disease [27]. Similarly optimistic conclusions in the treatment of overweight were obtained in studies of the effectiveness of phentermine-topiramate and GLP-1 receptor agonists [28]. Anti-obesity drugs have been well tolerated in randomized controlled trials, although the findings may not be fully generalizable to clinical practice. Studies have also shown that anti-obesity medications did not cause severe complications but only some mild to moderate ones. However, despite the effectiveness of anti-obesity drugs, their evidence base needs to be strengthened [29].

Orlistat

Orlistat is one of the most commonly prescribed anti-obesity drugs, sold under the trade names Xenical and Alli. Also known as tetrahydrolipstatin, orlistat is a saturated derivative of lipstatin, a natural molecule isolated from Streptomyces toxytricini. Like lipstatin, orlistat irreversibly inhibits the activity of pancreatic and gastric lipases. These lipases convert dietary fats into free fatty acids, making them essential for the digestion and subsequent absorption of dietary fats. Orlistat inhibits lipase activity by binding to the enzyme's serine residue, which prevents the hydrolysis of triglycerides into free fatty acids; the undigested fat is excreted in the feces, thus reducing the overall calorie intake of an obese patient. Importantly, the inhibitory activity of orlistat is specific to lipases, and it does not inhibit the activity of other digestive enzymes such as trypsin, amylase, chymotrypsin, and phospholipases [30]. Since orlistat inhibits fat absorption, it also reduces the absorption of fat-soluble vitamins such as vitamins A, D, E, and K; hence, patients on orlistat therapy may need vitamin supplements. Orlistat is generally prescribed with a mildly hypocaloric diet. Per the guidelines, orlistat treatment should be prescribed only for obese patients who have lost at least 2.5 kg in 4 weeks with the help of dietary interventions alone.
Moreover, orlistat is not recommended for patients who lose less than 5% of body weight during a 12-week treatment cycle, and it must be discontinued in cases where weight loss is less than 5% of the initial weight after 12 weeks of treatment. As per the European guidelines, the duration of orlistat treatment should be at most two years [31]. However, studies have shown that weight loss due to orlistat may not always be substantial; variability in weight loss has been observed, with several patients showing no weight loss at all. Additionally, there is a high attrition rate associated with orlistat use owing to its several unpleasant side effects [32]. A study with 500 subjects showed that treatment with orlistat (n = 400) or liraglutide (n = 100) during a 7-month follow-up significantly reduced body weight, helped manage plasma glucose and lipid profiles, and reduced systolic blood pressure. However, patients taking liraglutide lost significantly more weight (−7.7 kg) than the orlistat group (−3.3 kg) [33]. It has been observed that orlistat acts via multiple pathways, including lipase inhibition, the modulation of neurotransmitters such as glutamate and dopamine, and elevation of glycogen levels [34]. Despite active research aimed at creating drugs for correcting and controlling body weight, orlistat is one of the few approved in Germany for treating obesity [35]. Obesity increases the risk of hyperuricemia, which contributes to the development of gout and cardiovascular disease. Scientists continue active research on certain aspects of orlistat use, concerning monitoring issues, possible side effects, pharmacodynamics, pharmacokinetics, and related interactions [36]. There is new information about the creation of an oral capsule with modified release of orlistat and acarbose (MR-OA): all doses tested were safe and well tolerated, with no serious side effects, and the delayed rise of the orlistat concentration in plasma indicates the effectiveness of the modified-release properties of the MR-OA formulation [37]. A separate area of research is now the analysis of the effect of orlistat on serum uric acid in adults [38].

Liraglutide 3.0

Liraglutide 3.0 is a recently approved anti-obesity drug marketed under the trade names Saxenda (Novo Nordisk, Copenhagen, Denmark) and Victoza (Novo Nordisk, Copenhagen, Denmark). In 2010, the FDA approved a 1.8 mg daily subcutaneous injection of liraglutide as an adjunct therapy for managing type 2 diabetes mellitus. Recently, however, the FDA approved a daily dose of 3 mg of liraglutide for chronic weight management in obese individuals with a Body Mass Index (BMI) ≥ 27 kg/m² who suffer from other comorbidities [39] (a simple threshold reading of this criterion is sketched below). Liraglutide 3.0 has been clinically evaluated for its efficacy in weight management. A randomized controlled trial in 198 patients showed that liraglutide 3.0, in combination with intensive behavioral therapy, helped obese patients lose significantly more weight (−5.8%) than the placebo group (−1.5%). Moreover, 51.8% of participants on liraglutide achieved a ≥5% reduction in weight, compared with 24% in the placebo group. Patients treated with liraglutide 3.0 mg showed significantly lower glycated hemoglobin levels and glucose values than the placebo group, and no safety or tolerability issues were observed in the liraglutide-treated group [40].
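As an aside, the label criterion just quoted (BMI ≥ 27 kg/m² together with a weight-related comorbidity) reduces to simple arithmetic; a purely illustrative sketch, with hypothetical function names not taken from any guideline or API:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def meets_liraglutide_3mg_criterion(weight_kg, height_m, has_comorbidity):
    """Threshold reading of the criterion quoted above (illustrative only)."""
    return bmi(weight_kg, height_m) >= 27.0 and has_comorbidity

print(round(bmi(85.0, 1.70), 1))  # 29.4 kg/m^2
```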
In a 56-week clinical trial, obese patients receiving liraglutide lost 8.4 ± 7.3 kg of body weight, while the placebo-treated group lost only 2.8 ± 6.5 kg. Moreover, 63.2% of patients in the liraglutide-treated group showed at least a 5% loss in body weight, versus only 27.1% of subjects in the placebo group. Similarly, more patients in the liraglutide group showed a >10% loss in body weight than in the placebo group (33.1% vs. 10.6%). However, liraglutide was associated with side effects such as diarrhea and nausea [41]. In another interventional study, obese adults prescribed a combination of liraglutide 3.0 and lifestyle therapy lost significantly more weight than the placebo-plus-lifestyle-therapy group; however, more participants taking liraglutide displayed adverse gastrointestinal effects, leading to discontinuation of the trial treatment [42]. At the molecular level, liraglutide acts by reducing appetite and lowering energy intake, and these actions are independent of its hypoglycemic effects. Interestingly, liraglutide also reduces the risk of cardiovascular disorders in subjects with type 2 diabetes mellitus; however, its cost and the requirement for daily injections are limiting factors in its use [43]. Modern studies have confirmed the positive effect of liraglutide in treating patients who have regained weight after bariatric surgery [44]. Recent studies confirm the positive effect of liraglutide 3.0 mg on weight loss in patients and indicate a positive effect of the drug on the metabolism of myocardial cells [45]. New findings indicate the efficacy of subcutaneous liraglutide 3.0 mg in obese patients [46], and there is further information on significant weight loss in patients treated with liraglutide during an initial 4-month period [47].

Phentermine/Topiramate

A combination of phentermine and topiramate (delayed-release) has been sold for treating obesity under the brand name Qsymia in the United States since September 2012. Phentermine is recommended for short-term weight loss owing to its anorexigenic properties, while topiramate is primarily prescribed to prevent seizures and migraine. However, a combination of these two drugs synergistically reduced weight gain, and the weight loss achieved by the fixed-dose combination was higher than that achieved when either drug was given alone. At the molecular level, phentermine promotes weight loss by reducing food intake: it enhances the release of the neurotransmitter norepinephrine and possibly blocks its reuptake, thus increasing its levels. Topiramate, a fructose derivative, is an FDA-approved drug for treating seizure disorders (400 mg/day) and migraines (100 mg/day). Several clinical trials have demonstrated that a fixed-dose combination of phentermine and topiramate helped patients achieve sustained and robust weight loss, maintained for up to 2 years in more than 50% of the study participants [48]. In a 28-week randomized controlled trial, Aronne et al. [49] found that the combination of PHEN/TPM ER (phentermine/topiramate extended-release) was more effective for weight loss than placebo or either monotherapy. For instance, only 15.5% of subjects in the placebo group achieved ≥5% weight loss, while 62.1% and 66% of participants achieved ≥5% weight loss when given the fixed-dose combinations PHEN/TPM ER 7.5/46 and PHEN/TPM ER 15/92, respectively.
The participants tolerated the combination well, and no serious cognitive impairment, other than impaired attention, was observed in the study participants [49]. A study involving obese adolescents aged between 12 and 17 years showed that a higher number of adolescents taking mid and top doses of the combination achieved ≥5% weight loss than in the placebo group. The eight-week fixed-drug combination of PHEN/TPM promoted significant weight loss with no side effects or tolerability issues; the study highlighted the mid- and long-term safety profile of PHEN/TPM in managing obesity [50]. A study involving 866 subjects showed that participants taking PHEN/TPM achieved significant weight loss at the end of the 108th week. At each dose, significantly more participants achieved ≥5%, ≥10%, ≥15%, and ≥20% weight loss than in the respective placebo groups, indicating the effectiveness of the combination therapy in weight management. Interestingly, the PHEN/TPM combination also improved the cardiovascular and metabolic health of the participants and lowered the incidence of diabetes [51]. Recently, phentermine/topiramate was approved in the USA for chronic weight control in pediatric patients ≥ 12 years of age, in combination with increased physical activity and a low-calorie diet. In addition, phentermine/topiramate is in clinical development in the United States for treating type 2 diabetes and sleep apnea in obese patients [52].

Phentermine/Diethylpropion

Diethylpropion is an amphetamine analog that suppresses appetite and has been approved as a short-term (<12 weeks) anti-obesity drug in the United States since 1959. It has lesser effects on the central nervous system than amphetamine and, hence, a lower risk of drug abuse [53]. A study by Vallé-Jones et al. [54] compared the effectiveness and tolerability of phentermine and diethylpropion in obese subjects. A daily dose of 30 mg phentermine (n = 50) or 75 mg diethylpropion (n = 49), in combination with calorie restriction, was given to obese patients for 12 weeks; the phentermine-treated group showed greater weight loss than the diethylpropion-treated group [54]. In a clinical trial involving 69 obese healthy subjects on a hypocaloric diet, 50 mg of diethylpropion was given to 37 subjects for six months, while the placebo group (n = 32) received no therapeutic intervention. After the 6-month intervention, the diethylpropion group had lost 9.8% of their initial body weight, compared with 3.2% in the placebo group. After six months, both groups received diethylpropion treatment for another six months, making the total study period one year. Interestingly, the diethylpropion group given the intervention for 12 months lost 10.6% of their initial body weight, while the placebo group that switched to diethylpropion after six months lost 7%; the difference in weight loss was not significant at 12 months. Importantly, the study reported no psychiatric or cardiovascular adverse effects of diethylpropion, suggesting that its longer-term use is safe [55]. Recently, a combination of diethylpropion and topiramate was tested in rats for anorectic effects, and it was observed that a combination of lower doses of diethylpropion + topiramate synergistically increased the anorectic effect of the individual drugs without any safety concerns [56]. Interestingly, the efficacy of diethylpropion can be enhanced by carefully selecting the administration time.
A recent study demonstrated that the administration of diethylpropion to rats during their active phase promoted greater weight loss than during their inactive phase. Moreover, diethylpropion-induced weight loss improved significantly under high-fat (HF) diet restriction compared with ad libitum access to the HF diet. The study showed for the first time that careful selection of administration timing can significantly improve the anti-obesity properties of diethylpropion, and probably of other appetite suppressants [57]. In a survey of the Mexican population, the combination of diethylpropion with diet and exercise (DEP + DaE) was more effective for weight loss than diet and exercise alone (DaE); the study concluded that DEP + DaE is a cost-effective solution for managing obesity in at-risk populations [58]. There are new studies on the pharmacological effects of some drugs that lead to abuse: the pharmacokinetics of cathinones was studied in experimental and prospective clinical studies, which showed that several drugs, including diethylpropion, produce undesirable effects that can cause dependence and abuse. The authors point to the need for future research to prevent such negative manifestations in the treatment of patients [59].

Lorcaserin

Lorcaserin is an FDA-approved drug for the long-term management of obesity in individuals with BMI > 30 kg/m², or with BMI > 27 kg/m² and at least one obesity-associated metabolic complication. Lorcaserin is an agonist of serotonin receptors that specifically targets the 5HT2C receptor. Its safety and efficacy have been demonstrated in the management of obesity and related comorbidities such as cardiovascular disorders, kidney disease, and type 2 diabetes mellitus. New findings suggest that lorcaserin modulates dopaminergic pathways and supports glucose homeostasis [60]. A large clinical trial in 12,000 overweight or obese subjects with cardiovascular disease showed that 38.7% of patients (1986/5135) receiving lorcaserin (10 mg twice daily) for one year achieved at least 5% weight loss, compared with 17.4% of obese patients receiving placebo. Moreover, patients given lorcaserin showed lower cardiovascular risk, as evidenced by slightly better lipid profiles, glucose values, blood pressure, and heart rate than the placebo group. However, more patients in the lorcaserin group than in the placebo group experienced serious hypoglycemia. The study concluded that lorcaserin is a safe intervention for sustained weight loss, without adverse cardiovascular events, in high-risk obese or overweight patients [61]. However, contradictory results on the efficacy of lorcaserin have been reported. It has been observed that obese patients taking lorcaserin lost only 3 kg more weight than the placebo group. Moreover, lorcaserin-induced weight loss was not sustained, and individuals regained weight after discontinuing lorcaserin. Commonly observed side effects of lorcaserin were dry mouth, nausea, headache, fatigue, dizziness, and euphoria. Importantly, lorcaserin increased the risk of cardiac valve disorders relative to placebo. The clinical trials of lorcaserin were also of short duration and, hence, do not exclude the risk of various cancers such as breast cancer and astrocytoma. The use of lorcaserin to manage obesity failed to ensure either prevention of obesity-associated complications or durable weight management.
Thus, its usefulness as a weight loss drug cannot be justified [62]. It is pertinent to highlight that recent findings of a large clinical trial conducted on 12,000 subjects showed that lorcaserin increased the risk of cancer in study participants, with a higher incidence of cancer in the lorcaserin group than in the placebo group. Owing to this increased cancer risk, the FDA requested that drug makers withdraw lorcaserin from the market. New studies point to the ability of lorcaserin to inhibit glucose-stimulated insulin secretion and calcium influx in mouse pancreatic islets; this further information on the signaling mechanism of lorcaserin is a stimulus for continued research on the functions of 5-HT2CR in β-cell biology [63]. A study of combined treatment with lorcaserin and betahistine for obesity-induced cognitive dysfunction confirmed the ability of both drugs to improve cognitive function through their action on dopaminergic signaling in the prefrontal cortex [64]. Lorcaserin is also considered among the anti-obesity medicines that may treat non-alcoholic hepatic lipidosis; however, the studies carried out in this direction are insufficient and must be continued [65].

Bupropion

Bupropion was introduced in the US market in 1989 as an antidepressant drug. It is a weak antagonist of nicotinic acetylcholine receptors and also helps in smoking cessation and seasonal affective disorder. It inhibits the reuptake of the neurotransmitters norepinephrine and dopamine without inducing any changes in serotonin neurotransmission. The efficacy of bupropion is comparable with that of serotonin reuptake inhibitors and other antidepressants; however, bupropion is associated with side effects such as nausea, constipation, insomnia, dry mouth, headache, and dizziness [66,67]. In a randomized, double-blind, placebo-controlled trial of 50 overweight/obese women with BMI between 28.0 and 52.6 kg/m², patients received 100 mg/day bupropion for the initial eight weeks and later switched to 200 mg twice daily. All participants were kept on a balanced diet (1600 kcal/day) and maintained food diaries. Responders continued the same treatment in a double-blind manner for an additional 16 weeks, for a total of 24 weeks. Follow-up showed that subjects receiving bupropion displayed a higher mean weight loss than the placebo group, and more participants lost over 5% of body weight than in the placebo group (67% vs. 15%) [68]. Studies in animal models have demonstrated that bupropion increases oxygen consumption via the β3-adrenoceptor and dopamine D2/D1 receptors. The weight-reducing effects of bupropion are thus mainly attributed to increased thermogenesis arising from increased activity of the β3-adrenergic and dopamine D2/D1 receptors; since bupropion does not alter or reduce food intake, its anti-obesity effects are primarily due to increased thermogenesis [69]. In September 2014, a combination of bupropion and naltrexone was approved by the FDA for obesity management. Naltrexone is an approved opioid antagonist. The combination is sold under the trade name Contrave and has shown clinical safety and efficacy in trials. It is an extended-release formulation, and the combination may promote weight loss by inducing satiety and increasing energy expenditure [70].
Clinical studies have demonstrated that a combination of 360 mg bupropion and 32 mg naltrexone, along with lifestyle and dietary interventions, was more effective at six months and one year than either medicine alone. Importantly, the combination was associated with some serious side effects, necessitating careful selection of patients to lower the risk of adverse events and increase the possibility of positive health outcomes [71]. For instance, patients taking the combination of naltrexone and bupropion showed side effects such as anxiety, sleep-related health issues, and depression; however, the drug combination did not increase suicidal behavior in patients [72]. Recent research has shown that the combination of naltrexone and bupropion may effectively control metabolic changes, and other studies suggest that this drug combination modulates dopaminergic expression [73]. Based on safety and clinical efficacy analyses of drugs for treating obesity, bupropion is on the list of drugs that can effectively reduce body weight [74]. The anti-obesity effects of some synthetic drugs are presented in Figures 1 and 2.

The Use of Antidiabetic Drugs and Natural Constituents in the Prevention and Treatment of Obesity

Obesity is considered the most significant risk factor for the development of type 2 diabetes [75]. In this regard, antidiabetic drugs usually have an effect on body weight control. As is known, metformin is the first-choice therapy for type 2 diabetes [76,77]. It also has health benefits beyond its antihyperglycemic properties: metabolic consequences of its consumption include a reduction in hepatic gluconeogenesis and inhibition of insulin production, as well as weight loss due to the modulation of appetite regulatory centers in the hypothalamus, management of hepatic steatosis, and alteration of the gut microbiome [76]. According to Seifarth et al. [78], metformin effectively reduced weight in both insulin-resistant and insulin-sensitive overweight patients. Owing to its excellent safety profile, tolerability, and efficacy, it is considered the first line of treatment for type 2 diabetes (in conjunction with lifestyle modifications) [79]. These promising health effects have made it an attractive option for disorders associated with obesity, type 2 diabetes, and aging [76]. Zhang et al. [80] found that beinaglutide was effective for glycemic control and weight loss in the treatment of type 2 diabetes. Recently, Gao et al. [81] found that beinaglutide was more efficient than metformin at reducing fat mass in overweight, nondiabetic patients of the Chinese population. The effective daily doses of beinaglutide were in the range of 0.24-0.30 mg [82]. Herbal substances are regarded as an important target for drug development because of the wide variety of phytoconstituents and their few adverse effects [83]. A huge number of bioactive compounds from medicinal plants are beneficial in coping with obesity. Among the secondary metabolites of plants, mainly polyphenols and terpenoids (Figure 3), many have demonstrated effective weight management properties [83-85]. Some of the alkaloids also have good potential in the treatment of obesity, but the significant toxicity of most of them narrows the range of their application [86].
The anti-obesity effects of phytoconstituents manifest in different ways: through inhibiting lipid- and carbohydrate-metabolizing enzymes, suppressing appetite and adipogenesis, inhibiting lipid absorption, and enhancing energy metabolism [84,87]. The modern "omics" technologies (genomics, transcriptomics, proteomics, and metabolomics) effectively evaluate traditional healthcare phytosubstances as sources of new natural biocompounds and potential anti-obesity agents [88]. Recently, experimental research demonstrated that polyphenols, as strong antioxidants, were effective prebiotics in managing obesity induced by a high-fat diet [89]. Oxidative stress is crucial in the pathophysiology of obesity, and modifying the concentration of inflammation mediators is associated with the number and size of adipocytes, lipogenesis, the regulation of appetite through the hypothalamic neurons, etc. [90]. Polyphenols can reduce body weight through different mechanisms [85,91,92]. Randomized controlled clinical trials assessed by Moorthy et al. [89] evaluated the effect of polyphenols on body composition in overweight and obese populations; these studies showed some decrease in body weight, by a mean of 1.47 ± 0.58 kg, and polyphenols could also effectively prevent weight increases [89]. The polyphenol-rich extract of Vaccinium corymbosum leaves modified with arginine demonstrated effectiveness in managing the metabolic syndrome [93]. Suzuki et al. [94] found a positive effect of black and green tea catechins on obesity. Quercetin and other flavonoids with antioxidant, anti-inflammatory, and hepatoprotective effects can effectively prevent metabolic diseases [95-97]; the intake of flavonoids can reduce the risk of metabolic syndrome disorders, with rare side effects [95]. Nani et al. [98] concluded that the overproduction of reactive oxygen species is associated with the inflammatory process in obesity, mediated through nuclear factor-κB. Chen et al. [99] reported that polyphenols can enhance energy consumption and weight loss through increased fat oxidation, and polyphenols are regarded as very effective in inactivating reactive oxygen species [100]. Polyphenolic compounds from fruits and vegetables reduce lipid accumulation and enhance the intestinal microflora [99]. The fruits and leaves of some Ericaceae species (Vaccinium corymbosum, Vaccinium myrtillus, etc.) [93,101,102] possess significant lipid-lowering properties and anti-obesity potential owing to their valuable phenolic compounds. Polyphenols of marine algae can transform problematic 'white' adipose tissue into 'brown' (mitochondria-rich) tissue and in this way enhance energy consumption [103]. In the last decade, several researchers [104,105] have investigated the therapeutic effect of Ginkgo biloba extract in treating obesity and related disorders. Long-term therapy using an extract from Ginkgo biloba leaves showed an anti-obesogenic effect in rats [104]. Thomaz et al. [105] revealed that Ginkgo biloba extract consists mainly of flavonoids (25.21%); chromatographic analysis showed that flavonoids such as quercetin, kaempferol, rutin, and isorhamnetin were its predominant components [105]. The discovery of leptin at the end of the 20th century created hopes for an effective treatment of obesity, as this peptide hormone effectively regulates body mass and lipolysis [106].
However, the development of resistance to the influence of leptin, characterized by the overconsumption of nutrients due to reduced satiety, has been a big obstacle [107]. Liu et al. [107] discovered that the pentacyclic triterpene celastrol, isolated from the roots of Tripterygium wilfordi, possesses a significant anti-obesity effect as a leptin sensitizer: it can suppress food intake and cause up to 45% weight loss in obese mice by increasing leptin sensitivity. In addition to its ability to regulate leptin sensitivity and lipid metabolism, it also positively influences the gut microbiota [108]. The weight management effect of carotenoids was found by Mounien et al. [111]. Gammone and D'Orazio [112] found an anti-obesity effect of fucoxanthin (a carotenoid from marine algae). The carotenoid lycopene, which accumulates significantly in ripe tomatoes, demonstrated protection against diabetes and obesity [113]. Bjørklund et al. [114] revealed the ability of another carotenoid, astaxanthin, synthesized by numerous microalgae, yeasts, and bacteria, to manage overweight outcomes. Radice et al. [115] found that supplementation of experimental animals with astaxanthin had positive effects on a variety of symptoms associated with obesity through its hypoglycemic and lipid-lowering capacity, as well as by modulating the immune system. Calanus oil, a natural product from the marine crustacean Calanus finmarchicus that is rich in astaxanthin, has a noticeable effect in treating the low-grade inflammation related to obesity [116]. It should be noted that seaweeds are regarded as promising sources of various anti-obesity agents, such as phlorotannins, alginates, fucoxanthin, and fucoidans [87]. Fucoxanthin and fucoidans can inhibit lipid absorption and metabolism, as well as the differentiation of adipocytes; alginates reduce the feeling of hunger; and the polyphenol phlorotannin possesses significant antioxidant and anti-inflammatory properties [87]. Cannabidiol from Cannabis sativa, which is widely known for its neurological effects, has also been considered an anti-inflammatory, antitumor, and anti-obesity agent [117]. As cannabinoid receptors regulate food consumption, thermogenesis, and inflammation, the intake of cannabinoids could help reduce food intake and body weight [118]. Recently, De Blasio et al. [119] demonstrated that essential oils, as multicomponent mixtures of volatile terpenoids and other bioactive compounds, promote a decrease in fat mass and exert a positive weight management effect; it should be mentioned that essential oils exert these health-promoting effects when inhaled or taken with the diet [119]. Artemisinin, a sesquiterpenoid from Artemisia annua, is a famous antimalarial drug [120]. In addition to its anti-parasitic activity, artemisinin has also displayed antitumor, anti-inflammatory, and anti-obesity properties; its anti-inflammatory and immunoregulatory effects are valuable in coping with obesity, since chronic inflammation is implicated in the pathogenesis of metabolic disorders [116,120]. Islam et al. [121] summarized that several diterpenoids exert anti-obesity effects through various mechanisms; among them, carnosol, carnosic acid, steviol, and andrographolide could be examples of effective weight management agents. Experimental evidence has been obtained for the anti-obesity effects of Ananas comosus juice [122] and of papain (a proteolytic enzyme) from Carica papaya fruits [123]. The anti-obesity properties of sulforaphane from broccoli (Brassica oleracea var.
Many health disorders, such as diabetes, chronic inflammatory diseases, and obesity, are associated with uncontrolled sugar consumption [127]. The sugar substitute xylitol effectively prevents metabolic syndrome and obesity [127,128]: it can reduce elevated blood glucose, body weight, and other unhealthy signs [127]. Some vitamins also possess substantial anti-obesity potential [6,129]. The antioxidant and hepatoprotective activity of tocopherol helps prevent metabolic syndrome [130], and a deficiency of some vitamins can contribute to excess weight: Thomas-Valdés et al. [6] concluded that obese persons are deficient in most vitamins, especially the fat-soluble vitamins, vitamin B12, folic acid, and ascorbic acid.

Limits in the Pharmacological Treatment of Obesity
Although several anti-obesity treatment options are available, pharmacological intervention against obesity has several limitations. For instance, most anti-obesity drugs target satiety signaling in the brain, yet the long-term use of synthetic medications that target satiety signaling is unsafe and causes chronic disorders and side effects. This limitation suggests that novel anti-obesity treatment options must focus on signaling pathways that increase energy expenditure and create a negative energy balance [131]. According to one estimate, 25 anti-obesity medications were withdrawn from the market between 1964 and 2009; importantly, 23 of them targeted monoamine neurotransmitters. These medications were associated with psychiatric disorders, cardiotoxicity, and drug dependence. Thus, anti-obesity drugs that target neurotransmitters raise serious safety concerns, and greater transparency and scrutiny in clinical trials are warranted before a drug is approved for clinical use [132]. According to one observation, the weight loss induced by most anti-obesity drugs is less than 4 kg compared with the control group, and the adverse effects associated with long-term use do not justify their usefulness in obesity management. Moreover, most anti-obesity medications are suggested only as adjuncts to lifestyle and dietary interventions. Several obese patients do not respond well to anti-obesity medications and show no significant weight loss (>5%); in non-responding patients, discontinuing the medication reduces safety concerns and treatment costs. Therefore, there is an urgent need to develop novel, effective anti-obesity drugs that induce more weight loss with the fewest side effects; this also requires the study and discovery of novel pathways and new molecular targets so that the obesity pandemic can be handled effectively [133,134]. Findings from randomized clinical trials showed that most anti-obesity drugs produce an average weight loss of between 3% and 9% after 1 year of treatment compared with placebo. However, sufficient data on race, ethnicity, and gender are unavailable. Additionally, the high drop-out rate observed in anti-obesity clinical trials is another major limitation and prevents generalization of the clinical findings. It has been observed that anti-obesity medications lower only glycemia and do not significantly reduce the lipid profile or blood pressure. Finally, limited studies address the safety of anti-obesity drugs in children and in patients taking medications after bariatric surgery [135].
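For readers who want the arithmetic behind the responder criterion mentioned above, the short sketch below (illustrative only; the patient values are invented, not taken from any cited trial) computes percent weight loss and applies the 5% response threshold.

```python
def percent_weight_loss(baseline_kg: float, current_kg: float) -> float:
    """Percent of baseline body weight lost (positive = weight lost)."""
    return 100.0 * (baseline_kg - current_kg) / baseline_kg

def is_responder(baseline_kg: float, current_kg: float, threshold: float = 5.0) -> bool:
    """A clinically meaningful response is commonly defined as >= 5% weight loss."""
    return percent_weight_loss(baseline_kg, current_kg) >= threshold

# Hypothetical example: a 100 kg patient who loses 3.5 kg on medication
print(percent_weight_loss(100.0, 96.5))  # 3.5 -> below the 5% responder threshold
print(is_responder(100.0, 96.5))         # False: discontinuation would be considered
```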
Concentrated efforts to fill this knowledge gap are necessary to increase drug efficacy and improve the safety profile of anti-obesity medications [136]. Pharmacotherapy is an important tool in the fight against obesity, yet very few drugs are currently approved for its treatment, owing to several limiting factors, including possible toxic effects on the patient's body. This encourages the active development of alternative pharmaceutical drugs against obesity; effective pharmacotherapy can only follow a deep analysis of the pathogenesis of obesity [137]. Special attention should be paid to developing specific dosage guidelines for radiopharmaceuticals. This task remains relevant because of the negative effects of incorrect dosage, including toxicity due to the high radioactivity of such drugs; the need for such studies is underlined by the growing number of obese people in the Western population [138]. The potential side effects of pharmacotherapy and of surgical methods for treating obesity prompt the search for other ways to treat this pathology. One such method is electroacupuncture, in which electrical stimulation is delivered to the body through acupuncture needles; study authors have provided evidence of the effectiveness of electroacupuncture in the treatment of obesity [139]. Other authors report that synthetic drugs are gaining ground in anti-obesity therapy. This should cause concern, since the positive effects and safety of such drugs have not been studied sufficiently; in particular, study results for the anti-obesity medicines Aplex and Venera indicate negative side effects on the kidneys and liver, confirmed by physiological and biochemical parameters [140]. The prevalence of obesity among children and adolescents remains an important problem, complicated by the reluctance of many adolescents to change their lifestyles. For a long time the European Medicines Agency (EMA) did not approve pharmacological drugs for the treatment of childhood obesity; only in 2021 did it allow the use of liraglutide for the treatment of obesity in persons aged 12-17 years. In that study, body mass index decreased by 5% after 56 weeks in 43.3% of participants in the liraglutide group [141]. The study of the pathophysiological mechanisms of obesity has been the basis for creating new treatment strategies. Currently, the number of anti-obesity pharmaceuticals remains small, so there is still a need to develop new anti-obesity drugs with a high safety profile and clinical efficacy. Further analysis of the pathophysiological mechanisms of obesity will contribute to a personalized approach to treatment and the safe achievement of a sustainable weight [142].

Conclusions
Obesity is a chronic metabolic complication, and its management requires long-term medication, lifestyle modifications, and dietary interventions. Patients taking anti-obesity medications may suffer from side effects such as psychiatric disorders, anxiety, depression, and vitamin deficiency. As several anti-obesity medications have been withdrawn in the past owing to serious side effects, the discovery of new metabolic pathways has opened up many opportunities for new drug molecules that may have fewer imperfections and greater therapeutic efficacy.
This study summarized the current knowledge of available anti-obesity medicines of synthetic or natural origin, their mechanisms of action, and their possible shortcomings. Various classes of natural compounds are promising agents against obesity, and developing safer drugs may require polytherapeutic strategies to combat the global obesity pandemic.
Corona Pandemic and Internet Comedy in the Egyptian Society (An Analytical Study)
The problem of the current study is the attempt to identify internet comedy during the Corona pandemic and its role in raising awareness of the pandemic or warning against it, by analyzing a sample of the most popular comic "figures" on Facebook. The study proceeds from Bergson's Comedy Theory and Ulrich Beck's Risk Society Theory. Methodologically, the study uses the descriptive method: a qualitative semiological analysis was applied to 13 comic "posts" related to the Corona pandemic, selected purposively, together with an electronic questionnaire, applied to a sample of 320 persons, to detect the positive and negative effects of comedy from the viewpoint of the general public. The results of the study show the importance of comedy as a tool for correcting some wrong behaviors in society in order to change and get rid of them, as Bergson assumes. It is a tool for portraying reality through satire, sarcasm, and simulation, or by highlighting existing defects and problems. This was evident from the spread of comic "posts" that expressed the extent of fear of the Corona virus and its new mutants, and that criticized the Egyptian people's disdain for the Corona virus as a normal case of flu, with the aim of raising awareness and caution. The vast majority of the research sample emphasized the role of comedy as a mechanism for social coping with the pandemic and for increasing knowledge and awareness during the pandemic.

Introduction
People always resort to comedy as a way of expressing daily pressures and crises, a field known in sociology as "the sociology of humor": the joke is a mirror that reflects reality, its crises, and its problems, which the French philosopher Henri Bergson explained as an attempt to conquer oppression. The joke is a means of social expression, a means of criticism or of expressing the rejection of certain practices, because it relieves the individual of fear and is an outlet for pressure. It is also a means of resistance through which individuals, groups, and peoples seek to create a symbolic balance; in human culture it is called the "weapon of the silent". The joke is the most common means of popular expression among people, a satirical critical outlet shrouded in imagination, lightness, and beauty. When people's crises and pains intensify, they flee to weaving jokes and producing funny stories that make them laugh and alleviate the hardships they suffer from these crises and troubles. The source of a joke is unknown, because it is produced in transient human situations as a reaction to actions and circumstances that are rejected or cannot be confronted directly. At the present time, many websites actively provide satirical news, exaggerating through messages, tweets, hashtags, poems, comments, composite and imaginary pictures, and video clips, in their mockery of people's political and social reality.
Historically, comedy has been associated with political crises and disasters. This is due, as indicated by Sigmund Freud's theory, to the therapeutic power of comedy and its ability to reduce tension, provide relief, and increase trust in the relationship between client and therapist. Hence, comedy appears in crisis situations as a behavioral and cognitive response to stressful events that contributes to providing hope in different ways. Anthropological studies discuss the functions of comedy as a strategy for confronting threats posed by changes in the environment and climate, such as nuclear disasters with severe local effects and long-term health and environmental consequences. The Chernobyl disaster in Ukraine in 1986 brought a new form of folklore, "Chernobyl folklore", consisting of various narrative forms produced in the aftermath of the crisis, such as hearsay, personal narratives, parodies of folk songs, and jokes. Hence, comedy is linked to the social context in which it exists: in times of crisis it brings hope, and under oppressive political regimes it is an essential tool for expressing anger and frustration [22].

The Study Problem and Questions
With the escalation of the Corona pandemic, the use of the sarcastic spirit intensified, and many Arab peoples tended to make jokes through video clips, "comics", and comic stories, particularly about widespread behaviors such as hugs, handshakes, and kisses, as well as about the precautionary procedures imposed by the World Health Organization on all countries to prevent infection and limit the spread of the Corona pandemic, and, later, after the emergence of the vaccine, about the campaigns to receive preventive doses. This was helped by the presence of cyberspace and the increase in the number of its followers, whether through Facebook pages, Twitter, Instagram, YouTube programs, TikTok, or other social media sites. A number of tweets spread all over the Arab countries at the beginning of the pandemic (COVID-19). For example, people in Syria ridiculed the idea that the country was free from the virus and that it should be protected "from the eye of Canada, from the eye of France, and from the eye of Italy". In Jordan, the saying "the virus had pity on us" was repeated. In Gaza, people recalled the state of siege: "the virus did not know how to enter the state".
In Egypt, which had the largest share of circulating satirical content, as the virus continued to spread and curfews began to be imposed, the content gradually shifted from reflecting the initial shock phase to reflecting the general mood about ways of dealing with the epidemic. For example, "Oh, Lord, return the football and coffee to the men. They stayed at home and brought nothing but trouble" refers to the period of home quarantine. Many scientific theories have tried to explain the reasons for the spread of the sarcastic spirit and sense of humor in societies and its relation to times of crisis, to understand the reactions of cynics to their reality, and to reveal the motives of this type of behavior, which is activated in times of crises, calamities, and disasters, and its relation to the surrounding factors. In anthropology, irony is a psychological projection mechanism that a person uses unconsciously to protect himself and to attach faults to others [14]. Several studies, such as Lee & Kleiner (2005), have shown that comedy and laughter reduce stress, anger, pain, and frustration, with positive effects on physical health and the psychological state; they also play an important role in forming and maintaining social relationships. Bergeron & Vachon (2008) indicated the positive effect of comedy on increasing confidence, quality, satisfaction, and purchase intention. Mendlson et al. (2013) indicated that comedy has five positive benefits: it is an expression of courage, a way to face the misfortunes and inconveniences of life, a reaction to life's contradictions, a stress reliever, and a strategy for perceiving and experiencing life [20]. Hence, the problem of the current study crystallizes in the attempt to identify the connotations and implicit meanings of the most popular comic posts on Facebook concerning the Corona pandemic, and to reveal the positive and negative effects of using comedy during the Corona pandemic from the viewpoint of the general public, together with its impact on coping with the pandemic.

Study Objectives
The objectives of the study were as follows:
1) Exposing the connotations and implicit meanings contained in the comic "posts" related to the Corona pandemic, and the patterns of interaction with them.
2) Exposing the general public's attitudes towards internet comedy during the Corona pandemic and the factors affecting them.
3) Exposing the positive and negative effects of comedy during the Corona pandemic, from the audience's point of view.

Study Questions
The current study seeks to answer the following questions:
1) What are the types of comedy?
2) To what extent has comedy been historically linked to political and social crises in Egyptian society?
3) What are the connotations and implicit meanings of comedy posts related to the Corona pandemic?
4) What are the positive and negative effects of comedy during the Corona pandemic from the viewpoint of the general public?
5) What are the factors influencing the use of comedy by the general public during the Corona pandemic?
Types of Comedy
Comedy derives from two words, "Komos" and "Ode": in ancient Greek, Komos means a celebration or procession, and Ode means a song. Comedy became an artistic activity in Athens in 487-486 BC, when those concerned with the theater began showing comedy alongside tragedy in the theater of the god Dionysus. Greek comedy passed through three phases: old comedy, which flourished from the 5th to the early 4th century BC, was primarily social and political and had a happy ending; middle comedy (404-336 BC) was linked to philosophy, literary criticism, myths, and imaginary love; and new comedy focused on realistic daily life and social stereotypical characters such as the old man, the cook, the soldier, the drunkard, and the slave-trader [1]. The ancient Egyptians believed that the world was created from laughter and used irony and humor to criticize social and political conditions. Limestone and papyrus were among the most famous materials they used, and their papyri have been preserved in museums all over the world. Humor has been an essential part of Egyptian life since the era of the pharaohs. The Egyptians used to make jokes about Roman judges, Ottoman rulers, and French rulers. During the French occupation, Napoleon Bonaparte made humor a crime punishable by death. During the British occupation, the Egyptians used to meet in cafes to laugh at the occupiers, which prompted the British to close the cafes. Laughter shows the desire for life, and Egyptians used it to express their point of view or to evade their problems; political and economic conditions were largely behind the Egyptians' resort to satire and humor to express their pain, their problems, and the details of their lives [27].
There are multiple types of comedy: the comedy of personality, which depends on people's main character traits; the comedy of situations, concerned with funny movements and events; the comedy of ideas, which deals with social issues; moral comedy, which treats the social behavior of the upper and middle classes; romantic comedy, which focuses on individuals who fall in love; and slapstick comedy, which relies on funny movements and events [1]. Martin et al. (2003) identified four types of humor [20]:
1) Affiliative humor (enhancing relationships with others): people with a sense of humor use it to ease relationships, remove distance, and make others happy.
2) Self-enhancing humor: a healthy defense mechanism that allows the individual to avoid negativity and promotes openness to experience, self-esteem, and psychological comfort.
3) Aggressive humor (promoting oneself at the expense of others): a humorous style of ridicule, sarcasm, or belittling others and saying funny things that harm or influence them; it promotes oneself at the expense of others.
4) Self-defeating humor (enhancing relationships at the expense of oneself): a form of defensive denial or a way of hiding underlying negative feelings; it is related to low self-esteem, depression, and anxiety.

The Conceptual Framework
"Comedy": Comedy, as Aristotle indicated, "deals with a defect and ugliness that does not cause pain or harm, and it depicts people who are below average; unlike tragedy, which depicts people better than ordinary people". It refers to the theatrical genre with a comic and sarcastic theme that aims to display human shortcomings by portraying them in situations of deficiency and weakness. It cares about the group, not the individual, and drives at changing who we are.
It is a dramatic work with lightness and humor, often with a happy ending [9]. Rayes, Rosse, and Cascaldi (2012) defined humor by its amusing effects, such as laughter or a sense of well-being and happiness, recognizable by laughter or a smile. Humor comes from a variety of sources, whether verbal, such as jokes, or visual, such as cartoons, comedy films, or social situations. According to Martin & Lefcourt (1986), humor is a discourse that combines two ideas, concepts, or situations in a sudden and unexpected way. The language of humor is often symbolic, using metaphor, ambiguity, and irony to communicate a more complex meaning. Humorous language also differs in the digital field (online), where it comes in the form of a discussion or conversation, or in the form of animation or pictures; it is thus the outcome of the interaction of image and text [20]. Humor is the ability to make others laugh using a variety of forms and categories, each with a different audience. It is subject to personal and public taste, as one category may appeal to one culture or country more than another [26].
Memes: The term meme appeared for the first time in Richard Dawkins' book 'The Selfish Gene' (1976), where it is defined as a unit that carries cultural ideas and behaviors, analogous to the transfer of genetic information from one generation to another. Davidson (2012) defined the meme as "a piece of culture, usually a joke, that gains influence by being transmitted over the internet". This means that memes are too closely associated with a particular culture to be easily identified. Anugrahputra & Triyono (2016) defined memes as a means of transmitting knowledge and ideas, and thus as a cultural marker, referring to them as one of the patterns of the digitization of the internet's participatory culture. Meme is taken from the Greek word "Mimema", meaning imitation [11]. Memes are also defined as "units of popular culture that are circulated, imitated, and transformed by means of social media, characterized by a sense of humor" [23]. Internet memes are jokes presented through image text or plain text that spread through various internet platforms and carry religious, cultural, political, and social backgrounds. The Oxford English Dictionary defines a meme as an image macro that connects an image to a key phrase to produce a humorous effect. Dynel (2006) defines it as "an artifact such as a video film or image appearing on Internet sites and produced through imitation and recombination". Shifman (2014) pointed to the basic characteristics of memes: gradual spread from individuals to society, and reproduction through imitation and spread [11].
Procedurally, comedy is a creative intellectual work that aims to amuse and entertain by depicting reality, using various forms of sarcasm, social satire, simulation, and criticism. During the pandemic it was transmitted, published, and interacted with through social media, taking various forms such as text only, image only, or text accompanied by an image. It aims to reinforce awareness of the pandemic and its risks, to encourage adherence to precautionary procedures, or to expose a behavior in order to get rid of it and avoid it.
Research Heritage
Sociology in the 19th century focused on the basic structures and transformations associated with industrialization, modernization, urbanization, and secularism; there was no interest in topics related to daily life, such as play, leisure time, and personal life, or in other topics not directly related to development. Sociology has been interested in comedy since the seventies, focusing on its relationship to social roles, control, culture, and the nature of social relations [21]. A review of these studies shows the following:
1) Multiple studies are interested in the positive impact of comic content in marketing and advertising campaigns, in the press, and in television programs, on society and public opinion: increased attraction and influence, high rates of viewing and participation, high marketing rates and increased profits of marketing companies, and the ability to raise issues of public opinion [7,9,24-26].
2) The majority of studies have relied methodologically on the electronic questionnaire through the sample social survey in order to identify public attitudes, while a few have pursued qualitative analysis of internet memes or caricatures related either to a specific issue of public interest, such as the issue of Syrian refugees, or to a specific comedian, in order to explore his intellectual tendencies and artistic style [6,10,12].
3) Previous studies of comedy varied by scientific specialization. Some were related to psychology and stemmed from psychological theories, while others belonged to linguistics and focused on analyzing cartoon images and memes using semiotic analysis; in addition, media studies were more interested in journalism, propaganda, marketing, and satirical television programs, using the theories of media frameworks and of uses and gratifications. However, sociological studies have been absent from this field, which confirms the importance of the current study, especially with its emphasis on the role of comedy in facing and coping with crises during previous historical eras, in addition to its role in expressing the identity of society, as confirmed by [6,11,15].
4) The results of previous studies showed the role of comic content in influencing society in the fields of advertising and marketing; this depends on a number of intermediate variables affecting the attitude towards humor, such as the need for humor, the attitude towards the comic character, previous experience with the product, the degree of innovation in the advertisement, its frequency, and its duration [4-9,24].
5) Comic content also influences the press, as it enriches participation, personal interaction, discussion, and dialogue, and affects issues of public opinion and television programs [2,6,24].
6) Studies dealing with comedy and the Corona pandemic are limited: one study on Egyptian society, specialized in linguistics, was concerned with analyzing memes linguistically only; of the other two, one is from Jordan, an Arab country, and belongs to psychology, while the other is a foreign study. These three studies recommend more research on humor.
The Theoretical Framework
The study proceeds from Bergson's Theory of Comedy, the phenomenological approach to examining social phenomena, the Social Satire Theory, and Ulrich Beck's Theory of the Risk Society.

Bergson's Perspective
Henri Bergson was one of the first theorists to take an interest in comedy from a functional perspective. He described comedy as resting on a common social basis and as being used as a means of exclusion, and thus as a means of social correction and a form of social control [16]. Bergson believes that laughter has an important social function: it may be considered a social punishment for a person who behaves wrongly, a means of deterrence that may push him to quit this behavior so as not to become an object of ridicule later. In addition, laughter is one of the methods of warning; as society is always averse to rigidity of all kinds, laughter is a tool for correcting and restoring a perfect and healthy society [4,28]. Numerous studies have also indicated that the functions of comedy may be both psychological and social. Disaster and crisis jokes represent a way of dealing with unpleasant experiences and aim to move the individual away from negative feelings such as fear, sadness, or shame. The sociologists Peter Berger (1997) and Scheff (1990) confirmed the psychological effects of comedy and its utilitarian dimensions. Hence, the functions of comedy are not fixed, but depend to a large extent on the type of relationship, the social context, and the content of the joke [16].

The Phenomenological Approach
The phenomenological approach depicts comedy as a realistic way of understanding the social world. It is selective in terms of methodology, combines textual analysis with historical data, and requires a certain vision of reality. According to the sociologist Zijderveld (1982, 1983), comedy is play with meanings in various areas of life, and playing with meanings is necessary for building meaning and daily life, because it is capable of social negotiation. It also contributes to people's awareness of the reality of social life itself. Comedy is like a transparent glass that allows us to see the world and ourselves in a slightly distorted, and therefore revealing, way. Davis (1993) also believes that comedy has the ability to expose reality and may even be an assault on it.

The Social Satire Theory
Satire is a kind of satirical philosophy: it is a loud view of life, a picture of the society that the satirist mocks. No society stands without crookedness, defects, and social diseases, and comedy can be the safe way to correct this crookedness. Comedy has different connotations, including political and social ones; it is not intended only for laughter, entertainment, and pleasure, but is rather a portrayal of the political situation with a kind of irony, sarcasm, criticism, or humor. People often resort to comedy when they feel pressure upon them; fearing it, they try to relieve themselves with colors of humor, attempting as well to correct the rulers in order to correct society, treat its diseases, or alleviate societal crises [4].
The Theory of the Risk Society
Ulrich Beck, a German sociologist, indicated in his book "The Sociology of Risk" that the risk society emerged in the middle of the 20th century and is concerned with how to manage risks, confront them, or adapt to them. He emphasized that the risk society is not limited to environmental and health risks alone, but includes a whole series of interrelated changes in all areas of life, such as changing employment patterns, job insecurity, the declining influence of values and customs, the erosion of the traditional family, and others [18]. Regarding the risk society, Anthony Giddens, the contemporary English sociologist, highlighted the relationship between these risks and the repercussions of globalization: many of the changes resulting from globalization pose new forms of risk different from those of previous times. He also linked risks to the developments of industrial society. As the fields of industrial sociology expanded and new horizons such as the conquest of space were stormed, the need arose to develop methods for calculating and predicting risks. Giddens identified two basic types of risk: manufactured risks, in which man intervenes voluntarily and which result from insufficient knowledge and lack of experience, and natural hazards, which arise without human intervention and are represented by epidemics, floods, droughts, environmental events, and natural disasters. Giddens was much concerned with the man-made risks that entail great environmental and health dangers.

The Theoretical Issues Orienting the Study
1) Comedy is a social phenomenon that has apparent and latent social functions.
2) The functions of comedy are not fixed, but vary according to the social context.
3) Comedy is necessary for building meaning and daily life, because it is capable of social negotiation and contributes to people's awareness of social reality through play with meanings.
4) The Corona pandemic is one of the current risks associated with the 20th century, which must be confronted and adapted to.

The Methodological Framework
The study belongs to the descriptive-analytical type of studies, as it is interested in revealing the role of comedy and its benefits during the Corona pandemic.

Data Collection Tools
A qualitative content analysis of 13 comic "figures" that dealt with the Corona pandemic in a comedic way was conducted in the period from October 1, 2021 to December 30, 2021. The selection was purposive, taking into account the following criteria: 1) a high level of sharing and admiration, 2) the spread of the "post" on electronic pages, and 3) diversity in the forms of the posts and the sources of the published material. The content analysis relied on the semiological analysis approach of Kress and Van Leeuwen (2006) [25]. Each post was analyzed for its nature, source of inspiration, language, elements of attraction, mechanism of influence, purpose, comedy genre, pattern and size of interaction, and type of influence. In addition, an electronic questionnaire was used to identify the positive and negative effects of comedy from the point of view of the Egyptian public. The reliability was calculated using Cronbach's alpha coefficient (0.829).
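Cronbach's alpha, reported here as 0.829, follows a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores) for k items. The snippet below is a minimal illustrative sketch of that computation on an invented response matrix; it is not the authors' code, and the demo data are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                         # number of questionnaire items
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert data: 6 respondents x 4 items
demo = np.array([[4, 5, 4, 4],
                 [3, 3, 2, 3],
                 [5, 5, 5, 4],
                 [2, 2, 3, 2],
                 [4, 4, 4, 5],
                 [3, 4, 3, 3]])
print(round(cronbach_alpha(demo), 3))
```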
The Demographic Characteristics of the Study Sample
The electronic questionnaire was applied to a sample of the general public, with a sample size of 325 respondents. The results show that 40.9% of the sample were males and 59% females; most respondents had a university or postgraduate education.

Discussing the Results
The results of the study were analyzed in the light of the study objectives and the theoretical framework, as explained below.

The Connotations and Implicit Meanings of the Comedic Figures
Figure 1 (the Corona virus is not a cold) is a scene from a movie with a written text. It belongs to the style of situation comedy: the likeness of the Corona virus appears in the body of the artist Yasser Galal and surprises the artist Ghada Adel. It attracted 2300 shares and 712 comments, of which 659 (91%) related to the topic. The goal of the post is to provoke laughter and to raise awareness that the Corona virus is not a cold. The interactions with the figure varied between laughter, love, sadness, support, and astonishment; some comments were laughter expressed through "emoticons" or cartoon pictures.
Figure 2 ("Oh, Brother, if they had vaccinated you with anise, your throat would have been warmed up") is a personal creation that aims to criticize and denounce the vaccine used against the Corona virus. The figure attracted 6288 laughter reactions, 989 shares, and 1166 likes; the comments reached 254, of which 243 (95.7%) related to the topic. Some comments were phrased to confirm the perceived weak effectiveness of the vaccine, such as "Allah is sufficient for us", "By God, indeed", "You are right", and "There is no need for the second dose".
The goal of Figure 3 is to announce and raise awareness of the presence of the new "PA2" mutant, in a comic image suggesting that the new mutant is a newborn of the Corona virus. The figure got 1536 laughter reactions, 433 likes, 200 shares, and 364 comments, 99% of which related to the topic. Comments such as "Calm down, Uncle COVID", "You are married to the wife of the Devil, but the Chinese version", "May God destroy the strong and destroy you, Sheikh", "How could we spend on all these children?! God destroy your house", "What is happening? Aren't we tired enough!", "COVID, or a rabbit? Why don't you have some rest!!", "The issue has gone beyond jesting", "You made us bored", "God does not bless the father or the newborn, may God cut off your offspring", "May God stop your growth", and "Esther, Lord" all stress fear, extreme anxiety, and a state of sadness.
Figure 4 is a personal creation by the "admin". It uses phrases that indicate love, likens the Corona virus to a beloved wife, and prays to God to bless her and their children, the new mutants. The goal was to make people laugh; the laughter reactions reached 1512, with 227 likes, 113 shares, and 121 comments. The comments were largely negative, though in a comedic manner, such as "May God take you, the take of the Mighty and the Dear", "May God disperse you and displace your children", "And from love what killed", "God suffices you; where did you send my sense of smell?",
"May God cut you off," "A calamity has come to take you," "I live and see you as an endangered single, Oh COVID," "Curse of Amon be upon you," and "Even you, Oh Covido, had you been engaged and brought children to us, We got sixty defeats".Figure 5 is a personal creativity from the 'admin'.It is a written text, "Bad weather is back again, Hope me have the same".Slang was used in the figure, and the mechanism of attraction was represented in a 'trend'.The mechanism of influence was represented in criticism and sarcasm.The figure received a large number of interactions, with laughter reaching 10714, admiration reaching 1874, participation reaching 552, and comments reaching 657.The majority of the comments emphasized fear, anxiety, and intense dissatisfaction with the return of the virus such as: "Brother, I hope you will never come back", "Oh, Sheikh, God suffices me, He is the best disposer of affairs in you", "have some rest -I hope you will be destroyed", "Oh God, remove the epidemic and the scourge from us", "O uncle, from you to God ", "Away from evil " "We don't want you".Examples of comments include "It is the nineteen-number that frightens us because the number of the case's owner lies in it and so on", "The second nineteen became more frightening than the corona virus" and "Afsha made us happy by the final blow".comments, most of which were related to the topic.Some of the comments were with the aim of laughter, while others denounced it, such as "God forbids, May God protect us", "with God's help, Egypt is preserved", "the gates of hell are locked on you bro, have mercy on us", "people are fed up", "from you to God", and "there is no power but from God". Figure 8 is a scene from a movie in which Corona is likened to the artist Yasser Galal, who is in a state of confirmation that it is not an ordinary "cold".The figure is for the purpose of awareness and caution, and received 310 likes, 273 shares, and 136 comments, 96.3% of which were related.It belongs to the style of comedy of ideas, and the mechanism of influence was denial and exclamation.such as: "Hushhh shut up, do you know more than the government?it said it is cold", "I swear to our prophet whatever happens, it is cold", "don't be nervous", "be calm COVIDo".Some comments emphasized fear and anxiety, while others suggested recklessness.such as: "don't hurt us our lord with it", "Oh my fear", in addition to others that suggested recklessness, such as: "by God, there is nothing called Corona". 
Figure 9 is a scene from a movie by Adel Imam, which uses the virus mark on his head to emphasize that the virus is not a cold. It received 2,111 likes, 18,763 laughter reactions, 6,500 shares, and 1,400 comments, of which 93.5% were related. Most of the comments aimed at laughter, but some emphasized the seriousness of the virus and aimed to increase awareness, such as "The spread is terrible", "Lord, let us understand what we are in", "Praise be to God, I lost all senses and believed", and "We ask God for wellness".
Figure 10 looks like a scene from the series (I Will Not Live in My Father's Robes), accompanied by the text "Home Isolation in Egypt". It used denunciation and criticism of a famous social custom in Egyptian society, with the aim of urging behavior modification and abandonment of this habit during the Corona pandemic. The post attracted a large number of interactions: laughter reached 7102, with 829 likes, 1200 shares, and 847 comments, of which 94.5% related to the topic. Some comments confirmed the behavior - "We do, I swear to God", "It really happened", "That's us", "When mother is infected by Corona", "Our family", "I know it well, hahaha" - while others denounced it: "It is a birth, not isolation", "By God, it didn't happen".
The source of Figure 11 is real news from a television channel called "Al-Kahira" (Cairo). The text "infection at first sight" is accompanied by a "heart" symbol to express the virus and the infection it transmits, as if it were a state of love transmitted through the language of the eyes. The mechanism of influence is criticism, ridicule, and simile, the written phrase substituting for "love at first sight". The post got a large number of interactions: laughter reached 6900, with 1212 likes, 212 exclamations, 138 sadness reactions, and 1500 comments. Most of the comments (95.6%) related to the topic, for the purpose of criticism, ridicule, and provoking laughter, such as "Don't look at him, or it will infect you", "Oh, its beauty", "The look of the Omicron hit me, and the Lord of the throne saved me", and "Chapeau, Youssef Al-Sharif", with some expressing fear and concern about the spread of the virus, such as "God protect us and keep the epidemic away from us", "God protect us", and "There is no power but from God".
Figure 12 is a comedy of ideas expressing the Egyptian people's bewilderment at infection with the Corona virus despite taking two doses of the vaccine; it uses a scene from a movie and the Ministry of Health's logo as a means of attraction. The figure attracted 1500 laughter reactions, 260 likes, 14 sadness reactions, and 242 shares; the number of comments, however, was down to 73, many of them expressing sadness and anger at the vaccine, such as "Don't bother us", "If I had been vaccinated with honey, I wouldn't cough as I do now", and "Omicron is the official name, and the nickname for the Egyptians is 'cold'; and what will the one who took three doses do!!".
In Figure 13, the admin used a well-known character in a football match to illustrate the threat of the Corona virus. The figure received a large number of interactions: laughter reached 6300, with 615 likes, 962 love reactions, and 387 comments. Most of the comments went in the same direction, confirming the Egyptian people's distress with the referee and their wish for him to be infected with the virus, such as "Well done, COVID", "Hey, come on", "COVID, don't look at him a lot", "Get him and save us from his mother", "Stick to his nose", "Allah suffices me, and He is the best disposer of affairs", "He really deserves it", and "Better". The mechanism of influence was analogy, and the style of threat was clear from the image.
The Factors Affecting the Use of Comedy During the Corona Pandemic
1) The quantitative analysis showed a high degree of public eagerness (64%) to follow social media during the Corona pandemic, highlighting the importance of social media for interaction through discussion, dialogue, and the establishment of sound social relations. The motives for using social media during the pandemic were multiple, with entertainment and filling leisure time the first priority (74.2%), followed by following the news and increasing awareness of precautionary procedures.
2) Regarding the spread of comedic posts during the pandemic, the results showed a large and significant spread for 55.4% of the sample, with an arithmetic mean of 3.6 and a standard deviation of 1.2. This is in line with previous studies emphasizing that comedic content is more influential and attractive to individuals and is thus more widely seen and shared.
3) The need for humor during the pandemic came first (79.7%), followed by overcoming anxiety and tension (44.9%) and obtaining information (37.8%). This confirms the sociological theories of humor, which assume that resort to humor is motivated by a feeling of pressure: it is used to relieve pressure on the social individual and acts as a deterrent to rulers and society, an evaluating and correcting tool for society.
4) The results show that there are attraction factors associated with comedic posts that affect public attitudes towards them, such as being laughable, the form of the post, the followers' comments, the use of celebrities, and the number of comments.
5) The study showed that internet comedy during the Corona pandemic had both positive and negative effects. The most important positive effect was the reduction of stress and tension, followed by increased knowledge and awareness (44.9%) and understanding the content of the message (42.2%). The negative effects included boredom (24%), recklessness and indifference (22.2%), lack of credibility (16%), and fear and anxiety (1.2%). This is consistent with several previous studies confirming that comedy is more influential and attractive to individuals, and therefore more watched and shared.
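The survey statistics quoted above (e.g., a mean of 3.6 with a standard deviation of 1.2 on the spread-of-comedy item) are ordinary descriptive statistics over Likert-scale responses. The sketch below illustrates the computation on invented data; it does not reproduce the study's dataset.

```python
import statistics

# Hypothetical 5-point Likert responses (1 = very low spread, 5 = very high spread)
responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 1, 4, 4]

mean = statistics.mean(responses)
sd = statistics.stdev(responses)  # sample standard deviation
# Share of respondents answering 4 or 5, i.e. reporting a large spread
high_share = sum(r >= 4 for r in responses) / len(responses)

print(f"mean = {mean:.1f}, sd = {sd:.1f}, high-spread share = {high_share:.1%}")
```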
Conclusions
1) The results of the content analysis, carried out through semiological analysis, showed the importance of comedy as a deterrent and evaluative tool for some wrong behaviors in society, with the aim of changing and getting rid of them, as Bergson assumes; it is a tool for portraying reality through sarcasm, ridicule, and simulation, or for highlighting flaws and problems creatively. This was evident from the spread of comedy posts expressing the extent of fear of the Corona virus and its new mutants, in addition to criticism of the Egyptian people's disregard of and carelessness towards the virus as an ordinary cold, for the purpose of raising awareness and caution.
2) Comedy is a social phenomenon related to the social context in which it exists; this is shown by the interactions and comments on comedy posts, which came not only for the purpose of laughing but also to express and preserve social reality. As functional theory assumes, comedy has positive effects: entertainment, amusement, avoiding anxiety and tension, and accepting and adapting to social reality, particularly during the Corona pandemic (as a model), in addition to the alerting purpose of staying away from wrong social behaviors and habits in an attempt to change them, and of increasing awareness and caution about the Corona pandemic and its repercussions.

Suggested Topics for Study
The subject of comedy requires further future studies in sociology and anthropology, in order to identify:
1) The role of comedy in exploring the identity of Egyptian society and its culture.
2) The ethical, organizational, and cultural determinants of internet comedy.
3) The determinants of the responsible role of producers of comedy content on social media.
4) Satirical television programs and their role in facing societal crises.
5) The Egyptian joke and its relationship to folklore and community identity.
Figure captions: Figure 1, the Corona virus is not a cold; Figure 2, denouncing the Corona virus vaccine; Figure 3, announcing the presence of the new "PA2" mutant; Figure 4, expressing the love of Corona and its mutants; Figure 5, criticism and sarcasm of the Corona virus; Figure 6, more attention to the Corona virus; Figure 7, announcing the closure of the Netherlands due to the spread of the Corona virus; Figure 8, awareness and caution that Corona is not a cold; Figure 9, emphasis that the virus is not a cold; Figure 10, home isolation in Egypt; Figure 11, warning of the spread of infection; Figure 12, the vaccine from the Egyptians' point of view; Figure 13, the threat of the Corona virus.
Atraumatic displaced bilateral femoral neck fracture in a patient with hypophosphatemic rickets in the postpartum period: a missed diagnosis
Highlights
• Very few cases of simultaneous bilateral femoral neck fracture are reported in the literature.
• Simultaneous bilateral femoral neck fractures are seen more frequently in elderly patients because of reduced bone quality and osteoporosis.
• For patients with metabolic bone diseases and/or patients in pregnancy and the postpartum period, preventive measures should be increased to reduce the risk of pathologic fracture.
• On admission to the hospital, we must be more careful to detect fractures in these patients.

Introduction
Simultaneous bilateral femoral neck fracture is an uncommon condition; femoral neck fractures are seen more frequently in the elderly because of reduced bone quality and developing osteoporosis. Very few cases are reported in the literature, and most of these have underlying bone pathologies such as generalized epilepsy [1], osteomalacia [2], or chronic renal failure [3]. In patients without any additional pathology, electric shock, electroconvulsive therapy, or high-energy trauma can lead to femoral neck fractures [4,5]. For the treatment of bilateral femoral neck fractures, open/closed reduction with internal fixation or hip arthroplasty is applied [2,6]. In this case report, an atraumatic bilateral femoral neck fracture in a woman with hypophosphatemic rickets in the postpartum period is presented. The operation was performed at the Department of Orthopaedics and Traumatology Clinic of the University Hospital. Because of the delayed diagnosis, we considered total hip arthroplasty instead of fixation.

Case report
A 26-year-old woman who had given birth 40 days earlier was referred to the University Hospital because of pain in both hips and difficulty walking. In her history, about 20 days after the birth she had been brought to the emergency department with complaints of sudden pain in both hips: while sitting and breastfeeding she suddenly felt extreme pain in both hips and could not move her legs. At the emergency department, because there was no history of trauma, the patient did not undergo X-ray imaging. The patient's blood calcium was measured as 4.9 mg/dL (reference 8.6-10.2), and she was admitted to the endocrinology service for treatment of hypocalcemia. Blood values at the endocrinology service were as follows: parathormone (PTH) 178.3 pg/mL (15-65), albumin 4.08 g/dL (3.5-5.2), inorganic phosphate (P) 2.9 mg/dL (2.5-4.5), alkaline phosphatase (ALP) 258 U/L (35-105), and 1-25OH D3 10 (20-50). Bone densitometry (DEXA) identified osteoporosis in the lumbar spine and osteopenia in the femur. No pathology was observed on cranial magnetic resonance imaging (CrMR) or electroencephalography (EEG). After 17 days of follow-up her complaints did not decline, and an orthopaedic consultation was requested. From her past medical history, she had been followed for hypophosphatemic rickets since she was one year old, using 0.25 mcg calcitriol with calcium phosphate, and had no other diseases (epilepsy, etc.). The patient had never previously broken any bone, and there was no similar family history. On clinical examination, the patient's general condition was good; her lower extremities were externally rotated with a deformed appearance of both thighs, and there was severe pain when she stood up. The patient was unable to walk because of the pain.
Neurovascular examination of both lower extremities was normal, and there was no difference in limb diameter or length. Subcapital bilateral displaced femoral neck fractures were detected on radiography (Fig. 1). Because the diagnosis was delayed about 3 weeks after the fracture, we did not consider primary fixation. Because both fracture ends were resorbed and the patient had proximal femoral deformity due to the hypophosphatemic rickets, bilateral total hip arthroplasty following corrective osteotomy was planned. Because of the high risk of morbidity and mortality, the surgical intervention was performed in two sessions. First, the patient underwent cementless total hip replacement of her left hip, and two weeks later the same procedure was performed on the right hip. The operating time for each surgery was nearly 2 h. The patient was operated on in the lateral decubitus position, and a corrective closing-wedge osteotomy was applied to the deformed femur (Fig. 2). Intraoperatively, when we saw that the medullary cavity was narrow and sclerotic, we used reamers to widen the medulla. Also, to ensure the stability of the distal osteotomy, a revision femoral stem was used and supported with a femoral shaft graft (Fig. 3). There were no complications during either operation or in the early postoperative period, such as wound complications, infection, dislocations, or revision surgery, but after 12 months of follow-up, delayed union was observed at the right femoral osteotomy site. At the latest follow-up, the patient had no pain and carried out her daily activities with satisfaction. Informed consent was taken from the patient for publication.

Discussion
Femoral neck fractures are seen more frequently in the elderly because of reduced bone quality and developing osteoporosis. In the literature, generalized epilepsy [1], osteomalacia [2,7], hypovitaminosis D [8], and chronic renal failure [3,9] are cited as causes of bilateral femoral neck fractures. In patients without an additional pathology, electric shock, electroconvulsive therapy [4], and high-energy trauma [5] can lead to femoral neck fractures. In our patient there was also an underlying pathology: she had been followed for hypophosphatemic rickets since she was one year old. Without any trauma, while she was sitting, she suddenly felt pain and had syncope. The prevalence of rickets is 1:20,000. The underlying etiology of hypophosphatemic rickets involves down-regulation of SLC34A1, CYP27B1, and FGF23 gene expression: renal tubular calcitriol synthesis is suppressed, and intestinal absorption of calcium and inorganic phosphate is reduced. In general, calcium and PTH levels are normal [10]. In our case, the calcium level was low and the PTH level high, while the blood phosphate level was within normal limits; we thought the normal phosphate level was related to the effect of the drugs used by the patient. During pregnancy and the lactation period, the baby's vitamin D and calcium needs are supplied by the mother, which can cause mineral loss from the mother's bones. There are similar studies in the literature associating pregnancy and lactation with atraumatic bilateral femoral neck fracture [2,11,12], but those patients had no additional underlying disease, whereas our patient had hypophosphatemic rickets. The presence of hypocalcemia and the timing of the fracture (in the postpartum period) suggest that rickets-induced hypocalcemia and lactation were the probable underlying etiology of the fracture.
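One step that supports the interpretation of true hypocalcemia is checking that the low calcium is not an albumin artifact. The sketch below applies the widely used rule-of-thumb albumin correction (corrected Ca = measured Ca + 0.8 × (4.0 − albumin)); this calculation is ours, for illustration, and is not one reported by the authors.

```python
def corrected_calcium(measured_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected calcium (mg/dL), common clinical rule of thumb:
    corrected Ca = measured Ca + 0.8 * (4.0 - albumin)."""
    return measured_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

# The patient's reported values: Ca 4.9 mg/dL (ref 8.6-10.2), albumin 4.08 g/dL
ca = corrected_calcium(4.9, 4.08)
print(f"corrected calcium = {ca:.2f} mg/dL")  # ~4.84, far below the 8.6 lower limit
```

With a near-normal albumin, the correction barely moves the value, consistent with genuine severe hypocalcemia rather than a binding-protein artifact.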
In the treatment of bilateral femoral neck fractures, open/closed reduction with internal fixation or hip arthroplasty is applied [2,6]. In our case, because the acute period had passed, the fractured ends were resorbed and the patient had proximal femoral deformity, we applied total hip arthroplasty following corrective osteotomy. Because of the high risk of surgical mortality and morbidity, surgery was done in two sessions.

Conclusion
Our patient had a metabolic bone disease and was in the postpartum period, so we thought that the cause of the fracture was mineral loss. Because of the delayed diagnosis we had to perform THA instead of internal fixation. For patients with metabolic bone disease and/or patients in pregnancy and the postpartum period, preventive measures should be increased to reduce the risk of pathologic fracture. On admission to hospital, clinicians must be more careful about detecting fractures in these patients. This study has been reported in line with the SCARE criteria [13]. *This study was presented as a poster at the Turkish National Orthopaedics and Traumatology Congress in 2015.

Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflicts of interest
There is no conflict of interest.

Ethical approval
No.

Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images.

Author contribution
Erdal Uzun: all steps. Ali Eray Günay: data analysis or interpretation, writing the paper.

Registration of research studies
researchregistry1464.

Guarantor
All the authors of the study.
2018-04-03T02:32:36.234Z
2016-10-17T00:00:00.000
{ "year": 2016, "sha1": "f7661a8c60df258c6f570078f5fd238d344eec69", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2016.10.028", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7661a8c60df258c6f570078f5fd238d344eec69", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236493164
pes2o/s2orc
v3-fos-license
Stratified Radiative Transfer for Multidimensional Fluids

New mathematical and numerical results are given for the coupling of the temperature equation of a fluid with radiative transfer: existence and uniqueness, and a convergent monotone numerical scheme. The technique is shown to be feasible for studying the temperature of Lake Leman heated by the sun, and for the Earth's atmosphere to study the effects of greenhouse gases.

Introduction
Fifty years ago, the second author was admitted to the prestigious Dept of Applied Math. & Theoretical Physics at Cambridge, UK, headed then by Sir James Lighthill. Two IBM card punchers connected to the computing center (also one of the best in the world in those days) had been relegated to the basement; to use them was frowned upon as a threat to the speciality of the lab: clever analytic approximations and other multiple-scale expansions of special cases of the Navier-Stokes equations. It took a decade to prove that computer simulations for fluids were not only possible, but also useful to industry. A colleague from the wind tunnels in Modane told us then that an airplane could never be designed and validated by a numerical simulation. In keeping with this wrong prediction, however, many ad hoc turbulence models had to be devised: it was only by a combined theoretical, experimental and computational (TEC) effort that the world's first complete airplane could be simulated at Dassault Aviation in 1979, and airplanes have since been flown safely without the difficult certification stamps of wind tunnels. It was also a success of the top-down approach to CFD. The "JLL" (Lions) school of applied mathematics had the luck of being taken seriously by a few French high-tech industry labs. This was not the case in the USA, where the head of a national research funding agency had ruled out variational methods (leading to finite volumes and finite elements for fluids) as "incomprehensible by aeronautical engineers", thereafter forcing all numerical schemes to be in the class of body-fitted structured meshes, an impossible task for airplanes.

The top-down approach to a problem could be defined by saying that the mathematical model is defined first, then shown to be well posed, and then approximated numerically by convergent algorithms. The bottom-up approach is when the problem is made of several modules, studied independently and patched together at the algorithmic level. The downside of the top-down approach, from functional analysis to numerical methods, is that it may discard important faster algorithms for which convergence is not known. This was the case for compressible flows in the nineties, for which the bottom-up approach pragmatically patched different turbulence and/or numerical models in different zones, with the drawback that it was difficult to assert that the computed solution was one of the original problem. In the numerical simulations which fill the supercomputing centers today, CFD is often only one part of a multi-physics model. Such are the combustion and climate computations. Both need, at least, radiative transfer and chemistry modules. While the top-down approach is successful in computational chemistry [CDK+03], mathematical analysis of climate models is still in progress.
The three-dimensional Primitive Equations with hydrostatic and geostrophic approximations have been shown to be well posed (see [LTW94], [AG01], [CCT20] and the bibliography therein), and so are the multi-layered Shallow Water equations for the oceans [CLGP13]; but even if the coupled ocean-atmosphere system is mathematically well posed, it is very far from the complete model used in climatology. No doubt when a new numerical climate project is proposed, such as [DDT+15], a top-down approach is attempted [EDK19], but it is soon overwhelmed by the complexity of the task when more modules are added.

Radiative transfer, one such module that needs to be added, is essential in astrophysics [Cha50] to derive the composition of stars, in nuclear engineering to predict plasma [DL00], in combustion for engines [ACP+09], and in many other fields like solar panels [ZPS+21] and even T-shirts [ZPS+21]! In the eighties, at CEA, R. Dautray [DL00] headed a team of applied mathematicians who used the top-down approach in nuclear engineering. The first author was in close contact with them. But turning this expertise on radiative transfer to climate modeling is not straightforward. Books on radiative transfer for the atmosphere are numerous, such as [GY61], [Boh06] and [ZT03]; but to speed up codes, the documentation manuals of climate models reveal that many approximations are made. For instance LMDZ refers to a model proposed by Fouquart [Fou88] [Mor91], which suggests that empirical formulas are used in addition to simplified numerical schemes to speed up the computations. The formulas for the absorption, scattering and albedo coefficients are complex and adapted to reproduce the experimental data. In other words, the gap is wide between practice and fundamentals as seen by Fowler [Fow11] and Chandrasekhar [Cha50], for instance. Coupling radiative transfer to the Navier-Stokes system using the top-down approach is the topic of this article. The problem is shown to be well posed in the context of a stratified atmosphere, and a numerical method, derived from the mathematical proof of well-posedness, is proposed. It is accurate in the sense that there are no singular functions or integrals to approximate. It is fast compared to the fluid solver to which it is coupled, but of course not as fast as empirical formulas.

Radiative transfer and the temperature equation
Let us begin with a simple problem: the effect of sunlight on a lake Ω. Let I_ν(x, ω, t) be the light intensity of frequency ν at x ∈ Ω, in the direction ω ∈ S², the unit sphere, at time t ∈ (0, T). Let T, ρ, u be the temperature, density and velocity in the lake. Energy, momentum and mass conservation (see [Pom73], [Fow11]) yield the fundamental equations (1), (2), (3):
$$\frac{1}{c}\,\partial_t I_\nu + \omega\cdot\nabla I_\nu = \kappa_\nu a_\nu\Big(\frac{1}{4\pi}\int_{S^2} p(\omega,\omega')\,I_\nu(x,\omega',t)\,d\omega' - I_\nu\Big) + \kappa_\nu(1-a_\nu)\big(B_\nu(T)-I_\nu\big),\tag{1}$$
$$\rho c_P\big(\partial_t T + u\cdot\nabla T\big) - \kappa_T\Delta T = \int_0^\infty \kappa_\nu(1-a_\nu)\Big(\int_{S^2} I_\nu\,d\omega - 4\pi B_\nu(T)\Big)\,d\nu,\tag{2}$$
$$\rho\big(\partial_t u + u\cdot\nabla u\big) + \nabla p - \mu_F\Delta u = -\rho g e_z,\qquad \partial_t\rho + \nabla\cdot(\rho u)=0,\tag{3}$$
where ∇, Δ are with respect to x,
$$B_\nu(T) = \frac{2h\nu^3}{c^2}\Big[e^{h\nu/(kT)}-1\Big]^{-1}$$
is the Planck function, h is the Planck constant, c is the speed of light in the medium and k is the Boltzmann constant. The absorption coefficient κ_ν := ρκ̄_ν is the percentage of light absorbed per unit length, a_ν ∈ (0, 1) is the scattering albedo, and (1/4π) p(ω, ω′) is the probability that a ray in the direction ω′ scatters into the direction ω. The constants κ_T and μ_F are the thermal and molecular diffusions; g is the gravity. Existence of a solution for (3) has been established by P.-L. Lions [Lio96]. As c >> 1, in a regime where (1/c) ∂_t I_ν << 1, integrating (1) in ω leads to an alternative form for (2):
$$\rho c_P\big(\partial_t T + u\cdot\nabla T\big) - \kappa_T\Delta T = -\int_0^\infty \nabla\cdot\Big(\int_{S^2}\omega\, I_\nu\,d\omega\Big)\,d\nu .$$
As usual, boundary conditions must be given. Dirichlet or Neumann conditions may be prescribed for u and T on ∂Ω.
For the light intensity equation, I_ν should be given at all times on {(x, ω) ∈ ∂Ω × S² : n(x)·ω < 0}, where n is the outer unit normal of ∂Ω. Finally, ρ should be specified on ∂Ω when u·n < 0.

Grey Medium
When κ_ν and a_ν are independent of ν, a so-called grey medium (cf. [Fow11], p. 70), the problem can be written in terms of I = ∫₀^∞ I_ν dν, where B₀ comes from the Boltzmann-Stefan law:
$$\int_0^\infty B_\nu(T)\,d\nu = B_0\,T^4,\qquad B_0 = \frac{\sigma}{\pi},$$
with σ the Stefan-Boltzmann constant.

Vertically stratified cases: spatial invariance
Let (x, y, z) be a Cartesian frame with z the altitude/depth. The sun being very far, the light source on the lake is independent of x and y. Then, assuming that T varies slowly with x and y, in the sense of hypothesis (H), (1), (2) become the stratified system (9), (10) of [ZT03], where z_M(x, y) and z_m(x, y) are the max and min of z such that (x, y, z) ∈ Ω, μ is the cosine of the angle of ω to the vertical axis, Q⁻(μ) = −μQ cos θ is the sunlight intensity when θ is the latitude, and T_S is the temperature of the sun; we have assumed that the sun is a black body and that no light comes back from the bottom of the lake. Here u is given, solenoidal and regular enough for (10) to make sense.
• Hypothesis (H) will hold if T varies slowly with x, y. It will be so if u is almost horizontal and the vertical cross-sections of Ω depend slowly on x, y. Turbulent flows do not satisfy this criterion.
• All terms of (10) must be kept, except maybe κ_T ∂_xx T and κ_T ∂_yy T, but neglecting them renders the boundary conditions mathematically difficult.
• We shall ignore the mathematical difficulty induced by the boundary condition ∂_n T|_∂Ω = 0 when the intersection of the side of the lake with the water surface is not at a right angle.

Elimination of I when the scattering is isotropic
Denote the exponential integral and the mean light intensity respectively by
$$E_p(X) := \int_0^1 e^{-X/\mu}\,\mu^{p-2}\,d\mu,\qquad J(z) := \frac{1}{2}\int_{-1}^{1} I(z,\mu)\,d\mu .$$
Then the method of characteristics applied to (11) gives (12), an integral relation linking J and T. Note that, to improve readability, we write indifferently T(z) or T_z. Assume now that a = 0 and that the sunlight source scales like T_S⁴; then (12) reduces to (13), a nonlinear problem built on the operator T ↦ −κ_T ΔT + T₊⁴, where T₊ = max(T, 0). Note that T ↦ −κ_T ΔT + T₊⁴ is a monotone operator for which Newton or fixed-point iterations can be applied to solve the PDE. To prove monotone convergence, the following result is needed.

Theorem 1. The sequence {Tⁿ}_{n≥0} generated by Algorithm (14) converges to a solution of (13), and the convergence is monotone: Tⁿ⁺¹(x) > Tⁿ(x) for all x and all n. The proof follows from (14) and the maximum principle.

Remark 1. Generalization of the above result to (P₃) is straightforward because the maximum principle also holds for the temperature equation with convection. Consequently it seems doable to extend the above to the system (2), (3). When the density variations with the temperature are small, the Boussinesq approximation can be used in conjunction with (13), yielding problem (P₄), with u, T given at t = 0 and with u, or ∂_n u, or pn + ν_F ∂_n u, together with ∂_n T = 0 or T, given on ∂Ω. The kinematic viscosity ν_F = μ_F/ρ is taken constant; b is a measure of ∂_T ρ and T₀ is the average temperature. See [Att09], for instance, for the mathematical analysis of the Boussinesq-Stefan problem (similar to (P₄) without the T⁴ terms).

A one dimensional test
If Ω = (0, 10), we need to solve with Algorithm (14) the integro-differential equation in z:
$$-66\,T'' + T^4 = 12.5\,E_3(0.1\,|10-z|) + 0.05 .$$
The results are shown in Figure 1. The convergence is monotone as expected, even though Theorem 1 has not been proved when a Dirichlet condition is applied to T on part of ∂Ω. Notice that in the absence of sunlight the temperature would be T(0) everywhere.
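The fixed-point strategy is easy to prototype. The following is a minimal finite-difference sketch for a problem of the same shape as the 1D test above. It is not the authors' exact Algorithm (14) or their code: the damping parameter, grid, boundary values and the purely local right-hand side (the integral term of (13) is omitted) are illustrative assumptions.

```python
# Minimal sketch (assumptions flagged above): damped fixed-point iteration
# for  -kT * T'' + T^4 = f(z)  on (0, 10), T(0) given, T'(10) = 0.
import numpy as np
from scipy.special import expn   # E_n, the exponential integral

kT, N = 66.0, 201
z = np.linspace(0.0, 10.0, N)
h = z[1] - z[0]
f = 12.5 * expn(3, 0.1 * np.abs(10.0 - z)) + 0.05   # sunlight-type source

lam = 20.0   # damping; any lam >= 4*max(T)^3 keeps the update order-preserving
M = np.zeros((N, N))
for i in range(1, N - 1):        # -kT T'' by centered differences, plus lam*I
    M[i, i - 1] = M[i, i + 1] = -kT / h**2
    M[i, i] = 2.0 * kT / h**2 + lam
M[0, 0] = 1.0                                   # Dirichlet: T(0) = T_bottom
M[-1, -1], M[-1, -2] = 1.0 / h, -1.0 / h        # Neumann:   T'(10) = 0

T = np.zeros(N)                  # T^0 = 0 starts the monotone recurrence
for n in range(500):
    rhs = f - T**4 + lam * T     # solve  -kT T'' + lam T = f - T_old^4 + lam T_old
    rhs[0], rhs[-1] = 0.5, 0.0   # boundary data (illustrative values)
    T_new = np.linalg.solve(M, rhs)
    if np.max(np.abs(T_new - T)) < 1e-12:
        break
    T = T_new
```

Starting from T⁰ = 0 with a nonnegative source, the iterates increase towards the solution, mirroring the monotone convergence of Theorem 1.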
A two dimensional test for a lake
Now Ω is half of the vertical cross-section of a symmetric lake. The lower right quarter of the unit circle is stretched by x, z → 30x, 10z. The bottom boundary has equation z = z_m(x). The same problem is solved in 2D. The same 3 × 10 double iteration loop is used; the results are shown in Figure 2.

A 3D case with convection in Lake Leman
Lake Leman is discretized into 33810 tetrahedra. The surface has 1287 triangles. The finite element method of degree 1 is used. This is too coarse for a Navier-Stokes simulation but appropriate for a potential flow. Pressure is imposed on the left and right tips to simulate the discharge of the Rhône. The pressure p solves −Δp = 0 with ∂_n p = 0 on the remaining boundaries; the velocity is u = ∇p. The top plot in Figure 3 shows p and u. The full temperature equation of Problem (P₄) is solved with the same physical constants as above. The temperature is set at T_e initially and on the bottom and side boundaries of the lake. The time step is 0.1; the method is fully implicit for the temperature. At each time step 3 iterations are needed to handle the T⁴ term. Figure 3 shows the temperature after 15 time steps; it appears to have reached a steady state. The top right view of Figure 3 shows a region in red where the water at the surface is the hottest. This computation is merely a feasibility study to prove that the implementation of the RT module in a standard CFD code is easy and fast. Computing time on an Intel Core i9 is less than a minute.

Comments on the programming tools
In fifty years the research problems have become increasingly complex, and without the joint development of computers and programming tools it would not be possible for a single individual to contribute or even test their ideas.

The general case: κ_ν, a_ν non-constant
Photons interact with the atomic structure of the medium, which implies that κ_ν depends strongly on ν but also on the temperature and pressure. For the Earth's atmosphere, pressure and temperature decay approximately exponentially with altitude. Assume that the variations with altitude are known. Consider two types of scattering kernels: a Rayleigh scattering kernel
$$p_r(\mu,\mu') = \frac{3}{8}\big[3-\mu^2+(3\mu^2-1)\,\mu'^2\big]$$
and an isotropic scattering kernel p = 1. Let a_ν^r and a_ν^i := a_ν − a_ν^r be the scattering coefficients for both. The problem is then (20), posed on τ ∈ (0, Z). The boundary condition at τ = 0 is a simplified Lambert condition, which says that a portion α of the incoming light is reflected back (Earth albedo) and adds to the prescribed upgoing light Q_ν⁺. Sunlight is prescribed at high altitude, Z, to be Q⁻(μ). An integral formulation can be derived from (20) as in [Cha50], Section 11.2; the system is coupled to the temperature equation (27).

Iterative method for the general case
In the spirit of (14), consider the source iteration (28). Note that for isotropic scattering the kernel K_ν is not needed. Then the following convergence result holds when thermal diffusion is neglected.

Theorem 2. Assume κ_ν strictly positive and uniformly bounded, and 0 ≤ a_ν < 1 for all ν > 0. Let Q_ν^± ≥ 0 satisfy a suitable uniform bound, for some T_M and some Q. Then Algorithm 4.2 defines a sequence of radiative intensities I_ν^n and temperatures Tⁿ converging pointwise to I_ν and T respectively, which is a solution of (20), (27), and the convergence is uniformly increasing.

Remarks. 1. Starting with T⁰ = 0 is a sure way to initialise the recurrence and have T¹ > T⁰. 2. Most likely, monotone convergence holds also in the general case (α > 0, u, κ_T).
3. In the special case a_ν^r = 0 and Q_ν^±(μ) = |μ| Q_ν^±, the problem reduces to (30). The iterative process is then to start with T⁰ = 0 and compute Tⁿ⁺¹ from Tⁿ by (31). 4. The map T ↦ ∫₀^∞ κ_ν B_ν(T) dν is continuous and strictly increasing, hence invertible; thus (31) defines T_τⁿ⁺¹ uniquely. 5. One may recover the light intensity by explicit quadratures, but numerically these are singular integrals, while (30), (31) are not: indeed e^{−x/μ}/μ tends to infinity when x and μ tend to 0. 6. Theorem 2 extends a result given in [Pir21], which had unnecessary restrictions on κ_ν.

Proof. The complete proof will appear in [BP21]. Here, for simplicity, we consider a special case holding for all τ, ν, so that, for all τ: as T ↦ B_ν(T) is continuous and increasing, it implies that T_τⁿ⁺¹ > T_τⁿ for all τ. Hence, for some T*(τ), possibly +∞, Tⁿ → T*. By continuity B_ν(T_tⁿ) → B_ν(T*_t); but it has been shown above that B_ν(T_tⁿ) = B_νⁿ → B_ν*, so B_ν(T*_t) is finite and so is T*_t. Recall that a bounded increasing sequence converges, so B_ν(T_tⁿ) → B_ν(T*_t) for all t and ν, and the convergence of E₁(κ_ν|τ − t|) B_ν(T_tⁿ) → E₁(κ_ν|τ − t|) B_ν(T*_t) being monotone, the integral converges to the integral of the limit (Beppo Levi's lemma). This shows that T*_τ is the solution of the problem.

Uniqueness, Maximum Principle
This section follows computations in [Gol87] (in the case Z = +∞ and with a_ν = 0) and in [Mer87]. Under the same assumptions, the solutions (I_ν, T) and (I′_ν, T′) of (37) with data Q_ν^±(μ) and R_ν^±(μ) respectively satisfy a comparison estimate. One has also the following form of a Maximum Principle.

An application to the temperature in the Earth atmosphere
A numerical test is reported in Figures 4 and 5. It is an attempt at simulating the effect of an increase of CO₂ in the atmosphere. Our purpose is only to assess that the numerical method can detect such a small change of κ_ν. Equation (31) is solved by a few steps of dichotomy followed by a few Newton steps. When κ_ν is larger than 4 some instabilities occur, probably in the exponential integrals. This point will be investigated in the future. The physical and numerical parameters are:
• Atmosphere thickness: 12 km
• Scaled sunlight power hitting the top of the atmosphere: 3.042 × 10⁻⁵
• Percentage of sunlight reaching the ground unaffected: 0.99
• Percentage re-emitted (Earth albedo): 10%
• Percentage of sunlight being a source at high altitude (Q⁻): 0.1%
• Cloud (isotropic) scattering: 20%; cloud position: between 6 and 9 km
• Rayleigh scattering: 20% above 9 km
• Average absorption coefficient κ₀ = 1.225
• Density drop versus altitude: ρ₀ exp(−z)
• Discretization: 60 altitude stations, 300 frequencies (unevenly distributed)
• Number of iterations: 22; computing time: 30 s per case.
Three cases were computed. One with κ₀ = 1.225, which corresponds to a grey atmosphere. One with the κ_ν shown on the right in pink, which corresponds to Figure 4. The third with the κ_ν shown in green on the right, where the transparent window around frequency 1 has been blocked. On the right the mean light intensity at altitude Z is shown (mostly outgoing waves). Filling the transparent window results in an elevation of temperature.
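Remark 4 is the computational heart of the scheme: each temperature update amounts to inverting the strictly increasing map T ↦ ∫₀^∞ κ_ν B_ν(T) dν. Here is a minimal sketch of the dichotomy-plus-Newton solve used for (31), in scaled units and with an invented absorption profile; the actual computation uses 300 unevenly distributed frequencies and physical κ_ν data, neither of which is reproduced here.

```python
# Sketch of the dichotomy + Newton inversion used for (31); scaled units,
# illustrative kappa(nu). F is strictly increasing, so the root is unique.
import numpy as np

nu = np.linspace(1e-3, 20.0, 400)              # scaled frequency grid
kappa = 1.0 + 0.5 * np.exp(-(nu - 2.0) ** 2)   # invented absorption profile

def F(T):
    # integral of kappa_nu * B_nu(T) dnu with a scaled Planck kernel
    x = np.clip(nu / T, None, 500.0)           # avoid overflow in exp
    return np.trapz(kappa * nu**3 / np.expm1(x), nu)

def invert(rhs, lo=1e-6, hi=10.0):
    for _ in range(25):                        # dichotomy: shrink the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < rhs else (lo, mid)
    T = 0.5 * (lo + hi)
    for _ in range(5):                         # Newton polish (numerical slope)
        dT = 1e-6 * T
        T -= (F(T) - rhs) * 2 * dT / (F(T + dT) - F(T - dT))
    return T

print(invert(F(1.3)))                          # recovers T = 1.3 up to round-off
```

The bisection steps guarantee a bracket even for a crude initial guess; the Newton steps then converge quadratically, which is why "a few" of each suffice.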
For this stratified setting, monotone, fast and accurate numerical schemes could be found. Hence, adding RT to a Navier-Stokes solver is easy and fast when radiation comes from one direction only. As a final remark, note that it seems doable to extend the method to the general case where κ_ν depends on τ and T. Indeed, if the dependency τ → κ_ν(τ) is guessed only approximately, then knowing κ_ν^M > κ_ν(τ), independent of τ, is enough to apply the method with κ^M on the left of the equation for I_ν, with a correction on the right equal to (κ_ν^M − κ_ν(τ)) I_ν; this correction seems compatible with the monotone convergence of the temperature. Then the method could also be extended to the case of κ a function of T by an additional algorithmic m-loop using κ(Tᵐ) instead of κ(T) and then updating Tᵐ to the T just computed. In this article the numerical computations are only given to show the potential of the method. Real-life applications, coupling RT to the full Navier-Stokes equations, require supercomputing power and will be done later.

[GHM20] M. Ghattassi, X. Huo, and N. Masmoudi. On the diffusive limits of radiative heat transfer system I: well-prepared initial and boundary conditions, 2020.
[Gol87] F. Golse. The Milne problem for the radiative transfer equations (with frequency dependence).

With the notation above, the last equality in (34) implies that, assuming 0 < κ_ν ≤ κ^M while 0 ≤ a_ν < 1 for all ν > 0, the r.h.s. of (35) defines T as a functional of J, henceforth denoted T[J]. Thus (34) can be recast as a fixed-point problem. In order to solve (34) numerically, one uses the method of iteration on the sources. Starting from some appropriate (I_ν⁰, T⁰), one constructs a sequence (I_νⁿ, Tⁿ). Applying the method of characteristics, and since B_ν ≥ 0, this construction shows, by a straightforward induction argument, that the sequence I_νⁿ is nondecreasing. Since B_ν is nondecreasing for each ν > 0, formula (35) shows that the temperatures Tⁿ are nondecreasing as well. Since the term (a_ν J_νⁿ(t) + (1 − a_ν) B_ν(Tⁿ(t))) in both integrals on the r.h.s. is independent of μ, one changes variables in the inner integral and estimates the resulting quantity. Observe that the first inequality is the elementary rearrangement inequality (Theorem 3.4 in [Lie01]); the last one is based on the assumption 0 < κ_ν ≤ κ^M. Multiplying both sides of this inequality by κ_ν and integrating in ν, one finds a bound; at this point, we recall that Tⁿ = T[J_νⁿ]. The expression of the source term can be slightly reduced by integrating out the τ variable. Initializing the sequence I_νⁿ with I_ν⁰ = 0, and since
$$\int_0^\infty\!\!\int_1^\infty \frac{e^{-\theta y}}{y}\,dy\,d\theta = \int_1^\infty \frac{dy}{y^2} = 1,$$
the series above converges and one has a uniform bound. This bound and the Monotone Convergence Theorem imply that the sequence I_νⁿ⁺¹(τ, μ) converges for a.e. (τ, μ, ν) ∈ (0, Z) × (−1, 1) × (0, +∞) to a limit denoted I_ν(τ, μ) as n → ∞. We then conclude from (35) and the Monotone Convergence Theorem that Tⁿ⁺¹(τ) converges for a.e. τ ∈ (0, Z) to a limit denoted T(τ) as n → ∞. Then we can pass to the limit in (38) as n → ∞ by monotone convergence, to find that the limit equation holds for a.e. (τ, μ, ν) ∈ (0, Z) × (−1, 1) × (0, +∞). One recognizes in this equality the integral formulation of (34) or (36). Summarizing, we have proved the following result.

Uniqueness, Maximum Principle
This section follows computations in [Gol87] (in the case Z = +∞ and with a_ν = 0) and in [Mer87]. The rather subtle monotonicity structure of the radiative transfer equations is a striking result, discovered by Mercier in [Mer87]. In view of the complexity of the computations in [Mer87], it may be useful to keep in mind the following simple remarks, which should be viewed as a motivation.
Consider the steady radiative transfer equation (36) without scattering (a_ν = 0) in the whole space, with a source term 0 ≤ S_ν ∈ L¹(R × (−1, 1) × (0, ∞)) and λ > 0. By definition of T[I], one easily checks an integral identity, and the radiative intensity is given in terms of the temperature T[I] and the source S_ν by an explicit formula. Now, if one replaces the source of radiation S_ν in the right-hand side of this equation with a larger source S′_ν ≥ S_ν, it is natural to expect that the resulting radiation intensity I′_ν will be such that the associated temperature satisfies T[I′] ≥ T[I]. Observe now that the function T → B_ν(T) is increasing on (0, +∞) for each ν > 0; the explicit formula for I′_ν in terms of S′_ν and T[I′] then shows that I′_ν(τ, μ) ≥ I_ν(τ, μ). Of course, this argument is by no means rigorous, since it rests on the assumption that S′_ν ≥ S_ν implies T[I′] ≥ T[I], which, although physically plausible, has not been proved yet. (Notice however that it is consistent with (35), since the Planck function B_ν is increasing for each ν > 0.) Thus, the map S_ν → I_ν preserves both the integral and the order between radiation intensities. Now there is a clever characterization of order-preserving maps on L¹ leaving the integral invariant, which is due to Crandall and Tartar [?]. Roughly speaking, a map from L¹ to itself that preserves the integral is order-preserving iff it is nonexpansive in L¹. This brings in the notion of L¹-accretivity, which is at the heart of Mercier's remarkable discovery. Indeed, the monotonicity argument above, together with Proposition 1 of [?] (with C = L¹(R × (−1, 1) × (0, ∞))₊, which is the set of a.e. positive elements of L¹(R × (−1, 1) × (0, ∞))), strongly suggests that it might be a good idea to study the corresponding contraction property, where S¹_ν, S²_ν ∈ C and I¹_ν, I²_ν are the solutions of the steady radiative transfer equation above with source terms S¹_ν and S²_ν respectively. (Mercier's original argument is even more complex, because he assumes that the opacity κ_ν depends on the temperature T, and is a decreasing function of T for each ν > 0 while T → κ_ν(T) B_ν(T) is nondecreasing; the reader can easily verify that the intuitive argument above still applies, provided of course that our physically natural assumption that S′_ν ≥ S_ν ⟹ T[I′] ≥ T[I] remains valid in this case as well.)

Define s₊(z) = 1_{z≥0}, z₊ = max(z, 0) and z₋ = max(−z, 0). In accordance with the discussion above, we multiply both sides of the radiative transfer equation for two solutions I_ν and I′_ν by s₊(I_ν − I′_ν) and integrate in all variables. This is precisely Mercier's computation (simpler here because κ_ν is independent of the temperature). Denote by Φ the resulting dissipation functional, so that D₂ ≥ 0. Next, since B_ν is increasing for each ν > 0 and s₊ is nondecreasing, one obtains D₁ ≥ 0. At this point, we must appeal to an additional idea, which is not present in Mercier's paper [Mer87]. Since we are dealing with solutions of the radiative transfer equation having the slab symmetry, it is a natural idea to use the K-invariant (in the terminology of Section 10 in Chapter I of Chandrasekhar [Cha50]). This idea is at the heart of the exponential decay estimate for the Milne problem obtained in [Gol87], and will be used here for a different purpose. (A somewhat similar idea, unfortunately unpublished, had been used by R. Sentis to simplify the uniqueness proof for the linear Milne problem studied in [BSS84].) We compute this invariant.
In Theorem 4, if one has the stronger condition, one obtains a quantitative bound comparing the numerical and theoretical solutions. The iteration method (42), starting from I_ν⁰ = 0 and T⁰ = 0, defines a sequence of radiative intensities I_νⁿ and temperatures Tⁿ converging pointwise to I_ν and T = T[I] respectively, which is a solution of (41). The argument above is based on the monotonicity of the sequences I_νⁿ and Tⁿ, and does not give any information on the convergence rate. Finally, Theorem 5 holds verbatim for the problem (41), up to (slight) modifications of the proof due to the Rayleigh phase function.
2021-07-30T01:16:04.575Z
2021-07-29T00:00:00.000
{ "year": 2021, "sha1": "26e6cb4aae7d21be1cf0f734966b50d8d17f24e6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "26e6cb4aae7d21be1cf0f734966b50d8d17f24e6", "s2fieldsofstudy": [ "Environmental Science", "Physics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science", "Physics" ] }
62753187
pes2o/s2orc
v3-fos-license
Assessment choices to target higher order learning outcomes: the power of academic empowerment

Assessment of higher order learning outcomes such as critical thinking, problem solving and creativity has remained a challenge for universities. While newer technologies such as social networking tools have the potential to support these intended outcomes, academics' assessment practice is slow to change. University mission statements and unit outlines may assert the value of higher order skills; however, questions remain about how well academics are equipped to design their curriculum, and particularly their assessment strategies, accordingly. This paper reports on an investigation of academic practice in assessing higher order learning in units. Despite their intentions towards higher order learning outcomes for their students, the results suggest academics may make decisions when planning their assessment tasks that inadvertently lead students on the path towards lower order outcomes. Among the themes to emerge from the study is the importance of academics' confidence and their attitudes towards the role of assessment in learning, and how these perspectives, along with the types of learning outcomes they intend for their students, can influence their task design.

Introduction
Universities increasingly acknowledge the value of skills such as problem solving, critical thinking and creativity (Bath et al. 2004; Winchester-Seeto et al. 2011), yet the curriculum needs to be designed to support and scaffold development of these skills, and integrating them into assessment strategies has proven a challenge (Astleitner 2002; Burns 2006; Clarkson and Brook 2007; Race 2003). While new technologies have sometimes been heralded as having the potential to address an apparent gap between the rhetoric of curriculum alignment and assessment practice in universities, academic practice is slow to change, and the uptake of new tools to support the development and demonstration of higher order skills remains relatively low. In a study undertaken at an Australian university, academics' confidence in their curriculum design capabilities emerged as an important link with the types of learning they intend for their students, their assessment strategies and the technologies they chose to support assessment. An overview of each of these themes is explored in the next section.

Assessment technology literature
Assessment is at the heart of students' learning experiences (Brown and Knight 1994; Rust 2002), and Ramsden (1992) suggested it defines the curriculum from the students' point of view. It serves to highlight for students what is important, how they spend their time and ultimately how they view themselves as students and graduates (Brown 1997). Among those asserting the importance of assessment in learning, Boud and Falchikov (2005) advocated development of skills for lifelong learning, encompassing the capabilities expected of graduates such as problem solving, critical thinking and metacognition (Falchikov and Thompson 2008).

Despite the increased recognition of the importance of assessment as part of an aligned curriculum to support student learning (Biggs and Tang 2007; Boud and Falchikov 2006), Bryan and Clegg (2006) lamented that the focus of much of our assessment is on "testing knowledge and comprehension and ignores the challenge of developing and assessing judgments" (p.
3). Falchikov and Thompson (2008) provided a stark reminder that traditional methods relied on a limited number of techniques, such as closed-book examinations and essay-type assessments, which focused primarily on summative assessment and have been found to be unsuitable for developing these desired graduate skills. This gap between the intentions of teaching academics and their assessment strategies reinforces questions raised by Arum and Roksa (2010) about whether higher order learning is taking place at all, or whether it is simply not assessed.

A study by Samuelowicz and Bain (2002) suggested that academic perspectives about the role of assessment might influence whether units are designed to address higher order learning outcomes. Their work traced the effect of disciplinary traditions, pedagogical beliefs and epistemological frameworks on the types of assessment used. They analysed academics' orientations towards assessment using a framework developed to describe their beliefs about the role of assessment and feedback, about what should be learned and assessed, and finally about the differences between good and poor answers. Academics with an orientation towards "reproduction of important bits of knowledge, procedure and skill" were likely to require it in their assessments, with assessment tasks such as multiple choice questions testing understanding of facts or open-ended questions testing the ability to apply principles to a given, familiar situation. Conversely, if academics perceived assessment as important in "transforming conceptions of the discipline and/or world", then they were more likely to design assessment requiring higher order tasks such as evaluation and creation of new solutions (Samuelowicz and Bain 2002). Building on Samuelowicz and Bain's work, Northcote (2003) proposed that the beliefs held by academics about the role of assessment in learning and teaching also influenced their choices about assessment in the online learning context, and lamented that "despite all the evidence supporting the value of integrated qualitative assessment and the new affordances of the new technologies, online assessment has remained predominantly summative" (p. 68).

A study by McGoldrick (2002) of academics who were likely to introduce the development of student creativity in their curriculum found that confidence emerged as a key characteristic. Along with a sound understanding of their discipline area, the academics studied demonstrated enough self-efficacy to explore different ways of delivering their curriculum rather than limiting themselves to previously tried models. This could suggest that academics' willingness to innovate may be a factor in designing assessment tasks to target higher order outcomes and in selecting appropriate technologies to support these aims. In a recent article, Gray et al.
(2012) suggest that "an academic without a sound rationale for assessing students' Web 2.0 activities will struggle to justify the added effort flowing from the assessment (re)design". Jonassen and Reeves (1996) were among those who saw computers as having the potential to transform learning and assessment towards a focus on higher order rather than lower order learning outcomes. Since then, the opportunities offered by technologies to support the design, delivery and administration of diagnostic, formative and summative assessment have been well documented in the literature (Crisp 2007; Philips and Lowe 2003). As social networking tools such as blogs and wikis emerged, their potential was raised to capture both the processes of student learning and the final artefacts to be submitted, in either collaborative or individual contexts (Boulos et al. 2006; Bower et al. 2009; Churchill 2007; Hewitt and Peters 2006). Bower et al. (2009) drew on Anderson et al.'s taxonomy (2001) to propose a framework for conceptualising learning designs with Web 2.0 technologies, raising numerous possibilities for utilising their affordances and encouraging academics to put the whole curriculum at the core of their decisions. Shephard suggested that the use of these technologies could enable higher education to "better assess aspects of learning that have proved difficult to assess using more conventional means" (2009, p. 386).

What of academic practice in using technologies to support assessment of higher order outcomes? This article explores the extent to which academics' choices about assessment technologies are influenced by their attitudes towards the role of assessment in learning and their confidence in their curriculum design capabilities.

The study
The study was undertaken in an Australian university to explore academic practice in using technologies to support the assessment of higher order learning. An exploratory mixed methods approach (Creswell and Clark 2007) was used, comprising a three-phase study conducted over 4 years, including:
• an initial survey in Phase 1 to gather baseline data from convenors of online units about their intended learning outcomes, assessment strategies and technology uses, along with perceived difficulties with assessment;
• a series of in-depth interviews in Phase 2 to explore their curriculum in more detail; and
• a final survey of convenors of online units to explore the issues with a larger sample from across the campus.

The first two phases illuminated academic practices as largely employing technologies to support the assessment of lower order learning outcomes (McNeill 2011; McNeill et al. 2011). Among the challenges identified by participants in assessing student learning was designing assessment to target higher order learning outcomes such as problem solving, creativity and metacognition. The final survey, conducted as Phase 3 of the study, explored whether these themes were representative of academic practice in a larger sample from across the university. Specifically, it explored:
• the types of learning outcomes unit convenors intended for their students;
• alignment of assessment strategies and technology choices with these intentions; and
• their levels of confidence in designing the curriculum to target their intended outcomes.

This article reports aspects of the final survey results in relation to possible links between confidence, attitudes, assessment and intended learning outcomes. Results from other parts of the survey have been explored in previous publications (McNeill et al. 2010a, 2010b).
Survey
The convenors of online units using the university's learning management system (LMS) during Semester 1, 2010, were invited to participate in the survey. Since the study intended to explore possible uses of technology to address assessment challenges, only those academics using technology in their units were included. An adaptation of Anderson et al.'s (2001) taxonomy was used as a theoretical framework to explore the categories:
• recognise or recall information, concepts or procedures;
• understand, explain, categorise or summarise information;
• apply information, concepts or procedures in a range of situations;
• analyse, organise or deconstruct concepts, procedures or scenarios;
• evaluate or make judgments about concepts, situations, procedures or hypotheses;
• create, design or construct hypotheses, ideas, products, procedures or scenarios; and
• critique or evaluate their own knowledge or performance.

This framework was used as the basis for questions around curriculum design: the types of outcomes convenors intended for their students, the teaching and learning activities, and the assessment tasks. In order to explore possible links between the respondents' attitudes towards the role of assessment in learning and their choices of technology, questions were included based on Samuelowicz and Bain's (2002) orientations to assessment framework. Respondents were also asked about which technologies they used and how they used them for assessment. Demographic information about discipline, unit level and enrolment mode was also collected.

Findings
Of the 734 academics invited to participate, 180 responded to the survey (24.5%). There were respondents from a wide range of discipline areas, and all faculties were represented. Postgraduate units were most commonly represented, with 31.8% of respondents, followed by the middle years of undergraduate programs (29.1%), then first year (21.2%) and final year (17.9%). Regarding student enrolment modes, the highest representation was from units with a mixture of internal and external students (51.1%), followed by internal only modes (41.6%) and external only (7.3%).

Intended learning outcomes
Respondents were asked about the types of learning outcomes they intended for their students. They were asked about their levels of agreement with a list of statements, using a five-point scale from 'To a large extent' down to 'Not applicable'. The results are presented in Table 1.

The type of learning outcome rated most highly was apply, with more than 90% of respondents agreeing that this was targeted to a large or moderate extent. Evaluate and analyse were also highly rated. Outcomes associated with recalling information and creativity were more likely to be targeted to a small extent, not at all, or viewed as not applicable.

Technology uses
Respondents were asked about the types of technologies they used for assessment. Table 2 summarises the responses for all technologies used to a large or moderate extent to assess the various categories of learning outcomes from Anderson et al.'s A Taxonomy for Learning, Teaching and Assessing (2001). The total number of respondents who indicated that they used each technology, whether for summative or formative assessment, is tallied for each column, with the highest-rating item highlighted in bold font.
The two most commonly used tools, discussion forums and online quizzes, had the highest response rates for the categories of apply and recall, respectively. Of the 57 respondents using quizzes, 46 indicated that they used them to assess whether students could recognise or recall information, concepts or procedures. Students' ability to understand or apply information also featured highly. Discussion forums were the most widely used of all options in the survey, with 129 of the total 176 respondents (73.3%) indicating they used them.

There were examples in the sample of respondents using wikis, blogs, online portfolios and virtual worlds for higher order outcomes. Of the 10 respondents using wikis, eight indicated that they targeted creativity as a higher order learning outcome. Metacognitive knowledge, where students were assessed on their ability to critique or evaluate their own performance, featured most highly in the use of blogs and online portfolios, followed by understanding and evaluation. Evaluation was the target with the highest rating for virtual worlds, although it is difficult to draw conclusions from such small numbers of respondents. Creation tasks featured strongly for wikis and online portfolios. While a wider range of technologies was explored in the survey, only responses for quizzes and forums were used in this analysis because of their use across all levels of learning outcomes. In addition, usage rates for the other technologies were too low for effective statistical analysis. Options for uses of technologies included:
• to encourage students to keep up with the content;
• to encourage student participation in the unit;
• to provide feedback on students' learning;
• to enable students to discuss their learning with their peers;
• to capture student collaborations during learning;
• to capture students' reflections during their learning; and
• to enable students to store or share their learning.

Since some respondents indicated more than one purpose for using each technology, Market Basket Analysis (Kachigan 1991) was used to determine decision rules for each purpose for using that technology. Based on the decision rules, most respondents used quizzes to focus on content. By rule 1, 85.1% of those respondents who used quizzes focused only on assessing content. By rule 2, the proportion of respondents who used quizzes to focus on the combination of content and feedback was 63.8%. By rule 3, 53% of the respondents who used quizzes did so to assess participation and content. By rule 4, only 42.6% of the respondents who used quizzes focused on all options of content, participation and feedback.

Of those who used forums, 92.8% used these tools to encourage discussion among students, and 82.5% used forums at least for both participation and discussion. Only 54.64% of the respondents used forums for assessing content.
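The decision rules above are, in essence, supports of purpose combinations among the users of a tool. A minimal sketch of this computation follows; the five-row data frame is invented for illustration, and the column names mimic the purpose options rather than the survey's actual coding.

```python
# Sketch: decision-rule (market basket) proportions over survey responses.
import pandas as pd

quiz = pd.DataFrame({                 # one row per quiz-using respondent
    "content":       [True, True, True, True, False],
    "feedback":      [True, True, False, False, True],
    "participation": [True, False, True, False, False],
})

def support(items):
    """Share of respondents who selected every purpose in `items`."""
    return quiz[list(items)].all(axis=1).mean()

print(support(["content"]))                              # rule-1 style
print(support(["content", "feedback"]))                  # rule-2 style
print(support(["content", "participation"]))             # rule-3 style
print(support(["content", "participation", "feedback"])) # rule-4 style
```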
Academic confidence in their curriculum design capabilities
Respondents were asked about their level of confidence in their curriculum design capabilities, specifically in designing teaching and learning activities, designing assessment tasks, and choosing technologies to target and assess their intended learning outcomes. They were asked about their levels of agreement, using a five-point scale from 'To a large extent' down to 'Not applicable'. For analysis, the two options of 'Not at all' and 'Not applicable' were combined. Results are presented in Table 3. While the majority of respondents agreed or strongly agreed that they were confident in their ability to design teaching and learning activities and assessment to suit their intended learning outcomes, levels of confidence decreased for choosing technologies. In choosing technologies to support assessment of their intended learning outcomes, only two-thirds agreed or strongly agreed that they were confident, and 20% agreed or strongly agreed that they were confident in choosing technologies to assess these outcomes.

Academics' attitudes towards assessment
Respondents' attitudes towards assessment were explored in relation to Samuelowicz and Bain's (2002) orientations towards assessment, as assessing students' ability to: (1) reproduce information presented in lectures and textbooks; (2) reproduce structured knowledge and apply it to modified situations; and (3) integrate, transform and use knowledge purposefully. They were asked about their levels of agreement, using a five-point scale from 'To a large extent' to 'Not applicable'. These results are presented in Table 4.

Over 90% of respondents agreed that assessment played an important or very important role in assessing students' ability to integrate, transform and use knowledge purposefully. While the majority of respondents rated the assessment of students' ability to reproduce information from lectures or textbooks as of low importance or not important at all, over one-quarter rated this lower order skill as being of very high or high importance.

Links between confidence, curriculum design and assessment
As suggested by Samuelowicz and Bain (2002), academics' attitudes about the role of assessment can influence the types of tasks they design in their units. One question that arose from analysis of the data was whether there were links between respondents' attitudes about assessment, the types of learning outcomes they intended for their students, the purposes for which they used assessment technologies, and their levels of confidence about their curriculum design capabilities. In order to explore these links, statistical analysis was undertaken to determine whether any patterns emerged from the data.

To investigate the relationships among the continuous variables of confidence, attitude, learning outcome and assessment purpose, scatter plots with simple linear regression models were used for each pair of those four continuous variables. For example, the analysis based on 'attitude' is summarised in Table 5.
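A minimal sketch of this pairwise regression step follows; the scores are simulated around a shared latent tendency and are purely illustrative, since the survey's variable coding is not reproduced here.

```python
# Sketch: pairwise simple linear regressions between four derived scores.
from itertools import combinations
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
latent = rng.normal(0.0, 1.0, 150)            # shared respondent tendency
names = ["attitude", "learning", "assessment", "confidence"]
scores = {k: latent + rng.normal(0.0, 0.6, 150) for k in names}

for x, y in combinations(names, 2):           # one fit per pair of variables
    fit = linregress(scores[x], scores[y])
    print(f"{x} vs {y}: slope = {fit.slope:.2f}, p = {fit.pvalue:.2g}")
```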
It was concluded that respondents with intentions towards higher order learning outcomes could also be predicted to hold attitudes towards assessment that valued integration and transformation over reproduction. Based on these estimated simple linear models, there were positive relationships between learning outcome and attitude, assessment targets and confidence. These positive relationships are significant because the p values for the coefficients of attitude, assessment target and confidence are close to zero.

Cluster analysis
K-means clustering (MacQueen 1967) was then used to cluster the data set and explore whether any predictive patterns were evident between the four elements of confidence, intended learning outcomes, assessment purposes and attitudes towards assessment. Two clusters emerged, as displayed in Table 6.

From the analysis, two clusters of roughly equal numbers of observations emerged. With the data classified into two clusters, the mean values of attitude, learning outcome, assessment target and confidence for Cluster 1 were all lower than the corresponding mean values for Cluster 2. Those respondents in Cluster 2 reported higher levels of confidence in their curriculum design capabilities and were more likely to target higher order learning in their intended outcomes. Their uses of quizzes and forums were more likely to focus on providing feedback for students on their learning than on keeping up with the content. These respondents were also less likely to target the lower order orientations towards assessment, such as reproducing information from lectures or textbooks.

This phenomenon can also be seen in the scatter plot matrix based on these four elements (Figure 1). The clusters are separated by different shapes, with Cluster 1 shown as circles and Cluster 2 as triangles. As depicted in Figure 1, relationships were evident between the four elements. As denoted by the predominance of triangles in the top right-hand quadrants, those respondents in Cluster 2 reported higher levels of confidence in their curriculum design capabilities to develop teaching activities and assessment tasks, and to choose technologies to support assessment to target the types of learning outcomes they intend. Those in Cluster 2 were also more likely to target higher order learning, such as analysis, evaluation, creativity and metacognition, in their intended outcomes. While most respondents reported using quizzes to assess whether students were keeping up with the content, those who reported higher levels of confidence were more likely to report their use for providing feedback to students on their learning. Cluster 2 respondents were also more likely to use forums for providing feedback for students on their learning than for keeping up with the content. In contrast, those respondents in Cluster 1 reported lower levels of confidence in their curriculum design capabilities and were more likely to target lower order learning, such as recognition and understanding, in their intended outcomes. The relationship between confidence and attitudes towards assessment is not as strong; nevertheless, the overall trend is still maintained. Cluster 2 respondents were more likely to consider assessment important in supporting students to integrate, transform and use knowledge purposefully rather than reproducing information presented in lectures and textbooks.
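A two-cluster K-means of the kind reported in Table 6 can be reproduced in a few lines; the matrix below is simulated (one row per respondent, one column per score) and is illustrative only, not the study's data.

```python
# Sketch: two-cluster K-means over four standardised scores (cf. Table 6).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
latent = rng.normal(0.0, 1.0, 150)            # shared respondent tendency
X = np.column_stack([latent + rng.normal(0.0, 0.6, 150) for _ in range(4)])
# columns: attitude, learning outcome, assessment target, confidence

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
for c in (0, 1):
    print(f"cluster {c}: n = {(labels == c).sum()}, "
          f"means = {X[labels == c].mean(axis=0).round(2)}")
```

Standardising before clustering keeps any one score from dominating the Euclidean distances, which matters when the four variables are on different scales.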
Discussion
The survey explored whether the higher order learning espoused as central to university learning is reflected in the intended outcomes, assessment strategies and technology choices of academics. While some of the respondents intended higher order learning outcomes such as evaluation, creativity or metacognition for their students, many continued to target lower order outcomes such as recognition and understanding. Application of knowledge was the most common focus of curriculum designs. The challenging outcomes relating to creativity (create) and metacognition (critique of own performance) had the most divergent responses, as indicated by the higher standard deviations. Relatively large numbers of respondents indicated that creativity and metacognition were not important or not applicable. This mirrored findings from the interviews conducted earlier in the study, where academics found these higher order outcomes to be the most problematic (McNeill 2011): academics were unsure how to design tasks to target these types of outcomes and how to allocate marks to student work. This tendency to avoid a focus on the higher order outcomes, perpetuating the challenges identified in the literature (Astleitner 2002; Burns 2006; Clarkson and Brook 2007; Race 2003), has implications for academic practice, taking into account that 50% of survey respondents described a postgraduate or final year unit. Given the university's context of mandating capstone units, where students are intended to focus more on integrating rather than acquiring new knowledge and skills, and on identifying gaps in their own knowledge (McNeill 2011), this suggests the need to empower academics with the knowledge and skills to make informed decisions. There is a role for academic developers in supporting convenors to adopt more of a program approach to curriculum alignment, with differentiated curriculum targets as students progress through their programs. While understanding and being able to apply foundation principles may be important for students during their learning process, they need to acquire and hone higher order skills as they enter the final stages of their program and prepare for transition into the workforce or on to further study.

The literature suggests that newer social networking technologies may have the potential to overcome some of the barriers to capturing and storing students' development of higher order skills such as "creative thinking" or "self-reflection"; however, the level of uptake of tools such as wikis, blogs and e-portfolios is relatively low. While newer technologies with greater scope to target higher order learning have become available to academics (Bower et al. 2010), the study suggests that curriculum design practice is slow to change. Tools such as wikis, blogs and e-portfolios, with greater potential to support the assessment of higher order learning, were used by relatively small numbers of respondents. This highlights the importance of academic development initiatives to build academics' capability to integrate innovations, including technology, into their teaching. If academics understand the principles underpinning curriculum alignment and how to select technologies to best suit their intended learning outcomes, they are more likely to make effective choices.
Almost all the respondents agreed that assessment played an important or very important role in assessing students' ability to integrate, transform and use knowledge purposefully and to use it creatively in novel situations, the highest levels of learning outcomes according to Anderson et al.'s (2001) taxonomy. While this was a positive finding, there was also a strong focus on lower order reproduction. In interviews conducted in previous phases of the study (McNeill et al. 2011), it emerged that many academics were concerned that students gained proficiency in understanding, for example, foundation principles, which, as Race (2006) suggests, are easier to assess than higher order uses of this knowledge. Despite the high proportion of final year or postgraduate units in the study, 90% of respondents agreed that applying information to structured situations was at least moderately important, and there was little evidence of a progression towards higher order outcomes associated with transition out of university.

This illustrates a potential source of misalignment if academics choose assessment tasks to suit their attitudes about the role of assessment rather than their intended higher order outcomes. While higher order outcomes may be intended by many academics, they can inadvertently select technologies that work against these types of outcomes if they are guided by perceptions of assessment as being important for roles such as assessing whether students can reproduce information, which can encourage a focus on lower order outcomes instead. Examples include the small number of respondents (4) who indicated that they had chosen quiz tools to assess the higher order outcome of creativity. While quizzes may be ideal for testing students' understanding of foundation principles, other tools such as wikis or blogs are better able to capture the learning journey in developing creativity.

Academics' confidence in their curriculum design capability emerged as an important factor in the level of alignment between their intentions towards higher order learning and the curriculum choices they make. The results from the cluster analysis suggest that the links McGoldrick (2002) established between academics' self-efficacy and their targeting of creativity in their curriculum may also extend to other types of higher order learning such as analysis, evaluation and metacognition. While many academics rated themselves as relatively confident in their ability to design their curriculum to target their intended outcomes, their levels of confidence dropped when selecting technologies, especially for assessment purposes. This lack of confidence may contribute to the predominance of the use of even the most prevalent of technologies, discussion forums, for formative assessment purposes. The allocation of marks for summative purposes emerged from the Phase 2 interviews as a source of confusion and uncertainty (McNeill et al. 2011), which reiterated findings from a previous study (Byrnes and Ellis 2006), where academics were found to limit the allocation of grades to lower order tasks when technologies were employed, thereby minimising risks for their students and themselves.
One way to increase the levels of confidence convenors have in curriculum design, and in decision making about which technologies they include in their units, is to equip them with a greater understanding of the principles underpinning higher order learning, such as curriculum alignment and the role of scaffolding and feedback in learning. Frameworks for evaluating technologies are necessary to help academics determine whether the affordances of particular technologies are suitable for their curriculum.

Conclusion
In a university sector under increasing pressure for accountability about what and how students learn, the need to capture evidence of the types of higher order learning typically associated with graduate capabilities is crucial. Although technologies have been heralded as having the potential to address some of these issues, the challenge remains of changing academic practice in adopting and using these tools effectively. The significance of this study is in illuminating current gaps in curriculum designs between what academics, and indeed program leaders, intend for their students and what is targeted in the assessment. The study explored the perspectives of those convenors of online units at one Australian university who used the centrally supported LMS. While there may be other views from those using different platforms, the results provide a picture of current uses of technologies to support assessment, useful as a realistic baseline when considering the rhetoric around the potential of new technologies.

Despite the hype that sometimes surrounds the potential of social networking tools such as blogs and wikis, the majority of respondents used the more traditional tools of quizzes and discussion forums, which are available in the LMS and typically focus on assessing lower order outcomes. While these results may indicate the extent of work still to be done in raising awareness amongst teaching academics of the affordances of technology, the study is also significant in affirming the value of academic development work and professional development. The results reinforce the importance of academics' confidence in informing their curriculum alignment to target their intended learning outcomes with appropriate assessment strategies and technology choices. Given the rapid growth of technologies for possible use in education, this understanding will become increasingly important. As newer technologies become more widely available through centrally managed platforms, there are opportunities for professional development to equip academics with the confidence to integrate these tools into their curriculum to target the elusive higher order outcomes. Strategies currently under development as a result of this study include: a series of online and on-campus workshops to scaffold academics through the integration of technologies into their curriculum, beginning with questions of alignment; case studies and showcases with commentary on the uses of tools to support the development and assessment of higher order learning outcomes; and faculty-based support teams to provide guidance for individuals and groups of academics. These are conducted as part of a university-wide implementation of a new learning management system, with the aim of using technologies to drive innovation and curriculum enhancement.
The study reiterates the importance of academic development work; however, it also provides a reminder of the complexity of curriculum design and the myriad of influences at play. One of the opportunities for further research is more qualitative exploration of the links between the curriculum elements in specific contexts, such as an investigation of the alignment between assessment tasks and grading criteria to determine a holistic perspective of the types of learning outcomes targeted. While technologies offer a powerful option to support assessment of higher order learning, they need to be aligned with the whole curriculum.

Figure 1. Scatter plot matrix for Clusters 1 and 2, depicting four elements: attitude, intended learning outcome (learning), assessment target (assessment) and confidence, built on Anderson et al.'s A Taxonomy for Learning, Teaching and Assessing.
Table 2. Technologies used to target specific learning outcomes for summative or formative assessment.
Table 3. Academic confidence in curriculum design.
Table 5. Simple linear regression analysis.
Joint assessment of white matter integrity, cortical and subcortical atrophy to distinguish AD from behavioral variant FTD: A two-center study

We investigated the ability of cortical and subcortical gray matter (GM) atrophy in combination with white matter (WM) integrity to distinguish behavioral variant frontotemporal dementia (bvFTD) from Alzheimer's disease (AD) and from controls using voxel-based morphometry, subcortical structure segmentation, and tract-based spatial statistics. To determine which combination of MR markers differentiated the three groups with the highest accuracy, we conducted discriminant function analyses. Adjusted for age, sex and center, both types of dementia had more GM atrophy, lower fractional anisotropy (FA) and higher mean (MD), axial (L1) and radial diffusivity (L23) values than controls. BvFTD patients had more GM atrophy in orbitofrontal and inferior frontal areas than AD patients. In addition, caudate nucleus and nucleus accumbens were smaller in bvFTD than in AD. FA values were lower; MD, L1 and L23 values were higher, especially in frontal areas of the brain for bvFTD compared to AD patients. The combination of cortical GM, hippocampal volume and WM integrity measurements classified 97–100% of controls, 81–100% of AD and 67–75% of bvFTD patients correctly. Our results suggest that WM integrity measures add complementary information to measures of GM atrophy, thereby improving the classification between AD and bvFTD.

Introduction
Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) are the leading causes of young onset dementia (Ratnavalli et al., 2002;Harvey et al., 2003). BvFTD has a very heterogeneous presentation, but is mostly characterized by a marked, progressive decline in personality and/or behavior. Symptoms such as loss of manners or decorum, impulsive actions, apathy and changes in eating behavior are common (Rascovsky et al., 2011). Furthermore, patients often show deficits in the cognitive domains of executive functioning, attention and working memory (Hornberger et al., 2008, 2012). AD is mainly characterized by episodic memory impairment in the initial phase, but deficits in visuospatial abilities, executive functioning, language and attention are also common (Nestor et al., 2004;Smits et al., 2011). Clinical diagnostic criteria for bvFTD and AD have been proposed (Rascovsky et al., 2011;McKhann et al., 2011), but the frequent overlap of clinical symptoms associated with AD and bvFTD and heterogeneity within one syndrome pose serious problems in the differential diagnosis (Greicius et al., 2002;Miller et al., 2003;Walker et al., 2005;Harris et al., 2015). Although the definitive diagnosis of both types of dementia is only possible at autopsy, magnetic resonance imaging (MRI), providing measurements of gray matter (GM) atrophy and white matter (WM) integrity, has been shown to detect brain changes in an early disease stage. Studies on GM atrophy have shown precuneus, lateral parietal and occipital cortices to be more atrophic in AD than in bvFTD, whereas atrophy of anterior cingulate, anterior insula, subcallosal gyrus, and caudate nucleus was more severe in bvFTD compared to AD (Rabinovici et al., 2007;Du et al., 2007;Looi et al., 2008).
However, many scans differ from the predicted patterns of atrophy and overlap between AD and bvFTD is common: GM loss in dorsolateral prefrontal cortex, medial temporal lobes, hippocampus and amygdala is found in both AD and bvFTD and does not help to discriminate between the two disorders (Rabinovici et al., 2007;Munoz-Ruiz et al., 2012;van de Pol et al., 2006;Barnes et al., 2006). Moreover, especially in the beginning of the disease, cortical atrophy may not be visible on visual inspection. In addition to local GM damage, a decrease of fractional anisotropy (FA) in WM, suggesting WM tract damage, has been shown, especially in bvFTD. Previous studies showed that, compared to AD, WM integrity was lost in bvFTD especially in the frontal and bilateral temporal regions (Zhang et al., 2009;Chen et al., 2009). Taking into account WM abnormalities holds promise to improve the distinction between AD and bvFTD, but only a few studies have been conducted so far (Zhang et al., 2009;Mahoney et al., 2014). Moreover, it is conceivable that the combination of information from GM and WM may help in the discrimination between AD and bvFTD. However, most former studies focused on either GM or WM damage, while only a few investigated the extent to which the loss of WM integrity and GM atrophy are related and how they jointly contribute to the clinical classification of patients (McMillan et al., 2012;Mahoney et al., 2014;Zhang et al., 2011). Generalizability of these findings is limited, as in one study patients from the whole FTLD spectrum were compared to AD patients (McMillan et al., 2012) and in other studies the different imaging modalities were only linked to each other but not used for diagnostic discrimination (Mahoney et al., 2014;Zhang et al., 2011). In this multi-center study we compared patterns of cortical and subcortical GM atrophy and of WM integrity between patients with bvFTD, AD and controls with the ultimate goal of facilitating clinical diagnosis. In addition, we investigated the joint discriminative ability of GM atrophy and WM integrity measurement to distinguish both patient groups from controls and from each other.

Patients
In this two-center study, we included 39 patients with probable AD and 30 patients with bvFTD, who visited either the Alzheimer Center of the VU University Medical Center (VUMC) (probable AD: n = 23; probable bvFTD: n = 16; possible bvFTD: n = 4) or the Alzheimer Center of the Erasmus University Medical Center Rotterdam (probable AD: n = 16; probable bvFTD: n = 9; possible bvFTD: n = 1). All patients underwent a standardized 1-day assessment including medical history, family history (dementia, psychiatry, cardiovascular) of first-degree relatives, informant-based history, physical and neurological examination, blood tests, neuropsychological assessment, and MRI of the brain. Diagnoses were made in a multidisciplinary consensus meeting according to the core clinical criteria of the National Institute on Aging and the Alzheimer's Association work group for probable AD (McKhann et al., 1984;McKhann et al., 2011) and according to the clinical diagnostic criteria of FTD for bvFTD (Rascovsky et al., 2011). To minimize center effects, all diagnoses were re-evaluated in a panel including clinicians from both centers. In addition, we included 41 cognitively normal controls (VUMC: n = 23; Rotterdam: n = 18), who were recruited by advertisement in local newspapers.
Before inclusion in the present study, controls were screened for memory complaints, family history of dementia, drug or alcohol abuse, major psychiatric disorder, and neurological or cerebrovascular diseases. They underwent an assessment including medical history, physical examination, neuropsychological assessment, and MRI of the brain comparable to the work-up of patients. Inclusion criteria for both cohorts were: (1) availability of a T1-weighted 3-dimensional MRI (3DT1) scan and a set of diffusion weighted imaging (DWI) images designed to allow calculation of the diffusion tensor at 3 T, and (2) age between 50 and 80 years. Exclusion criteria were: (1) large image artifacts (n = 12); (2) failure of image analysis software to process MR scans (n = 6); and (3) gross brain pathology other than atrophy, including severe white matter hyperintensities and/or lacunar infarction in deep gray matter structures. Level of education was rated on a seven-point scale (Verhage, 1964). The study was conducted in accordance with regional research regulations and conformed to the Declaration of Helsinki. The local medical ethics committee of both centers approved the study. All patients gave written informed consent for their clinical and biological data to be used for research purposes.

Neuropsychological assessment
To assess dementia severity we used the Mini-Mental State Examination (MMSE). Cognitive functioning was assessed using a standardized neuropsychological test battery covering five major domains: memory (immediate recall, recognition and delayed recall of the Dutch version of the Rey Auditory Verbal Learning Test and total score of Visual Association Test A), language (Visual Association Test picture naming and category fluency (animals: 1 min)), visuospatial functioning (subtest of the Visual Object and Space Perception (VOSP) Battery: number location), attention (Trail Making Test part A (TMT A), Digit Span forward, and Letter Digit Substitution Test (LDST)), and executive functioning (Digit Span backwards, Trail Making Test part B (TMT B), letter fluency, and Stroop Color-Word test, card III). For a detailed description of the neuropsychological tests see Smits et al. (2011). For each cognitive task, z-scores were calculated from the raw test scores by the formula z = (x − μ) / σ, where μ is the mean and σ is the standard deviation of the subjective complaints group. The value z = 0 therefore reflects the average test performance of the subjective complaints group in a given domain. Scores of TMT A, TMT B, and Stroop were inverted by computing −1 × z-score, because higher scores imply a worse performance. Next, composite z-scores were calculated for each cognitive domain by averaging z-scores. Composite z-scores were calculated when at least one neuropsychological task was available in each cognitive domain.
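To make the composite z-score procedure just described concrete, a minimal Python sketch is given below; the reference means and standard deviations, which in the study come from the subjective complaints group, are hypothetical here, as are the raw scores.

```python
import numpy as np

def z_score(raw, mu, sigma, invert=False):
    """z = (x - mu) / sigma against the reference (subjective complaints) group;
    timed tests (TMT A/B, Stroop) are inverted because higher raw scores are worse."""
    z = (raw - mu) / sigma
    return -z if invert else z

# Hypothetical attention domain: TMT A (inverted), Digit Span forward, LDST
tasks = [
    z_score(41.0, mu=35.0, sigma=10.0, invert=True),  # TMT A, seconds (made-up norms)
    z_score(8.0, mu=7.5, sigma=2.0),                  # Digit Span forward
    z_score(30.0, mu=33.0, sigma=6.0),                # LDST
]
# The composite is the mean of the available tasks (at least one required)
composite_attention = np.nanmean(tasks)
print(f"attention composite z = {composite_attention:.2f}")
```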
MR image acquisition and review
Imaging at the VUMC was carried out on a 3 T scanner (Signa HDxt, GE Healthcare, Milwaukee, WI, USA), using an 8-channel head coil with foam padding to restrict head motion. Patients and controls from the Erasmus University Medical Center Rotterdam were all scanned at the Leiden University Medical Center (LUMC). Imaging at LUMC was performed on a 3 T scanner (Achieva, Philips Medical Systems, Best, The Netherlands) using an 8-channel SENSE head coil. In addition, the MRI protocol included a 3D Fluid Attenuated Inversion Recovery (FLAIR) sequence, a dual-echo T2-weighted sequence, and susceptibility weighted imaging (SWI), which were reviewed for brain pathology other than atrophy by an experienced radiologist.

Gray matter volume
DICOM images of the 3DT1-weighted sequence were corrected for gradient nonlinearity distortions and converted to Nifti format, after which the image origin was automatically placed approximately on the anterior commissure using a linear registration procedure. The structural 3DT1 images were then analyzed using the voxel-based morphometry toolbox (VBM8; version 435; University of Jena, Department of Psychiatry) in Statistical Parametric Mapping (SPM8; Functional Imaging Laboratory, University College London, London, UK) implemented in MATLAB 7.12 (MathWorks, Natick, MA). In the first module of the VBM8 Toolbox ("Estimate and Write") the 3DT1 images are normalized to MNI space and segmented into GM, WM and cerebrospinal fluid (CSF). We used the default settings, except for the clean-up, where we used the light clean-up option to remove any remaining non-brain tissue, as advised in the VBM8 tutorial. Tissue classes were normalized in alignment with the template with the 'non-linear only' option, which allows comparing the absolute amount of tissue corrected for individual brain size. The correction is applied directly to the data, which makes a head-size correction to the statistical model redundant. Subsequently, all segmentations were checked with the second and third modules of the VBM8 Toolbox ("Display one slice for all images" and "Check sample homogeneity using covariance") and by a one-by-one visual check. In the fourth module, images were smoothed using an 8 mm full width at half maximum (FWHM) isotropic Gaussian kernel. Voxelwise statistical comparisons between groups were made to localize GM differences by means of a full factorial design with diagnosis (AD, bvFTD, controls) as a factor with independent levels with unequal variance, using absolute threshold masking with a threshold of 0.1 and implicit masking. Age, sex and center were entered as covariates. Post hoc, we compared AD with controls, bvFTD with controls, and AD with bvFTD. The threshold for significance in all VBM analyses was set to p < 0.05 with family wise error correction (FWE) at the voxel level and an extent threshold of 0 voxels.

Volumes of deep gray matter (DGM) structures
The algorithm FIRST (FMRIB's integrated registration and segmentation tool) (Patenaude et al., 2011) was applied to estimate left and right volumes of five DGM structures: thalamus, caudate nucleus, putamen, globus pallidus, and nucleus accumbens, and two medial temporal lobe (MTL) structures: hippocampus and amygdala. Left and right volumes were summed to obtain the total volume for each structure. FIRST is integrated in FMRIB's software library (FSL 4.15) (Jenkinson et al., 2012) and performs both registration and segmentation of the above mentioned anatomical structures. A two-stage linear registration was performed to achieve a more robust and accurate pre-alignment of the seven structures. During the first-stage registration, the 3DT1 images were registered linearly to a common space based on the Montreal Neurological Institute (MNI) 152 template with 1 × 1 × 1 mm resolution using 12 degrees of freedom.
After registration, a second-stage registration using a subcortical mask or weighting image, defined in MNI space, was performed to improve registration for the seven structures. Both stages used 12 degrees of freedom. This 2-stage registration was followed by segmentation based on shape models and voxel intensities. Volumes of the seven structures were extracted in native space, taking into account the transformation matrices during registration. The final step was a boundary correction based on local signal intensities. All registrations and segmentations were visually checked for errors. To correct the volumes of the seven structures for head size we used a volumetric scaling factor (VSF) derived from the normalization transform matrix from SIENAX (Structural Image Evaluation using Normalization of Atrophy Cross-sectional) (Smith et al., 2002), also part of FSL. In short, SIENAX extracted skull and brain from the 3DT1 input whole-head image. In our study, brain extraction was performed using optimized parameters (Popescu et al., 2012). These were then used to register the subject's brain and skull image to standard space brain and skull (derived from the MNI152 template) to estimate the scaling factor (VSF) between the subject's image and standard space. Normalization for head size differences was done by multiplying the raw volumes of the seven structures by the VSF. Next to the VSF, we also obtained brain tissue volumes of GM and WM (Zhang et al., 2001). Total volumes of the seven structures, volumes of GM and WM, and VSF were transferred to SPSS for further statistical analyses.

White matter integrity
All preprocessing steps of the DWI images were performed using FSL (Jenkinson et al., 2012;Smith et al., 2004), including motion and eddy-current correction on images and gradient vectors, followed by diffusion tensor fitting. Fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (L1; largest eigenvalue), and radial diffusivity (L23; average of the two smallest eigenvalues L2 and L3) were derived for each voxel. Each subject's FA image was used to calculate nonlinear registration parameters to the FMRIB58_FA brain, which were then applied to all four parameter images. The registered FA images were averaged into a mean FA image, which was skeletonized for tract-based spatial statistics (TBSS) (Smith et al., 2006). The skeleton was thresholded at 0.2 to include only WM and used for TBSS statistics in all diffusion parameters. Each subject's aligned FA data was then projected onto this skeleton and the resulting data fed into voxelwise cross-subject statistics. The projection parameters for each voxel were then also applied to the MD, L1 and L23 data to create skeletonized data in standard space for each subject. Differences in FA, MD, L1 and L23 between controls, AD and bvFTD patients were analyzed in a voxelwise fashion using FSL's randomise with 5000 permutations and age, sex and center as covariates. A family wise error (FWE) corrected Threshold-Free Cluster Enhancement (TFCE) significance level of p < 0.05 was used to correct for multiple comparisons.
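The four diffusion measures defined in this section follow directly from the tensor eigenvalues; a minimal numpy sketch, assuming eigenvalue volumes l1 >= l2 >= l3 have already been fitted (e.g., by FSL's dtifit), is:

```python
import numpy as np

def dti_scalars(l1, l2, l3):
    """FA, MD, axial (L1) and radial (L23) diffusivity from the three
    eigenvalues of the diffusion tensor (arrays of any matching shape)."""
    md = (l1 + l2 + l3) / 3.0            # mean diffusivity
    l23 = (l2 + l3) / 2.0                # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / np.maximum(den, 1e-20))  # fractional anisotropy
    return fa, md, l1, l23
```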
Extraction of regions of interest (ROI)
As a next step, we extracted ROIs from the VBM and TBSS group analyses, to be able to combine the most promising MR markers in one statistical model. Gray matter ROIs (VBM): We extracted all significant voxels from the resulting T-maps from the comparisons AD < controls, bvFTD < controls, and bvFTD < AD from the VBM analyses. This resulted in three GM ROIs: 'GM ROI AD < Controls', 'GM ROI bvFTD < Controls', 'GM ROI bvFTD < AD'. This was done by merging the normalized modulated GM segments of all subjects into a 4D file. The T-maps of all contrasts were thresholded at p < 0.05 (FWE corrected) and binarized. We then calculated the GM volume of each of the three ROIs and transferred it to SPSS for further analyses. White matter integrity ROIs (TBSS): We extracted all significant voxels (TFCE, FWE corrected p < 0.05) from the statistical contrast image from the comparisons AD < controls, bvFTD < controls, and bvFTD < AD. This resulted in three FA ROIs: 'FA ROI AD < Controls', 'FA ROI bvFTD < Controls', 'FA ROI bvFTD < AD'. We then calculated the mean FA in each of the three ROIs. The same was done for MD, L1 and L23, resulting in three ROIs per diffusivity measurement. We transferred the mean FA, MD, L1 and L23 of all ROIs to SPSS for further analysis.

Statistical analysis
SPSS version 20.0 for Windows was used for statistical analysis. Differences between groups for demographics and cognition were assessed using univariate analysis of variance (ANOVA) (age, VSF, NBV), Kruskal-Wallis tests (level of education, MMSE, GDS, CDR, composite cognitive domain z-scores) and χ2 test (sex, center, history of dementia, psychiatry, cardiovascular events in first-degree relative). Multivariate analysis of variance (MANOVA) was used to compare total head-size corrected volumes of MTL and DGM structures (dependent variables) between the different diagnostic groups (between-subjects factor) with Bonferroni adjusted post hoc tests. Age, sex and center were used as covariates. To determine which combination of MR markers based on VBM, DGM structures and TBSS measurements differentiated the three patient groups with the highest accuracy, we conducted a discriminant function analysis with leave-one-out cross validation. As predictors we entered the following variables: 'GM ROI AD < Controls', 'GM ROI bvFTD < Controls', 'GM ROI bvFTD < AD'; total head-size corrected volumes of hippocampus, thalamus, caudate nucleus, putamen and nucleus accumbens (as these structures significantly differed between the groups); 'FA ROI AD < Controls', 'FA ROI bvFTD < Controls', 'FA ROI bvFTD < AD'; as well as sex, age, and center. Because of collinearity, we performed another discriminant function analysis with the other diffusion parameters, L1 and L23, instead of FA. In this discriminant function we used the following variables as predictors: 'GM ROI AD < Controls', 'GM ROI bvFTD < Controls', 'GM ROI bvFTD < AD'; total head-size corrected volumes of hippocampus, thalamus, caudate nucleus, putamen and nucleus accumbens; the corresponding L1 and L23 ROIs; as well as sex, age, and center. In general, a discriminant analysis creates k-1 linear combinations (discriminant functions) of the entered predictor variables which provide the best discrimination between the k groups. To identify the most optimal combination of variables for best discrimination, stepwise forward analysis was used with a decision scheme based on the F-value of Wilks' lambda (entry: 3.84; removal: 2.71). Statistical significance for all analyses was set at p < 0.05.
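A hedged scikit-learn sketch of a discriminant analysis with leave-one-out cross-validation is shown below. Note that sklearn has no SPSS-style stepwise forward selection on Wilks' lambda, so all predictors are entered directly; the group sizes match the study (37 controls, 32 AD, 24 bvFTD), but X is a random placeholder for the 14 predictors described above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(93, 14))            # placeholder: 93 subjects x 14 predictors
y = np.repeat([0, 1, 2], [37, 32, 24])   # controls, AD, bvFTD

# Leave-one-out cross-validated class predictions for a linear discriminant model
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
for grp, name in enumerate(["controls", "AD", "bvFTD"]):
    acc = (pred[y == grp] == grp).mean()
    print(f"{name}: {acc:.1%} correctly classified (leave-one-out)")
```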
Results

Demographics
Demographic and cognitive data for all patients (AD: n = 32; bvFTD: n = 24) and controls (n = 37) fulfilling the inclusion criteria are summarized in Table 1. AD patients were older than controls (p < 0.001); there were no differences in gender distribution or education. Compared to controls and AD, bvFTD patients had fewer first-degree relatives with dementia. Compared to controls, AD and bvFTD performed worse on all cognitive domains, except on visuospatial functioning. Compared to bvFTD patients, AD patients performed worse on memory and attention. Both dementia groups had smaller normalized brain volumes than controls (p < 0.001). AD patients had lower MMSE scores than both other groups (p < 0.05). CDR and GDS scores were lowest in controls (p < 0.001) but did not differ between the two dementia groups.

Gray matter volume
The full factorial design showed main effects of diagnosis (Fig. 1). Post hoc comparisons showed that, compared to controls, AD patients showed a reduction of GM in superior and middle temporal gyrus, parahippocampal gyrus, hippocampus, posterior cingulate, mid cingulum, cuneus, precuneus, occipital lobe, superior and inferior parietal lobe and inferior frontal gyrus (p < 0.05, FWE corrected). BvFTD patients had less GM compared to controls in superior, middle, and inferior frontal gyrus, orbito-frontal gyrus, insula, temporal gyrus, parahippocampal gyrus and hippocampus. Controls did not show any regions with less GM than AD or bvFTD (p < 0.05, FWE corrected). Compared to AD patients, bvFTD patients had less GM in left inferior and medial frontal gyrus, in right inferior frontal gyrus, and in orbitofrontal gyrus (p < 0.05, FWE corrected). AD patients did not show any regions of significantly reduced GM compared to bvFTD patients. For comparisons between patient groups we also explored the results at a non-corrected p = 0.001 level (figure in supplementary materials): Compared to AD patients, bvFTD patients showed less GM in orbitofrontal, inferior frontal, medial frontal lobe, temporal pole, fusiform gyrus and anterior cingulate. Compared to bvFTD patients, AD patients showed less GM in precuneus, posterior cingulate, occipital lobe, angular gyrus and inferior parietal lobe.

Volumes of deep gray matter structures
Normalized volumes of MTL and DGM structures are summarized in Table 2. MANOVA adjusted for age, sex and center revealed group differences in hippocampus, thalamus, caudate nucleus, putamen and nucleus accumbens (Fig. 2). Post hoc tests showed that nucleus accumbens and caudate nucleus volume discriminated all groups, with bvFTD having the most severe atrophy. Hippocampus and thalamus discriminated dementia patients from controls. BvFTD patients had smaller putaminal volumes than controls.

White matter integrity
Fig. 3 shows the mean skeleton with significant regions in FA, MD, L1 and L23 for the different group comparisons. Compared with controls, AD patients showed widespread patterns of lower FA values, incorporating 44% of the WM skeleton voxels, in areas including the fornix, corpus callosum, forceps minor, thalamus, posterior thalamic radiation, and superior and inferior longitudinal fasciculus. Furthermore, they had higher MD values in 36% of the WM skeleton voxels. Compared to controls, bvFTD patients showed widespread patterns of lower FA values in 58% of the investigated WM voxels throughout the whole brain, in areas including the fornix, corpus callosum, forceps minor, thalamus, anterior thalamic radiation, superior and inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Furthermore, they had higher MD values in 55% of the investigated WM voxels including the inferior fronto-occipital fasciculus, uncinate fasciculus and the forceps minor.
They had higher L1 values in 39% of the WM skeleton voxels including the inferior fronto-occipital fasciculus, inferior longitudinal fasciculus, corticospinal tract and corpus callosum, and higher L23 values in 62% of the investigated WM voxels in the inferior and superior longitudinal fasciculus, corticospinal tract, corpus callosum, fornix, inferior fronto-occipital fasciculus and the anterior thalamic radiation compared to controls. In direct comparison between the two dementia groups, bvFTD patients had lower FA values in 17% of the investigated voxels, solely located in the frontal parts of the brain, like the rostrum and the genu of the corpus callosum, forceps minor, anterior part of the internal and external capsule, and anterior parts of the fronto-occipital fasciculus and superior longitudinal fasciculus. Furthermore, bvFTD patients had higher MD values in 21% and higher radial diffusivity values in 23% of the investigated WM voxels including forceps minor, uncinate fasciculus, inferior fronto-occipital fasciculus and anterior thalamic radiation, and higher axial diffusivity values in 14% of the investigated WM voxels including inferior fronto-occipital fasciculus, uncinate fasciculus and forceps minor compared to AD patients. AD patients had no areas of reduced fractional anisotropy or increased diffusivity compared to bvFTD.

Extraction of regions of interest (ROI)
In Fig. 4 the GM, FA, MD, L1 and L23 ROIs are depicted. The ROIs represent all significant voxels from a two-group comparison. In Table 3 the compositions of the different ROIs are summarized.

Fig. 4 (D-F). MD (pink), L1 (blue) and L23 (green) ROIs: all significant areas (p < 0.05, FWE TFCE corrected) from the TBSS group comparisons where AD patients had higher values than controls (D), where bvFTD patients had higher values than controls (E), and where bvFTD patients had higher values than AD patients (F).
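As a companion to the ROI step just described and the skeleton-voxel percentages reported above, a hypothetical nibabel/numpy sketch follows; file names are invented, and it assumes FSL randomise output where corrected 1-p images are thresholded at 0.95 (i.e., p < 0.05).

```python
import nibabel as nib
import numpy as np

# Fraction of the WM skeleton covered by a significant TBSS result
# (randomise writes corrected 1-p maps, so 1-p > 0.95 means p < 0.05)
skeleton = nib.load("mean_FA_skeleton_mask.nii.gz").get_fdata() > 0
sig = nib.load("FA_tfce_corrp_tstat.nii.gz").get_fdata() > 0.95
pct = 100.0 * (sig & skeleton).sum() / skeleton.sum()

# Mean FA of one subject inside the binarized ROI: the per-subject
# value that was exported to SPSS in the original analysis
fa = nib.load("subject01_FA_skeletonised.nii.gz").get_fdata()
mean_fa = fa[sig & skeleton].mean()
print(f"{pct:.0f}% of skeleton voxels significant; mean FA in ROI = {mean_fa:.3f}")
```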
Predictive value of GM volume, volumes of DGM structures, and white matter integrity
Subsequently, we used discriminant analysis to identify the combination of MR markers providing optimal classification. Using the stepwise forward method, the first discriminant analysis selected the following predictors: (1) GM ROI AD < Controls; (2) hippocampal volume; (3) volume of putamen; (4) FA ROI AD < Controls; (5) FA ROI bvFTD < Controls; (6) center; (7) age; and (8) sex. The two resulting discriminant functions had a Wilks' lambda of 0.082 (p ≤ 0.001) and 0.388 (p ≤ 0.001). Fig. 5a shows the projection plot of the two canonical discriminant functions for discrimination of the three groups. Discriminant function 1 discriminated AD from bvFTD and controls. Discriminant function 2 discriminated bvFTD from AD and controls. The loadings of the individual predictors for each function are shown in Table 4a. GM ROI AD < Controls had the highest loading on discriminant function 1. Discriminant function 2 was primarily composed of the variables FA ROI bvFTD < Controls, hippocampal volume, FA ROI AD < Controls, and GM ROI AD < Controls. Cross-validation classified 91.4% of all cases correctly, with correct classification of 100% of controls, 100% of AD patients, and 66.7% of bvFTD patients. The second discriminant analysis selected the following predictors: (1) GM ROI AD < Controls; (2) GM ROI bvFTD < AD; (3) L1 ROI AD > Controls; (4) L1 ROI bvFTD > Controls; and (5) L1 ROI bvFTD > AD. The two resulting discriminant functions had a Wilks' lambda of 0.134 (p ≤ 0.001) and 0.437 (p ≤ 0.001). Fig. 5b shows the projection plot of the two canonical discriminant functions for discrimination of the three groups. Discriminant function 1 discriminated AD from bvFTD and controls. Discriminant function 2 discriminated bvFTD from AD and controls. The loadings of the individual predictors for each function are shown in Table 4b. GM ROI AD < Controls and L1 ROI AD > Controls had the highest loadings on discriminant function 1. Discriminant function 2 was primarily composed of GM ROI bvFTD < AD, L1 ROI bvFTD > AD, L1 ROI bvFTD > Controls, GM ROI AD < Controls, and L1 ROI AD > Controls. Cross-validation classified 86% of all cases correctly, with correct classification of 97.3% of controls, 81.3% of AD patients, and 75% of bvFTD patients.

Discussion
The main finding of this study is that there are clear GM and WM differences between AD and bvFTD, which both independently contributed to the classification of both types of dementia. Despite a comparable disease stage, bvFTD patients had more atrophy in orbitofrontal and inferior frontal areas, caudate nucleus and nucleus accumbens than AD patients. Furthermore, they had more severe loss of FA, and higher MD, L1 and L23 values, especially in the frontal areas. Combination of modalities led to 86-91.4% correct classification of patients. GM contributed most to distinguishing AD patients from controls and bvFTD patients, while WM integrity measurements, especially L1, contributed to distinguishing bvFTD from controls and AD. A large number of studies investigated the differences between controls and AD or bvFTD patients with regard to either GM or WM pathology. Their results are in line with the current study, showing GM atrophy of medial temporal lobe structures and temporoparietal lobes in AD (Whitwell et al., 2011;Möller et al., 2013;Frisoni et al., 2002) and atrophy of orbitofrontal, anterior cingulate, insula, lateral temporal cortices, and caudate nucleus in bvFTD (Rabinovici and Miller, 2010;Hornberger et al., 2011;Couto et al., 2013;Looi et al., 2012). DTI studies on AD reported a rather consistent pattern of FA reductions in widely distributed WM tracts exceeding MTL regions (Scola et al., 2010;Agosta et al., 2011;Salat et al., 2010). In patients with bvFTD, significant FA reductions in the superior and inferior longitudinal fasciculus, as well as additional FA decreases in the uncinate fasciculus and the genu of the corpus callosum, have been reported (Borroni et al., 2007;Matsuo et al., 2008). To determine whether GM atrophy or WM integrity have potential diagnostic use, a direct comparison between AD and bvFTD is more important than the comparison with a control group. With respect to GM atrophy, precuneus, lateral parietal and occipital cortices have been shown to be more atrophic in AD than in bvFTD, whereas anterior cingulate, anterior insula, subcallosal gyrus, and caudate nucleus are more atrophic in bvFTD compared to AD (Rabinovici et al., 2007;Du et al., 2007;Looi et al., 2008). In our study, we did not find any areas which are more atrophic in AD compared to bvFTD.
This could be explained by the strict FWE-corrected VBM approach, as we found less GM in posterior brain regions in AD patients when not applying the multiple comparisons correction. These results are in line with another study not applying multiple comparisons correction (Rabinovici et al., 2007). Another explanation for the absence of AD-specific GM reductions could be that our patients were included at an early disease stage, with relatively higher MMSE scores compared to another study (Du et al., 2007). Nevertheless, patterns of GM atrophy often overlap, as there are numerous regions of GM loss which are found in both AD and bvFTD (Rabinovici et al., 2007;Munoz-Ruiz et al., 2012;van de Pol et al., 2006;Barnes et al., 2006). The few existing DTI studies demonstrated WM alterations in FTD compared to AD, including more widespread FA reductions in the frontal and anterior temporal regions, anterior corpus callosum, inferior fronto-occipital fasciculus and bilateral anterior cingulum (Zhang et al., 2009, 2011;Hornberger et al., 2011;Avants et al., 2010;McMillan et al., 2012). One of these studies also investigated the MD, L1 and L23 differences between FTD and AD and found increased L1 and L23 values in FTD compared to AD (Zhang et al., 2009). Our study is in line with these previous studies, failing to observe reduced FA and increased MD, L1 and/or L23 in AD relative to bvFTD. The same is seen in the DGM structures, where bvFTD patients have more subcortical brain damage compared to AD patients but not the other way around (Looi et al., 2008, 2009;Halabi et al., 2013). The combination of different imaging analysis methods suggests that the non-cortical parts of the brain play an important role in bvFTD. Networks in bvFTD, consisting of white matter and deep gray matter structures, may be different compared to the cortical networks in AD. Indeed, studies of functional connectivity show more functional network connectivity in FTD compared to controls and AD (Filippi et al., 2013;Zhou et al., 2010). Studies using multimodal network analyses should focus on this topic in the future. We attempted to combine GM and WM measures to increase the discrimination of patient groups and showed that, next to GM atrophy, WM integrity measures helped in distinguishing AD from bvFTD. A few earlier studies have combined WM and GM information with the objective of better discriminating between AD and bvFTD. They found that FTD patients exhibited more WM damage than AD patients in an early stage of the disease, suggesting that measuring WM damage adds to the discrimination between these two dementias (Zhang et al., 2011;Mahoney et al., 2014). Another study only linked the two imaging modalities and supported the idea of a network disease in FTD, but did not examine the diagnostic value of GM and WM (Avants et al., 2010). Only two studies actually used a multimodal combination of WM and GM. One study achieved a classification with 87% sensitivity and 83% specificity between AD and bvFTD (McMillan et al., 2012). Another study developed a new metric which gives a measure of the amount of WM connectivity disruption for a GM region and showed that classification rates were 8-13% higher when adding WM measurements to GM measurements (Kuceyeski et al., 2012). The novelty of the present study lies in the combination of three measures to separate AD from bvFTD.
We combined VBM-based measures of cortical atrophy, FIRST-based measures of atrophy of DGM structures and DTI-based measures of WM integrity to yield an optimal classifier. Both discriminant analyses revealed that cortical GM contributed to the separation of AD from the other two groups and WM integrity measurements contributed to the discrimination of bvFTD from the other groups. Especially axial diffusivity increased the discriminatory power for bvFTD. This could be explained by the notion that, despite some involvement of DGM and WM, AD is assumed to be a cortical dementia with specific GM regions being affected, whereas bvFTD predominantly affects areas (frontal-insula-anterior cingulate) which are part of structurally and functionally connected neural networks. These networks are connected by specific WM tracts located within damaged GM areas such as the frontal lobes and are preferentially affected, contributing to network failure in bvFTD. The finding of more severe damage of DGM structures adds support to the hypothesis of bvFTD being a network disorder, as DGM structures can be seen as relay stations in the frontostriatal brain networks. These findings are further supported by the fact that bvFTD patients had the same disease stage (comparable MMSE, CDR, duration of symptoms) as AD patients but had more WM and DGM structure damage. A possible limitation of this study is that we did not have postmortem data available, so the possibility of misdiagnosis cannot be excluded. Nevertheless, we used an extensive standardized work-up and all AD patients fulfilled clinical criteria of probable AD, 19 patients fulfilled the criteria for probable bvFTD and 5 patients for possible bvFTD. All diagnoses were re-evaluated in a panel including clinicians from both centers to minimize sample effects. Because this is a multicenter study, the differences in data acquisition parameters between the two centers might introduce some noise in the DTI analysis. However, we adjusted for center in all models, and moreover, a recent study showed that when considering scanner effects in the statistical model, no relevant differences between scanners were found (Teipel et al., 2012). To be confident that the different scanners did not essentially influence the results, we repeated the TBSS analyses for a subset of subjects from the VUMC only and the results remained essentially unchanged. Another limitation could be the significant age difference between the AD patients and controls. However, we corrected for age in all analyses and repeated the VBM, FIRST and TBSS analyses in an age-matched subgroup, which did not essentially change the results. Among the strengths of this study are the sample size and its multi-center nature. Most of the studies comparing AD with bvFTD use smaller sample sizes. We had enough power to detect differences using FWE and FWE-TFCE correction to adjust for multiple comparisons. Another strength is the unique combination of three imaging parameters in this study to achieve optimal discrimination between AD and bvFTD.

Conclusion
Accurate diagnosis of patients in life is increasingly important, both on clinical and scientific grounds. It is a guide to prognosis and a prerequisite for optimal clinical care and management. AD and bvFTD are difficult to discriminate due to overlapping clinical and imaging features. Therefore, there is an urgent need to improve diagnostic accuracy in a quantitative manner.
This study has shown that DTI measures add complementary information to measures of cortical and subcortical atrophy, thereby allowing a more precise distinction between AD and bvFTD. If acquisition, preprocessing and analysis methods become easier to implement in the daily clinical routine, DTI could be incorporated in the standard dementia MRI protocol in the future.

Acknowledgements/disclosures
The VUMC Alzheimer Center is supported by Alzheimer Nederland and Stichting VUMC Fonds. Research of the VUMC Alzheimer Center is part of the neurodegeneration research program of the Neuroscience Campus Amsterdam. The clinical database structure was developed with funding from Stichting Dioraphte. This project is funded by the Netherlands Initiative Brain and Cognition (NIHC), a part of the Netherlands Organization for Scientific Research (NWO), under grant numbers 056-13-014 and 056-13-010. The gradient non-linearity correction was kindly provided by GE medical systems, Milwaukee. Christiane Möller is appointed on a grant from the national project 'Brain and Cognition' ("Functionele Markers voor Cognitieve Stoornissen" (# 056-13-014)). She also received financial support from Alzheimer Nederland for attending courses. Dr. Yolande Pijnenburg reports no disclosures. Prof. Dr. Serge Rombouts is supported by The Netherlands Organisation for Scientific Research (NWO), Vici project nr. 016.130.677. Dr. Jeroen van der Grond reports no disclosures. Elise Dopper reports no disclosures. Prof. Dr. John van Swieten reports no disclosures. Adriaan Versteeg reports no disclosures. Dr. Petra Pouwels reports no disclosures. Prof. Dr. Frederik Barkhof serves/has served on the advisory boards of: Bayer Schering Pharma, Sanofi-Aventis, Biogen Idec, UCB, Merck Serono, Novartis and Roche. He received funding from the Dutch MS Society and has been a speaker at symposia organized by the Serono Symposia Foundation. For all his activities he receives no personal compensation. Prof. Dr. Philip Scheltens serves/has served on the advisory boards of: Genentech, Novartis, Roche, Danone, Nutricia, Baxter and Lundbeck. He has been a speaker at symposia organized by Lundbeck, Merz, Danone, Novartis, Roche and Genentech. He serves on the editorial boards of Alzheimer's Research & Therapy and Alzheimer's Disease and Associated Disorders, and is a member of the scientific advisory board of the EU Joint Programming Initiative and the French National Plan Alzheimer. For all his activities he receives no personal compensation. Dr. Hugo Vrenken has received research support from Merck Serono, Novartis, and Pfizer, and speaker honoraria from Novartis; all funds were paid to his institution. Dr. Wiesje van der Flier is recipient of The Alzheimer Nederland grant (Influence of age on the endophenotype of AD on MRI, project number 2010-002).
SHOC2 complex-driven RAF dimerization selectively contributes to ERK pathway dynamics

Significance
The ERK signaling pathway is hyperactivated in a majority of cancers. However, because it mediates myriad physiological responses, the clinical efficacy of current ERK pathway inhibitors has been severely limited by toxicity. This study uncovers both SHOC2 phosphatase complex-dependent and -independent mechanisms of RAF and ERK activation that are differentially engaged in a context- and spatiotemporal-dependent manner. KRAS oncogenic signaling preferentially depends on SHOC2-dependent mechanisms, which thus presents a therapeutic opportunity. This study provides a molecular framework for how targeting the SHOC2-holophosphatase regulatory node of the RAF activation process provides a mechanism for selective inhibition of ERK signaling.

Despite the crucial role of RAF kinases in cell signaling and disease, we still lack a complete understanding of their regulation. Heterodimerization of RAF kinases as well as dephosphorylation of a conserved "S259" inhibitory site are important steps for RAF activation, but the precise mechanisms and dynamics remain unclear. A ternary complex comprised of SHOC2, MRAS, and PP1 (SHOC2 complex) functions as a RAF S259 holophosphatase, and gain-of-function mutations in SHOC2, MRAS, and PP1 that promote complex formation are found in Noonan syndrome. Here we show that SHOC2 complex-mediated S259 RAF dephosphorylation is critically required for growth factor-induced RAF heterodimerization as well as for MEK dissociation from BRAF. We also uncover SHOC2-independent mechanisms of RAF and ERK pathway activation that rely on N-region phosphorylation of CRAF. In DLD-1 cells stimulated with EGF, SHOC2 function is essential for a rapid transient phase of ERK activation, but is not required for a slow, sustained phase that is instead driven by palmitoylated H/N-RAS proteins and CRAF. Whereas redundant SHOC2-dependent and -independent mechanisms of RAF and ERK activation make SHOC2 dispensable for proliferation in 2D, KRAS mutant cells preferentially rely on SHOC2 for ERK signaling under anchorage-independent conditions. Our study highlights a context-dependent contribution of SHOC2 to ERK pathway dynamics that is preferentially engaged by KRAS oncogenic signaling and provides a biochemical framework for selective ERK pathway inhibition by targeting the SHOC2 holophosphatase.

SHOC2 | RAF | MRAS | RAS | ERK

Signaling by the RAF-MEK-ERK (ERK-MAPK) pathway is used by many extracellular signals to mediate a vast array of biological responses in a cell-type-dependent manner. The mechanisms regulating signal specificity remain poorly understood but are known to include modulators, scaffolds, feedbacks, and crosstalk with other signaling pathways that jointly control spatial and temporal dynamics of ERK activation. This in turn regulates phosphorylation of different ERK substrates in a cell-type-, compartment-, and context-dependent manner (1,2). Aberrant activation of the ERK pathway is one of the most common defects in human cancer, with oncogenic mutations in RAS and RAF genes found in ∼30% and ∼8% of cancers, respectively. Up-regulated ERK signaling is also responsible for a family of developmental disorders, referred to as RASopathies (3)(4)(5). ERK pathway inhibitors have shown little clinical benefit against RAS mutant tumors because of resistance and toxicity (5).
Strikingly, in both RAS and BRAF mutant cells, most resistance mechanisms lead to ERK pathway reactivation, highlighting a strong "oncogene addiction" of these cancers to ERK signaling. However, the potent pathway suppression required for antitumor activity is limited by the inhibitor doses that can be administered safely because of toxicity (6,7). ERK activity is essential for normal tissue homeostasis, and systemic ablation of MEK1/2 or ERK1/2 genes in adult mice leads to death of the animals from multiple organ failure within 2-3 wk, even under conditions of partial inactivation (8), highlighting the difficulties of inhibiting the ERK pathway with a therapeutic index. To effectively harness the addiction of RAS mutant cancers to ERK signaling into viable therapies, new strategies to inhibit the pathway with improved therapeutic margins are needed, for example by inhibiting ERK signaling in a context- or compartment-dependent manner (9,10). MEK and ERK kinases are fully activated by phosphorylation at two sites within their kinase domains by RAF and MEK, respectively. On the other hand, RAF activation is a complex multistep process that remains incompletely understood (11). A consensus model stipulates that under resting conditions, the three RAF kinases (ARAF, BRAF, and CRAF/RAF1) are kept in the cytosol in an inactive state by an intramolecular interaction mediated by 14-3-3 dimers binding in a phosphorylation-dependent manner to conserved sites at the N terminus (S214 ARAF, S365 BRAF, S259 CRAF, hereafter referred to as the "S259" site) and C-terminal end (S729 in BRAF, S621 in CRAF) (11)(12)(13). Upon activation, RAS-GTP binds with high affinity to the RAS binding domain (RBD) of RAF and recruits RAF to the membrane, where the cysteine-rich domain (CRD) also plays a role in membrane anchoring. Dephosphorylation of the S259 site is known to provide an additional activating input that releases the 14-3-3 from this site and allows RAF to adopt an open conformation where RAF dimerizes with other RAFs, as well as KSR proteins. Definitive confirmation of this model, however, awaits the crystal structure of full-length RAF with or without bound 14-3-3. Nevertheless, the importance of the S259 dephosphorylation regulatory step is highlighted by RAF1 gain-of-function mutations in Noonan syndrome that cluster around S259 to disrupt the interaction with 14-3-3 (14)(15)(16)(17). Furthermore, although RAF1 mutations are rare in cancer, they cluster on residues S257 and S259 (COSMIC database). The precise dynamics and mechanism of S259 dephosphorylation remain unclear (11).
We have previously shown that MRAS, a closely related member of the RAS family, upon activation forms a complex with the leucine-rich repeat protein SHOC2 and protein phosphatase 1 (PP1) that functions as a highly specific S259 RAF holophosphatase (18,19). The importance of the SHOC2-MRAS-PP1 complex (SHOC2 complex) in RAF-ERK regulation is validated by gain-of-function mutations in Noonan syndrome in all three components (SHOC2, MRAS, and PP1), which promote phosphatase complex formation (20)(21)(22)(23). On the other hand, the phosphatase PP2A has also been variously implicated in mediating S259 dephosphorylation (24)(25)(26)(27), although this was primarily based on the use of okadaic acid and the misconception that it behaves as a specific PP2A inhibitor (28) (in addition to not discriminating between direct and indirect effects). Furthermore, in contrast to its role as a regulatory subunit within a phosphatase complex, other studies have suggested that SHOC2 can function as a scaffold that promotes the RAS-RAF interaction (29)(30)(31)(32)(33). RAF proteins also undergo multiple activating phosphorylation events. Among them, phosphorylation within the negative-charge regulatory region (N-region) plays a key divergent role among RAF paralogues (11). In CRAF, S338 and Y341 phosphorylation within the S338SYY341 motif by PAK and SRC family kinases (SFK) plays a crucial role in regulated activation (34). In contrast, the homologous S446SDD motif in BRAF constitutively provides the negative charges required for activity by virtue of acidic D amino acids and constitutive S446 phosphorylation (11,34,35). This difference in N-region regulation is believed to account for BRAF having higher basal activity, being the most frequent RAF target for mutational activation in cancer, and for BRAF being the initial activator in asymmetric RAF heterodimers (11,36). In this study, we have used RNAi and CRISPR to ablate SHOC2 and RAF function, as well as phosphoproteomics, to comprehensively characterize the role of the SHOC2 phosphatase complex in RAF and ERK pathway regulation. We have uncovered a selective role for SHOC2 in ERK pathway dynamics, and show that although SHOC2 phosphatase-mediated dephosphorylation of the S259 site is critically required for growth factor-induced RAF heterodimerization, there also exist SHOC2-independent mechanisms of ERK activation, which are dependent on N-region phosphorylation of CRAF. Importantly, KRAS oncogenic signaling differentially relies on SHOC2-dependent mechanisms, which provides both a therapeutic opportunity and a molecular framework for selective inhibition of ERK signaling in a compartment- and context-dependent manner.

Results
To study the role of the SHOC2 complex in the regulation of RAF kinases, we generated an inducible T-REx-293 cell line (T-17 cells) where addition of the tetracycline analog doxycycline (Dox) leads to expression of active MRAS-Q71L and SHOC2. In these cells, Dox-induced MRAS/SHOC2 expression led to potent S365 dephosphorylation of ectopic TAP6-BRAF that was inhibited in a dose-dependent manner by the serine/threonine phosphatase inhibitor calyculin A (Fig. 1A). To assess possible RAF regions involved in S259 dephosphorylation, transiently transfected BRAF and CRAF mutants were tested for dephosphorylation upon expression of MRAS/SHOC2. Among the mutants tested, only the RBD mutants R188L BRAF and R89L CRAF were defective for MRAS/SHOC2-induced S365/S259 dephosphorylation (Fig. 1 B and C).
Interestingly, when the CRAF R89L RBD mutant was constitutively localized to the membrane by fusion with a RAS membrane-targeting region (CRAF-CAAX R89L), S259 dephosphorylation was efficiently induced by MRAS/SHOC2 expression (Fig. 1C). Taken together, these data suggest that membrane recruitment through interaction with the RBD is required for efficient S259 RAF dephosphorylation. MRAS/SHOC2 expression levels in T-17 cells did not prove to be tuneable, because at the lowest Dox concentration that induced expression there was a maximum effect on MRAS/SHOC2 protein levels and concomitant S365 dephosphorylation (Fig. 1 D and E). When ectopic T6-BRAF in these cells was purified with streptactin beads, MRAS/SHOC2 expression led to a decrease in the amount of MEK bound to T6-BRAF and a concomitant interaction of T6-BRAF with CRAF (Fig. 1 D and E). To further study the specificity of the role for MRAS/SHOC2 on RAF-MEK interactions, GST-pulldown assays were performed after cotransfection of myc-MEK1 with GST-tagged CRAF, BRAF, and KSR1 in HEK293T cells. Under basal conditions, MEK1 bound most strongly to KSR1 and only weakly to CRAF (KSR1 > BRAF >> CRAF), and Dox-induced MRAS/SHOC2 expression led to strong dissociation of MEK from BRAF and CRAF but not from KSR1 (Fig. 1F). Taken together, the above data suggest that MRAS/SHOC2-induced S365 BRAF dephosphorylation promotes MEK dissociation from BRAF and BRAF heterodimerization with CRAF.

SHOC2 Is Required for EGF-Induced S365/S259 Dephosphorylation, RAF Dimerization, BRAF-MEK Dissociation, and Efficient ERK Pathway Activation.
To assess the role of endogenous SHOC2 within the context of growth factor signaling, T-REx-293 cells where SHOC2 expression was stably inhibited by shRNA expression were used to analyze lysates and immunoprecipitates (IPs) of endogenous RAS and RAF proteins in a time course of EGF treatment. EGF-stimulated S365 BRAF dephosphorylation and MEK, ERK, and RSK phosphorylation, but not AKT and EGFR Y1068 phosphorylation, were severely impaired in SHOC2 knockdown (KD) cells, consistent with a selective role of SHOC2 in RAF-ERK pathway activation (Fig. 2A). When immunoprecipitating RAF, MEK can be readily detected in complex with BRAF but not CRAF under basal conditions (37), and higher levels of P-S365 BRAF in SHOC2 KD cells correlate with higher levels of MEK and 14-3-3 bound to BRAF (Fig. 2 A and B). EGF stimulated MEK and 14-3-3 dissociation from BRAF and BRAF binding to CRAF, and this response is strongly inhibited in SHOC2 KD cells (Fig. 2B). EGF-induced BRAF interaction with KSR is also impaired in the absence of SHOC2 (Fig. 2C and SI Appendix, Fig. S1A). In clear contrast, RAF interaction with RAS, as measured on RAS IPs, was not impaired but enhanced in SHOC2 KD cells (Fig. 2B), likely as a result of loss of inhibitory feedbacks (see Discussion). To extend these observations to other cell lines, a CRISPR/Cas9 strategy was used to completely ablate SHOC2 function in DLD-1 KRAS G13D colon carcinoma cells. EGF-induced dephosphorylation of P-S365/S259 B/CRAF is impaired in SHOC2 knockout (KO) cells (Fig. 2D). Similarly, EGF-stimulated phosphorylation of MEK, ERK, and RSK, but not AKT, is strongly inhibited in SHOC2 KO cells, and this response is rescued by reexpression of SHOC2 WT but not SHOC2 mutants defective for interaction with MRAS and PP1, such as D175N or RVxF-SILK (18,19,23) (Fig. 2D). SHOC2 E457K disrupts MRAS/PP1 interaction less efficiently (19,23) and only partially rescues ERK pathway activation by EGF.
Therefore, ERK pathway regulation by SHOC2 correlates well with its ability to form a ternary complex with MRAS and PP1. To analyze RAF interactions in DLD-1 KO cells, endogenous RAF IPs were performed on a time course of EGF stimulation as before. In parental DLD-1 cells, EGF stimulates transient S365 BRAF dephosphorylation with dynamics that mirror MEK and 14-3-3 dissociation from BRAF and BRAF dimerization with ARAF and CRAF (Fig. 2 E and F). As seen in T-REx-293 KD cells, SHOC2 KO DLD-1 cells have higher basal levels of MEK- and 14-3-3-bound BRAF complexes. Moreover, EGF-stimulated MEK and 14-3-3 dissociation from BRAF and BRAF heterodimerization with CRAF and ARAF are strongly impaired in SHOC2 KO cells (Fig. 2E). To further validate that the effect of SHOC2 ablation on ERK pathway activation was dependent on its function within an S259 RAF holophosphatase, T6-BRAF WT and the S365A mutant (which cannot be phosphorylated and therefore should be insensitive to the phosphatase function of the SHOC2 complex) were stably expressed in parental and SHOC2 KO DLD-1 cells. Expression of BRAF S365A (unlike BRAF WT) leads to higher basal P-MEK and P-ERK levels in both parental and SHOC2 KO cells, consistent with ERK pathway activation by these RAF mutants being insensitive to regulation by SHOC2 (Fig. 2G). When ectopic T6-BRAF was purified from these cells with streptactin beads, T6-BRAF WT displayed higher basal MEK binding in SHOC2 KO cells, whereas no MEK could be detected in complex with T6-BRAF S365A, consistent with a role for S365 dephosphorylation in the regulation of the BRAF-MEK interaction (Fig. 2G). Taken together, the above results strongly suggest that SHOC2 complex-mediated S259 RAF dephosphorylation is required for 14-3-3 dissociation from RAFs, MEK dissociation from BRAF, and BRAF heterodimerization with ARAF, CRAF, and KSR, but not for RAF binding to RAS (SI Appendix, Fig. S2).

SHOC2 Is Selectively Required for Early but Not Late ERK Pathway Activation by EGF in DLD-1 Cells.
When ERK pathway dynamics were studied in an EGF time course in DLD-1 isogenic cells, MEK, ERK, and RSK phosphorylation was strongly impaired at early time points (2.5-5 min) in SHOC2 KO cells compared with parental cells, whereas little difference was seen between them by 20 min of EGF treatment (Fig. 3 A and B). Similar effects were seen on downstream ERK target sites, such as the BRAF T753, CRAF S289/296/301, EGFR T699, and IRS S363/639 feedback sites, as well as RSK targets, such as YB1 S102 (Fig. 3A). No effect was seen in ERK-independent sites on AF6 or RPS6, whereas AKT S473 phosphorylation is enhanced in the absence of SHOC2, consistent with a negative feedback crosstalk upon ERK pathway inhibition (38,39). This response is reproducibly seen in multiple DLD-1 SHOC2 KO clones tested, ruling out clonal variation (SI Appendix). When other agonists, such as lysophosphatidic acid and FBS, were used to stimulate DLD-1 cells, ERK activation was similarly impaired preferentially at early time points in the absence of SHOC2. On the other hand, ERK activation by TNFα (which is RAS-RAF independent) was completely unaffected (SI Appendix, Fig. S3 E and F). Taken together, these results are consistent with an agonist-dependent biphasic ERK activation response in which a rapid, transient phase requires the SHOC2 complex, whereas a slow, sustained phase is independent of SHOC2 (Fig. 3C).
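As a toy illustration of the biphasic response just described (not the authors' model; all amplitudes and time constants below are invented), the sum of a fast decaying transient and a slowly rising sustained component reproduces the qualitative shape, and dropping the transient term mimics the SHOC2 KO behavior:

```python
import numpy as np

t = np.linspace(0, 60, 301)  # minutes after EGF stimulation
# SHOC2-dependent phase: rapid rise followed by a fast decay
transient = 1.0 * (1 - np.exp(-t / 0.5)) * np.exp(-t / 4.0)
# SHOC2-independent phase: slow, sustained (H/N-RAS- and CRAF-driven)
sustained = 0.6 * (1 - np.exp(-t / 15.0))

p_erk_parental = transient + sustained
p_erk_shoc2_ko = sustained  # transient phase lost without SHOC2
print(f"parental P-ERK peaks at t = {t[np.argmax(p_erk_parental)]:.1f} min")
```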
To further study the contribution of SHOC2 to ERK pathway dynamics in an unbiased manner, a label-free phosphoproteomic approach was used to compare global EGF-regulated phosphorylation in parental or SHOC2 KO DLD-1 cells. The MEK inhibitor trametinib was also used in parental cells to compare global pharmacological pathway inhibition with genetic SHOC2 inhibition (Fig. 4A). In total, 7,053 phosphosites were quantified, corresponding to 3,091 inferred proteins. In parental cells stimulated with EGF, 89 and 78 phosphosites were found to be significantly regulated at 5 and 20 min, respectively (cutoffs: fold-change ± 2, adjusted P < 0.05) (Fig. 4B and Dataset S1). Functional and phosphorylation motif analyses of the inferred proteins in parental cells are shown in SI Appendix, Fig. S4. Pretreatment with trametinib dramatically reduced EGF-regulated phosphorylation events, with only 5 and 10 phosphosites significantly regulated at 5 and 20 min, respectively (94% and 87% inhibition compared with untreated cells) (Fig. 4B). This highlights the crucial role of the ERK pathway in early signaling by EGF, either directly or indirectly by providing priming phosphorylation for other EGF-regulated kinases (40). In SHOC2 KO cells, inhibition of EGF-regulated phosphorylation was significantly more pronounced at 5 min than at 20 min of EGF treatment (90% vs. 38.5% inhibition, respectively) (Fig. 4B). When the phosphoproteomes of parental and SHOC2 KO cells were compared at either 5 or 20 min of EGF treatment, only 1 phosphosite was significantly changed at 20 min, whereas 26 phosphosites were differentially regulated by EGF in parental but not SHOC2 KO cells at 5 min (21 down-regulated in SHOC2 KO cells, 5 up-regulated) (Fig. 4C). Selected examples of these phosphosites are shown in SI Appendix, Fig. S4C. In conclusion, using phosphoproteomic profiling, we independently determined a selective contribution of SHOC2 to ERK pathway dynamics, with a preferential role of SHOC2 at early (5 min) vs. late (20 min) times of EGF treatment.

SHOC2-Independent Late ERK Activation Requires CRAF. To address the contribution of RAF isoforms to early vs. late SHOC2-dependent and -independent mechanisms of ERK activation, CRISPR was used to knock out the three RAF paralogues in DLD-1 cells. In contrast to SHOC2 deletion, ablation of one or any two combinations of RAF isoforms had no significant effect on EGF-stimulated ERK activation (Fig. 5 A and B and SI Appendix, Fig. S5 A-D). However, KD of the remaining CRAF in dual A/B RAF KO cells potently inhibits EGF-stimulated ERK activation (Fig. 5B) and proliferation in colony formation assays (SI Appendix, Fig. S5E). Thus, as observed in other systems (41, 42), there is redundancy among RAF isoforms, but RAF function is essential for ERK activation and proliferation of DLD-1 cells. When siRNAs were used to acutely inhibit expression of individual RAF isoforms, transient KD of individual RAF proteins in parental DLD-1 cells had no effect on EGF-stimulated ERK phosphorylation, consistent with the complementation observed in RAF KO cells. In clear contrast, however, CRAF KD (but not ARAF or BRAF KD) strongly inhibited MEK and ERK phosphorylation in SHOC2 KO cells (Fig. 5 C and D). Similar results were observed in HEK293T cells, although CRAF KD had a modest inhibitory effect in control cells as well (SI Appendix, Fig. S5F). Strong ERK pathway inhibition upon combined SHOC2 and CRAF inhibition correlates with a strong inhibition of proliferation in DLD-1 cells (Fig. 5E).
Taken together, these data suggest that, whereas there is redundancy among RAF isoforms in the early phase of SHOC2-dependent ERK pathway activation, CRAF is the primary RAF kinase driving sustained ERK activation by EGF in the absence of SHOC2.

SHOC2-Independent ERK Activation Requires Palmitoylated HRAS/NRAS and CRAF N-Region Phosphorylation. Previous studies have shown a biphasic HRAS activation response to EGF, with a rapid transient phase occurring at the plasma membrane, followed, with a 10- to 20-min delay, by a sustained phase at the Golgi (43, 44), that is strikingly reminiscent of the ERK response observed in this study. Furthermore, HRAS can differentially activate CRAF in some contexts (45, 46). We thus used siRNAs to investigate the contribution of RAS isoforms to ERK activation by EGF. KD of any RAS protein had no effect on ERK activity in parental DLD-1 cells, consistent with redundancy as observed for RAF isoforms. However, in SHOC2 KO cells, KD of HRAS and NRAS, but not KRAS, significantly impaired EGF-stimulated ERK activation (Fig. 6A). Furthermore, combined KD of HRAS and NRAS inhibited ERK activity more strongly than NRAS/KRAS or HRAS/KRAS combinations in SHOC2 KO cells (SI Appendix, Fig. S6A). Unlike KRAS, NRAS and HRAS are modified by palmitoylation (47), and treatment with the palmitoylation inhibitor 2-bromopalmitate (2-BP) selectively reduced ERK activation at 20 min in SHOC2 KO cells (Fig. 6B). These results thus suggest that the SHOC2-independent/CRAF-dependent sustained phase of ERK activity is driven by palmitoylated NRAS/HRAS proteins. To further investigate additional molecular mechanisms that may contribute to SHOC2-independent CRAF activation, a panel of kinase inhibitors was tested for the ability to modulate sustained ERK activation. In addition to ERK pathway inhibitors, PAK (FRAX597), FAK (PF-562271), and SRC family (SU6656) kinase inhibitors significantly impaired ERK phosphorylation at 20 min of EGF treatment in SHOC2 KO cells (Fig. 6C and SI Appendix, Fig. S6B). Both PAK and SRC are known to phosphorylate the CRAF N-region, at S338 and Y341, respectively, whereas FAK has been linked to both SRC and RAC/PAK signaling. Indeed, FAK inhibitors impaired PAK1 phosphorylation, and PAK, FAK, and SFK inhibitors also impaired CRAF S338 phosphorylation (Fig. 6D). Taken together, these results suggest that N-region phosphorylation of CRAF plays an important role in sustained ERK activation by EGF in the absence of SHOC2. A model summarizing all our data is shown in Fig. 6E.

SHOC2 Is Selectively Required for ERK Pathway Activation under Anchorage-Independent Conditions in KRAS Mutant Cells. We have previously shown that SHOC2 is preferentially required for anchorage-independent proliferation in some RAS mutant cell lines (18). We thus set out to use our isogenic DLD-1 system to elucidate a biochemical mechanism for this observation. SHOC2 KO DLD-1 clones had growth rates similar to parental cells in 2D but were impaired in their ability to grow under anchorage-independent conditions in 3D (Fig. 7 A and B). This effect was partially rescued by reexpression of SHOC2 WT, but not the D175N mutant defective for MRAS/PP1 interaction (Fig. 7B), and is consistent with a selective requirement for the RAF phosphatase function of SHOC2 for tumorigenic properties in some RAS mutant cells. To study a molecular mechanism for this selective SHOC2 contribution to 3D growth, lysates of parental and SHOC2 KO DLD-1 cells growing in 2D or in suspension (poly-HEMA-coated dishes) were compared.
In suspension cells, phosphorylation of AKT and its downstream substrate site S1718 on AF6 is strongly impaired [consistent with PI3K/AKT signaling being adhesion-dependent in many cell types (48-50)], but this is unaffected in SHOC2 KO cells (Fig. 7 C-F). Similarly, phosphorylation of FAK and PAK kinases, also known to be regulated by integrin-mediated attachment to the extracellular matrix (48), was down-regulated in suspension in both parental and SHOC2 KO cells, which correlated with decreased phosphorylation of known PAK sites on CRAF (S338) and MEK (S298) (Fig. 7C). In clear contrast, basal ERK signaling, as determined by phosphorylation of ERK and of ERK substrate sites on BRAF (T753) and CRAF (S289/290/296), was unaffected in parental DLD-1 cells but significantly decreased in SHOC2 KO clones only in suspension. A selective inhibition of ERK signaling in suspension upon SHOC2 ablation was also seen in other SHOC2 KO KRAS mutant colorectal cell lines, such as HCT116 (Fig. 7D) and SW480 (Fig. 7E) cells, but not in V600E, dimerization-independent BRAF mutant RKO or HT29 cells (Fig. 7F). Thus, SHOC2 is preferentially required for ERK signaling under anchorage-independent conditions in the context of oncogenic KRAS but not BRAF signaling. An implication of these observations is that SHOC2-independent mechanisms of ERK activation must predominate under 2D basal growth conditions and that a mechanism similar to that observed in the sustained phase of EGF stimulation, involving N-region CRAF phosphorylation by FAK/SRC or PAK kinases (Fig. 6), may also independently operate in the context of anchorage-dependent/2D growth. Consistent with this possibility, treatment of DLD-1 cells growing in 2D with PAK, FAK, and SRC family inhibitors led to decreased CRAF S338 phosphorylation in both parental and SHOC2 KO cells, but more potently inhibited ERK phosphorylation in the absence of SHOC2 (Fig. 7G). Taken together, our observations suggest that SHOC2-dependent and CRAF/N-region-dependent mechanisms of RAF activation differentially contribute to ERK activation in a context-dependent manner: whereas redundancy makes SHOC2 dispensable for ERK activity under anchorage-dependent 2D growth conditions, in the absence of attachment to the extracellular matrix KRAS-mutant cells preferentially rely on SHOC2-dependent mechanisms for ERK signaling (Discussion and SI Appendix, Fig. S8).

Discussion

This study highlights a key role for S259 RAF dephosphorylation by the SHOC2 phosphatase complex in regulating the dissociation of 14-3-3 from the N-terminal RAF regulatory region and RAF dimerization. In the absence of SHOC2, EGF-stimulated BRAF-ARAF, BRAF-CRAF, and BRAF-KSR heterodimerization are strongly impaired, whereas RAF interaction with RAS is actually increased (Fig. 2B). This result shows that the RAS-RAF interaction can be uncoupled from RAF dimerization in some contexts and is consistent with a model where coordinate inputs from RAS and the SHOC2 holophosphatase are required for RAF heterodimerization and activation. Increased RAS-RAF interaction in the absence of SHOC2 is incompatible with a role for SHOC2 as a scaffold promoting the RAS-RAF interaction, as suggested by some overexpression studies (29, 30). Instead, it is consistent with decreased ERK activity in the absence of SHOC2 leading to relief of ERK inhibitory feedbacks, both upstream of RAS and at the level of RAF, such as CRAF S289/296/301 and BRAF T753, that disrupt the RAF-RAS interaction (51, 52).
Similarly, inhibitory ERK feedback sites on EGFR (T699) and IRS-1 (S636/639) are also inhibited in the absence of SHOC2 and likely contribute to increased AKT phosphorylation upon SHOC2 and ERK pathway inhibition (Fig. 3 A and B) (38, 39, 53). There is controversy around the precise order of the initial steps in the RAF activation cycle and whether S259 dephosphorylation precedes or follows RAS-GTP binding (11). The S259A mutation in CRAF promotes association with RAS (54), which can be interpreted to suggest that S259 dephosphorylation may precede RAS binding, possibly by 14-3-3 dissociation facilitating access of the RAF RBD to RAS. However, our studies support an alternative model (SI Appendix, Fig. S2) where RAS-GTP binding to RAF and recruitment to the membrane is independent of, and precedes, S259 dephosphorylation by the SHOC2 complex: in a time course of EGF stimulation, RAF binding to RAS peaks at 2.5 min and precedes S259 dephosphorylation, which peaks at 5-10 min (Fig. 2B). Additionally, in the absence of SHOC2, under conditions where levels of P-S259 RAF and RAF-14-3-3 complexes are high, RAF readily interacts with RAS in response to EGF (in fact, there is increased RAS-RAF interaction; see discussion above and Fig. 2B). Furthermore, we have previously shown that S259 phosphorylation can be readily detected on RAS-bound RAF (19). Taken together, these observations suggest that S259-phosphorylated RAF is able to bind to RAS and that the RAF RBD is likely to be accessible for interaction with RAS within the closed RAF conformation. Our proposed model also allows for the observations that S259A CRAF promotes RAS binding or that SHOC2 can accelerate the RAS-RAF interaction (31, 54-56) when the tandem arrangement of the RBD and CRD and their cooperation in RAF membrane localization is considered: the CRD can interact with RAS as well as phospholipids and helps anchor RAF at the membrane (57-60).
The CRD hydrophobic loops are likely to be buried in the closed/inactive RAF conformation and may only be exposed for membrane interaction in the open/active conformation upon release of 14-3-3 from the regulatory domain, in a mechanism analogous to that proposed for KSR (61). According to this possibility, CRD exposure upon S259 dephosphorylation, or experimentally in S259A RAF mutants, would increase membrane avidity and stabilize RAS binding to the RBD (59, 60). We also note that our model is consistent with the observation that a CRAF-CAAX mutant constitutively localized at the membrane is independent of RAS but can still be further activated by EGF (62, 63) as well as by S259 dephosphorylation (SI Appendix, Fig. S1B). Our study suggests a role for SHOC2-mediated BRAF S365 dephosphorylation in the regulation of the BRAF-MEK interaction, which inversely correlates with BRAF dimerization. Because under resting conditions MEK interacts with BRAF much more strongly than with ARAF or CRAF, we speculate that the unique N-terminal BRAF-specific (BRS) domain of BRAF may mediate an additional interaction with MEK in the inactive BRAF conformation. The BRAF BRS domain also interacts with KSR1 (64), suggesting a mechanism for competitive displacement upon growth factor-stimulated BRAF-KSR dimerization (SI Appendix, Fig. S2). A definitive answer awaits determination of the crystal structure of full-length BRAF in complex with 14-3-3 and MEK. Our study has uncovered a selective contribution of the SHOC2 phosphatase complex to ERK pathway dynamics. In DLD-1 cells, EGF stimulates ERK pathway activation in a pattern consistent with a biphasic response in which SHOC2 is required for a rapid, transient phase but not for a slower, sustained phase that instead depends on palmitoylated HRAS/NRAS and CRAF signaling. SHOC2 complex formation is driven by MRAS-GTP, and thus its cellular location is likely to be determined primarily by the membrane localization signals within the carboxyl-terminal hypervariable region (HVR) of MRAS. The HVR of RAS proteins directs their differential spatial segregation, with palmitoylated HRAS and NRAS being able to signal from the plasma membrane as well as endomembrane compartments, whereas the polybasic-motif-containing KRAS-4B is thought to signal exclusively from the plasma membrane (47). The MRAS HVR contains a polybasic motif as a second membrane-targeting signal and is thus expected to closely mirror KRAS-4B in its plasma membrane localization, while being refractory to the intracellular trafficking mechanisms of palmitoylated proteins. Indeed, overexpression of YFP/mCherry-fusion proteins in human mammary epithelial cells supports this scenario, as in addition to the plasma membrane, HRAS and NRAS (but not KRAS-4B or MRAS) can be readily detected colocalizing with CRAF at the Golgi and/or other intracellular compartments (SI Appendix, Fig. S7). We propose a model (Fig. 6E) where, upon EGF stimulation, the rapid phase of SHOC2-dependent ERK activation occurs at the plasma membrane, where SHOC2 complex formation upon MRAS activation leads to S259 dephosphorylation of proximal A/B/C-RAF proteins recruited by H/N/K-RAS proteins. In this phase there is redundancy among RAS and RAF isoforms for ERK pathway activation, whereas SHOC2 appears to play an essential, nonredundant role (Fig. 3C).
The slow, sustained phase of ERK activation may be driven by internalization of palmitoylated RAS proteins, which thereby become spatially segregated from the SHOC2 complex that remains anchored at the plasma membrane by MRAS, alongside KRAS-4B. Internalization may result from intracellular trafficking by the constitutive acylation cycle of palmitoylated proteins and/or receptor-mediated endocytosis and/or other mechanisms operating in a nonmutually exclusive manner (44, 65, 66). From these intracellular compartments, H/N-RAS proteins signal primarily through CRAF, which is now uncoupled from regulation by the SHOC2 complex but dependent on N-region phosphorylation by kinases such as PAK, SRC, and FAK (directly or indirectly). A biphasic HRAS activation by EGF with a slow, sustained phase at the Golgi dependent on the acylation cycle (44, 65, 67), as well as differential CRAF activation by HRAS (but not KRAS) dependent on endocytosis (45), are both consistent with this model. A similar biphasic ERK response upon G protein-coupled receptor internalization has been linked to phosphorylation of different ERK substrates from spatially distinct signaling platforms (1, 68), and it is likely that SHOC2-dependent and -independent phases of ERK activation are also associated with phosphorylation of ERK substrates at distinct spatial compartments. We also note that similar biphasic kinetics linked to compartment-specific RAS-ERK signaling have been observed during the process of thymocyte selection (66, 69), and future studies should address the role of the SHOC2 complex in immune tolerance. The contribution of CRAF S259 dephosphorylation and/or dimerization to the slow ERK activation phase remains unclear. We were unable to detect significant S259 dephosphorylation or RAF heterodimerization in the absence of SHOC2, but low levels below the sensitivity of our experimental conditions cannot be ruled out. On the other hand, we note that experimental constraints when analyzing endogenous proteins have not allowed us to measure homodimerization, and S259-independent CRAF homodimerization during the slow, sustained phase remains a distinct possibility. Reports of N-region CRAF phosphorylation promoting relief from autoinhibition and dimerization (70, 71) and of high levels of S338 phosphorylation activating CRAF in the presence of high levels of inhibitory phosphorylation at S43 and S259 (72) support this scenario.

Fig. 6 (legend, continued). Lysates were probed and P-ERK quantified by Li-COR (mean ± SD) (n = 3-7). Significance was determined using a two-tailed t test: *P < 0.05, **P < 0.01, or ***P < 0.001. See SI Appendix, Fig. S6B for a representative experiment. (D) Cells were pretreated with 10 μM PAK (FRAX597), SRC (SU6656), and FAK (PF-562271) inhibitors, alone or in combination, 30 min before stimulation with EGF for 20 min. (E) Model of the selective contribution of the SHOC2 complex to ERK pathway spatiotemporal dynamics. EGF receptor activation leads to N/H/K-RAS and MRAS/SHOC2 complex activation at the plasma membrane and an early phase of ERK activation involving A/B/C-RAF isoforms. As a result of intracellular trafficking of palmitoylated proteins (by the constitutive de/reacylation cycle and/or receptor-mediated endocytosis and/or other nonmutually exclusive mechanisms not shown), H/N-RAS travel to endomembrane compartments from where they signal through CRAF to drive sustained ERK pathway activation. Because polybasic motif-containing KRAS-4B and MRAS (and associated proteins) remain at the plasma membrane, this CRAF is now uncoupled from regulation by the SHOC2 complex but is instead dependent on N-region phosphorylation by kinases such as PAK, SFK, and FAK. See Discussion for further details. Membrane anchors represent farnesyl (red) and palmitate (black) groups. S338 and Y341 residues in CRAF belong to the N-region. ERK may phosphorylate diverse substrates in different compartments, as shown by different color arrows.

Fig. 7. SHOC2 is selectively required for ERK pathway activation under anchorage-independent conditions in KRAS mutant cells. (A) SHOC2 is dispensable for anchorage-dependent/2D growth. Growth curves of DLD-1 parental cells and three independent SHOC2 KO clones stably expressing WT and D175N SHOC2 were generated using the IncuCyte Live Cell imaging system. Representative of n = 2 experiments. (B) SHOC2 KO impairs growth in 3D. Cells in A were seeded in low-attachment plates, and growth at day 5 was measured by Alamar blue staining (mean ± SD) (n = 2-4). Significance was determined using a two-tailed t test: *P < 0.05, **P < 0.01, or ***P < 0.001. (C) SHOC2 is preferentially required for ERK pathway activation in 3D in DLD-1 cells. DLD-1 cells were grown for 24 h on regular or poly-HEMA-coated plates and lysates immunoprobed as indicated.
It is also worth noting that both SFK and PAK activators, such as RAC and CDC42, are palmitoylated and expected to travel with H/N-RAS during both the endocytosis and acylation-cycle scenarios of intracellular trafficking, which would thus facilitate N-region phosphorylation of H/N-RAS-bound CRAF at these compartments. The biochemical mechanisms of SHOC2-independent, CRAF N-region-dependent ERK activation observed in the sustained phase of EGF stimulation in DLD-1 cells appear to operate as well in the context of anchorage-dependent proliferation in 2D (SI Appendix, Fig. S8). Integrin signaling regulates both FAK-SRC and PAK activation and cooperates with RTKs to regulate sustained ERK activation in multiple contexts (73-79). Thus, integrins are well poised to mediate, at least in part, SHOC2-independent ERK activation from sites of attachment to the extracellular matrix. Redundant SHOC2-dependent and SHOC2-independent/CRAF-dependent mechanisms of ERK activation under basal 2D conditions are likely to account for the observation that both SHOC2 and CRAF ablation alone are well tolerated, whereas combined inhibition potently inhibits growth (Fig. 5E), as complete inhibition of the ERK response is incompatible with proliferation (8, 41, 42). In clear contrast, however, in the absence of adhesion to the extracellular matrix, a key contribution of SHOC2 to ERK activity in KRAS mutant cells is uncovered in 3D (Fig. 7). Basal PI3K/AKT and FAK/PAK activation is strongly impaired in the absence of matrix-dependent attachment, which is likely to enhance the dependency on SHOC2-dependent ERK signaling for anchorage-independent growth in RAS mutant cells (18). Taken together, our results thus provide a molecular mechanism for a selective RAS oncogene addiction to SHOC2 that has also been observed in other studies (80, 81) (https://depmap.org) and presents a therapeutic opportunity. Current ERK pathway inhibitors have failed in the clinic against RAS-driven cancers primarily because toxicity precludes a therapeutic index. Our study suggests the SHOC2 phosphatase complex functions as a regulatory node for only a subset of the ERK signaling response. Thus, in contrast to targeting RAF/MEK/ERK core pathway components, which inhibits global ERK signaling, targeting the SHOC2 complex may provide a mechanism for selective ERK pathway inhibition and better therapeutic margins against RAS-driven tumors. PP1 holophosphatases remain underexplored targets of pharmacological inhibition (82-84), and future efforts should drive development of inhibitors of the SHOC2 holophosphatase. In summary, this study highlights a selective contribution of the SHOC2 phosphatase complex to RAF regulation and ERK pathway spatiotemporal dynamics that is differentially engaged by KRAS oncogenic signaling and that may allow for context- and compartment-specific inhibition of ERK signaling.

Materials and Methods

Cell Proliferation in Anchorage-Dependent and -Independent Assays. Growth curves in 24-well plates were generated using the IncuCyte system (Essen BioScience). Pictures were taken every 2 h, with each data point a composite of four different images from the same well. Growth medium was replaced every 2 d. Anchorage-independent growth (growth in 3D) was assessed by seeding 1,000 cells in 384-well ultralow-attachment plates (Greiner). After 5 d, Alamar blue was added to the cells and fluorescence was measured using a plate reader.
Colony assays were performed 2 d after siRNA transfection by seeding 2,000 cells in 24-well plates or 30,000 cells in 6-well plates. Cells were grown for 10 d, replacing media every 2 d, stained with 0.5% crystal violet, and photographed using a digital scanner.

Cell Lysis and IP Assays. Cells were lysed in PBS with 1% Triton X-100, protease inhibitor mixture (Roche), phosphatase inhibitor mixture, and either 1 mM EDTA or 5 mM MgCl2. Tagged proteins were immunoprecipitated/pulled down from cleared lysates using FLAG (M2) agarose (Millipore Sigma), glutathione Sepharose, or Streptactin beads (GE Healthcare). Endogenous proteins were immunoprecipitated using antibodies (SI Appendix, Supplementary Materials and Methods) and protein A/G beads (GE Healthcare). After a 2-h rotating incubation at 4°C, beads were extensively washed with PBS-E or PBS-M lysis buffer, drained, and resuspended in NuPAGE LDS sample buffer (Life Technologies). Samples were analyzed by Western blot with HRP- (GE Healthcare) and DyLight- (Thermo Scientific) conjugated secondary antibodies. Membranes were visualized using an Odyssey scanner (Li-COR) or an ImageQuant system (GE Healthcare).
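The figure legends above report P-ERK quantification as mean ± SD with significance from a two-tailed t test. The following is a minimal sketch of that analysis, assuming band intensities have already been exported from the Li-COR software as plain numbers; the replicate values used here are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the statistics reported in the figure legends:
# replicate band intensities summarized as mean +/- SD, then two
# conditions compared with a two-tailed t test. Values are hypothetical.
import numpy as np
from scipy import stats

parental = np.array([1.00, 0.92, 1.08, 0.97])  # hypothetical normalized P-ERK
shoc2_ko = np.array([0.41, 0.35, 0.48, 0.39])  # hypothetical normalized P-ERK

for label, x in (("parental", parental), ("SHOC2 KO", shoc2_ko)):
    print(f"{label}: mean = {x.mean():.2f}, SD = {x.std(ddof=1):.2f}")

t_stat, p_value = stats.ttest_ind(parental, shoc2_ko)  # two-tailed by default
stars = ("***" if p_value < 0.001 else
         "**" if p_value < 0.01 else
         "*" if p_value < 0.05 else "ns")
print(f"t = {t_stat:.2f}, P = {p_value:.2e} ({stars})")
```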
Shear Performance of Steel Fibers in Reinforced Concrete Beams

Over the past few decades, significant growth has been observed in the utilization of steel fibers in reinforced concrete (R.C) members. Past research studies on hybrid concrete endorsed an optimum utilization of steel fibers (1.5% by volume), as it effectively contributed to improving flexural properties of reinforced concrete members such as R.C beams and slabs. However, the contribution of fibers to the shear resistance mechanism of R.C beams is not well identified in the previous research. In this context, an experimental program was conducted to find the shear contribution and associated parameters of fibers in Steel Fiber Reinforced Concrete (SFRC) beams. A series of test programmes was conducted on three full-scale reinforced concrete beams (NSF: no steel fibers; BSF1: steel fibers in the shear span; BSF2: steel fibers in the full span) with different configurations of shear reinforcement and different placements of SFRC in the tested beams. The test results were evaluated on the basis of strength and durability aspects at service loads and at the limit of failure. The results indicate that the presence of steel fibers in a reinforced concrete beam contributes significantly to the shear resistance mechanism and the ductile behavior of the R.C beam. This improvement was observed in BSF1, where the SFRC was placed in the shear span region and the rest of the R.C beam was provided with minimum conventional stirrups as shear reinforcement. Further, the steel fibers possess good compatibility with concrete and steel reinforcement, which enhances the mechanical and serviceability conditions of the R.C beam, such as shear strength, ductility, stiffness with respect to strength and deflection, and crack width under serviceability conditions.

I. INTRODUCTION

Research communities are striving to improve the plastic properties of reinforced concrete members and to mitigate the brittle failures induced under high shear conditions. In this context, researchers have identified promising options such as the addition of admixtures and fiber composite materials to green concrete, or changes to the reinforcement detailing of concrete members. The latter, however, induces constructability issues and design constraints, such as reinforcement fabrication and placement in the structural members. Although the design codes (ACI, NZS, BS, IS) established threshold limits on the utilization of steel reinforcement in R.C members such as beams, columns, and slabs, researchers have identified successful methods, such as introducing admixtures or fiber components and using geopolymers or other composites in concrete, to improve the plastic properties of R.C elements. In this process, researchers found steel fibers to be well suited because of their compatibility and coexistence with green concrete. The use of steel fibers in R.C beams has been endorsed to improve shear strength and durability and to mitigate or delay brittle shear failure of beams [1], [2]. Experimental works of researchers [4], [6], [8] found that the substantial use of steel fibers in reinforced concrete enhances ductility and delays the failure mechanism of the beam [3]. Subsequently, fibers contributed a significant role in improving shear strength [1], [3] and stiffness [6], [10], as they promote uniform stress distribution within R.C beams.
Although the contribution of reinforcement steel and concrete to the shear failure mechanism of beams is well established in previous research and well addressed by design codes (ACI, NZS, BS, IS), the shear resistance mechanism of steel fibers alone has not been identified when fibers are used at various locations of the beam, such as the shear span or the full span, with or without confinement, since the conventional practice of shear resistance relies on stirrups or bent-up bar reinforcement. Since the tensile properties of random fibers significantly influence the shear resistance mechanism of R.C beams [3], there is a need for further investigation to identify the unique contribution of steel fibers to the shear resistance mechanism. This would encourage designers toward more versatile utilization of fibers.

II. RESEARCH SIGNIFICANCE

The shear performance of reinforced concrete is significantly influenced by the ductile property of the R.C beam at failure. To improve ductility, the design codes established threshold limitations on the use of steel reinforcement. Hence, further improvement of plasticity may be achieved by the addition of steel fibers in R.C beams [1], [2]. However, the contribution of fibers to the shear resistance mechanism of SFRC beams was not well identified by previous research. For this reason, most designers have avoided using SFRC in present R.C construction practice. This paper focuses on issues related to the contribution of steel fibers to the shear resistance mechanism of R.C beams.

Test specimen NSF, represented in Figure 1a, is taken as the control beam of M25 grade concrete (without steel fibers), with conventional stirrup reinforcement used for the shear resistance mechanism of the beam. In this specimen, the shear resistance mechanism is provided by both steel and concrete.

III. SCOPE AND STUDY LIMITATIONS

Test specimen BSF1, represented in Figure 1b, had SFRC poured in the shear span zone of the beam only, with the rest of the beam poured with M25 grade conventional concrete and minimum conventional shear reinforcement in the form of stirrups. In this specimen, the shear resistance mechanism is provided by steel, concrete, and fibers. Test specimen BSF2, represented in Figure 1c, was poured over the entire beam with SFRC prepared by adding 1.5% steel fibers to M25 grade conventional concrete, without conventional shear reinforcement. Consideration was given to evaluating both the shear capacity and the confinement effect of the beam. In this specimen, the shear resistance mechanism and the confinement of the beam are provided by concrete and fibers only; stirrup reinforcement was not considered for the shear resistance mechanism.

B. Material Testing & Specifications

Cement (OPC-53), fine aggregate (river sand, Zone II), and coarse aggregate (20 mm) were used to prepare M25 grade concrete. Random steel fibers were mixed with the conventional concrete at the optimum dosage of 1.5% by volume of concrete. No plasticizers were added during preparation of the concrete mix, and a water-cement ratio of 0.50 was adopted. Hooked-end steel fibers with an aspect ratio (length/diameter) of 30 were used (Figure 2). Cylindrical specimens were used to determine the mechanical properties of the concrete, such as the splitting tensile strength and the modulus of elasticity.
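To put the mix specification in batching terms, the sketch below converts the 1.5% volumetric fiber dosage into a mass of fibers per cubic metre of concrete. It is an illustrative calculation only: the steel density of 7,850 kg/m³ and the 1 mm fiber diameter used in the length check are assumed typical values that the paper does not state.

```python
# Convert the 1.5% volumetric steel-fiber dosage into a batching mass per
# cubic metre of concrete. The steel density (and the fiber diameter used
# in the length check) are assumed values; the paper gives only the dosage
# and the aspect ratio l/d = 30.
FIBER_VOLUME_FRACTION = 0.015   # 1.5% of the concrete volume
STEEL_DENSITY = 7850.0          # kg/m^3, assumed typical value
ASPECT_RATIO = 30.0             # l/d, as specified in the paper

def fiber_mass_per_m3(volume_fraction, density=STEEL_DENSITY):
    """Mass of steel fibers (kg) required per cubic metre of concrete."""
    return volume_fraction * density

def fiber_length_mm(aspect_ratio, diameter_mm):
    """Fiber length (mm) implied by the aspect ratio for a given diameter."""
    return aspect_ratio * diameter_mm

print(f"Fiber dosage: {fiber_mass_per_m3(FIBER_VOLUME_FRACTION):.1f} kg/m^3")
print(f"Length for an assumed 1 mm diameter: {fiber_length_mm(ASPECT_RATIO, 1.0):.0f} mm")
```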
V. EXPERIMENTAL OBSERVATIONS

The test results show a shear mode of failure at ultimate loads in the beams, and corresponding observations were made for peak load, ultimate shear capacity, ultimate deflection, ultimate moment, and shear crack width at service and ultimate loads. In the first series, testing of the control beam (NSF) was initiated by the formation of a secondary crack at the mid span of the beam (moment 17.32 kN·m); as the load on the test specimen increased, a primary crack subsequently developed in the shear zone at a moment capacity of 23.81 kN·m. With further load increments (10 kN/minute), the shear cracks in the shear zone (primary cracks) expanded at an inclination of 48°, and the beam failed at a moment of 49.79 kN·m. The crack width at failure was observed as 20 mm. In the second series of testing, on beam BSF1, the primary crack initiated at mid span at a moment capacity of 17.32 kN·m. As the load on the specimen increased, primary cracks started in the shear zone and progressed at 35°; the observed moment capacity was 25.98 kN·m. With subsequent load increments, the primary crack in the shear span expanded widely and progressed, and shear failure was observed at a moment capacity of 56.29 kN·m. The crack width at failure was measured as 18 mm. In the third series of the test programme, the load-deflection test on specimen beam BSF2 showed that secondary cracks initially formed at a moment of 10.82 kN·m in the flexure zone of the beam. As the load increased at 10 kN/minute, a primary crack initiated in the shear zone and progressed at an inclined angle of 42° at an initial moment capacity of 19.48 kN·m. Further load increases made the cracks propagate towards the compressive zone of the beam. The cracks widened progressively until the ultimate failure of the beam at a moment capacity of 45.46 kN·m. The crack width at failure was measured as 26 mm.

The tested beams (NSF, BSF1, BSF2) failed at different ultimate loads owing to the incorporation of steel fibers in the beams. In beam NSF, the initial flexural crack developed at the mid span of the beam at a reaction load of 40 kN. The shear resistance of the control beam (NSF) was provided by conventional shear reinforcement in the form of two-legged stirrups, and the beam failed at a reaction of 115 kN (Vu) and an ultimate moment of 49.795 kN·m (Mu). In the NSF beam, the shear cracks developed along a failure surface inclined at 42°, and a shear mode of failure occurred, as shown in Figure 8a.

Fig. 8a. Shear failure of the conventional R.C beam NSF.

Specimen beam BSF1, with fiber reinforced concrete (1.5% fibers by volume) used in the shear span, is shown in Figure 8b. In the initial stage of testing, the beam developed a flexure crack at mid span at a reaction load of 80 kN, with a corresponding moment of 17.32 kN·m. As the load increment proceeded at 10 kN/minute, primary cracks developed in the shear span of the beam and progressed towards the compression zone at an angle of 38.6° with the horizontal plane. Finally, the beam failed in shear at an ultimate load of 130 kN, with an ultimate moment of 56.29 kN·m (Figure 8b). The specimen beam using fiber reinforced concrete over the full span, BSF2, developed a flexure crack at the mid span of the beam at a reaction load of 25 kN, with a corresponding moment of 10.82 kN·m.
The static loads were applied in increments of 10 kN/minute; with further application of load, primary cracks developed in the shear span and progressed towards the compression zone of the beam at an angle of 41° with the horizontal axis. Finally, the beam failed at an ultimate shear load of 105 kN with an ultimate moment of 45.46 kN·m. The mode of failure is shown in Figure 8c. The test results at ultimate and service loads are shown in Table 4, and the theoretical and experimental results for beams NSF, BSF1, and BSF2 are compared in Table 5.
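The reported ultimate reactions and moments are mutually consistent: dividing each ultimate moment by its reaction yields the same lever arm for all three beams. The sketch below performs that check under the assumption of a simply supported beam loaded through point loads, so that the maximum moment equals the support reaction times the shear span; the resulting ~0.433 m shear span is inferred from the reported numbers, not stated in the paper.

```python
# Back-calculate the effective shear span a from the reported ultimate
# reactions Vu and moments Mu, assuming Mu = Vu * a for a simply supported
# beam under point loading. The span value is inferred, not stated.
beams = {
    "NSF":  {"Vu_kN": 115.0, "Mu_kNm": 49.795},
    "BSF1": {"Vu_kN": 130.0, "Mu_kNm": 56.29},
    "BSF2": {"Vu_kN": 105.0, "Mu_kNm": 45.46},
}

for name, r in beams.items():
    a_m = r["Mu_kNm"] / r["Vu_kN"]  # lever arm (shear span) in metres
    print(f"{name}: a = Mu/Vu = {a_m:.3f} m")

# All three beams give a ~= 0.433 m, confirming that the reported moments
# were derived from the reactions with a common test geometry.
```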
Policy Formulation During Pandemic COVID-19: A New Evidence of Multiple Streams Theory from Yogyakarta, Indonesia

This study aims to answer how the policy formulation process resolved the pandemic's impact in DIY Province, Indonesia, in 2020. DIY is chosen as the case in this study because the governor is also a king in this region. It was also considered the best province in handling the Covid-19 pandemic in 2020. This study used the multiple streams framework proposed by John W. Kingdon to elaborate the policy formulation process. The research method is qualitative, conducted using online questionnaires, in-depth interviews, and documentary study from March to October 2020. It found that the policy, problem, and political streams overlapped in a policy window, i.e., the pandemic crisis, as a common concern that had to be addressed immediately. One new finding is that a government administration system combining monarchy and decentralization models encouraged crises to be resolved more quickly through an integrated multiple-streams formulation. The other new finding is that the governor, as a policymaker, can take advantage of the pandemic as a policy window to act as the sole policy entrepreneur.

Introduction

The policy formulation process is one of the most crucial stages in the policy process. The success of this stage determines the following steps of policy. In the early development of policy studies, such a mindset was commonly associated with a typical system theory model. For instance, Easton (1965) explains how the input stage of demands and support initiates the political process; both subsequently undergo a process that occurs within the political system. This study examines how policymakers in Yogyakarta Special Province (DIY) in Indonesia handled the Covid-19 pandemic, particularly in health, economics, transportation, and social safety nets. These specific cases depend on the unique locus and governance. A study conducted by Kurniadi (2009) in DIY showed this government's history: it had been built and led by a king (Sultan) who also serves as governor. Indonesian Constitution Law Number 13 (2012) stipulates DIY's privileges: leadership (governor), bureaucracy, culture, land, and spatial planning. Sabatier (2007) criticized the M.S. approach on the grounds that the multiple-streams framework has no explicit hypotheses. It is also so fluid in its structure and operationalization that falsification is difficult (Zohlnhöfer, 2016). However, this approach is still useful for analyzing the case of DIY. One reason is the similarity of several policy formulation variables during this pandemic to the multiple streams model. Another reason is Blomkamp et al.'s (2017) suggestion that, to better understand policy networks and practices in Indonesia, one could further uncover who is involved in the process, what evidence they use, and how they can shape debates on particular issues. This theory will be used to analyze how the streams of problems, policy, and politics interact with each other, when the policy window appears, and who the policy entrepreneur is in handling the Covid-19 pandemic. Mintrom & Norman (2009) suggested that there is also a need for more study of the interactions between policy entrepreneurs and their specific policy contexts. Context greatly influences policy choices because the policy problems to be resolved are generally multidimensional and ill-structured. This condition forces decision-makers to move from one way of thinking to another when dealing with a public issue (Mintrom & Norman, 2009; Zahariadis, 2008).
The study argues that combining a monarchy system with the decentralized unitary state system in one administration produces its own policymaking model, which forms the particular context of this research area. Accordingly, this study examines the provincial government's policy formulation practices during the Covid-19 pandemic. As of October 2020, 3,835 people in DIY had been infected with COVID-19, 3,147 had recovered, and 93 had died. Layoffs affected 101,805 people, or 4.57 percent of the total workforce of 2.2 million (Tribunjogja.com, 2020). Economic growth in the province decreased by about 6 percentage points to -2.69 percent (Tribunjogja.com, 2020). Central government spending in the province in 2020 was adjusted downward to IDR 10.04 trillion, and the realization of budget expenditures was only 63.16 percent as of September 2020 (jogjaprov.go.id, n.d.).

Theoretical Framework

During the 2020 pandemic in DIY, crisis policy formulation showed an irregular pattern in the problem and policy streams. Moreover, many variables determine the policy formulation process. Hence, this research's theoretical framework uses Kingdon's M.S. theory (J. W. Kingdon & Brodkin, 2014). Kingdon explained that M.S. consists of problem identification (problem stream), solution production (policy stream), and choice (politics stream).

The Problem Streams

The problem stream consists of publicly perceived problems shared by the public and policymakers because they have disrupted life together (Zahariadis, 2008; Eising, 2013). It is the perception of a public affair that requires government action to overcome it. Therefore, a public problem is a policy problem. Policy problems have the characteristics of interdependency, subjectivity, artificiality, and dynamics (Dunn, 2018). These features force public problems to be treated comprehensively (holistically), with each subsystem an inseparable part of the more extensive system that binds it (Mitroff & Blankenship, 1973). J. W. Kingdon & Brodkin (2014) and Eising (2013) mention several components of the problem stream: indicators to measure change, focusing events, feedback, budget constraints, and how the problem is defined. Policymakers consider indicators of change because decision-making draws on information from specific political pressures as well as from evidence-based policy indicators. Policymakers need these indicators to measure the magnitude of change in order to become more concerned with the problem (J. W. Kingdon & Brodkin, 2014). However, indicators of change are not always available in policy information. Momentum factors such as crises, disasters, or emerging symbols of change, including the experience of individual decision-makers, push policy issues to get the public's and the government's attention (J. W. Kingdon & Brodkin, 2014). Ambiguous situations like this make it difficult to unify understanding (Feldman, 1989; Zahariadis, 2008; J. W. Kingdon & Brodkin, 2014). Similarity in defining policy issues makes it easier for the government to complete the policy agenda (J. W. Kingdon & Brodkin, 2014).
Summarizing the theoretical discussion of M.S. above, Cairney & Jones (2016) concluded that "Problem stream: attention lurches to a policy problem; Policy stream: a solution to the problem is available; and Politics stream: policymakers have the motive and opportunity to turn a solution into policy." This study examines the problem stream on the basis of how the community, executive, and legislature in DIY defined the crisis as a common problem. Another question is how the executive responded to the crisis, according to legislative perceptions, in the health, economic, transportation, and social assistance sectors, and what exactly the executive did when the crisis began. This study also examines the availability of the local budget and the executive's solidity in deciding the pandemic budget. Where perceptions differ between these agencies, the governor's role is to mediate frictions and forge mutual agreements between the actors.

Policy Streams

A policy stream is a solution offered by the community, experts, and other stakeholders in responding to problems (Eising, 2013). The idea of a policy solution must first go through a debate stage with the support of science and technology (evidence-based policy). This idea must also gain public support through a shared view of both the problem and the solution. In this respect, J. W. Kingdon & Brodkin (2014) rely on the role of policy communities, groups of experts and other stakeholders who discuss the available policy solutions. The concept of policy communities encourages the emergence of policy entrepreneurs, namely those who carry out an advocacy function when trying to make policy changes (Mintrom & Norman, 2009; Eising, 2013). J. W. Kingdon & Brodkin (2014) noted that policy entrepreneurs "could be in or out of government, in elected or appointed positions, in interest groups or research organizations." The motivations of policy entrepreneurs include helping the government solve public problems, forming policy ideas that follow the values they believe in, and the desire to always be close to the decision-making ring (J. W. Kingdon & Brodkin, 2014). Policy entrepreneurs have a role in softening up policy ideas that are problematic because they do not meet technical and scientific criteria, the level of public acceptance, budget availability, or the interests of decision-makers (J. W. Kingdon & Brodkin, 2014; Mintrom, 2011; Mintrom & Norman, 2009). The role of policy entrepreneurs can combine the three streams to produce policy change (Colebatch, 2006; Sabatier, 2007; Zahariadis, 2008; J. W. Kingdon & Brodkin, 2014). The policy stream is examined here through the basis on which the DIY government made decisions. When valid and reliable data were not available, how did the local government make decisions, and how did the community respond? The unavailability of data encouraged each agency to interpret its own pandemic handling program. Through the policy stream, the role of the regional head as a policy entrepreneur in solving problems becomes increasingly visible. Mintrom & Norman (2009) argue that policy entrepreneurs have several functions: the ability to read issues, define problems, and build networks. This study examines the role of the policy entrepreneur in handling the COVID-19 pandemic in DIY.

Political Streams

The last stream is the political stream. Political streams occur and generally flow freely alongside the problem and policy streams (J. W. Kingdon & Brodkin, 2014).
Political streams consist of many factors, such as changes in national conditions, changes in officials and members of parliament, and pressure campaigns carried out by interest groups, including political parties (Eising, 2013; J. W. Kingdon & Brodkin, 2014; Novotny et al., 2015). In other words, J. W. Kingdon & Brodkin (2014) state that political streams consist of the public mood, pressure group campaigns, election results, partisan or ideological distributions in parliament, and changes of government (Zahariadis, 2008; Sabatier, 2007). Within the political stream, the study analyzes the national mood and its implications for the relationship between the central government and the local government of DIY, including central support to assure regional officials of security in making policies to handle the COVID-19 pandemic. In addition, the analysis pays attention to social and political conditions in DIY, especially the level of hidden conflict that influences the policy agenda. Politics here includes explaining the DPRD's political position in the DIY government system and the frictions of interest that occur within the executive. According to Kingdon, elected and appointed officials are important actors in the agenda-setting process, and they are generally the most influential actors among the agenda setters (J. W. Kingdon & Brodkin, 2014; Novotny et al., 2015). The combination of political and administrative abilities, supported by his cultural position, allows the governor to play the role of a policy entrepreneur. The implication of these streams is the emergence of the policy window, the moment at which policy entrepreneurs work the multiple streams to push their pet solutions or draw attention to their particular problems. J. W. Kingdon & Brodkin (2014) analogize the convergence of the three streams to a policy referred to as "primeval soup policy," which takes time for "softening" so that many actors can accept the issue in a policy subsystem (Cairney & Jones, 2016). The policy window is generally open for a limited time. Political streams, such as a change of government regime, a change in the political constellation in parliament, or a shift in the national mood, can open the policy window. Meanwhile, from the aspect of the problem stream, the window opens due to public issues that are trending and become a common problem. Crises, disasters, and changes of position in the bureaucracy are also factors that determine whether the policy window opens. Finally, the window closes when there are no mutually agreed policy alternatives (J. W. Kingdon & Brodkin, 2014). The primeval soup policy requires the intervention of policy entrepreneurs to fight for alternatives that all parties accept. Kingdon's M.S. theory explains that a person or group must act as a policy entrepreneur who tries to harmonize problems, solutions, and politics behind a policy output. Policy entrepreneurship consists of three categories: individual policy entrepreneurs, collective policy entrepreneurs, and donor organizations (Meijerink & Huitema, 2010). Overall, J. W. Kingdon & Brodkin (2014) state that policy entrepreneurs are vital actors in advocating alternative solutions, while the policy window is open, into legal policy. There are four elements of policy entrepreneurship: displaying social acuity, defining problems, building teams, and leading by example (Mintrom & Norman, 2009).
If the Kingdon model puts the three streams in balance, will the formulation process produce the final decision? Only if there is a policy entrepreneur who can unite the various interests. In the context of handling Covid-19, this study confirms that the governor acts as a policy entrepreneur who accommodates the different, opposing streams. Mintrom & Norman (2009) argued that, within the policymaking process, policy entrepreneurs take advantage of "windows of opportunity" to promote policy change. This study tries to answer how a pandemic as a public problem can shift into a policy window. This could also be a novel input for knowledge development on how crises help break policy deadlocks caused by different institutional missions and competition among high-level public officials. The role of policy entrepreneurs in opening policy windows to handle a pandemic's impact is interesting to examine. It contrasts with a conclusion from Pellini et al. (2018), who concluded that without acceptance by civil servants within the bureaucracy, leadership (even from the very top) alone cannot change behaviors and attitudes towards the use of evidence in policymaking. On the other hand, this study found a significant role of the DIY top leader (the governor) in determining the data, the policy process, and the results. The position of civil servants in the policy formulation process is to translate the governor's vision. The researcher analyzed the DIY policy formulation to determine its similarities to and differences from the formulation process of the Kingdon model. The study highlights a conclusion formed by the researcher about the overall meaning obtained from the formulation process of DIY policies.

Research Method

The method of this study is qualitative research. The study focused on the case of policy formulation in health, economics, transportation, and social safety nets in DIY during the Covid-19 pandemic in 2020. Through those policies, the researcher attempted to identify the policy formulation process. This study selected policymaking in DIY given its status as the oldest province in Indonesia. It also has a unique administration model that combines a monarchical model with decentralized government. Furthermore, the central government has claimed that DIY was a successful province in containing the Covid-19 pandemic in 2020. The research began with a desk study process, from March 2020 to October 2020. Data were collected through online questionnaires, telephone interviews, and documentary sources. The primary data are questionnaires from higher-ranking public officials of the local government at the provincial and district levels (PEMDA) and all local legislative members at the provincial and district levels (DPRD). In addition, in-depth interviews were conducted with eight key informants among higher-ranking public officials directly involved in the technical process of policy formulation, from PEMDA (4 informants) and DPRD (4 informants). In the initial stage, a stakeholder map of the actors holding roles in policymaking was prepared. The researcher has also worked as a researcher within the DIY administration, often following provincial and district policy processes since 2016. The research also identified other actors through the recommendations of key informants (snowball technique).
The questionnaires were shared with all policymakers, both PEMDA and DPRD respondents, who filled them out using the Google Forms application. The population of PEMDA policymakers (higher-ranking public officials) was 164 people, and the DPRD population was 244. The proportion of returned questionnaires was 31 percent for the higher-ranking public officials and 34 percent for DPRD members. The data from the questionnaires were analyzed using SPSS and presented descriptively. The respondents' answers, used to assess the extent to which the suspected factors formed the policy formulation model, were obtained from the questionnaires and reinforced by the results of in-depth telephone interviews and the researcher's observations while interacting with policymakers. As reinforcement, data from online newspapers were juxtaposed with the primary data to draw conclusions about the policy formulation process during the Covid-19 pandemic.
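Because only percentages are reported, the sketch below back-calculates the approximate number of returned questionnaires from the stated populations and return rates; the absolute counts are an inference by rounding, not figures given in the paper.

```python
# Infer approximate counts of returned questionnaires from the stated
# populations and return rates. The paper reports only percentages, so
# the absolute counts printed here are rounded inferences.
populations = {"PEMDA officials": 164, "DPRD members": 244}
return_rates = {"PEMDA officials": 0.31, "DPRD members": 0.34}

for group, n in populations.items():
    returned = round(n * return_rates[group])
    print(f"{group}: ~{returned} of {n} returned ({return_rates[group]:.0%})")
# PEMDA officials: ~51 of 164 returned (31%)
# DPRD members: ~83 of 244 returned (34%)
```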
They considered PEMDA's management of the sectors affected by Covid-19 to be slow (37 percent for the health sector, 45 percent for the economic sector, 39 percent for the transportation sector, and 42 percent for the social safety nets). These perception data from DPRD members show that PEMDA officials were not reasonably responsive to fast-moving policy problems, even though they were already aware of the crisis. For example, informants C, D, and E from DPRD criticized the health sector for lacking valid data on how many people were infected and on the available health facilities. In the economic sector, economic institutions had not been sufficiently alert in determining which sectors were affected by the pandemic and which local government policies should respond immediately to its impacts. In the transportation sector, the DPRD complained about the unclear policy on which vehicles were allowed to enter and leave DIY. In the social safety net sector, complaints concerned the validity of the target-group data and the budget allocation. Public officials often respond slowly to crises like pandemics because they face two choices in carrying out their duties: shifting their work patterns to crisis mode or maintaining routine work. The data in Table 2 show that PEMDA public officials were torn between these two options. When the pandemic began to break out, many public officials reported that they were discussing the formation of a task force for handling Covid-19 (87 percent), refocusing the budget for managing the pandemic (97 percent), planning Work from Home (WfH) arrangements for bureaucrats (93 percent), and discussing strategies for dealing with the effects of Covid-19 (97 percent). At the same time, they stated that they still had to discuss the local budget and program plan for 2021 (91 percent) and produce regular institutional reports (94 percent). In Indonesia's bureaucratic system, these latter tasks are the routine work of institutions and absorb more than 70 percent of working time; accordingly, some public officials (31 percent) answered that they still carried out regular activities. The choice of public officials to focus on handling the pandemic is very positive in a crisis. A surprising answer, however, was that 13 percent of officials said they did not consider resolving public problems when asked what considerations shaped their commitments in responding to the impact of Covid-19. Although this 13 percent is a minority, it is worth asking why they did not consider resolving public problems such as the pandemic's impact. These were officials from institutions that regarded their duties as not directly affected by the pandemic, for example agriculture, information and communication, settlements, youth and sports, and population affairs. The GRDP data for 2020 show that six sectors still grew positively: information and communication (19.70 percent), health services (19.18 percent), education services (4.47 percent), agriculture (4.19 percent), real estate (1.27 percent), and water supply (0.51 percent). These data indicate that the pandemic left many public officials uncertain about what they should do: complete routine activities or respond immediately to the pandemic.
In other words, the pandemic was not treated as the primary duty of public officials; the main task remained completing routine activities to comply with the 2020 budget. Most public officials (79 percent) answered that another obstacle to policymaking was budget constraints: they acknowledged that stringent regulations on budget use prevented them from making changes unilaterally without the central government's approval. The next stage of dispute arose when program design had to be synchronized with budget availability. In this process, two secondary policymakers, the planning office and the financial office, played significant roles. The planning office would bring a policy design to the financial office to determine the appropriate financial allocation, but the finance office had the greater power to intervene in, and even alter, the planning office's design when determining the budget. The following excerpt summarizes the statements of two DPRD members from a major political party: "The finance office is the financial commander. It has a big role in changing the policies that DPRD has formulated" (Informants C and H, DPRD). Establishing consensus between the planning office and the finance office is complex, and the implementing institutions are often staggered by the decisions of the two. The finance office's considerations generally rest on its interpretation of budget capacity, whereas the planning office relies on technical planning arguments. "The finance office often steps out of its authoritative domain by invoking its own interpretation of budget effectiveness, which may differ from the planning office's interpretation of the same issue" (Informant F, PEMDA). This suggests that the "money follows function" approach was not effective in policy formulation. The new government regulation giving regions flexibility to refocus their budgets in response to the pandemic was also still ineffective: local governments appeared hesitant to apply it for fear of financial unaccountability, and refocusing the budget required lengthy discussion. PEMDA and DPRD finally agreed on the budget changes in October 2020. In general, all budget components decreased, as shown in Table 3. The confusion among public officials increased when the governor of DIY chose not to implement a lockdown but called instead for "calm down" and "slow down." "Calm down" meant not panicking while remaining aware of the dangers of the pandemic, while "slow down" meant refraining from activities that could harm others through disease transmission. However apt these meanings, they were an indirect signal from the governor to public officials not to adopt policies that might frighten the public (Gatra.com, 2020). The governor, as king, never departed from Javanese cultural principles in making policy: "wong sabar rejekine jembar, ngalah urip luwih berkah," meaning that humans must be patient with a problem and let it run its course. This principle dominates the thinking of public officials in the region; policies avoid drastic change because officials must calibrate them against the achievements of previous conditions (an incremental approach). The conclusion is that the definition of policy problems, whatever the indicators, generally ends in a definition given by the governor's orders.
All parties in the executive and legislative branches had to follow the definition of the problem decided by the governor. Policy Streams The local government relied on data from the central government (98.51 percent) for policymaking. Most public officials panicked because there were no valid data with which to counter the impact of the pandemic. Strikingly, PEMDA also drew on information from newspapers (71.64 percent) and social media (58.21 percent), as well as intuition (56.72 percent) (see Table 4). The absence of valid information about Covid-19 produced misguided policy responses. Examples of such destructive policies include the central government's requirement that public facilities spray disinfectant directly on people, the prioritization of the economy over the health sector, the poorly targeted social safety nets, and the choice to implement PSBB. Regarding this last policy, President Jokowi himself admitted that PSBB was ineffective and replaced it with the Restriction on Community Activities (PPKM) policy in January 2021. Both central and local governments were powerless in the face of confused knowledge about effective pandemic handling. The DIY government's policy response resembles the garbage can model of Cohen et al. (2014): any information from anyone could become policy information without any filtering of the source's accuracy. The resulting uncertainty in government policy drove public trust in government to its lowest point in 2020. The DIY community responded to this impatience by, among other things, continuing local lockdowns in their villages, while experts and the public used newspapers and social media to voice demands for a total lockdown in DIY. Nevertheless, the provincial and district governments tended to wait for policies from the central government, in line with the finding that nearly 100 percent of PEMDA and DPRD respondents used central government regulations as the primary source for policymaking. In April 2020 the central government issued a Government Regulation in Lieu of Law (Perppu), which freed each region to refocus its budget to overcome the pandemic outbreak. This made it technically easier for PEMDA to adjust its budget to manage the pandemic's impact, especially in the health, economic, social safety net, and transportation sectors, and it made local governments more confident when responding to criticism of their handling of the pandemic. Central policy decided that policy choices would lean toward strengthening the economy rather than health in preventing the pandemic. As a result, the central government's presence in regional pandemic policymaking grew stronger from August 2020, and DIY thereafter consistently followed central decisions as the basis for policymaking. The local government's handling of the pandemic also revived the classic conflict between the technocratic approach (the planning office) and financial administration (the financial office). The governor allowed their points of view to differ in order to avoid any hegemony in interpreting his vision; he understood that his subordinates often felt hesitant in his presence at meetings. To overcome this obstacle, the governor established them as secondary policymakers exercising their own authority, so that their conflict could remain under his management. The planning office and the financial office each have policy communities with which they formulate policy.
The planning office, as a technocratic agency, works from data analysis of public affairs, while the financial office works from financial capacity and rules. Together they form a large coalition in the policy formulation process of the DIY administration, and their interaction is a negotiation between two sections with different views and interests seeking common ground in formulating policies. This policy stream coincides with the problem stream: the importance of using empirical data in policy formulation runs up against the strength of centrally determined financial policy. The governor's role in breaking this deadlock was essential. His ability to unite central and regional interests, through his approaches to the central government and his unification of the regions' differing interests, produced the right policies; he used the pandemic crisis as a policy window to reconcile differences and generate effective policies. Political Stream DIY is a special province in Indonesia: its governor both represents the central government and is the king of the Kingdom in DIY. The relationship between the Sultan and the center during the Jokowi era was relatively conducive compared with the previous central administration, so the Sultan's political position as governor was relatively strong. However, the Sultan's legitimacy had suffered from an internal conflict over the next successor to the throne. This political turmoil within the kingdom heated the political situation among various groups and affected the Sultan's legitimacy before the people and the political forces in Yogyakarta. The emergence of the pandemic revived the Sultan's political position toward both the central government and his people in the region. The central government's legitimacy, meanwhile, had been weakened by the extreme polarization between political groups that followed the 2019 presidential election. DIY needed a solid leader to handle the pandemic's impact, and Sultan Hamengkubuwono X met this need; as king, he established himself as a policy entrepreneur. The pandemic thus prompted the Sultan to re-strengthen his political position in DIY. As explained earlier, the design of pandemic-control programs involved competing interests, particularly between the planning and implementing institutions. The planning office insisted on its interpretation of the governor's vision for reducing pandemic impacts, while the implementing institutions generally regarded the program as the routine work of previous fiscal years. This difference of interests persisted in the pandemic situation, with policy drafts dispatched back and forth between the planning office and the implementing institution responsible for technical execution. One example is an implementing agency that wanted to keep its goods expenditure in the 2020 budget, while the planning office believed that this expenditure should be reallocated to goods needed to respond to the pandemic's impact, such as medical equipment. Nevertheless, this process stayed within the proper course of the technocratic approach; lobbying at this stage took place through informal dialogue between the heads of the planning office and the implementing institution.
Data from the questionnaire confirm that 81 percent of public officials said they would compromise when data differed among institutions; 13 percent left the matter to the decision of the respective government institution, and 6 percent deferred to the institution's head. In Indonesian bureaucratic culture, every program formulation must consider equity across the units and positions in the organization; if bureaucrats ignore this, program implementation is disrupted. Actors could nevertheless resolve their differences of interest in designing the program because all parties felt they had acquired a share of that fiscal year's plan. Most public officials responsible for handling the pandemic (99 percent) stated that they could control conflicts of interest among themselves by always referring to central government policies and to policies decided by the governor. PEMDA and DPRD discuss the policy design in the local budget plan, which DPRD then ratifies in the normal arrangement; this belongs to the policy design phase because the legislative process may change the initial design. DPRD typically became involved only at the local budget plan stage, seeking to distribute programs according to its own interests. "DPRD often requests changes in the amount and location of target groups. In the case of the social safety nets, they deliberately increased the number of targets and the locations of recipients before ratifying the program design" (Informant G, PEMDA). The social safety net process frequently came under political pressure during implementation: changes in the number of recipients between the planning and implementation stages occurred because DPRD members had to honor their promises to constituents. The political process of program budgeting during the pandemic differed from the process in typical situations described above. The executive held to Perppu No. 1/2020, particularly Article 3(1), which gives regional governments the flexibility to prioritize budget allocations for specific activities (refocusing), change allocations, and use the local budget (APBD), especially for handling the pandemic. Article 27(2) states that public officials in the Financial System Stability Committee (KSSK) are protected from criminalization or conviction for actions taken in handling the impact of the Covid-19 pandemic, and paragraph (3) adds that all actions and decisions taken under this Perppu cannot be made the object of a lawsuit before the State Administrative Court. One main point in the political stream remains the governor's orders. The technocrats gathered in the Budget Team (TAPD) attempt to translate them into program designs. The formulation process tends to be concise, involving few actors, one-way, and top-down. Successful program design lies in harmonizing programs as an interpretation of the orders within the bureaucracy. Public officials received these orders as policy in figurative and symbolic expressions; the planning office stated that the governor instructed the offices to translate his wishes. "The governor bypasses the agenda-setting stage by deciding which topics are a priority. Then he instructs the planning office by giving orders symbolically.
For example, the governor's wish regarding development is expressed as "I want the corona pandemic here to decline" or "when will I be cutting the ribbon?" (Informant A, PEMDA). Such symbolic expressions cannot be turned directly into program designs by literal translation. The words "I want the pandemic here to decline" compel the planning office to design policies and break them down into programs relevant to the mandate of reducing the pandemic. Likewise, "when will I be cutting the ribbon" is symbolic speech expressing the governor's wish to carry out a program with far-reaching impacts and implications; it requires officials to translate the governor's intent into a policy design and plan for controlling the pandemic. Policy Window and Governor as a Policy Entrepreneur The pandemic era was the governor's policy window to restore his subordinates' trust in PEMDA and DPRD, and at the same time an excellent opportunity to regain legitimacy before the regional community, which the dispute over succession to the throne had eroded. The governor understood that during the Covid-19 pandemic DIY needed a solid leader to resolve the confusion over data, the unclear policies from the center, and the sharp political frictions at the national level that reverberated in the regions, including DIY. The problem, policy, and politics streams were united into policies during the pandemic even though they seemed to lack a solid policy basis. Policymakers in DIY panicked when confronted with the absence of valid data and unclear central policy, while the community grew impatient with what the government was doing and many villages made their own policies to secure their territories by locking themselves down. The governor read public opinion, and the low public acceptance of the quality of central government policies, as an opportunity to reunite all components of the DIY community. The province's approach appears cautious because it did not want to conflict with central government policy, yet this is precisely where the Sultan's skill lay: he was able to reduce pressure from the DIY community without opposing a central government that itself faced extreme public pressure over a national lockdown. By drawing on his cultural position, the governor brought the different parties back together to protect the DIY community from the worst impacts of Covid-19. DIY even produced innovative breakthroughs that suppressed the spread of Covid-19, at least until October 2020, by involving all communities in pandemic prevention activities, a kind of policy innovation in a community-centered approach. This innovation earned Jogja an award from the center in August 2020 as the best province in coping with the pandemic. The governor is the main point of reference in formulating the strategic issues to be executed in the bureaucracy's policy designs, giving macro-oriented instructions that the relevant institutions subsequently translate. Some policy inputs come from participants in the periodic coordination meetings held by the local government, a process similar to a governor weighing his advisors' suggestions.
Other inputs come from the governor's relations with internal bureaucrats, particularly officials from strategic institutions such as the financial office, the planning office, and DPRD. Finally, to keep his vision from being hegemonized by personnel at lower levels, the governor in 2020 united the two big offices, the planning office and the financial office, under a single public official holding both posts concurrently. This concurrency maintains mutual oversight in formulating policies that accord with his conceptions. The governor's orders are thus the foundation of the province's agenda-setting, determining the priority of issues transmitted into the bureaucratic machinery. These orders have a positive aspect in that they simplify the long agenda-setting process, but a negative one in that a broader public participation process is lost. The governor's instructions are duly translated into policy designs by the subordinate bureaucratic machinery. This rigid process is bound up with the governor's status as king, with its implications for his administrative and political authority (Mallarangeng & Tuijl, 2004). It eliminates models such as problem diagnosis (Weimer & Aidan, 2011) and problem verification (Patton et al., 2015) in the systematic setting of priority issues; there is no stage at which bureaucrats filter issues through various methods. As Dunn (2018) states, agenda-setting is the stage in which strategic issues are formulated by key actors according to their urgency. The wisdom of the governor thus circumvents the style of the technocratic system. The governor stands as a single policy entrepreneur who determines the motives underlying the government's policy formulation. There are no policy entrepreneurs other than the Sultan who can unify the problem stream, the policy stream, and the political stream; he can harmonize the three because he can intervene in each of them through macro-oriented instructions. This accounts for the governor's dominance in determining strategic issues during agenda-setting. The governor, besides being a policymaker, is also a policy entrepreneur, a view endorsed by all interviewed informants from both PEMDA and DPRD. This finding marks a departure from the policy entrepreneur concept described by John W. Kingdon: where Kingdon holds that the policy entrepreneur is the person closest to the policymaker, in DIY's policymaking the policy entrepreneur is the governor, the policymaker himself. The argument is that the governor, as king, has great power to intervene in all policy problems and to unite all groups in the region. Conclusion This research found that an administration system combining monarchy and decentralization enabled the crisis to be resolved more quickly through the integration of multiple streams. It also found that the Covid-19 pandemic was not merely a public problem: policymakers could use it as a policy window to unite the problem, policy, and political streams. In the problem stream, noise arose because there were no valid data with precise crisis indicators, and the financial crisis limited the government's capacity to address all the effects of the pandemic. The center's policy stream likewise provided no certainty about policy choices; there was insufficient information, knowledge, and technology to solve the crisis.
Hence, the public was confused by unclear policies, with political and managerial consequences for the government. Political streams from the center disrupted the regions because political friction persisted at the national level, while the regions experienced their own political conflicts stemming from past policies that ignored the local community's socio-cultural factors. In this crisis the governor, with his social insight, could bridge the actors' differing definitions of the pandemic problem. He unified the regions' policy streams with central policy without conflicting with the center's interests, which in turn made his political position at the center even stronger. Solid political support from the center became a valuable asset for uniting the various attitudes of groups in the region at the bureaucratic, political, and community levels behind his approach to solving the pandemic's impact. The governor thus built a network spreading across all fronts, including the concurrent appointment of the public official holding the most vital functions in his government. Another new finding is the governor's dual role as policymaker and policy entrepreneur, a role made possible by his position as king in the region. This finding differs slightly from the model proposed by John W. Kingdon, in which the policy entrepreneur is an individual, group, or organization close to the policymaker; this study finds evidence that the policy entrepreneur can occupy the same position as the policymaker.
2021-09-27T17:59:48.689Z
2021-08-21T00:00:00.000
{ "year": 2021, "sha1": "8049c852033ddd0311a214cdbd00beaaed346038", "oa_license": "CCBY", "oa_url": "https://www.macrothink.org/journal/index.php/jpag/article/download/18741/pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "01183280df8b549c2e72e923f73c9890de47c2ae", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
14887983
pes2o/s2orc
v3-fos-license
Time-scale dependence of correlations among foreign currencies For the purpose of elucidating the correlation among currencies, we analyze daily and high-resolution data of foreign exchange rates. There is strong correlation between pairs of currencies of geographically close countries. We show that there is a time delay, of order less than a minute, between two currency markets having a strong cross-correlation. In every case the cross-correlation between exchange rates is lower at shorter time scales. As a corollary we note a kind of contradiction: the direct Yen-Dollar rate differs significantly from the indirect Yen-Dollar rate through Euro at short time scales. This result shows the existence of arbitrage opportunities among currency exchange markets. Introduction It is widely known that foreign exchange rates sometimes show very violent fluctuations characterized by power law distributions [T. Mizuno][K. Matia]. An important recent report showed that arbitrage chances exist when three currency markets are considered simultaneously [Y. Aiba]; this result follows from the fact that the distribution of returns of rotation transactions among three currencies shows a similar fat tail. Since the statistical laws relating foreign exchange rates on short time scales are not sufficiently well known, we investigate correlations among foreign exchange rates in this paper. Correlations among financial stocks in markets have been actively discussed. The strength of synchronization among stocks was analyzed using ultrametric spaces, with implications for stock portfolios [R. N. Mantegna, H. E. Stanley]. The interaction of companies was investigated by analyzing the returns of many stocks, and a directed network of influence among companies was defined [L. Kullmann]. The present paper will show that the correlation among foreign exchange rates resembles the stock case. In the following section we discuss the cross-correlation between exchange rates with no time difference, to show how exchange rates synchronize. We then show that the maximum correlation between exchange rates is observed at a nonzero time shift, and we discuss the direction of influence. The correlation among foreign exchange rates We first analyze a set of daily data provided by Exchange Rate Service [ERS] for 25 exchange rates over about 3 years, from January '99 to August '01, as listed in Table 1. We first estimate cross-correlation functions for these exchange rates measured in USD (United States Dollar). The largest correlation value (0.95) is observed for the pair CHF (Swiss Franc)/USD and EUR (Euro)/USD; as demonstrated in Fig. 1, these exchange rates are remarkably synchronized. There are also cases of negative correlation, as for the pair MXP (Mexican Peso)/USD and CHF (Swiss Franc)/USD, whose correlation value is -0.23. It is known that geographically closer currencies tend to have larger correlations, and that there are key currencies in each area, for example the Euro for Western Europe, the Yen for Asia, the Hungarian Forint for Eastern Europe, and the Australian Dollar for Oceania [H. Takayasu and M. Takayasu]. Although such large correlations among currencies appear in daily data, we can expect low correlations on short time scales, since dealers in major banks tend to work with a single foreign exchange market. To clarify this tendency we examine tick-by-tick data provided by Reuters for about 4 months, from March '02 to July '02.
In Fig. 2 we plot the correlation value as a function of the coarse-graining time scale for the pair CHF/USD and EUR/USD. From this figure we notice that the correlation vanishes when the high-resolution data are observed with a precision of seconds, and that the correlation value is about 0.5 at a time scale of 5 minutes (300 sec). From these results we understand that the two currency markets, CHF/USD and EUR/USD, work independently on very short time scales.

Fig. 3: The cross-correlation with a time shift between CHF/USD and EUR/USD.

The cross-correlation with a time shift In order to clarify the nature of short-time interaction among currencies, we calculate the cross-correlation with a time shift; that is, we observe the correlation of two markets with a time difference using the equation

C(dt) = < dpA(t) dpB(t + dt) > / (σA σB),

where dpA(t) is the rate change in market A at time t, dpB(t + dt) is the rate change in market B at time t + dt, and σA, σB are the standard deviations of the rate changes in each market. In Fig. 3 we show the correlation value between CHF/USD at time t + dt and EUR/USD at time t as a function of the time difference dt, with two plots for different coarse-graining time scales, 60 sec and 120 sec. In both cases the largest correlation is observed around dt = 10 seconds, which implies that, in an average sense, the EUR/USD market runs about 10 seconds ahead and the CHF/USD market follows it. Currency correlation in short time scale Here we discuss the value of the currency correlation in each foreign exchange market on a short time scale. From Fig. 2 it is evident that the correlation between foreign exchange rates is very low on short time scales; each exchange rate changes rather independently. To clarify this property, we analyze the Yen-Dollar exchange rate using a set of tick-by-tick data provided by CQG for about 2 years, from February '99 to March '02. We introduce two definitions of the Yen-Dollar rate: one, JPY/USD, is the usual transaction rate, and the other is defined through the Euro as {EUR/USD} × {JPY/EUR}. All exchange rates are given by the middle rate (= (Bid rate + Ask rate)/2). In Fig. 4 we plot the cross-correlation between these two exchange rates at different time scales. The correlation value is below unity at time scales of less than about 1 hour. This result clearly shows that the value of a currency differs across markets on time scales shorter than an hour, a kind of self-contradiction of the markets that creates the triangular arbitrage opportunity shown in Fig. 5 [Y. Aiba]. Discussion We have clarified the detailed properties of correlation among foreign exchange rates: the short-time correlation is generally very small even for pairs of currencies showing large correlation in daily data. Within a time scale of a minute we can observe the direction of influence from one currency market to another. On very short time scales we find a contradiction between the rates JPY/USD and {EUR/USD} × {JPY/EUR}. For example, by observing the two rates carefully, one can buy Yen more cheaply in one market and sell it at a higher rate in the other, even after accounting for the spread (= Ask rate - Bid rate), which is about 0.05%. Although no time lag in the actual transactions is considered here, it is now clear how the triangular arbitrage opportunity appears.
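The shifted correlation C(dt) defined above is straightforward to compute numerically. The following Python sketch is our own illustration (the helper name and the synthetic data are invented; it assumes the tick data have already been resampled onto an evenly spaced grid, i.e. the coarse-graining step of 60 or 120 sec has been done beforehand):

```python
import numpy as np

def lagged_cross_correlation(dp_a, dp_b, max_lag):
    """C(dt) = corr(dpA(t), dpB(t + dt)) for dt in [-max_lag, max_lag].

    dp_a, dp_b : 1-D arrays of rate changes on a common, evenly spaced time grid.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(len(lags))
    n = min(len(dp_a), len(dp_b))
    for k, dt in enumerate(lags):
        if dt >= 0:
            a, b = dp_a[: n - dt], dp_b[dt:n]     # pair dpA(t) with dpB(t + dt)
        else:
            a, b = dp_a[-dt:n], dp_b[: n + dt]
        corr[k] = np.corrcoef(a, b)[0, 1]
    return lags, corr

# Synthetic check: B follows A with a 10-tick delay, so C(dt) should peak near dt = +10.
rng = np.random.default_rng(0)
dp_eur = rng.standard_normal(20_000)                         # stand-in for EUR/USD changes
dp_chf = 0.5 * np.concatenate([np.zeros(10), dp_eur[:-10]]) \
         + rng.standard_normal(20_000)                       # stand-in for CHF/USD changes
lags, corr = lagged_cross_correlation(dp_eur, dp_chf, max_lag=30)
print("peak at dt =", lags[np.argmax(corr)])                 # expected: about 10
```

On the synthetic series the peak recovers the imposed 10-tick lead of market A, mirroring the roughly 10-second lead of EUR/USD over CHF/USD reported in Fig. 3.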
2014-10-01T00:00:00.000Z
2003-03-17T00:00:00.000
{ "year": 2003, "sha1": "bd93bf9f7563b0c418aa74c05b8c3da8b70fa8f7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0303306", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "100fbc5e35f4caae40d8ea22cba08f248eeeb295", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics", "Physics" ] }
39561363
pes2o/s2orc
v3-fos-license
Biomarkers for Identifying Risk of Immune Reconstitution Inflammatory Syndrome In the last 10–20 years, access to antiretroviral therapy (ART) has improved worldwide, resulting in a substantial reduction in HIV-associated mortality and increased life expectancy, especially in low and middle-income countries. However, immune reconstitution inflammatory syndrome (IRIS), a clinical deterioration in patients with HIV initiating ART, is a common complication of ART initiation. The manifestations of IRIS depend on the type of opportunistic infection. With HIV-1 as the strongest predisposing factor to tuberculosis (TB) and TB as the commonest cause of death in HIV-1 infected persons in Africa, the otherwise beneficial dual therapy for HIV-1 and TB is frequently complicated by the occurrence of TB-immune reconstitution inflammatory syndrome (TB-IRIS) (Walker et al., 2015). Two forms of TB-IRIS are recognized: paradoxical, which occurs in patients established on anti-tuberculosis therapy before ART, but who develop recurrent or new TB symptoms and clinical features after ART initiation; and unmasking TB-IRIS in patients not receiving treatment for TB when ART is started, but who present with active TB within 3 months of starting ART (Meintjes et al., 2008). Paradoxical TB-IRIS affects approximately 15.7% of all HIV-1-infected patients commencing ART while on TB treatment, and up to 52% in some populations, causing considerable morbidity and mortality (Namale et al., 2015). While the clinical features are relatively well-described, specific diagnostic tools and treatments for TB-IRIS are lacking. The diagnosis of IRIS is clinical, and excluding other causes of clinical deterioration, such as other opportunistic infections and drug resistance, is challenging, especially in resource-limited settings. While risk factors for IRIS have been identified, such as low CD4 count pre-ART initiation and the presence of a disseminated opportunistic infection, there are no biomarkers that predict which patients will develop IRIS. The identification of biomarkers for IRIS prediction may help elucidate the mechanism of IRIS pathogenesis, which may in turn facilitate the development of specific therapies and, additionally, allow high-risk patients who would benefit from specific preventative strategies to be identified. A large number of investigations have addressed the roles played by different aspects of the immune response in contributing to TB-IRIS pathogenesis, reviewed in Lai et al. (2015a). A recent unbiased whole-blood transcriptomic profiling of HIV-TB co-infected patients commencing ART showed that inflammation in TB-IRIS is driven by innate immune signaling and activation of the inflammasome, which triggers the activation of transcription factors leading to hypercytokinemia, resulting in systemic inflammation (Lai et al., 2015b). Other recent work also suggests that extracellular matrix destruction by matrix metalloproteinases may play a role in paradoxical TB-IRIS (Tadokera et al., 2014; Ravimohan, 2015). Immunosuppressive corticosteroid therapy improves symptoms and reduces hospital admissions but is not without adverse events, and is potentially detrimental in cases of drug-resistant TB (Meintjes et al., 2010). Therefore, therapeutic strategies that offer greater immune specificity should be explored.
The CADIRIS study, a double-blind, randomized, placebo-controlled trial, investigated the use of maraviroc (a CCR5 antagonist) for IRIS prevention, based on the hypothesis that inflammatory cytokines and chemokines mediate the influx of CCR5-expressing immune cells in IRIS, and that CCR5 blockade would prevent these inflammatory cells from leaving the circulation, reducing the local inflammatory reactions leading to IRIS. The study recruited HIV-infected participants with advanced immunosuppression (CD4 count < 100/μl) from five clinical sites in Mexico and one in South Africa and followed them for 1 year. Patients were assigned to receive either maraviroc (600 mg twice daily) or placebo in addition to ART, the primary outcome being time to an IRIS event by 24 weeks. Maraviroc had no significant effect on the development of IRIS after ART initiation. While this CCR5 inhibitor has proven antiviral activity, safety and tolerability as part of an ART regimen, its use as an immune-modulator to prevent IRIS appears unwarranted (Sierra-Madero et al., 2014). Well-conducted clinical trials, even if their outcome is negative, are an enormously valuable resource for further studies, such as identifying correlates of risk and/or protection. In this issue of EBioMedicine, Musselwhite and colleagues (Musselwhite et al., 2016) investigate plasma biomarkers predictive of IRIS in samples banked at enrolment from HIV-infected patients entering the CADIRIS trial. With the hypothesis that the risk of IRIS is most likely already present before starting ART and can be predicted from biomarkers in plasma samples collected at that point, they assessed twenty biomarkers in an exploratory way and retrospectively associated them with the risk of developing IRIS. Of the 267 patients with banked plasma samples, 62 developed IRIS within 6 months of ART initiation, 31% of them TB-IRIS specifically, within a median of 13 days of ART. The results indicate that lower baseline concentrations of vitamin D and higher concentrations of D-dimer, as well as markers of T cell and monocyte activation (interferon-γ and sCD14), were independently associated with risk of IRIS in general. Vitamin D deficiency was prevalent, and higher vitamin D levels were associated with protection against IRIS events, suggesting that vitamin D plays an immune-modulatory role. However, vitamin D and D-dimer concentrations were not associated with TB-IRIS specifically, perhaps due to lack of power in this sub-analysis. TB-IRIS was associated with higher concentrations of CRP, sCD14, and interferon-γ and lower hemoglobin than other forms of IRIS, and these parameters were used in a composite score to predict TB-IRIS over Other IRIS, with an area under the curve of 0.85 (CI 0.79-0.92) on Receiver Operating Characteristic (ROC) analysis. The strength of this study lies in its reasonable power to assess predictors of IRIS and the availability of plasma samples collected prior to starting ART on two different continents, which contributes to the generalizability of the findings. Interesting comparisons are drawn between TB-IRIS and other causes of IRIS, demonstrating heterogeneity in IRIS pathophysiology. As patients with CD4 counts ≥ 100/μl and those with critical illness (e.g. severe laboratory abnormalities, CNS infections) were excluded, generalizability of the findings to these groups is unknown. Further work is required to confirm the findings in these and other at-risk patient populations.
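For readers who wish to reproduce this kind of analysis, the fragment below is a minimal Python sketch of evaluating a composite biomarker score by ROC analysis. The data are synthetic placeholders, not the CADIRIS measurements, and the logistic-regression weighting is one convenient choice, not necessarily the scoring rule used by Musselwhite et al.; with random inputs the AUC will sit near 0.5 rather than the 0.85 reported.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Placeholder baseline biomarkers per patient: CRP, sCD14, IFN-gamma, hemoglobin.
X = rng.standard_normal((200, 4))
y = rng.integers(0, 2, size=200)   # 1 = TB-IRIS, 0 = other IRIS (synthetic labels)

# Fit a composite score and measure its discrimination on the same data.
score = LogisticRegression().fit(X, y).decision_function(X)
print("AUC:", round(roc_auc_score(y, score), 3))
```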
Disclosure The authors declared no conflicts of interest.
2018-04-03T04:35:44.560Z
2016-02-01T00:00:00.000
{ "year": 2016, "sha1": "f37326eeec51d32532bbbd536d5c4ba72ab9aadf", "oa_license": "CCBYNCND", "oa_url": "http://www.thelancet.com/article/S2352396416300470/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f37326eeec51d32532bbbd536d5c4ba72ab9aadf", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118548889
pes2o/s2orc
v3-fos-license
Topological Defects in Gravitational Lensing Shear Fields Shear fields due to weak gravitational lensing have characteristic coherent patterns. We describe the topological defects in shear fields in terms of the curvature of the surface described by the lensing potential. A simple interpretation of the characteristic defects is given in terms of the umbilical points of the potential surface produced by ellipsoidal halos. We show simulated lensing shear maps and point out the typical defect configurations. Finally, we show how statistical properties such as the abundance of defects can be expressed in terms of the correlation function of the lensing potential. I. INTRODUCTION The central tenet of Einstein's General Relativity is that massive bodies curve spacetime. As a result, light from distant galaxies is deflected by mass distributions encountered along the line of sight. The images of distant galaxies, which act as sources, are magnified and sheared; this effect is known as gravitational lensing. The most striking manifestation of gravitational lensing, known as strong lensing, consists in the formation of multiple images of a single background galaxy. In the special case in which a source, a very large mass, and the observer happen to be approximately aligned, an Einstein ring can be observed, and from its diameter the lensing mass can be inferred [1]. Along typical lines of sight the lensing effect is weak, leading to percent-level magnifications and shears. However, such a small signal can still be detected from a statistical analysis that relies on the coherence of the shear field over the sky. This allows us to infer how dark matter is concentrated around galaxies and galaxy clusters, as well as providing a testing ground for dark energy and modified gravity theories [2,3]. In this paper we explore the connection between the theory of topological defects and the spatial patterns of shear fields due to weak gravitational lensing. The starting point of our approach rests on an analogy between gravitational lensing shear fields, as a probe of structure formation on cosmological scales, and the anisotropic optical or mechanical response of materials, as a probe of their inhomogeneous structure on microscopic scales. As an illustration, the topological defects in the local shear field of an elastic medium reflect the external deformations applied to the solid [4]. Similarly, for thin liquid crystal films confined on a curved substrate, the density of topological defects depends on the inhomogeneous curvature of the underlying surface [5,6,7]. In a somewhat different context, the distribution of optical singularities can be used to shed light on the statistics of randomly polarized light fields [8]. Here we suggest that topological defects in the cosmic shear can be used as a probe of the gravitational potential generated by the lensing mass fluctuations on large scales. The outline of this work is as follows. In §2, the basic formalism of weak gravitational lensing and the criteria to identify topological defects in shear fields are presented. Our geometric approach is presented in §3, where the defects in the shear field are related to the properties of an imaginary surface whose lines of constant height are defined by the contour lines of the underlying gravitational potential responsible for the lensing.
This mapping is applied to describe the characteristic behavior of ellipsoidal structures in the mass distribution and to more realistic lensing shear maps obtained using N-body computer simulations. In §4 we turn to the case of a random mass distribution whose gravitational potential is a Gaussian variable that describes the stochastic geometry of a surface. This assumption (valid on large angular scales) allows us to express the defect density in terms of the two-point correlation function of the gravitational potential. The latter is estimated for the standard Cold Dark Matter model for large-scale structure in the universe and compared to the results of simulations. Corrections for weakly non-Gaussian fields can be calculated using perturbation theory, as discussed in §5, where ideas for further work are briefly sketched. II. TOPOLOGICAL DEFECTS IN LENSING SHEAR FIELDS In this section, we review the basic formalism of gravitational lensing that relates the deformation of the shapes of background galaxy images to the projected mass density responsible for lensing. A. Basics of Gravitational Lensing Consider a source whose true angular position on the sky makes an angle β with an arbitrary optic axis. As a result of the lensing, an observer sees the light ray as coming from an image at an angle θ (from the optic axis) that differs from β by the (reduced) deflection angle α. These angular displacements can also be viewed as two-dimensional vectors {α, β, θ} on a locally flat sky (see [9] for a review). These vectors are related by

β = θ − α(θ).    (1)

In this notation, the image distortion caused by gravitational lensing is locally described by A_ij, the Jacobian matrix of the transformation between β and θ, which reads

A_ij = ∂β_i/∂θ_j = δ_ij − ∂²ψ/∂θ_i ∂θ_j ≡ δ_ij − ψ_ij,

where we used the relation α = ∇_θ ψ between the deflection angle and the gradient of the projected (two-dimensional) Newtonian potential ψ(θ). The latter satisfies the Poisson equation

∇²ψ(θ) = 2κ(θ),    (2)

where the convergence, κ(θ), is given by the weighted projection of the mass density fluctuation field; see Ref. [10] for precise definitions. Equation (2) fixes the trace of the deformation matrix A_ij. The other two independent components of A_ij can be rewritten in terms of a shear tensor γ = {γ1, γ2} and the angle α as

γ1 = (ψ_11 − ψ_22)/2 = γ cos 2α,   γ2 = ψ_12 = γ sin 2α,    (3)

where γ = (γ1² + γ2²)^(1/2) ≥ 0 and α is the orientation angle of γ. The physical interpretation of the shear tensor is straightforward once the components of the Hessian ψ_ij are expressed in terms of γ1, γ2 and κ with the aid of Eqs. (2-3) and then substituted into Eq. (1). The result reads

A = (1 − κ) I − γ [cos 2α, sin 2α; sin 2α, −cos 2α],    (4)

where I is the 2×2 identity matrix. The eigenvalues of this matrix are 1 − κ − γ and 1 − κ + γ, with eigenvectors v1 = [cos α, sin α] and v2 = [−sin α, cos α], respectively. The direction of shear is v1: when the deformation tensor (A^-1)_ij acts on a test circular source, it magnifies the image isotropically, through the first term in Eq. (4), and deforms it into an ellipse with major axis of magnitude (1 − κ − γ)^-1 oriented in the α direction, through the second term (see Fig. 1). In the sub-section below we will consider the pattern of the shear on the sky induced by the spatial variation of ψ(θ). Over patches of order a degree on a side, characteristic tangential patterns around lensing galaxy clusters and galaxy groups are observable. Thus measuring the anisotropy introduced in the light distribution from distant galaxies allows us to infer information about the mass inhomogeneities along the line of sight that generated the lensing shear.
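As an illustration of Eqs. (2)-(4), the Python sketch below (a hypothetical helper of our own, not from the paper) derives the convergence κ, shear amplitude γ and orientation α from a gridded potential ψ by finite differences; the variable names mirror the Hessian components ψ_ij used above.

```python
import numpy as np

def shear_from_potential(psi, d=1.0):
    """kappa, gamma and alpha maps from a lensing potential psi on a grid.

    Implements kappa = (psi_11 + psi_22)/2, gamma1 = (psi_11 - psi_22)/2,
    gamma2 = psi_12 (Eqs. 2-3), with derivatives taken by finite differences
    on a grid of spacing d."""
    psi_1, psi_2 = np.gradient(psi, d)        # first derivatives
    psi_11, psi_12 = np.gradient(psi_1, d)    # second derivatives
    _, psi_22 = np.gradient(psi_2, d)
    kappa = 0.5 * (psi_11 + psi_22)
    gamma1 = 0.5 * (psi_11 - psi_22)
    gamma2 = psi_12
    gamma = np.hypot(gamma1, gamma2)
    alpha = 0.5 * np.arctan2(gamma2, gamma1)  # shear orientation, defined mod pi
    return kappa, gamma, alpha
```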
B. Topological defects The local orientation of the shear can be visualized by means of a line field, that is, a vector n with both ends identified: n = −n. This is analogous to the description of interacting liquid crystal molecules in condensed matter physics, where the director field n denotes the local orientation of the molecules. In that context, it is often useful to develop a coarse-grained view of the many-body system by concentrating on global configurations of the director field called disclinations [11]. Disclinations are topological defects, like vortices in fluid mechanics, that can be classified according to the index of the streamlines around them. For example, the shear field angle α(θ1, θ2) has a topological defect of index m if

∮_C dα = 2πm    (5)

around any contour C that surrounds the location of the defect θ_d. Upon using Stokes' theorem, Eq. (5) can be cast in the differential form

∇ × ∇α = 2πm δ²_D(θ − θ_d),    (6)

where δ²_D(θ − θ_d) denotes the Dirac delta function at the location of the defect. As a result, topological defects are analogous to localized regions of quantized magnetic flux with charge proportional to the winding number m, and ∇α plays the role of the electromagnetic gauge potential. Note that configurations related by a uniform rotation of every shear carry the same index: an m = 1 defect can appear, for instance, as a tangential (vortex-like) pattern or as a sink, which is obtained from the vortex by a c = π/2 rotation. This invariance distinguishes the topological classification adopted in this work from the more common E-B mode classification adopted in previous studies of the CMB polarization field and of gravitational lensing. For example, a 45 degree rotation of each shear around an m = 1 defect does not change its topological charge, but it changes an E mode into a B mode. In this study, we concentrate on global topological features of the shear field whose positions on the sky are related (non-locally) to the large-scale mass fluctuations responsible for the lensing. III. THE INDUCED GEOMETRY OF THE LENSING SHEAR In this section, we derive the connection between the topological defects in the shear field and the projected gravitational potential ψ(θ) generated by the mass fluctuations. It is convenient to view ψ(θ1, θ2) as the height function of a non-intersecting two-dimensional surface. In the weak lensing regime, the derivatives of the potential ψ are small; the differential geometry of the surface is then approximated by the properties of the Hessian ψ_ij [12]. We show that the shear direction corresponds to the principal direction of maximal curvature on the surface, provided that |κ|, γ ≪ 1. A. Defects as umbilical points On such a two-dimensional surface, we can step away from any given point in an infinite number of directions, and for each direction a curvature is defined. As the direction is smoothly varied, two perpendicular directions of principal curvature can be found for which the curvature is maximal and minimal. The two principal curvatures, henceforth denoted {κ1, κ2}, are the eigenvalues of the Hessian matrix ψ_ij:

κ_{1,2} = (ψ_11 + ψ_22)/2 ± [(ψ_11 − ψ_22)²/4 + ψ_12²]^(1/2) = κ ± γ.    (7)

The eigenvectors give the local directions of the principal axes. Since the magnitude of the shear field γ is positive, κ1 = κ + γ corresponds to the principal direction of maximal curvature. The Hessian matrix ψ_ij determines the deformation matrix A_ij according to Eq. (1). Upon comparing Eq. (7) with Eq. (4) and the discussion following it, we can conclude that the direction of maximal curvature is the shear direction along which a reference circular source is stretched most by the lensing potential (e.g. it points along the major axis of the ellipse in Fig. 1).
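A discrete version of the winding-number definition in Eq. (5) can be written in a few lines. The sketch below is our own construction (the paper's actual pipeline follows Ref. [14] and is not reproduced here): it accumulates the angle change of the director field counter-clockwise around each elementary plaquette, wrapping jumps into [-π/2, π/2) to respect the n = −n symmetry, so that defects of charge ±1/2 appear directly.

```python
import numpy as np

def defect_charges(alpha):
    """Winding number (in units of the defect index m) on each grid plaquette.

    alpha : 2-D array of shear orientation angles, defined modulo pi.
    Returns charges that are multiples of 1/2; nonzero entries mark defects."""
    def wrap(da):
        # director symmetry n = -n: bring each angle jump into [-pi/2, pi/2)
        return (da + np.pi / 2) % np.pi - np.pi / 2

    # sum the wrapped angle changes counter-clockwise around each plaquette
    total = (wrap(alpha[:-1, 1:] - alpha[:-1, :-1])
             + wrap(alpha[1:, 1:] - alpha[:-1, 1:])
             + wrap(alpha[1:, :-1] - alpha[1:, 1:])
             + wrap(alpha[:-1, :-1] - alpha[1:, :-1]))
    return total / (2 * np.pi)
```

For a model +1/2 defect, α = φ/2 with φ the polar angle about the defect, the wrapped sum around the enclosing plaquette is π, giving a charge of 1/2 as expected.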
The two principal curvatures are equal when 4ψ₁₂² + (ψ₁₁ − ψ₂₂)² is equal to zero. This defines the umbilical points of the surface: points where the principal directions are undefined and the surface is locally spherical or flat [13]. We can now connect the geometry of the surface which represents the gravitational potential and the topological defects in the lensing shear field. At points where the shear field γ has a topological defect, both of its components (see Eq. (3)) must vanish, because its local direction is undetermined. This ensures that the two principal curvatures are equal. As a result, topological defects of the shear field can be identified as the umbilical points of the induced surface. Indeed, the umbilical points can be classified according to the index of the principal direction vector field, which is either +1/2 or −1/2 at an umbilical point. As we shall see, this corresponds to topological defects of index ±1/2 in the shear field. The local angle α̂(θ₁, θ₂) by which the coordinate axes need to be rotated to overlap with the principal axes (given by the eigenvectors of the Hessian of the height function ψ) satisfies
$$\tan 2\hat\alpha = \frac{2\psi_{12}}{\psi_{11} - \psi_{22}}. \qquad (8)$$
A comparison of Eq. (8) with the definition of the shear components γ₁ and γ₂ in Eq. (3) shows that α̂ = α, the angle that specifies the orientation of the shear γ. Thus the principal direction of maximal curvature on the surface tracks the shear field. In the next sections we will show how this mathematical mapping leads to connections between the defect locations and the distribution of mass that generates the gravitational potential. In addition, the identification between topological defects and umbilical points of a random surface will allow us to relate the density of defects to statistical quantities such as the two-point correlation function of the projected gravitational field. In practice, one would attempt to identify defects either in direct experimental measurements of shear using galaxy images, or in ray-tracing computer simulations that provide the shear field on a discrete grid. In both cases, it is necessary to consider how to interpret the notion of a defect (introduced via Eq. (5) in the continuum limit) on a square grid. We have carried out such a measurement using the net change in the angle α around closed loops at each pixel vertex, using techniques similar to those of [14]. Further details of the numerical aspects will be presented elsewhere. B. The ellipticity of haloes To illustrate the formal ideas presented in the last section, consider the simple case of an elliptical potential field ψ(x, y) generated, for example, by an isolated distribution of mass which is not axisymmetric, namely an elliptical halo. (Note, however, that even if the mass distribution is locally axisymmetric, the corresponding potential can be perturbed into an elliptical shape by the tidal field of nearby objects; see next section [15].) For simplicity, consider two simple models of elliptical haloes whose gravitational potentials ψ(θ₁, θ₂) are given in dimensionless variables by elliptical distortions of a circular profile, parameterized by an ellipticity ǫ (Eqs. (9) and (10)). In both models the shear field exhibits two defects of index +1/2; this follows from the fact that the shear field is oriented along the principal direction of maximal curvature. The defect separation, s, measured in the same units of the angular distances {θ₁, θ₂} adopted for the potentials in Eqs. (9) and (10), is given by the corresponding expressions (11) and (12). In the limit of zero ellipticity, ǫ = 0, the separation between the two defects, s ∼ √ǫ, goes to zero.
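A quick numerical way to see the s ∼ √ǫ behavior is to locate the two +1/2 defects for a family of elliptical potentials. The softened potential below is our own stand-in (not one of the paper's Eqs. (9)-(10)), and the sketch reuses the two helper functions defined earlier:

```python
import numpy as np

def defect_separation(eps, n=512, half=1.0, core=0.2):
    """Separation of the two +1/2 defects of an assumed softened
    elliptical potential psi = sqrt(core^2 + x^2 + (1 + eps) * y^2)."""
    x = np.linspace(-half, half, n)
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    psi = np.sqrt(core**2 + X**2 + (1.0 + eps) * Y**2)
    *_, alpha = shear_from_potential(psi, dx)
    m = defect_charges(alpha)[5:-5, 5:-5]       # crop edge plaquettes
    pos = np.argwhere(m > 0) * dx               # positive-defect locations
    if len(pos) < 2:
        return 0.0
    return float(np.linalg.norm(pos.max(axis=0) - pos.min(axis=0)))

for eps in (0.01, 0.04, 0.16):                  # quadrupling eps should
    print(eps, defect_separation(eps))          # roughly double s ~ sqrt(eps)
```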
The two +1/2 defects coalesce into a single vortex of index +1, which is expected from an isolated axisymmetric mass distribution [9]. Qualitatively similar behavior is expected for ellipsoidal haloes with different potential profiles. We have also checked this for a non-singular logarithmic potential. Hence, the separation of the two nearby defects is a measure of the ellipticity of the haloes. This is potentially useful since large-scale structure studies have shown that the mass distribution in the universe can be well approximated (especially for statistical purposes) as a network of elliptical halos of varying mass and concentration [16,17]. Thus our approach facilitates the interpretation of observed shear fields, as we discuss further below. We note that the analysis presented here applies to non-singular mass distributions which generate a gravitational potential that is both continuous and finite everywhere, so that the induced surface is smooth. In the case of either a point mass or an isothermal mass distribution, the lensing potential diverges at the origin and would require further refined techniques. We expect that realistic mass profiles and the ellipsoidal nature of halos avoid such a divergence. C. Complex defects patterns in realistic shear maps The geometric analysis developed in this work can be applied to shear maps generated by more realistic mass distributions than the isolated halo considered in the previous section. For instance, tidal effects between nearby mass distributions can generate complex defect patterns that capture the skeleton of the contour lines of the corresponding gravitational potential. In order to demonstrate this point, we have generated shear lensing maps by means of N-body simulations combined with a ray-tracing algorithm [18,19], with the defect patterns reproduced in Fig. 2. The red and black dots indicate negative or positive defects whose index has magnitude 1/2 (top row in Fig. 2). They have been identified by measuring the net change, Δα, accumulated by the shear angle in going around each pixel; see Ref. [14]. The top left part of the plot shows a prominent mass over-density of irregular shape which generates, at large scales, shear field lines topologically equivalent to a vortex. Upon magnification, two +1/2 defects can be resolved. The line along which the two defects lie is approximately oriented perpendicularly to the major axis of the nearly elliptical mass distribution, as indicated by our geometric analysis in the previous section. In addition, note the presence of the isolated −1/2 defect (red dot) generated by tidal effects at the center of three mass over-densities. The local projected gravitational potential resembles a monkey saddle when plotted as a height function, with an umbilical point (the −1/2 defect) at its center. Note, in addition, that the dipole of ±1/2 defects in close proximity (on the right side of the picture) has no significant effect on the field lines at large scales, since the two defects cancel each other. IV. STOCHASTIC GEOMETRY AND MASS FLUCTUATIONS In this section, we consider the case of a random mass distribution and present a field theory from which the density of topological defects can be read off from the statistics of the stationary random function ψ(x, y). The gravitational potential is first assumed to be a Gaussian random variable (deviations from Gaussianity can be calculated using perturbation theory; this is beyond the scope of this paper).
Light coming from very distant sources undergoes deflections from a number of intervening lensing planes. Assuming that the mass distribution in different redshift slices is uncorrelated, the projected gravitational potential resulting from a large number of such lensing planes is well described by a Gaussian random variable according to the central limit theorem. In addition, with a large smoothing scale, the lensing mass distribution itself is well approximated by a Gaussian random field. In practice, there are not sufficiently many uncorrelated lens planes and the Gaussian assumption is valid only on angular scales sufficiently larger than 10 arcminutes (it is a better approximation for sources at higher redshift). A. Defect density In order to count N, the number of topological defects (where the magnitude of γ = 0) in a patch of the sky, we need to evaluate the surface integral
$$N = \int d^2\theta\;\delta_D(\gamma_1)\,\delta_D(\gamma_2)\,\left|\det\frac{\partial(\gamma_1,\gamma_2)}{\partial(\theta_1,\theta_2)}\right|, \qquad (13)$$
where δ_D indicates a delta function and the appropriate Jacobian determinant for the change of variable has been inserted [20]. This integral can be performed explicitly once the gravitational potential ψ(θ₁, θ₂) and its derivatives are specified. Note that the third spatial derivatives are also needed to evaluate the determinant in Eq. (13). In the case of Gaussian fluctuations of the lensing potential, the average defect density can be obtained by performing a functional integral that averages over the unknown fields with a probability density which is simply given by the exponential of a quadratic function of ψ and its derivatives. Following the seminal work of Berry and Hannay [20], we can write down explicit formulas for the density and other statistical properties of the defects using standard field-theoretic manipulations. The statistics of ψ(θ) are completely determined by specifying either the autocorrelation function C(θ) or its power spectrum P(k), defined by
$$C(\theta) = \langle \psi(\vec\theta')\,\psi(\vec\theta' + \vec\theta)\rangle, \qquad P(k) = \int d^2\theta\; C(\theta)\, e^{-i\vec k\cdot\vec\theta}. \qquad (14)$$
The number density of defects, d, can then be related to the autocorrelation function of the lensing potential through the ratio of its derivatives at the origin,
$$d \propto \frac{C^{(6)}_0}{C^{(4)}_0}, \qquad (15)$$
with a numerical prefactor of order unity, where the derivative C^(6)_0 indicates the sixth spatial derivative of the correlation function C(θ) evaluated at θ = 0 [20,21]. This formula can also be cast in terms of the moments of the power spectrum P(k) from Eq. (14). The result reads
$$d \propto \frac{M_6}{M_4}, \qquad (16)$$
where the n-th moment of the power spectrum is defined as
$$M_n = \int \frac{d^2k}{(2\pi)^2}\; k^n\, P(k). \qquad (17)$$
The two correlation functions of topological charges with equal or opposite sign, {g₊₊(θ) = g₋₋(θ), g₊₋(θ)}, can be readily obtained from Eq. (13) in terms of θ, the angular distance in the projected sky between two defects of equal or opposite index. The resulting mathematical expressions are too complicated to list here; see [21] and references therein for more details. The topological defect correlation functions depend on the full functional form of C(θ) and its derivatives, not only its asymptotic value for θ → 0. As a result, they are a more sensitive probe of the underlying cosmological processes than the defect density in Eq. (15). The mathematical techniques adopted to study the topology of the shear field can be applied successfully to the study of the cosmic microwave polarization field [14,22,23]. The components of the lensing shear field are always correlated because in the standard weak lensing approximation they are derived, via Eq. (3), from the Hessian of the gravitational potential, which is the fundamental physical field that controls the statistics. (Departures from this behavior are indicative of systematic errors in the data or physical effects distinct from lensing.)
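Given a model P(k), the moments in Eq. (17) are one-line quadratures. A small sketch follows (our illustration; the toy spectrum, cutoffs and smoothing scale are assumptions, and the order-unity prefactor of Eqs. (15)-(16) is left out):

```python
import numpy as np
from scipy.integrate import quad

def spectral_moment(P, n, kmin=1e-3, kmax=1e4):
    """M_n = int d^2k/(2pi)^2 k^n P(k) for an isotropic 2D spectrum;
    the extra factor of k below is the radial measure k dk / (2 pi)."""
    val, _ = quad(lambda k: k ** (n + 1) * P(k) / (2.0 * np.pi),
                  kmin, kmax, limit=200)
    return val

# toy potential spectrum with a Gaussian small-scale cutoff standing in
# for the smoothing that regularizes the high-k moments
P = lambda k, k0=300.0: k ** -3.0 * np.exp(-(k / k0) ** 2)

density_scale = spectral_moment(P, 6) / spectral_moment(P, 4)
print(density_scale)   # defect density up to the order-unity prefactor
```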
The CMB polarization components have a B-mode contribution from primordial tensor mode fluctuations and from lensing along the line-of-sight, so one generally requires both a scalar and a pseudoscalar potential to describe them [24]. B. Application to the cold dark matter model The relations in Equations (15)-(16) express the number density of defects d in terms of the correlation function or power spectrum of the projected gravitational potential. We can evaluate these for the current cosmological best-fit Λ-CDM model, with the caveat that the Gaussian approximation breaks down on scales below about 10 arcminutes (or angular wavenumbers ℓ above about 1000); the precise value of the angle requires careful tests and depends on the source redshift. The breakdown occurs due to nonlinear gravitational dynamics, which couples the initially random Fourier modes that comprise the perturbed potential [10]. While nonlinear effects are evident in measures like the lensing power spectrum on larger scales, the Gaussian approximation may still be valid for studying topological features, which can be altered only by orbit crossings or mergers. The potential is obtained from the projected density field via the Poisson equation as discussed in §II. The density field is characterized by its power spectrum, which can be approximated analytically by
$$P(k) \propto k^{n}\, T^2(k),$$
where the transfer function T(k) can be approximated analytically in terms of q = k/(Ω_m h²) (Bardeen et al. 1986):
$$T(q) = \frac{\ln(1 + 2.34q)}{2.34q}\left[1 + 3.89q + (16.1q)^2 + (5.46q)^3 + (6.71q)^4\right]^{-1/4}.$$
This power spectrum, evolved in time using linear perturbation theory and projected appropriately along the line of sight, gives C(θ), the two-point correlation function of ψ [18]. Upon evaluating Eq. (15) above (for comparison with simulations, evaluating the expression at an arcminute scale), the predicted number density of defects is ∼10⁷ per steradian. The two main approximations made are the assumption of Gaussianity/linear evolution and the assumption that the asymptotic behavior as θ approaches 0 is recovered at scales of order an arcminute. We compare the analytical prediction made above with the measured number of ±1/2 defects in ten independent ∼6 square degree patches obtained from simulated shear maps. We estimate the measured density to be 2 × 10⁶ per steradian. (The error estimated from the standard deviation is less than 5%.) We may expect that non-Gaussianity, due to the mergers of several small halos into larger ones, tends to reduce the number of defects. Hence, a difference within a factor of order unity is perhaps not surprising. A more careful comparison will be attempted by making perturbative corrections to the Gaussian formula in future work. V. CONCLUSION In this paper we have studied topological defects in the shear field generated by weak gravitational lensing. Our geometric approach rests on the observation that topological defects correspond to umbilical points of an imaginary surface whose height function is given by the projected gravitational potential. This allows us, for example, to relate the ellipticity of a gravitational halo to the distance between two defects of index +1/2. We describe the overall pattern of the shear field in terms of the defects generated by clustered halos, as shown in Figure 4. Moreover, the density of the defects can yield information on the two-point correlation function of the gravitational potential if the fluctuations are assumed to be Gaussian. In this case, the statistical properties of the shear field can be readily calculated from a simple field theory.
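The BBKS fit quoted above transcribes directly into code; the sketch below does so (the cosmological parameter values, the unit convention for k, and the flat normalization are our assumptions, and the spectrum is left unnormalized):

```python
import numpy as np

def bbks_transfer(k, omega_m=0.3, h=0.7):
    """Bardeen et al. (1986) CDM transfer function fit, with
    q = k / (Omega_m h^2) as in the text (k assumed in Mpc^-1)."""
    q = k / (omega_m * h ** 2)
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def linear_power(k, n=1.0, amplitude=1.0):
    # unnormalized linear density power spectrum shape P(k) ~ k^n T(k)^2
    return amplitude * k ** n * bbks_transfer(k) ** 2

k = np.logspace(-4, 2, 200)
Pk = linear_power(k)   # peaks near the matter-radiation equality scale
```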
Topological defects in the shear field provide a different view than direct measurements of shear correlations, which is the standard approach in lensing cosmology. Whether the study of defects can provide cosmological information is an open question. We have not studied the robustness of defect identification or their statistical properties in the presence of measurement noise. This should be straightforward to carry out, at least with Gaussian noise added to simulated shear fields. The analytical results in Section IV rely on the statistics of a Gaussian random field. On small scales, the lensing shear has distinct non-Gaussian features, so the Gaussian description is valid only on large enough scales or alternatively from sources at very high redshift. It may however be a suitable starting point for perturbative calculations of the effects of weak non-Gaussianities associated with the onset of the non-linear gravitational dynamics regime. In general, measurements from simulations can be used to test analytical results, to compare with data and to find signatures of non-Gaussian features in the defect statistics.
Mast Cells and Histamine: Do They Influence Placental Vascular Network and Development in Preeclampsia? The physiological course of pregnancy is closely related to adequate development of the placenta. Shallow invasion of trophoblast as well as decreased development of the placental vascular network are both common features of preeclampsia. To better understand the proangiogenic features of mast cells, in this study we aim to identify the potential relationship between the distribution of mast cells within the placenta and vascular network development. Material and Methods. Placentas from preeclampsia-complicated pregnancies (n = 11) and from physiological pregnancies (n = 11) were acquired after cesarean section. The concentration of histamine was measured, and immunohistochemical staining for mast cell tryptase was performed. Morphometric analysis was then performed. Results. We noticed significant differences between the examined groups. Notably, in the preeclampsia group compared to the control group, we observed a higher mean histamine concentration, higher mast cell density (MCD), lower mean mast cell area (MMCA) and a lower vascular/extravascular tissue (V/EVT) index. In physiological pregnancies, a positive correlation was observed between the histamine concentration and the V/EVT index, as well as between MCD and the V/EVT index. In contrast, a negative correlation was observed between MMCA and the V/EVT index in physiological pregnancies. Conclusions. Based on the data from our study, we suggest that a differential distribution of mast cells and corresponding changes in the concentration of histamine are involved in the defective placental vascularization seen in preeclamptic placentas. Introduction Angiogenesis is a crucial process for the growth and development of new tissues. We can observe angiogenesis in neoplasms, during tissue repair after injury and in the placenta. Proper placental angiogenesis is necessary for the normal course of pregnancy and labor [1]. The pathogenesis of preeclampsia is still unclear, but it is known that shallow spiral artery invasion may contribute to preeclampsia development. Shallow spiral artery invasion results in poor placental perfusion and may lead to hypoxic stress in the fetus. Immaturity of extravillous trophoblastic cells has been identified as a cause of diminished spiral artery invasion [2]. The placental vascular network is defectively developed as well. In some preeclampsia-complicated pregnancies, the placenta and associated placental vascular network are diminished. Mast cells are found in the placenta at every stage of placental development. Their potential role, apart from immunological properties, can be associated with proangiogenic activity. Mast-cell-derived mediators of known angiogenic potential include vascular endothelial growth factor (VEGF), transforming growth factor beta (TGF-β), histamine, tumor necrosis factor alpha (TNF-α), interleukin-8, and basic fibroblast growth factor [3]. The activation and degranulation of mast cells at the site of angiogenesis stimulate vessel sprouting and sustain mast cell attraction and activation [4]. Data from the literature and our own experience suggest that mast cells may be involved in the pathogenesis of preeclampsia-complicated pregnancies [5,6]. In this study, we examined the relationship between mast cells (number and morphological features), histamine concentration, and microvascular density in placentas obtained after delivery from normal and preeclampsia-complicated pregnancies.
Material and Methods The characteristics of the patients are detailed in Table 1. Placental samples were obtained in a standardized manner after the dissection of fetal membranes. Three samples were excised from the maternal side of the placenta and two were excised from the fetal side. Macroscopically changed areas, large vessels, and fibrous tissues were avoided. Samples were taken immediately after cesarean sections in each group: preeclamptic women (PE, n = 11) and healthy women (control group, n = 11). In the PE group, cesarean sections were performed due to severe preeclampsia. In the control group, cesareans were performed due to severe myopia and breech presentation of the fetus. None of the patients included in the study had contractile activity [7]. The study was reviewed and accepted by the local ethics committee. Immunocytochemical Stainings The tissue fragments were fixed in formaldehyde solution, dehydrated with 96% alcohol, acetone, and xylene and then embedded in paraffin. Next, they were cut into sections, mounted on microscopic slides and deparaffinized, and the intrinsic peroxidase activity was blocked with hydrogen peroxide. The samples were then washed with PBS and incubated with normal human serum for 20 minutes. Excess serum was removed, and the slides were incubated with mouse anti-tryptase antibody (Novocastra, 1:3000), followed by secondary anti-mouse biotinylated antibody and Novostain Super ABC Reagent (Novocastra). Both incubations lasted for 30 minutes. The slides were washed with PBS and exposed to 3,3′-diaminobenzidine (Immunotech) for 3 minutes as an electron donor and hydrogen peroxide as a substrate, resulting in a brown reaction product. The cells were then counterstained with Mayer's hematoxylin (Sigma) for 1 minute. Finally, the slides were mounted with DPX (Sigma). As a negative control, the slides were incubated with PBS instead of the primary antibody. Histamine Concentration Assay A fluorimetric method was applied as previously described [8]. The determination of histamine was based on precolumn derivatization with o-phthaldialdehyde using reversed-phase high-performance liquid chromatography in perchloric acid extracts. A fluorescence detection system was used, with the excitation set at 360 nm and the emission read at 455 nm. The intra- and inter-assay coefficients of variation were 8.5% and 10.0%. Morphometric Analysis Morphometric analysis was carried out with the computer image analysis system Leica Quantimet 500C+ (Leica Cambridge Ltd., Cambridge, UK). The system consisted of an IBM Pentium computer operating at 120 MHz equipped with an ARK Logic 2000MT graphic card and graphic processor. The computer was connected to a CCD video camera (JVC TK-1280E) and a Leica DMLB light microscope. Sections of placentas were imaged using a 20:1 objective and a 10:1.20 ocular. The optical image was focused by the video camera, and an analogue video signal was generated. An analogue-to-digital converter (ADC) produced a digitized video with distinct color level values in the HSI system. The images were processed, and mast cells and placental vessels were clearly identified [9] (Figure 1). Two independent researchers were responsible for image acquisition and analysis. All measurements were recorded in a blinded fashion. Neither researcher had previous knowledge of the clinical data. For each case, 50 random visual fields were analyzed. After system calibration, the area of a single analyzed image (visual field) was defined as approximately 0.14 mm².
The following parameters were analyzed: mast cell density (MCD), defined as the number of mast cells per mm² of placental tissue; mean mast cell area (MMCA), the mean area of mast cell cross-sections; shape of mast cells, defined as the ratio of the long to the short axis of a cell (a perfectly round cell has an index of 1.00); and the vascular/extravascular tissue index (V/EVT index), the ratio of vessel cross-section area to the remaining placental tissue. Technical error caused by uniaxial sections of vessels was eliminated by accepting the lowest value of the Feret diameter as the diameter for a single lumen. Vessels between 10 and 70 μm in diameter were included for analysis. Statistical Analysis Statistical analysis was performed with Statistica 8.0 (StatSoft, Poland). Groups were compared with Student's t-test. In each group, correlations were measured between the histamine concentration, the V/EVT index, and the morphometric parameters. Results Morphometric assessment of the placental vasculature revealed a decrease in the V/EVT index in the PE group compared to the control group (0.15 ± SD 0.04 versus 0.23 ± SD 0.074; P = 0.005; refer to Table 2). The analysis revealed a positive correlation between the histamine concentration and the V/EVT index as well as between MCD and the V/EVT index. A negative correlation existed between the MMCA and the V/EVT index in the control group, while the PE group showed no significant correlation between these parameters. Specific values of correlation for these parameters are provided in Table 3. Discussion Angiogenesis is the process of vessel growth from preexisting vessels, a process that requires stimulation by proangiogenic factors. Important stimulants of placental angiogenesis include VEGFs and placental growth factor, which act through the VEGF receptor family. VEGF production is stimulated by histamine acting through the H₂ receptor [10]. Mast cells have been identified as a potential source of potent proangiogenic factors during angiogenesis, including histamine, VEGF, bFGF, TGF-beta, TNF-alpha, and IL-8. Additionally, mast cells are a source of extracellular matrix-degrading proteinases [4]. In vitro models of angiogenesis under hypoxic conditions provide us with information on increased angiogenesis, which occurs mainly through increases in VEGF synthesis [11]. The proangiogenic action of histamine is mediated through H₁- and H₂-receptor-dependent VEGF synthesis. Mast cell degranulation leads to a local increase in histamine concentration and therefore an increase in VEGF synthesis. Mast cells, however, also synthesize and secrete VEGF apart from histamine. The final effect is vigorous formation of new vessels at the site of mast cell degranulation [12,13]. Decreases in mast cell density in connection with decreased histamine concentration correlated with lower V/EVT index values; nevertheless, this correlation was observed only in the control group. Decreased mast cell area may indicate changes in mast cell activation, perhaps as an effect of degranulation. Hypoxia, which is dominant during placenta formation, is a potent stimulator of mast cell activation and new vessel formation. The most important pathway through which hypoxia stimulates angiogenesis is the activation of hypoxia-inducible factor-1α (HIF-1α) transcription and further synthesis of VEGF. It has also been observed that the synthesis of histamine within mast cells and their degranulation are increased after stimulation with HIF-1α, an effect achieved through histidine decarboxylase (HDC, EC 4.1.1.22) [14].
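For a modern reader, the parameters defined above map onto a few lines of image analysis once binary masks of mast cells and vessel lumina are available. The sketch below is our illustration only (the mask segmentation, pixel calibration, and the bounding-box shape proxy are simplifying assumptions, not the Quantimet 500C+ workflow):

```python
import numpy as np
from scipy import ndimage

def morphometry(mast_mask, vessel_mask, um_per_px=0.5):
    """MCD, MMCA, mean shape index and V/EVT for one visual field,
    from boolean masks; assumes the whole field is placental tissue."""
    px_area = um_per_px ** 2                        # um^2 per pixel
    field_mm2 = mast_mask.size * px_area / 1e6      # field area in mm^2

    labels, n_cells = ndimage.label(mast_mask)
    mcd = n_cells / field_mm2                       # mast cells per mm^2
    areas = ndimage.sum(mast_mask, labels, index=range(1, n_cells + 1))
    mmca = float(np.mean(areas)) * px_area if n_cells else 0.0

    shapes = []                                     # long/short axis ratio;
    for sl in ndimage.find_objects(labels):         # crude bounding-box proxy
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        shapes.append(max(h, w) / min(h, w))        # 1.00 = perfectly round

    v = vessel_mask.sum()                           # vessel pixels
    v_evt = v / (vessel_mask.size - v)              # vascular / extravascular
    return mcd, mmca, (np.mean(shapes) if shapes else 1.0), v_evt
```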
Preeclampsia is a specific state of pregnancy associated with hypertension and proteinuria. Shallow trophoblast invasion of maternal spiral arteries results in an increase in systemic blood pressure. The leading hypothesis for preeclampsia pathogenesis suggests it may arise in order to maintain placental perfusion pressure at a satisfactory level [15]. The vascular bed of the placenta is diminished as a whole, with reduced branching and malformations observed; blood vessels are characterized by decreased number, lumen diameter, and total lumen area [16]. Data from our study support this previous finding, as the V/EVT index was decreased in the PE group compared to the control group. The reduced proportion of vascular area may reflect diminished placental angiogenesis in the first trimester of pregnancy. The decreased vascular network development is a result of a multifactorial pathogenetic course as well as inherited conditions. The differences in mast cell organization observed between the PE and control groups suggest that mast cells take part in the process of vessel development. Because mast cells are observed to gather close to blood vessels just before the process of angiogenesis begins (this is particularly characteristic of neoplasm growth [17]), one would expect an expanded vascular network in preeclamptic placentas. In our study, however, we observed an increase in mast cell density and an increase in histamine concentration but a low V/EVT ratio. We conclude that in PE, susceptibility to histamine and/or other mast cell proangiogenic compounds may be decreased. In PE placentas, the mast cells had a different shape and a smaller area in comparison to the control group. The data suggest that we observed mast cells after intensive degranulation, as we also found an increased concentration of histamine [18]. Increased mast cell density and histamine concentration may be a compensatory effect for defective vascular network development. On the other hand, we cannot exclude impairments in histamine receptor configuration. Functional predominance of the intracellular histamine receptor (H_IC) over the H₁ and H₂ receptors may be a causative factor in the observed decreased angiogenesis [19]. The reason for the decreased V/EVT index in preeclamptic placentas may be associated not only with decreased angiogenesis but also with fibroblast proliferation and fibrosis in the extravascular area. In the examined material, the V/EVT index was assessed in placentas obtained during the third trimester. A remodeling of extravascular tissue during the pregnancy should also be taken into consideration. Mast cells are sources of matrix-degrading enzymes including collagenases and gelatinases [4]. Prolonged stimulation of mast cells by hypoxia leads to an increase in collagenolytic activity and an accumulation of low-molecular collagen fragments, thus providing a stimulatory factor for fibroblast and smooth muscle cell proliferation [20]. A dominance of activated fibroblasts may lead to a decrease in the V/EVT index. We conclude that mast cells are strongly involved in the pathogenesis of preeclampsia, as their concentration and activity are changed in preeclamptic placentas in comparison to physiological placentas. Low vascularization in preeclamptic placentas despite a higher histamine concentration and an accumulation of mast cells suggests that mast cells fail in their proangiogenic potential, concurrently increasing extravascular activity.
A scenario for magnonic spin-wave traps Spatially resolved measurements of the magnetization dynamics on a thin CoFeB film induced by an intense laser pump-pulse reveal that the frequencies of the resulting spin-wave modes depend strongly on the distance to the pump center. This can be attributed to a laser-generated temperature profile. We determine a shift of 0.5 GHz in the spin-wave frequency due to the spatial thermal profile induced by the femtosecond pump pulse that persists for up to one nanosecond. Similar experiments are presented for a magnonic crystal composed of a CoFeB-film based antidot lattice with a Damon-Eshbach mode at the Brillouin zone boundary, and its consequences are discussed. The manipulation of spin-wave frequency and propagation characteristics is of great interest for the design of switching devices such as logic gates in the field of spintronics, and the number of studies in this field is growing rapidly 1,2 . The most promising techniques include (i) current-injected magnetic solitons in thin films with perpendicular anisotropy 3 , and (ii) a change in the ferromagnet's temperature and therewith its saturation magnetization. The latter can either be brought about by direct contact with e.g. a Peltier element, demonstrated by Brillouin-Light-Scattering (BLS) 4 , or it can be optically induced: the authors of a recent study 5 show that by locally heating a ferrimagnetic stripline by up to ΔT = 70 K using a focused cw laser, magnetostatic surface spin waves propagating along the stripline are trapped in the resulting potential well. In this work, we address the generation of a spin-wave trap on a magnonic crystal by means of a temperature gradient induced by intense ultrashort laser pulses. In contrast to the experiments mentioned above, rich magnetization dynamics can be produced without any need for direct contact with the sample by using short optical pulses. One approach uses the inverse Faraday effect, which in combination with a spatially shaped pump spot can create propagating droplets of backward volume magnetostatic waves 6 . On the other hand, the technique applied in this work relies on local, short-wavelength spin-wave generation by a thermally induced anisotropy field pulse to start magnetic oscillations. The spin-wave spectrum originating from such optical excitation is usually quite broad: ultrafast demagnetization leads to a dense population of high-energy excitations, which then gradually decay into lower-energy spin-wave modes on a timescale of a few picoseconds 7 . The result is an overpopulation of the lowest energy states, which on a continuous film are given by the uniform precession or Kittel mode and by a series of perpendicular standing spin waves. Using microstructured magnetic films (magnonic crystals), energy is transferred as well into a Damon-Eshbach type mode whose frequency can be tuned in a wide range by choosing appropriate lattice parameters 8 . A common method to access these dynamics, described by the Landau-Lifshitz model of magnetization precession, makes use of the magneto-optical Kerr effect (MOKE) 9 for the detection of spin waves at ultrafast timescales. Both temporal and spatial information can be obtained by applying time-resolved scanning Kerr microscopy (TRSKM). Using this technique, propagating spin-wave modes have been observed by focusing pump pulses with a full width at half maximum (FWHM) of only 10 μm on a thin Permalloy film 10 .
In this work, we use CoFeB as the sample material due to its low Gilbert damping (α = 0.006) and high saturation magnetization, resulting in a large group velocity v_g ≈ 25 km/s 11 in an in-plane magnetized film. The external magnetic field is applied at 20° to the sample plane. Due to a strong in-plane dipolar anisotropy field, the resulting magnetization will be canted 2-3° with respect to the sample plane, enabling a longitudinal MOKE detection scheme. Ultrashort laser pulses from a regeneratively amplified Ti:Sapphire system are used to (i) excite the magnetization dynamics, (ii) probe the magnetic response of the magnonic crystal, and (iii) create a spin-wave trap scenario. Results In order to find the conditions (e.g., laser pulse power) for spin-wave confinement, first the numerical simulation package COMSOL has been used to calculate the thermal response of a thin film to ultrafast laser excitation. The sample system for these calculations consisted of 3 nm of ruthenium capping a 50 nm cobalt-iron-boron (Co 20 Fe 60 B 20 ) magnetic film on a Si(100) substrate. The results of the simulation are shown in Fig. 1: in the beginning, the laser pulse produces a sudden rise in temperature. After thermalization of optically excited electrons and equilibration of the spin and phonon subsystems, known to take place on timescales of ≈ 100 fs and ≈ 1 ps, respectively, the modeling yields an effective sample temperature, i.e., the temperature of the magnetic system. During the first ≈ 100 ps the spatial as well as the temporal heat gradients are rather large, whereas at later times the temperature remains at a high mean value with a negligible depth profile. While the temperature is mainly homogeneous throughout the sample depth, it changes significantly across its plane, as shown in Fig. 1 (left). The Gaussian distribution of laser intensity in the pump spot produces a temperature profile that persists longer than the lifetime of the observed coherent spin-wave modes. During this time (up to 1 ns), no significant heat transport takes place on a micrometer scale and the FWHM of the lateral temperature distribution remains unchanged. In accordance with the Curie-Weiss law, the temperature increase quenches the sample's saturation magnetization, which leads to a change in the spin-wave frequency spectrum. Experiments were performed separating the pump and probe spots on the sample and measuring the magnetization dynamics as a function of pump-probe distance, allowing us to determine the shift in magnetization oscillation frequency along the lateral temperature gradient. Using a variable time delay Δτ between pump and probe pulses, the time-resolved magneto-optical Kerr effect (TRMOKE) reveals magnetization precession on timescales of up to 1 ns that changes phase by π when reversing the magnetic field direction; the resulting signal is analyzed in the frequency domain (see Fig. 2). The frequency resolution is limited by the temporal scan length and Fourier transform to a 0.5 GHz minimum line width; the lateral resolution is given by the probe-spot diameter of 24 μm FWHM. The dataset presented in Fig. 2 has been obtained on a continuous CoFeB reference film of thickness d = 50 nm. Two modes of magnetic precession are observed: the in-phase precession of all spins (uniform Kittel mode) at 12.6 GHz and a first-order (i.e., n = 1) standing spin wave with wave vector k = nπ/d perpendicular to the sample plane (PSSW) at 18.2 GHz 8,9 .
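The persistence of the lateral profile follows from the small diffusion length on nanosecond timescales; a crude in-plane finite-difference sketch illustrates this (our simplified stand-in, not the COMSOL model; the diffusivity, peak temperature rise and grid are assumed values):

```python
import numpy as np

D = 1.0e-5            # assumed in-plane thermal diffusivity, m^2/s (metal-like)
fwhm = 60e-6          # pump spot FWHM, m (value used in the simulations)
dT0 = 100.0           # assumed peak temperature rise, K
n, dx = 128, 2e-6     # grid points and spacing, m
dt, t_end = 2e-11, 1e-9   # time step (well below dx^2 / 4D) and 1 ns horizon

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
sigma = fwhm / 2.355                      # Gaussian sigma from FWHM
T = dT0 * np.exp(-(X**2 + Y**2) / (2 * sigma**2))

for _ in range(int(t_end / dt)):          # explicit diffusion steps
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += dt * D * lap

# after 1 ns the diffusion length sqrt(4 D t) ~ 0.2 um << 60 um spot,
# so the peak and the lateral FWHM are essentially unchanged
print(T.max())
```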
Both the Kittel and PSSW modes have no wave vector components in the lateral direction, i.e., they do not propagate on the sample but have a rather localized character at the spot of (optical) excitation. Consequently, spatially resolved measurements should show no significant precession outside of the pump laser spot. Fig. 3 (left) shows the color-coded Fourier spectrum of the magnetization oscillation as a function of the spatial separation Δx between the centers of pump and probe spot parallel to the external field direction. The precessional amplitude (in the color code) depends on the distance to the center of the pump pulse, due to the laser intensity profile and the localized character of the observed modes. Additionally, the frequency is strongly position dependent. This is a consequence of increased disorder caused by the intense heating, which leads to a decrease in saturation magnetization and therefore to a change of the spin-wave spectrum. Using the frequency profile (Fig. 3, left), a corresponding magnetization profile M_S(Δx) is calculated. The magnetization profile is then compared to the magnetization curve M(T) obtained for a CoFeB sample of equal thickness and composition using a Vibrating Sample Magnetometer (VSM) (Fig. 3, inset). The resulting position-dependent temperature profile is shown in Fig. 3, right.
[Fig. 2 caption: TRMOKE traces for both directions of the applied field and magnetization M; for quantitative analysis, the difference between both field directions is calculated and an incoherent background originating from high-frequency, high-k magnons excited by the intense pump beam 9 is subtracted; a coherent oscillation of the magnetization is visible (center); a fast Fourier transform is performed (right), and in the frequency domain two modes are identified as the uniform precession (Kittel k = 0 mode) and perpendicular standing spin waves (PSSW); to determine the central frequency x_c of each mode, a Gaussian is fitted to the data.]
[Fig. 3 caption: the precession frequency observed after optical excitation is not constant across the pump spot; for the Kittel mode, a local magnetization can be calculated and, together with the magnetization curve shown in the inset, the temperature of the spin system can be derived (right); closed diamonds correspond to a displacement of the probe with respect to the pump spot in a direction orthogonal to the applied field, squares depict a parallel displacement; solid lines are Gaussian fits to the data; curves are offset so that the frequency dip of the Kittel mode is centered at Δx = 0.]
While we expect that the Kittel and PSSW modes do not propagate across the sample plane, in magnonic crystals composed of periodically arranged antidots the optical excitation of dipolar Damon-Eshbach surface waves (DE) of selective wave vector has been shown 8 . The wave vector of excited DE surface modes lies either perpendicular or at 45° to the external magnetic field, along the lattice vector of the magnonic crystal's primitive cell 8 . Since the most significant density of states is expected for DE spin waves with wave vectors close to the Brillouin zone boundary, where the bands are flattening 2,12 , only short propagation distances are expected: within a certain band width we derive group velocities of 1.3 km/s at the middle of the Brillouin zone, reducing further towards higher k-vectors. Therefore, the propagation length estimated in the antidot lattice is reduced significantly, to about 1.3 μm towards the zone boundary for the Bloch mode. In addition, we note that the DE spin-wave dispersion is derived for continuous thin films.
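The frequency-to-magnetization step described above can be sketched with the in-plane Kittel relation f = (γμ₀/2π)√(H(H + M_S)). This is a simplification (the experiment applies the field at 20° to the plane, and all parameter values below, including the field, are our assumptions chosen to give frequencies near 12.6 GHz):

```python
import numpy as np
from scipy.optimize import brentq

MU0 = 4e-7 * np.pi       # vacuum permeability, T m / A
GAMMA = 1.76e11          # electron gyromagnetic ratio, rad / (s T)

def kittel_f(H, Ms):
    """In-plane Kittel frequency in Hz for field H and magnetization Ms
    (both in A/m); neglects the 20-degree field tilt and anisotropies."""
    return (GAMMA * MU0 / (2 * np.pi)) * np.sqrt(H * (H + Ms))

def ms_from_frequency(f, H, ms_max=2.0e6):
    """Invert the Kittel relation for the local saturation magnetization."""
    return brentq(lambda Ms: kittel_f(H, Ms) - f, 1.0, ms_max)

H = 1.0e5                         # assumed applied field, A/m (~126 mT)
for f in (12.6e9, 12.1e9):        # unheated frequency vs the 0.5 GHz dip
    print(f"{f/1e9:.1f} GHz -> Ms = {ms_from_frequency(f, H)/1e6:.2f} MA/m")
```

Paired with a measured M(T) curve, such an inversion converts the observed frequency dip into the local spin temperature, as done for Fig. 3.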
With the lateral variation of the temperature here, the dispersion is a good local approximation only for wavelengths much smaller than the length scale of the magnetization gradient. The DE spin-wave dispersion is thus locally modified by a temperature-dependent magnetization M_S(T(Δx, t)) as well. Magnetization dynamics measurements on a magnonic crystal and their analysis are presented in Fig. 4, where the pump and probe beams were separated (a) parallel, (b) orthogonal and (c) at 45° to the external magnetic field. An additional (magnonic) Damon-Eshbach mode is visible (bottom images of Fig. 4), corresponding to a mode at the Brillouin zone boundary with k = π/a. The DE mode splitting at the Brillouin zone boundary is not observed due to the limited frequency resolution of the measurements. Similarly to the Kittel and PSSW modes, the DE precession frequency shows a Gaussian dependence on position with a frequency minimum at the position of maximal pump intensity (i.e., temperature), where a shift by 0.5 GHz is observed. Consequently, spin waves in the antidot lattice would have to match the frequency shift while propagating, thus requiring frequency upconversion. This is in contrast to spin-wave wells created by a finite-sized magnetic structure, which result in a discrete energy spectrum and point to a coherent superposition of the reflected spin waves 13 . Due to the large dimension of the laser spot size and the short spin-wave propagation distance, no coherent effects are observed in our case. As a result, we observe a continuously varying frequency profile along the pump spot. The Fourier-transformed spectra for each measurement are plotted in the top row of Fig. 4. Solid lines represent Gaussian fits to the experimental points; the fitted widths amount to around 50-80 μm. The slightly larger value for the horizontal measurement is due to an ellipticity of the pump focus. By comparison of the FWHM of the Kittel and DE modes, the surface mode's propagation characteristics can be determined. Propagation of the DE mode would be visible as an asymmetric broadening with respect to the Kittel mode for the scan direction perpendicular to the applied field. In Fig. 4(a), for the case of the scan direction parallel to the applied field, both modes show the same FWHM. Also for the orthogonal and 45° configurations (Fig. 4(b,c)) the DE mode shows the same FWHM, which is expected given the short propagation length of the DE mode and the resolution also being limited by the probe spot diameter of 24 μm FWHM. However, their intensities interchange for the 45° configuration and the DE mode becomes larger in its Fourier amplitude. Discussion The presented experiments and the consequences from their analysis make two important points. Firstly, we observe a magnetization profile M_S(T(Δx, t)) that follows the intensity profile of the optical excitation and allows us to modify the observed spin-wave spectrum. Despite the ultrashort character of the excitation, the temperature profile remains over the range of our observation of one nanosecond. This change in saturation magnetization impacts the locally supported, position-dependent eigenfrequency, and a controlled magnetic non-uniformity can be formed by the local absorption of the femtosecond laser pulse in space and time. Secondly, the laser-excited spin-wave excitation results in a large spin-wave density, leading to a high probability for scattering between spin waves and a reduced mean free path, an effect known from hot phonon localization.
In addition, spin waves traveling away from the spot of excitation would propagate towards an increasing effective saturation magnetization due to the heat gradient imposed by the pump laser, so that a frequency up-conversion is needed to adapt to the local spin-wave frequency at the boundary to the cooler region. This imposes additional scattering, as spin waves are continuously reflected when entering a colder region with higher saturation magnetization. Thus, an interesting scenario for a dynamic modification of magnetic film properties via femtosecond laser pulses has been demonstrated. Methods To simulate the thermal response of the thin film, the heat diffusion equation is solved in rotational symmetry for insulating sample edges and a fixed temperature at the bottom of the substrate, using the material parameters listed in Table 1. Starting from equilibrium at room temperature, energy is deposited by an ultrashort laser pulse with a duration of 50 fs. The optical penetration depth is Λ = 16.1 nm, in accordance with the value for ruthenium 14 as well as with the average value of cobalt and iron, respectively 15 . In the film plane, a Gaussian intensity profile is assumed with a FWHM of 60 μm. The energy carried by the pulse amounts to a total of 1.6 μJ (total fluence of 13 mJ cm⁻²). Magnetization dynamics experiments were conducted on amorphous 50 nm-thick Co 40 Fe 40 B 20 films magnetron-sputtered onto a Si(100) substrate and capped with a 3 nm Ru layer to prevent oxidation 16 . Ultrashort laser pulses (central wavelength λ_c = 800 nm, pulse duration 50 fs) amplified by a Coherent RegA 9040 regenerative amplifier (250 kHz repetition rate) were used to excite and detect the magnetization dynamics in a pump-probe experiment. The angle of incidence of the probe beam was 25° to the surface, while the pump beam impinged on the surface perpendicularly. The pump and probe beams are focused to Gaussian spots with 93 and 24 μm FWHM, respectively. This mainly limits the spatial resolution. The expected experimental width is given by the convolution of the pump- and probe-beam profiles. However, the width of the Gaussian can be determined with a much higher precision than the probe beam's FWHM. A double-modulation technique was used to detect the time-resolved longitudinal component of the magnetic precession: the pump intensity is modulated at 800 Hz by a chopper and the probe beam's polarization is modulated at 50 kHz by a photoelastic modulator. Table 1. Material parameters of the COMSOL simulation for the 3 nm Ru/50 nm CoFeB/50 μm Si sample: density ρ, heat capacity c_p, thermal conductivity κ, and reflectivity R at λ = 800 nm. The CoFeB values of ρ and c_p are average values for Co and Fe; the CoFeB reflectivity is approximated by the value for Co.
Phospholipid composition and longevity: lessons from Ames dwarf mice Membrane fatty acid (FA) composition is correlated with longevity in mammals. The "membrane pacemaker hypothesis of ageing" proposes that animals whose cellular membranes contain high amounts of polyunsaturated FAs (PUFAs) have shorter life spans because their membranes are more susceptible to peroxidation and further oxidative damage. It remains to be shown, however, that long-lived phenotypes such as the Ames dwarf mouse have membranes containing fewer PUFAs and thus less prone to peroxidation, as would be predicted from the membrane pacemaker hypothesis of ageing. Here, we show that across four different tissues, i.e., muscle, heart, liver and brain, as well as in liver mitochondria, Ames dwarf mice possess membrane phospholipids containing between 30 and 60 % PUFAs (depending on the tissue), which is similar to the PUFA contents of their normal-sized, short-lived siblings. However, we found that Ames dwarf mouse membrane phospholipids were significantly poorer in n-3 PUFAs. While the lack of a difference in PUFA contents contradicts the membrane pacemaker hypothesis, the lower n-3 PUFA content in the long-lived mice provides some support for the membrane pacemaker hypothesis of ageing, as n-3 PUFAs comprise those FAs blamed most for causing oxidative damage. By comparing tissue composition between 1-, 2- and 6-month-old mice in both phenotypes, we found that membranes differed both in the quantity of PUFAs and in the prevalence of certain PUFAs. In sum, membrane composition in the Ames dwarf mouse supports the concept that tissue FA composition is related to longevity. Long-lived species contain tissues with fewer PUFAs but more monounsaturated and saturated FAs (reviewed in Hulbert et al. (2008)). Certain PUFAs are essential dietary compounds in the mammalian diet, make up cell membranes and affect a suite of cellular functions (Pond and Mattacks 1998). PUFAs, however, are susceptible to peroxidation and, therefore, may potentially reduce lifespan. PUFAs in the mitochondrial membrane are particularly vulnerable to oxidative damage and form highly reactive products that cause further oxidative damage (Esterbauer et al. 1991; Hulbert et al. 2007). Finally, PUFAs lead to the formation of hazardous DNA adducts and play a potential role in genome stability (Gruz and Shimizu 2010). The idea that membrane composition influences lifespan is encapsulated in the "membrane pacemaker hypothesis of ageing" (Hulbert 2008; Pamplona and Barja 2007; Pamplona and Barja 2011). Whilst this hypothesis emphasises the impact of total PUFA content and membrane peroxidisability, we concluded previously, based on a comparison across a wide range of mammalian species, that it is the ratio between the n-3 and n-6 PUFA subclasses that best explains the association between the FA composition of membranes and maximum lifespan (Valencak and Ruf 2007). Differentiating between the n-3 and n-6 PUFA subclasses when relating membrane composition to certain physiological traits has also proven successful in the context of seasonal changes in membrane composition (Valencak et al. 2003), for maximum running speed in mammals (Ruf et al. 2006), and for the occurrence and characteristics of torpor and hibernation (reviewed in Ruf and Arnold 2008). Due to the genetic basis of the many traits involved in ageing, however, tests of hypotheses in this context are arguably better performed within a species (Speakman 2005).
Fortunately, ageing research in the past years has generated long-lived genotypes such as the Ames dwarf mouse, which represents an interesting model to test the concept. Ames dwarf mice are mutant mice that are homozygous for a spontaneous mutation and were shown to live almost 50 % longer than their normal siblings (Brown-Borg and Bartke 2012; Bartke 2012), as they carry a "longevity gene", Prop1 df . Ames dwarf mice reportedly show a reduced body size, lower plasma levels of insulin, lower levels of the insulin-like growth factor IGF-1, lower glucose and lower thyroid hormone (Brown-Borg and Bartke 2012; Bartke 2012). Interestingly, they were shown to have a consistently lower body core temperature throughout the circadian cycle than their heterozygous siblings (Hunter et al. 1999). Together, the impact of all these traits relevant for a long lifespan has been identified in Ames dwarf mice, but to our knowledge, membrane FA composition or the content of PUFAs in this long-living mouse model has not been explored yet. We thus aimed to compare the tissue phospholipid FA composition of homozygous, long-lived Ames dwarf mice (Prop1df/Prop1df) with heterozygous, wild-type (Prop1+/df) animals as controls. According to the "membrane pacemaker hypothesis of ageing", we predicted that homozygous Ames dwarf mice might show a lower membrane PUFA content than their heterozygous siblings. Thereby, the long-lived phenotype might avoid excess lipid peroxidation in the membranes, which could favour the long life span. Amongst PUFAs, the n-3 subclass, and in particular docosahexaenoic acid (DHA), a PUFA with six double bonds, stands out for being highly susceptible to peroxidative damage (Turner et al. 2003). As a very dominant n-3 PUFA in membranes of small mammals, DHA is eight times more prone to peroxidation than linoleic acid (LA), which has only two double bonds and belongs to the n-6 PUFAs. We thus hypothesised that n-3 PUFA contents might be lower in the long-lived phenotype than in their normal-sized siblings, as would be expected from the general association found in mammals (Valencak and Ruf 2007). To control for potential growth effects as individuals mature, we sampled tissues at 1, 2 and 6 months of age. Similarly, to avoid generalisations arising from the study of single tissues, we analysed membrane phospholipids in four different tissues, namely heart, muscle, brain and liver, and finally in isolated liver mitochondria. Animals and housing Prior to the study, we established a colony of mice consisting of males and females heterozygous for the gene Prop1 (Prop1+/Prop1df), purchased from Charles River Laboratories, Bad Sulzfeld, Germany. We crossed heterozygous individuals and selected the offspring that were homozygous for Prop1df. Homozygosity for Prop1df phenotypically results in dwarfism and extended lifespan. All mice were pair-housed by gender and genotype at 22 ± 2 °C on a 16 h:8 h L:D photoperiod in standard cages (Eurostandard Type II Long, Tecniplast, Italy). They were provided with a high-energy diet "V118x" (Ssniff, Soest, Germany), described in Table 1, and water ad libitum. Major murine pathogens were monitored regularly using co-housed sentinel animals. Dwarf offspring (Prop1df/Prop1df) could be easily distinguished from normal siblings by body size; thus, we refrained from genotyping in our study.
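The double-bond-dependent peroxidation susceptibility mentioned above is often condensed into a peroxidation index (PI) computed from the FA composition. The sketch below uses the standard double-bond weightings from the comparative membrane literature (the weighting convention and the example composition are our additions, not values from this paper); note that under these weights DHA counts eight times as much as LA per percent, matching the ratio quoted above:

```python
# Peroxidation index (PI) from a phospholipid FA composition (in percent),
# with the conventional weights per number of double bonds.
PI_WEIGHTS = {1: 0.025, 2: 1.0, 3: 2.0, 4: 4.0, 5: 6.0, 6: 8.0}

def peroxidation_index(composition):
    """composition: {FA name: (percent of total FAs, double bonds)}."""
    return sum(pct * PI_WEIGHTS.get(double_bonds, 0.0)
               for pct, double_bonds in composition.values())

# illustrative (made-up) composition, not data from the study
example = {
    "C 18:1 n-9":       (15.0, 1),
    "C 18:2 n-6 (LA)":  (20.0, 2),
    "C 20:4 n-6 (AA)":  (12.0, 4),
    "C 22:6 n-3 (DHA)": (10.0, 6),
}
print(peroxidation_index(example))   # lower PI = less peroxidizable membrane
```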
Further, there was no need for genotyping the heterozygous littermates, as heterozygous Prop1+/df and homozygous wild-type mice show no phenotypic difference and can both be used as controls for comparison with Ames dwarf (Prop1df/df) mice (Helms et al. 2010). Total hearts, brains and livers, along with the hindleg musculus vastus, were sampled from a total of 21 heterozygous and 18 Ames dwarf mice. Both sexes were used in our study, both in control animals and in Ames dwarf mice. Liver mitochondria were isolated from another 16 6-month-old animals (six Ames dwarf, 10 controls) to make sure the sampled material (liver tissue) would allow sufficient analyses of FA composition. All mice, of both sexes, originated from the F1 generation of the colony. Note that we kept mothers and offspring together for 4 weeks after birth to make sure that even the Ames dwarf offspring would be viable. Tissue collection, preparation and analysis All animals were killed by cervical dislocation, and tissues were rapidly removed and stored in Eppendorf tubes at −18 °C until lipid extraction and analysis (<2 months). Tissue sampling, lipid extraction, analysis and the computation of indices have been detailed in previous publications (Valencak et al. 2003; Valencak and Ruf 2007; Valencak and Ruf 2011). Briefly, lipids were extracted using chloroform and methanol (2:1 v/v), separated on silica gel thin-layer chromatography plates (Kieselgel 60, F254, 0.5 mm, Merck) and then made visible under ultraviolet light, with the phospholipid fraction isolated. Phospholipid extracts were transesterified by heating (100 °C) for 30 min, extracted into hexane and analysed by gas-liquid chromatography (GLC) (Perkin Elmer Autosystem XL with Autosampler and FID; Norwalk, CT, USA). FA methyl esters were identified by comparing retention times with those of FA methyl ester standards (Sigma-Aldrich, St. Louis, MO, USA). Liver mitochondria were isolated according to standard isolation methodologies. In brief, livers were quickly harvested, and the liver tissue was chopped with scissors and minced with a scalpel blade on a cold tile prior to homogenisation and differential centrifugation. Isolated mitochondria samples were stored in Eppendorf tubes at −18 °C until analysis (<2 weeks). In all tissues and in the liver mitochondria examined, we measured the composition of total phospholipids, which obviously combines all subcellular membranes in one measurement. All experiments described here were approved by the ethics committee of the University of Veterinary Medicine, Vienna (No. 10/12/97/2009) and comply with the current laws of Austria, where the experiments were performed. Statistical analysis Statistical analyses were conducted in R for Mac (2.13.1; R Development Core Team 2011). We compared individual FA contents and PUFA classes between mouse phenotypes using body weight, tissue type and age as covariates. Because all four tissues were sampled from the same animals, we adjusted for repeated measurements by computing linear mixed effects models with individual intercepts as the random factor (library nlme; Pinheiro et al. 2012). F and p values for analyses of variance (ANOVAs) from these models were computed using marginal sums of squares. Interestingly, mouse phenotype still explained variance after the effect of body weight had been accounted for.
In addition, we computed a principal component analysis, which indicated that the first principal component explained 82.7 % of the variance in the data set and was reflected by the ratio between the most abundant n-3 FA, docosahexaenoic acid (DHA; C 22:6 n-3), and LA (C 18:2 n-6), as well as arachidonic acid (AA; C 20:4 n-6). When the principal component analysis was run across all four tissues (including the brain), the first principal component explained only 48.8 % of the variance, but the analysis provided essentially the same result. Therefore, we mostly concentrated on comparing DHA and LA contents. Multiple comparisons of FA contents at specific time points were computed using Tukey-type tests with the R package "multcomp" (Hothorn et al. 2008).
Results
Whilst total PUFA content was similar in both phenotypes (Tables 2 and 3), other aspects of membrane composition in the long-lived Ames dwarf mice differed from those of their heterozygous, normal-sized siblings across all four tissues analysed. Figures 1, 2, 3 and 4 illustrate the proportions of phospholipid DHA (C 22:6 n-3) and LA (C 18:2 n-6) in freshly weaned (1 month old), young adult (2 months old) and adult (6 months old) Ames dwarf mice compared to heterozygous controls from the same strain. These two FAs were the most abundant PUFAs in all tissues (except for LA in the brain) (Table 3). In heart, skeletal muscle and liver, we found increasing differences between the phenotypes with age, with lower amounts of DHA and higher proportions of LA in Ames dwarf mice. The relationship between these two FAs largely corresponds to the first principal component (based on heart, muscle and liver phospholipids of 6-month-old animals), which explained 82.9 % of the total variance in the data set (Table 4). Note that some of the loadings also point to a close relationship between AA and DHA (Table 4).
Proportions of individual FAs in heart, skeletal muscle, liver and brain phospholipids are given in Tables 2 and 3. An ANOVA including all age classes and all four tissues, along with body weight and mouse phenotype, showed that the proportion of each single FA depended on tissue type (e.g., DHA: F3,108 = 173.6; p < 0.0001). We observed a significant interaction between the age of individual mice and tissue type for all single FAs (e.g., DHA: F3,108 = 7.78; p = 0.0001), except for C 14:0, C 17:0 and C 18:0. The proportions of seven out of 13 FAs were affected by phenotype (C 18:0, C 18:1 n-9, C 18:2 n-6, C 18:3 n-3, C 20:5 n-3, C 22:5 n-3 and C 22:6 n-3), even after the influence of body weight was accounted for (p < 0.05 in each case). Note that the amount of C 18:2 n-6 in brain phospholipids was below 1 % in both phenotypes (Fig. 4, Table 2), so the brain differed in tissue composition from all the others. DHA content differed significantly between tissues (p < 0.0001) in both phenotypes, except between brain and muscle in the Ames dwarf mice, where the proportions were similar (Tables 2 and 3). Also, all tissues differed significantly in the amount of n-6 PUFAs (p < 0.0001), with one exception: amongst the heterozygous control animals, total n-6 PUFAs did not differ between skeletal muscle and heart (z = −1.8; p = 0.44; Tables 2 and 3), whereas this difference reached significance in the long-lived phenotype (z = 3.1; p = 0.02; Tables 2 and 3). We did not include the FA composition of isolated liver mitochondria in the above models, as we used another batch of animals to harvest mitochondria.
The results from the liver mitochondria revealed the same pattern again, with more n-3 PUFAs, namely DHA, in the control animals compared with the Ames dwarf mice (Table 3). Interestingly, the proportion of DHA in liver mitochondria phospholipids amounted to 9 % and, thus, was equal to its proportion in liver tissue phospholipids (Table 3).
Discussion
PUFAs are most susceptible to lipid peroxidation and thus, if peroxidation significantly affects ageing, exceptionally long-lived mammals and birds should have tissues with low PUFA content according to the membrane pacemaker hypothesis of ageing (reviewed in Hulbert 2010). Indeed, membranes containing smaller proportions of highly unsaturated PUFAs have been reported from the extremely long-lived naked mole rat (Hulbert et al. 2006a), from the short-beaked echidna (Hulbert et al. 2008), from long-lived galliform birds (Buttemer et al. 2008) and, most recently, from bivalves (Munro and Blier 2012). Similarly, short-lived worker bees have been shown to have highly polyunsaturated membranes, whereas the long-lived queen has few PUFAs in her membranes (Haddad et al. 2007). Amongst rodents, wild-derived mice also show membrane unsaturation correlated with their maximum lifespan (Hulbert et al. 2006b). Yet, ageing research in recent years has revealed that multiple mechanisms can explain the outstanding lifespan of long-lived mouse mutants such as the Ames dwarf mouse (reviewed recently in Bartke 2012). Ames dwarf mice are very low in circulating insulin-like growth factor 1 (IGF-1) (Bartke and Brown-Borg 2004) and low in insulin levels whilst, at the same time, having high insulin sensitivity (Sharp and Bartke 2005). Similarly, they have been shown to have reduced mammalian Target of Rapamycin (mTOR) signalling (Sharp and Bartke 2005), whilst anti-oxidative defence systems are up-regulated (Brown-Borg and Bartke 2012), as is the resistance to various forms of oxidative, toxic and metabolic stress (reviewed in Bartke 2012; Brown-Borg and Bartke 2012).
Does the tissue lipid profile of Ames dwarf mice also contribute to their extended lifespan amongst mice? According to the prediction of the membrane pacemaker hypothesis of ageing, Ames dwarf mice should have significantly lower PUFA contents in their membranes than wild type, non-mutant control animals. In our study, which to our knowledge is the first to address this question in a mutant mouse model, we found that overall PUFA content was not significantly different between the two phenotypes in muscle, heart, liver and brain phospholipids (Tables 2 and 3). The only exception was liver mitochondrial composition, in which PUFA content was significantly lower in the Ames dwarf mice than in the control animals (Table 3). Liver mitochondrial phospholipids from Ames dwarf mice contain 5 % less PUFAs than those from controls (but notably still contain 52 % PUFAs). This difference in mitochondrial phospholipids clearly supports the membrane pacemaker hypothesis of ageing in isolated mitochondria, but total PUFA contents in the other tissues gave no evidence for differences between phenotypes.
Fig. 1 Heart phospholipid docosahexaenoic acid (DHA) content (a) and linoleic acid content (b) in 1-, 2- and 6-month-old Ames dwarf mice and normal-sized littermates. Total n Ames dwarf mice = 18, total n normal littermates = 21; means ± SEM
Still, our data from heart, skeletal muscle and liver also seem to support a relation between membrane composition and ageing, since Ames dwarf mice had lower contents of DHA and, hence, lower degrees of unsaturation in these tissues. As illustrated in Fig. 5 (upper panel), however, the relation between membrane composition and maximum lifespan amongst strains of laboratory mice is very weak, and even factors that did differ between phenotypes in our study, e.g., n-3 PUFA content, have only little predictive power (Fig. 5, lower panel). In particular, the Ames dwarf mouse is much longer lived than predicted from its membrane composition alone. It should be noted that the large scatter shown in Fig. 5 is not due to large variation within strains. Generally, tissue phospholipid composition is a regulated trait in mammals: muscle phospholipid PUFA content varies from 34.54 % in cattle to 70 % in the ibex (Valencak and Ruf 2007), whereas within a species, strain differences are much smaller (Valencak and Ruf 2007; c.f. SEMs in Tables 2 and 3). Whilst our study from 2007 revealed that phospholipid DHA contents did not correlate with maximum lifespan in mammals, our recent data from the Ames dwarf mouse contradict our earlier findings (Valencak and Ruf 2007), with the exact reason for this being unclear to us. It is possible that, again, some interspecific observations are not confirmed intraspecifically, just as with the relationship between energy expenditure and lifespan (Speakman 2005).
Fig. 3 Liver phospholipid docosahexaenoic acid (DHA) content (a) and linoleic acid content (b) in 1-, 2- and 6-month-old Ames dwarf mice and normal-sized littermates. Total n Ames dwarf mice = 18, total n normal littermates = 21; means ± SEM
Fig. 5 Data obtained in this study are from Ames dwarf mouse tissues and controls; all other data points are from Hulbert et al. (2006a, b). AD control refers to heterozygous control Ames dwarf mice.
Generally, Ames dwarf mice have indeed been reported to have lower reactive oxygen species (ROS) production along with increased resistance to oxidative stress (Murakami 2006; reviewed in Brown-Borg and Bartke 2012), although this has not been assessed in the context of lipid peroxidation. Ames dwarf mice aged 2 months or older were consistently found to have lower levels of DHA than heterozygous controls, which should make them less susceptible to peroxidation, as DHA is thought to be eight times more prone to peroxidation than LA, for instance. This susceptibility is often expressed as a peroxidation index, i.e., the relative susceptibility of the acyl chains, which was originally determined indirectly via oxygen consumption (Holman 1954; reviewed in Hulbert et al. 2007). The peroxidation index largely reflects the DHA content of a given membrane. A potential problem with this simple index is that research in humans and animal models has demonstrated that probably the most damaging and reactive product of lipid peroxidation is the aldehyde 4-hydroxy-2-nonenal (HNE) (Esterbauer et al. 1991; Lakatta and Sollott 2002; Juhaszova et al. 2005). In contrast to other ROS species, HNE is relatively long-lived and acts not only in the immediate proximity of membranes but can diffuse from the site of its origin and damage even distant targets (Esterbauer et al. 1991; Lakatta and Sollott 2002). Importantly, HNE originates not from n-3 PUFAs, such as DHA, but is formed by superoxide reaction with n-6 PUFAs.
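Returning to the peroxidation index itself, a minimal sketch of its calculation is given below, assuming the double-bond weighting commonly cited in this literature (e.g., as reviewed in Hulbert et al. 2007); the example composition is invented for illustration and is not data from this study.

```python
# Peroxidation index (PI) from a membrane fatty acid profile, using the
# double-bond weighting commonly cited in the membrane pacemaker literature:
# PI = 0.025*%monoenoics + 1*%dienoics + 2*%trienoics
#      + 4*%tetraenoics + 6*%pentaenoics + 8*%hexaenoics
# The example composition below is invented for illustration only.

PI_WEIGHTS = {1: 0.025, 2: 1.0, 3: 2.0, 4: 4.0, 5: 6.0, 6: 8.0}

def peroxidation_index(percent_by_double_bonds: dict[int, float]) -> float:
    """Map of number of double bonds -> mol% of total FAs, weighted and summed."""
    return sum(PI_WEIGHTS.get(n, 0.0) * pct
               for n, pct in percent_by_double_bonds.items())

# Hypothetical membrane: 30% monoenes, 20% LA-type dienes, 10% AA-type
# tetraenes, 8% DHA-type hexaenes (saturates omitted; their weight is 0).
example = {1: 30.0, 2: 20.0, 4: 10.0, 6: 8.0}
print(f"PI = {peroxidation_index(example):.1f}")
```

Note that the hexaenoic weight (8) relative to the dienoic weight (1) reproduces the roughly eightfold difference in peroxidation susceptibility between DHA and LA quoted above.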
Further, it has been shown that PUFAs are involved in the production of DNA adducts and, thus, in the susceptibility to endogenous tissue DNA damage (Chung et al. 2000; reviewed in Gruz and Shimizu 2010). Finally, there is increasing evidence challenging the hypothesis that ageing is related to ROS production (reviewed in Speakman and Selman 2011). Therefore, to infer a causal relationship between low membrane n-3 PUFA levels and increased longevity, direct experimental research is required. Specifically, we suggest identifying the potential detrimental effects of certain single FAs, such as DHA, on lifespan. One experiment in this context, which has already been carried out, did not support such a causal relationship: whilst feeding C57BL/6 mice n-3- or n-6-PUFA-enriched diets significantly altered their membrane compositions, it had no effect whatsoever on lifespan compared to controls (Valencak and Ruf 2011).
Therefore, we suggest there may be an alternative explanation for why Ames dwarf mice differ significantly in their membrane n-3 PUFA content from their heterozygous siblings. Recently, Ames dwarf mice have been found to have fully functional mitochondria but lower mitochondrial activity (Choksi et al. 2011). This decreased mitochondrial metabolism in the homozygous Ames dwarf mice might be linked to lower membrane n-3 PUFA contents, since certain PUFAs such as DHA correlate with the metabolic activity of tissues in mammals (Turner et al. 2003). DHA and other n-3 PUFAs are known to up-regulate oxidative capacity (Weber 2009) and have been shown to be important activators of mitochondrial uncoupling proteins (Jezek et al. 1998). These functions of n-3 PUFAs could explain why their levels are decreased in Ames dwarf mice, if their reduction serves to decrease metabolism and enzyme activities rather than ROS production. Also, the lower body temperature of 34°C found in Ames dwarf mice (Hunter et al. 1999) indicates that metabolism is lowered, possibly due to a different membrane FA composition. Finally, relating to the potentially altered lipid peroxidation in the Ames dwarf mice, they might have increased DNA stability (Gruz and Shimizu 2010).
In the present study, the single tissue deviating from the general pattern was the brain, which contained almost no n-6 PUFAs but equal amounts of DHA in the Ames dwarf mice and the wild type controls (Fig. 4). If Ames dwarf mice have lower DHA levels than heterozygotes in other tissues, why is DHA not lower in the brain? DHA is a major structural component of sperm, retinal membranes and brain phospholipids, and regulates enzymes, receptors and transport proteins (Stillwell and Wassall 2003). Additionally, DHA is a precursor of important eicosanoids (Stillwell and Wassall 2003), and eicosanoids fulfil several functions in the brain (reviewed in Tassoni et al. 2008). Therefore, it is likely that a high DHA content is essential for brain functionality and, thus, is observed in all mammals independent of their lifespan. Similarly, our new data confirm that brain tissue is very rich in oleic acid (C 18:1 n-9), due to its being a major constituent of myelin lipid (Rioux and Innis 1992). Hulbert et al. (2006b) also reported relatively high DHA contents of 11 % from the very long-lived naked mole rat and concluded that brain tissue requires high levels of n-3 PUFAs to ensure intracellular signalling processes (Hulbert et al. 2006b).
We found that membrane PUFA composition in all tissues studied, even in the brain, changed significantly with the age of the animals (Figs. 1, 2, 3 and 4). Age-related changes in PUFA levels, especially in the n-3/n-6 PUFA ratio, are also well known from humans (Lakatta and Sollott 2002). Also, older European hares have an altered FA composition in comparison to young individuals (Valencak, unpublished; Valencak et al. 2003), and the same effect was found in humans (Baur et al. 2000). However, it seems that in comparative studies (e.g., Hulbert et al. 2006b; Valencak and Ruf 2007; Munro and Blier 2012), the influence of age on membrane FA composition has been largely overlooked in the past. Our current data from the Ames dwarf mice point to the need to include individual age in general models relating membrane FAs to certain traits. We assume that, as membrane composition is tightly regulated in mammals with little variance between individuals (Valencak and Ruf 2007), the differences observed here between mice at 1, 2 or 6 months of age are caused by differential up- and down-regulation of the acyltransferases, elongases and desaturases involved in membrane remodelling (Sprecher 2000). More specifically, the FA-specific glycerol-3-phosphate acyltransferase 1 (GPAT1) represents a likely candidate enzyme causing the membrane compositional differences between 1-, 2- or 6-month-old mice (Coleman and Mashek 2011). Yet, this remains speculative, as Coleman and Mashek (2011) refer to triacylglycerol metabolism, whereas our tissues under test were membrane phospholipids. Thus, future studies are needed to identify the role of all enzymes involved, with specific attention to differences between membrane phospholipids and triacylglycerols.
Conclusions
We conclude that tissues as well as mitochondrial membranes from long-lived Ames dwarf mice have low proportions of n-3 PUFAs and, thus, may experience lower oxidative stress because their tissues (except for the brain) are more resistant to lipid peroxidation; specifically, tissue n-3 PUFAs were related to lifespan. This observation does not, however, necessarily indicate a causal relationship between in vivo ROS production and membrane composition. Rather, given the lower body temperature (Hunter et al. 1999) and increased resistance to oxidative stress (Murakami 2006) of Ames dwarf mice, we suggest that their altered membrane composition is caused by altered activity of certain acyltransferases that down-regulate n-3 PUFA content and, hence, mitochondrial activity in the Ames dwarf mice, to match their slow pace of life.
2022-12-28T14:25:29.787Z
2013-05-03T00:00:00.000
{ "year": 2013, "sha1": "487d63a15b83804311cad7e8f915a0c482011a0a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11357-013-9533-z.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "487d63a15b83804311cad7e8f915a0c482011a0a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
121738578
pes2o/s2orc
v3-fos-license
Effect of Upstream Wake on Shower-Head Film Cooling
The present study aims to investigate the effect of an upstream wake on the convective transport phenomena over a turbine blade with shower-head film cooling. A naphthalene sublimation technique was implemented to obtain the detailed mass transfer distributions on both the suction and pressure surfaces of the test blade. All mass transfer runs were conducted on a blowing-type wind tunnel with a six-blade linear cascade. The leading edge of the test blade was drilled with three rows of equally spaced injection holes. The upstream wake was simulated by a circular bar with the same diameter as that of the trailing edge of the test blade.
INTRODUCTION
The effects of shower-head film cooling and upstream wake on turbine blade heat transfer and passage flow characteristics have been extensively studied in recent years. To date, measured results have been reported on the theory of wake-induced transition and unsteady heat transfer over the turbine blade area under the influence of a passing wake and film cooling. Of particular interest for future studies are the complex flow field and heat transfer phenomena around the leading edge of the blade when the passing wake encounters the injection flows ejected through the injection holes. A knowledge of this complex flow structure, resulting from the interaction of the passing wake and the laminar boundary-layer flow, is essential to the reliable design of the modern turbine blade. In addition, measurements should be further extended to study the effect of such complex flow on the boundary-layer flow characteristics and heat transfer phenomena over the turbine blade surfaces.
A theory for wake-induced transition over a rotor blade was proposed by Mayle and Dullenkopf [1990]. They introduced a correlation, evaluated from measured heat transfer coefficients with Emmons' transition model, to predict the intermittency distribution over the wake-induced transition region. According to their correlation, the onset of wake-induced boundary-layer flow transition occurs much closer to the leading edge of the turbine blade than non-wake-induced boundary-layer flow transition, because of an increase in the production of turbulent spots by the upstream wake.
Due to the natural complexity of the flow field within the blade passage, most prior experimental studies have focused on measurements of heat transfer characteristics over the turbine blade surfaces for a system with either the injection flows or the upstream wake. On a circular cylinder, taken as a model of the leading edge of a turbine blade, Magari and LaGraff [1994] measured the stagnation heat transfer rate under the influence of an upstream wake. When the upstream wake was absent, the mass transfer distribution over a circular cylinder with injection flows ejected from one row of injection holes was studied by Karni and Goldstein [1990] and Chen et al. [1994b]. Chen et al. [1994a] employed the naphthalene sublimation technique to investigate turbine blade mass transfer under the influence of an upstream wake, but without the presence of film cooling. The mass transfer distributions near the endwalls of a turbine blade with turbulent approaching boundary-layer flow were investigated by Goldstein et al. [1995] with the same mass transfer technique.
The combined effects of blowing ratio and mainstream turbulence level on turbine blade heat transfer, and on turbine blade mass transfer for injection flows ejected from multiple rows of injection holes located on the leading edge of a turbine blade, were reported by Camci and Arts [1990] and Chen and Miao [1995], respectively. A series of studies on heat transfer rate and film cooling effectiveness addressing the combined effects of upstream wake and blowing ratio has been conducted by Ou and Han [1994], Ou et al. [1994], and Mehendale et al. [1994]. Their results indicate that the optimal blowing ratio for minimum blade surface heat load is 1.2 and 0.8 for CO2 and air injectants, respectively. However, their technique was not able to generate a spanwise distribution of the local heat transfer rate.
Some studies (Dunn et al. [1989, 1994]; Abhari and Epstein [1994]) were conducted in a short-duration turbine test facility rather than in a low-speed wind tunnel with a linear cascade, in order to simulate full engine conditions. Heat transfer data on an uncooled rotor blade coupled with an upstream cooled nozzle guide vane were reported by Dunn et al. [1989]. Dunn et al. [1994] used the same experimental technique and equipment in a two-stage uncooled turbine to validate predictions based on a quasi-three-dimensional numerical code that they developed. Abhari and Epstein [1994] performed their investigation on both fully cooled and uncooled rotating transonic turbine stages. Their measured results indicate that the heat transfer rate over the suction surface is reduced by as much as 60 percent for the cooled blade.
Despite the numerous experimental studies on turbine blade heat or mass transfer under the influence of an upstream passing wake and film cooling, few have addressed the spanwise variations of local heat or mass transfer in the downstream region of each injection hole. It can be argued that these variations are large. In addition, the mass transfer over a first-stage rotor blade is expected to be time-dependent and periodic, because over time different regions of the turbine blade are immersed in the upstream passing wake flow, which originates from the trailing edge of the upstream nozzle guide vane. Accordingly, the turbine blade mass transfer distribution varies with the trace of the upstream passing wake. Of interest here are the combined effects of the upstream wake and the injection flows on local mass transfer over the blade surface. The variation of turbine blade mass transfer with respect to the upstream wake generation location is also explored in the present study. Both spanwise-averaged and local mass transfer results are reported and discussed in detail. Experiments are conducted with the popular naphthalene sublimation technique.
DESCRIPTION OF EXPERIMENT
The present experimental apparatus consists of a wind tunnel, an injection flow system, and a test section with a scaled-up blade linear cascade. A schematic view of the experimental apparatus is shown in Figure 1. An automated data acquisition system was implemented to measure the surface profiles of the test blade, which was originally coated with a layer of 1.5 mm solid naphthalene. The local naphthalene sublimation rate can be extracted from the surface profiles measured before and after installation of the test blade in the wind tunnel. The local mass transfer rate can then be evaluated from the measured naphthalene sublimation rate at the corresponding position.
The mainstream velocity can be easily varied by changing the rotating speed of the blower. The test section with the linear cascade, connected to the exit of a convergent nozzle, has a cross-section of 30 cm in width and 45 cm in height. At the end of the test section, the linear cascade holds six scaled-up blades. The test blade is the fourth blade counted from the top of the linear cascade. The mainstream approaching velocity is measured by a pitot tube at a position 100 mm upstream from the test blade leading edge. In addition, a slot is located 60 mm downstream from the cascade, in which a pitot tube is installed to measure the exit velocity of the air flow.
A coating process was adopted for the preparation of a smooth exposed naphthalene surface with a span of 150 mm and a thickness of 1.5 mm on the test blade. A T-type thermocouple was embedded in the naphthalene surface in order to evaluate the diffusion coefficient of naphthalene into air. A schematic view of the leading edge region of the test blade is shown in Figure 2. The geometric data of the linear cascade are listed in Table I. In Figure 2, Ss and Sp denote the curvilinear coordinates along the suction and pressure sides, respectively. The centerlines of the three rows of injection holes are located on the pressure side (row PS), at Ss/Ssf = 0.000 (row SL), and at Ss/Ssf = 0.056 (row SS). The curvilinear distances at the trailing edge along the suction and pressure sides, denoted respectively by Ssf and Spf, are equal to 188 mm (1.236 C) and 154 mm (1.012 C). A staggered row arrangement is used in the multiple rows of injection holes. In each injection row, there are seven equally spaced injection holes, with a hole diameter of 3.33 mm and a pitch of 10 mm. The injection angles to the test blade surface in the streamwise and spanwise directions are 90 degrees and 30 degrees, respectively. Because the test blade width (176 mm) is less than that of the test section (300 mm), the test blade with smooth naphthalene surfaces was first assembled to a supporting metal blade and then carefully installed in the linear cascade.
To simulate an upstream wake, a circular bar was installed at four separate locations upstream of the blade cascade, marked as A, B, C, and D in Figure 3. The circular bar has a diameter of 4.5 mm, which is the same as the diameter of the trailing edge of the test blade. For the mass transfer results reported below, the tests denoted as wake A, B, C, and D were conducted with the upstream wake generated at the corresponding locations. Between the cascade and the circular bar, 4 cm upstream from the cascade, an instrumentation hole was located for inserting a hot-wire probe to measure the turbulence fluctuations between two adjacent blade passages. The turbulence level was measured by an IFA-100 constant temperature anemometer with a TSI model 1210-T1.5 hot-wire probe. In addition, the mainstream turbulence level was measured by the same hot-wire probe at a position 10 cm upstream from the leading edge of the test blade for the case without the wake-generating circular bar.
The injection air flows were supplied by a screw-type compressor. Since the injection flows may contain excessive water vapor after being compressed, a low-temperature condenser was installed at the exit of the compressor to eliminate water vapor in the injection flows. Downstream of the condenser, the injection flows were then heated to ambient temperature by a dryer. Further downstream, two air filters were installed in the flow path to filter out droplets larger than 6 μm and μm, respectively. The volumetric flow rate of the injection flows was measured by a calibrated flange-type orifice. Based on the measured volumetric flow rate and the air density upstream of the orifice, the mass flow rate of the injection flows was evaluated. Accordingly, the mass flux through the multiple rows of injection holes can be used to determine the blowing ratio, M, defined as

M = ρ₂U₂ / (ρ∞U∞)    (1)

where ρ₂U₂ denotes the mass flux of the injection flows, and ρ∞ and U∞ are the approaching mainstream air density and velocity, respectively.
In the naphthalene sublimation tests, the temperature difference between the injection flows and the mainstream should be kept at less than 0.2°C. To achieve this temperature requirement, heated strips were wrapped around the upstream duct that was connected to the injection flow measurement orifice during the mass transfer run.
An automated data acquisition system was used to determine the sublimation depth of the coated naphthalene surface on the test blade after exposure to the mainstream in the wind tunnel. As shown in Figure 4, the automated data acquisition system consists of a four-axis positioning table coupled with four stepping motors, a linear variable differential transformer (LVDT) probe, four motor drivers, a gauge meter, a multimeter, and a Macintosh IIx computer. Details of the operating procedure and components of the data acquisition system can be found in Chen et al. [1994a,b]. The measured sublimation depth can be substituted into the following equation for determining the mass transfer coefficient:

hm = ρs Ls / (ρv,w Δt)    (2)

where Ls, ρs, ρv,w, and Δt denote the naphthalene sublimation depth, the solid naphthalene density, the naphthalene vapor density on the test blade surface, and the time duration of the exposure of the test blade surface to the mainstream, respectively.
FIGURE 4 The automated data acquisition system.
Generally, the local mass transfer rate can be expressed as the local Sherwood number, defined as

Sh = hm C / Df    (3)

where C is the chord length of the test blade and Df is the diffusion coefficient of naphthalene into air. A correlation that relates the diffusion coefficient of naphthalene into air to temperature as well as pressure is given by Cho et al. [1994].
In all mass transfer runs, the blowing ratio is M = 0.8, the mainstream turbulence level is Tu = 0.4%, and the exit Reynolds number is kept at Re₂ = 397,000. For each run, the sublimation depth of the test blade naphthalene surface is measured at 74 streamwise locations along the blade surface: 39 on the suction side and 35 on the pressure side. At each streamwise location, the measured region spans from Z = 61 mm to Z = 97 mm with a spacing of 1.5 mm. The position Z = 0 corresponds to the endwall. The local mass transfer results within the region whose span covers three neighboring injection holes, from Z = 61 mm to Z = 91 mm, are used to evaluate the spanwise-averaged Sherwood number (Sh).
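As a concrete illustration of Eqs. (1)-(3), the short sketch below evaluates the blowing ratio, the sublimation-based mass transfer coefficient, and the local Sherwood number; all numerical inputs are hypothetical placeholders rather than measured values from this study.

```python
# Illustration of Eqs. (1)-(3): blowing ratio, mass transfer coefficient from
# naphthalene sublimation depth, and local Sherwood number.
# All input values below are hypothetical placeholders, not data from the paper.

def blowing_ratio(rho2, u2, rho_inf, u_inf):
    """Eq. (1): M = (rho2*U2) / (rho_inf*U_inf)."""
    return (rho2 * u2) / (rho_inf * u_inf)

def mass_transfer_coefficient(rho_solid, depth, rho_vapor_wall, dt):
    """Eq. (2): hm = rho_s * Ls / (rho_v,w * dt)."""
    return rho_solid * depth / (rho_vapor_wall * dt)

def sherwood(h_m, chord, d_f):
    """Eq. (3): Sh = hm * C / Df."""
    return h_m * chord / d_f

# Hypothetical inputs (SI units):
M = blowing_ratio(rho2=1.2, u2=12.0, rho_inf=1.2, u_inf=15.0)
h_m = mass_transfer_coefficient(rho_solid=1145.0,       # solid naphthalene, kg/m^3
                                depth=50e-6,            # sublimation depth, m
                                rho_vapor_wall=3.0e-4,  # wall vapor density, kg/m^3
                                dt=3600.0)              # exposure time, s
Sh = sherwood(h_m, chord=0.152, d_f=7.0e-6)             # chord in m, Df in m^2/s
print(f"M = {M:.2f}, h_m = {h_m:.4f} m/s, Sh = {Sh:.0f}")
```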
An uncertainty analysis based on the Kline and McClintock [1953] method at a confidence level of 95% yields uncertainties for M, Re₂, and Sh of 5%, 0.5%, and 6%, respectively.
RESULTS AND DISCUSSION
Pitchwise Turbulence Distribution in the Wake
Figure 5 shows the turbulence level along the pitchwise direction measured at a location 4 cm upstream from the linear cascade under the upstream wake conditions. The measured results indicate that the highest turbulence level in the wake flow is as large as 15%. The fact that the turbulence distribution for the case of wake A is much higher than that for the case of wake B suggests that the turbulence level in the wake flow falls sharply as the wake flow propagates downstream. Note that the upstream wake for case B originates only 10 cm upstream from that of case A. In all cases, a symmetric distribution in the turbulence level can be observed around the centerline location of the circular bar (Y/P = 0 for the cases of wake A and B).
FIGURE 5 The turbulence level along the pitchwise direction at 4 cm upstream of the linear cascade under the upstream wake conditions.
FIGURE 6 Influence of wake on the spanwise-averaged Sh distributions over both surfaces of the test blade for Tu = 0.4% and Re₂ = 397,000.
Spanwise-averaged Mass Transfer Results
Figure 6 compares the present measured mass transfer results with those of prior studies on test blades with the same geometric profile, without the presence of shower-head film cooling or an upstream wake. The blow-up of the mass transfer distributions over the leading edge region between Sp/Spf = −0.2 and Ss/Ssf = 0.2 is shown in Figure 7. On the horizontal axis, three arrows denote the centerline locations of the injection rows SL, PS, and SS. The three solid lines without symbols are the measured results given by Chen et al. [1994a] and Chen and Miao [1995]. The runs denoted with M = 0 were conducted with no injection holes. The runs denoted with "no wake" were conducted without the presence of an upstream wake. Detailed discussion of the cause of the dramatic variation in the streamwise distributions of turbine blade mass transfer documented in Figures 6 and 7 can be found in the aforementioned two papers. Apparently, the injection flows have a more significant effect than the upstream wake on the turbine blade mass transfer. An upstream wake combined with the injection flows actually further enhances the turbine blade mass transfer, especially on the suction side.
In the region just downstream of the front stagnation line, a recirculating flow zone is expected to occur between injection rows SL and SS when the mainstream directly encounters the injection flows ejected from injection row SL. The injection flows act as solid flow blocks, which cause such a recirculating flow zone on the blade surface. Figure 7 shows a rise in the spanwise-averaged mass transfer rate for the cases of wake A-D (Ss/Ssf = 0.033) that is probably due to reattachment following separation of the mainstream. On the suction surface, the maximum mass transfer is caused by the injection flows from injection row SS. A sharp fall following the highest peak is probably associated with the growth of the laminar boundary-layer flow. The minimum value of Sh occurs at the transition from laminar boundary-layer flow to turbulent flow. Beyond this transition point, there is a rise in the mass transfer distribution. After the boundary-layer flow becomes fully turbulent, a slow decline in mass transfer is observed. The large variation in curvature near the trailing edge of the blade results in the formation of a trailing-edge wake (so named to distinguish it from the upstream wake). This in turn leads to the extremely sharp variation in the spanwise-averaged mass transfer rate observed in Figures 6 and 7. At the separation of the turbulent boundary-layer flow, the mass transfer rate reaches a local minimum. However, vortex shedding in the wake flow from the trailing edge of the blade significantly increases the mass transfer rate, so the mass transfer rate again reaches a peak whose value approaches that near the leading edge.
On the pressure surface, there are two peaks in the spanwise-averaged mass transfer distribution near the leading edge, as shown in Figure 7. These two peaks result from the injection flows from the rows of injection holes SL and PS. Downstream of injection row PS, the boundary-layer flow becomes fully turbulent. Consequently, the mass transfer rate monotonically decreases to a minimum at a streamwise position near Sp/Spf = −0.7, as illustrated in Figure 6. Beyond this point, a slow increase in the mass transfer is observed, probably due to the acceleration of the boundary-layer flow in this region. Near the trailing edge, the separation of the boundary-layer flow and the vortex shedding in the wake flow result in a sharp variation in the streamwise distribution of Sh.
Despite variations in the upstream wake generation location, the difference in mass transfer results among all four cases is insignificant except in the region just downstream of the injection holes, as observed in Figure 7. It is worth noting that although the flow characteristics differ between the cases of wake C and D, the mass transfer distributions are similar. For example, in the case of wake C, the wake flow travels downstream with the mainstream and then enters the flow passage between the suction surface of the test blade and the pressure surface of its neighboring blade. On the other hand, in the case of wake D, the wake flow combines with the mainstream and moves into the flow passage confined by the pressure side of the test blade and the suction side of its neighboring blade. One would expect that the case of wake C would affect the turbine blade mass transfer more on the suction surface of the test blade but less on the pressure surface. However, this phenomenon does not occur when the injection flows are ejected from the leading edge region of the
blade. This is because the flow field in the blade passage is severely disturbed by the presence of the injection flows; therefore, the promotion of the turbulence level by the upstream wake C becomes insignificant.
As shown in both Figures 6 and 7, the mass transfer rate for the case of wake A is the highest among all four cases, probably because the upstream wake turbulence level experienced at the leading edge region of the test blade is the highest among the five cases. A prior study (Chen et al. [1994a]) has already revealed that the position of the upstream wake significantly affects the mass transfer distributions on both surfaces of the test blade without film cooling. However, the present study shows that the mass transfer distributions are not strongly dependent on the position used to generate the upstream wake if the injection flows are ejected in the leading edge region, as indicated by Figure 7.
Spanwise Distributions of Local Mass Transfer Rates
Near the leading edge of the turbine blade, the spanwise distributions of Sh are expected to be periodic as a result of the injection flows through the multiple rows of injection holes. The measured results are presented to show the significance of the spanwise variation of the local mass transfer rate and the effect of the upstream wake generation location on the local mass transfer rate. At Re₂ = 397,000, M = 0.8, and Tu = 0.4%, Figures 8(a)-(f) show the spanwise distribution of the local Sherwood number on the suction surface of the blade at different curvilinear distances for all four runs (A-D). On the pressure side, local mass transfer results are presented in Figures 9(a)-(e). In the present study, injection flows are ejected towards the endwall, where Z = 0. The groups of arrows on the horizontal axis in these figures indicate the centerline and boundaries of each row of injection holes.
On the suction surface, the variation in the spanwise mass transfer distributions is notable and periodic, particularly in the region near the leading edge. The wavelength of the periodic spanwise mass transfer distribution corresponds to the pitch of the injection holes. At Ss/Ssf = 0.033, the turbulence level in the wake encountered by the leading edge results in a difference in the measured mass transfer rate among the four cases. Downstream of injection row SS, twin peaks in the local mass transfer at Ss/Ssf = 0.102 appear at locations between two neighboring injection holes. This twin-peak appearance is probably due to the formation of a pair of counter-rotating vortices. Previous observation of these two vortices was reported by Chen and Miao [1995] and Kurse [1985]. As the injection jets move downstream, these two vortices quickly merge into a single vortex, as indicated in Figure 8(d) by a single peak in the local mass transfer rate at Ss/Ssf = 0.179 between two neighboring holes. One can also notice that the peak in local mass transfer does not lie on the centerline of the injection holes. This is probably because the injection flows are ejected at 30 degrees toward the endwall; thus, as the injection flows travel downstream, the peak shifts towards the endwall (Figures 8(d)-(e)). Additionally, the spanwise distribution of Sh is already quite uniform for the runs of both wake A and B, but it still sustains a distinctly periodic behavior for the other two cases at Ss/Ssf = 0.534, as shown in Fig. 8(e).
The periodic behavior in local mass transfer indicates that the mixing between the injection jets and the boundary-layer flow on the suction surface is not yet complete. Figure 8(e) clearly illustrates the difference in the local mass transfer rate among the various tests; however, Figure 6 shows almost the same spanwise-averaged value of Sh. At the location Ss/Ssf = 0.823, shown in Fig. 8(f), the uniform distribution of the local mass transfer rate for all runs indicates that the fully turbulent boundary-layer flow is completely mixed with the injection flows and has become two-dimensional in space.
On the pressure surface, the local mass transfer rate at Sp/Spf = −0.071, which is the location just upstream from injection row PS, is unexpectedly uniform (Fig. 9(a)). This may be due to the formation of recirculating flow between injection rows SL and PS. Again, twin peaks are observed in the local mass transfer distribution at Sp/Spf = −0.102, probably as a result of the counter-rotating vortices in the injection flows, as shown in Fig. 9(b). Downstream of injection row PS, the variation in the local mass transfer distribution is periodic and significant. The highest value of the local mass transfer rate can be four times as high as the minimum value of Sh. However, the spanwise variation in the local mass transfer reduces as the injection flows propagate downstream (Figures 9(b)-(e)). The twin-peak characteristic in the local mass transfer rate still remains even at the location Sp/Spf = −0.613 for both wake C and D, as shown in Fig. 9(e), whereas a uniform distribution of Sh already appears for both wake A and B. This indicates a better mixing of the injection jets with the boundary-layer flow on the pressure surface for the cases of wake A and B than for the cases of wake C and D.
FIGURE 9 The spanwise distributions of Sh over the pressure surface of the test blade at different distances for cases of wake A to wake D: (a) Sp/Spf = −0.071, (b) Sp/Spf = −0.102, (c) Sp/Spf = −0.162, (d) Sp/Spf = −0.221, and (e) Sp/Spf = −0.613.
CONCLUSIONS
The locations with uniform spanwise distribution in local mass transfer are closer to the leading edge for wake A and B than for wake C and D. This indicates that the injection flows ejected from the injection holes mix more quickly with the boundary-layer flow on the blade surface for the cases of wake A and B.
FIGURE 1 A schematic view of the experimental apparatus.
FIGURE 3 A schematic view of locations for installing the wake-generating cylinder.
FIGURE 7 The blow-up of Sh distributions over the leading edge of the test blade.
2019-01-08T03:26:59.105Z
1996-01-01T00:00:00.000
{ "year": 1996, "sha1": "e3d190c06137d26889e8f6f8a85ce76a2cbc47c3", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijrm/1996/653674.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e3d190c06137d26889e8f6f8a85ce76a2cbc47c3", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
268738154
pes2o/s2orc
v3-fos-license
Successful treatment of a unique case of solitary primary iliopsoas abscess caused by Streptococcus dysgalactiae subspecies equisimilis: A case report
Rationale: Iliopsoas abscess, mainly caused by Staphylococcus aureus, occurs via the bloodstream or by spread from adjacent infected organs. Although a few cases of primary iliopsoas abscess caused by Streptococcus dysgalactiae subspecies equisimilis (SDSE) with accompanying disseminated foci have been reported to date, there has been no case report of solitary primary iliopsoas abscess caused by SDSE.
Patient concerns: An 85-year-old Japanese woman presented with worsening right hip pain and fever after an exercise. Hip computed tomography revealed a right iliopsoas abscess (iliac fossa abscess), and intravenous cefazolin was started as treatment, based on the creatinine clearance level on admission.
Diagnoses: Blood cultures were positive for β-hemolytic Lancefield group G gram-positive cocci arranged in long chains, which were identified as SDSE by matrix-assisted laser desorption/ionization. No other disseminated foci were found on whole-body computed tomography and transthoracic echocardiography. The patient was diagnosed with an SDSE solitary iliopsoas abscess.
Interventions: The antimicrobial was appropriately switched to intravenous ampicillin on day 2, with the dosage adjusted to 2 g every 6 hours based on the preadmission creatinine clearance, followed by oral amoxicillin (1500 mg, daily).
Outcomes: The abscess disappeared without drainage on day 39, and the patient remained disease-free without recurrence or sequelae during a 6-month follow-up period.
Lessons: SDSE can cause a solitary primary iliopsoas abscess, which can be successfully treated with an appropriate dose of antimicrobials without draining the abscess.
Introduction
Iliopsoas abscess is a collection of pus in the iliopsoas compartment and is classified as primary iliopsoas abscess, mainly caused by hematogenous seeding from a distant site, or secondary iliopsoas abscess, which occurs due to underlying diseases, including adjacent vertebral osteomyelitis. [1] Staphylococcus aureus is the most common pathogen, isolated from over 88% of patients with primary iliopsoas abscess; in contrast, Streptococcus dysgalactiae subspecies equisimilis (SDSE) has rarely been isolated from patients with primary iliopsoas abscess. [1] Moreover, SDSE, belonging to Lancefield groups C and G, can cause skin and soft-tissue infections, intra-abdominal and epidural abscesses, and infective endocarditis in immunocompromised patients, such as those with diabetes mellitus. [2] Previous cases of iliopsoas abscess caused by SDSE have been accompanied by other disseminated foci. [3,4] To the best of our knowledge, there has been no case report of solitary primary iliopsoas abscess caused by SDSE; therefore, the pathogenesis and appropriate therapeutic strategy for such patients remain uncertain. Here, we describe an immunocompetent patient with a solitary iliopsoas abscess caused by SDSE infection and the treatment regimen administered to the patient.
YF and HT contributed equally to this work. Written informed consent was obtained from the patient and her daughter for publication of this case report and the accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. The authors have no funding and conflicts of interest to disclose.
Case
An 85-year-old Japanese woman was admitted to our hospital because of aggravating right hip pain and fever that began after performing an exercise of lifting both legs while lying on her back 2 days earlier. Her height and weight at the first visit were 153 cm and 66.4 kg, respectively (body mass index: 28.4 kg/m²). She had undergone bilateral total hip arthroplasty for hip osteoarthritis 20 years earlier and had a history of mitral regurgitation, hypertension, atrial fibrillation, and chronic kidney disease with a creatinine clearance (CrCL) of 22.9 mL/min. She took apixaban (5 mg, daily) as her regular medication for atrial fibrillation.
On admission, she was alert and her vital signs were as follows: body temperature, 38.0°C; blood pressure, 105/45 mm Hg; heart rate, 76 beats/min; respiratory rate, 16 breaths/min; and oxygen saturation, 90% on ambient air. Physical examination showed that passive extension of the right hip joint caused pain, and she had bilateral leg edema without an apparent wound. Laboratory findings showed an elevated white blood cell (WBC) count with a left shift of neutrophils, a high C-reactive protein (CRP) level, a high erythrocyte sedimentation rate (ESR), normocytic anemia, and renal dysfunction (Table 1). Hip computed tomography (CT) revealed a right iliopsoas abscess (iliac fossa abscess) measuring 1.9 cm × 6.0 cm × 5.2 cm without disseminated foci at other sites (Fig. 1A and B). Transthoracic echocardiography showed no evidence of infective endocarditis.
The patient was administered intravenous cefazolin (0.5 g every 12 hours), as methicillin-susceptible S aureus was initially suspected as the causative agent of the iliopsoas abscess. Two blood cultures were obtained, and after incubating the cultures for 11 hours, gram-positive cocci arranged in long chains were detected using the BacT/Alert system (bioMérieux, Marcy l'Étoile, France) (Fig. 2). The isolates were grown on 5% sheep blood agar (Nihon Becton Dickinson, Tokyo, Japan) and showed β-hemolysis, reacting with Lancefield group G antiserum (Kanto Chemical Co., Inc., Tokyo, Japan). The pathogen was identified as S dysgalactiae with a high score value (>2.0) using matrix-assisted laser desorption/ionization (MALDI Biotyper; Bruker Daltonik GmbH, Bremen, Germany).
The patient was diagnosed with a solitary iliopsoas abscess caused by SDSE. The isolates were found to be susceptible to penicillin using the VITEK 2 system (bioMérieux) (Table 2). The antimicrobial treatment was appropriately switched to intravenous ampicillin (2 g every 8 hours) on day 2, when the acute kidney injury (AKI) had recovered to 22.8 mL/min, as measured using the Cockcroft-Gault formula. The patient became apyrexial on day 3, with gradual relief of the right leg pain. Two sets of follow-up blood cultures on day 4 were negative for SDSE. Drainage of the abscess was not performed, considering the risk of hematoma due to apixaban. A follow-up CT on day 16 showed a shrinking abscess (Fig. 1C and D). The abscess disappeared without drainage on day 39, after intravenous administration of ampicillin for 10 days followed by oral amoxicillin (1500 mg, daily) for 4 weeks. The patient remained disease-free without recurrence or sequelae during a 6-month follow-up period.
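Because the dosing decisions in this case hinge on creatinine clearance estimated with the Cockcroft-Gault formula, a minimal sketch of that calculation follows; the serum creatinine input is an assumed placeholder, and the snippet is illustrative only, not a clinical dosing tool.

```python
# Cockcroft-Gault creatinine clearance (CrCL) estimate, as referenced above:
# CrCL [mL/min] = ((140 - age) * weight_kg) / (72 * serum_creatinine_mg_dL),
# multiplied by 0.85 for female patients. Illustrative only.

def cockcroft_gault(age_years: float, weight_kg: float,
                    scr_mg_dl: float, female: bool) -> float:
    crcl = ((140 - age_years) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical example loosely mirroring the case (85-year-old, 66.4 kg woman);
# the serum creatinine of 1.9 mg/dL is an assumed placeholder, not a reported value.
print(f"CrCL ≈ {cockcroft_gault(85, 66.4, 1.9, female=True):.1f} mL/min")
```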
Discussion
This report illustrates two clinical themes: the potential of SDSE to cause a solitary iliopsoas abscess, and the appropriate antimicrobial strategy to treat such patients with or without draining the iliopsoas abscess. SDSE, typically β-hemolytic, may express Lancefield group C or G antigens. [2] The mortality rate of SDSE bacteremia has been reported to be 15% to 18%, [5] whereas that of S aureus bacteremia is 34%. [6] Because the iliopsoas muscle has an enriched blood supply, a bloodstream infection can easily affect this muscle. [1] Primary iliopsoas abscess has been reported to be caused mainly by S aureus (88%); in contrast, Streptococcus species have rarely been reported to cause primary iliopsoas abscess (4.8%). [7]
A PubMed search using the Medical Subject Headings "Psoas abscess" and "Streptococcus dysgalactiae" with the Boolean operator "AND" found only 2 articles in English, each describing a case of iliopsoas abscess caused by S dysgalactiae (Table 3). [3,4] We excluded case reports regarding infections caused by group C or G streptococci, as they may include not only S dysgalactiae but also other streptococci, such as S anginosus. In both of the iliopsoas abscess cases we found, [3,4] blood cultures were positive for SDSE, and there were other sites of SDSE infection in addition to the SDSE iliopsoas abscess. A retrospective, multicenter surveillance study in Japan reported 6 cases of iliopsoas abscess with SDSE bacteremia. [8] All cases included other sites of infection: 2 cases with vertebral osteomyelitis, and 1 case each with the following co-infections: cellulitis, septic arthritis, vertebral osteomyelitis with pyogenic lymphadenitis, and vertebral osteomyelitis with urinary tract infection. To the best of our knowledge, the present study is the first case report of a solitary iliopsoas abscess caused by SDSE. Thus, SDSE should be considered a causative microorganism even in cases of solitary iliopsoas abscess. Iliopsoas abscesses are classified as primary, caused by bacteremia, or secondary, caused by bacterial spread from an infected organ. [1] The patient was diagnosed with a primary iliopsoas abscess because her 2 sets of blood cultures were positive for SDSE and the CT scan did not find any infected organ.
The most common causative microorganism for a primary iliopsoas abscess is S aureus, whereas pathogens responsible for a secondary iliopsoas abscess are more commonly from the gastrointestinal and genitourinary tracts, such as Escherichia coli, Klebsiella species, and Streptococcus species. [9] The mortality rate of patients with gas-forming iliopsoas abscesses, in which gram-negative rods are predominant, is higher than that of patients with non-gas-forming iliopsoas abscesses, in which gram-positive cocci are predominant. Additionally, advanced age, WBC count, platelet count, blood urea nitrogen, creatinine, potassium, and secondary iliopsoas abscesses have been reported to be associated with the mortality rate of patients with iliopsoas abscesses. [10] Tabrizian et al [11] reported that patients with bacteremia and a small abscess (<3.5 cm) responded well to antimicrobial therapy alone. In gas-forming or non-gas-forming cases without a contraindication for surgical intervention, surgical intervention, including percutaneous drainage or operation, remains the first choice for treating iliopsoas abscesses. [10]
In the previously reported cases where S dysgalactiae was the causative agent of iliopsoas abscess, [3,4] the patients underwent drainage or aspiration in addition to antimicrobial administration (Table 3). However, considering the contraindication to CT-guided drainage and the results of the follow-up blood cultures, our patient was treated with intensive intravenous ampicillin for 10 days, followed by 4 weeks of oral amoxicillin administration. As a result, the abscess disappeared without drainage on day 39. This suggests that large solitary iliopsoas abscesses caused by SDSE bacteremia can be treated with antimicrobial therapy alone, without needing to drain the abscess. The high blood supply to the iliopsoas muscle may have contributed to our case's excellent response to antimicrobial therapy. Generally, inflammation markers in patients with myalgia and fever should be evaluated, [12] as inflammation markers such as WBC, CRP, and ESR are often elevated in patients with iliopsoas abscesses. [1] The duration of intravenous antimicrobial administration is generally 2 weeks, followed by oral antimicrobials for 4 to 6 weeks, depending on inflammatory marker levels, clinical improvement, and follow-up CT results. [13] Our patient was treated with intravenous ampicillin for 10 days and oral amoxicillin for 4 weeks, guided by WBC count, CRP, ESR, clinical improvement, and follow-up CT findings, without drainage of the abscess.
The antimicrobial dosage adjustment based on the creatinine clearance prior to admission may have contributed to the success of the treatment. One-fifth of patients with acute infections develop AKI on admission; however, more than 50% of patients with AKI have a resolution of renal injury by 48 hours. [14] Therefore, during that time, the dose of beta-lactam drugs, which have a wide safety margin, should not be reduced. [14] Antimicrobials with a wide safety margin, such as β-lactams and β-lactam/β-lactamase inhibitor combinations, allow dose adjustments to be deferred until 48 hours after initiation, when the patient's renal function is better characterized. [14] In addition, appropriate antimicrobial treatment for community-onset bacteremia within 48 hours is negatively correlated with the mortality rate. [15] Our patient was treated with cefazolin as empirical therapy, with the dosage designed based on the CrCL on admission. However, the antimicrobial was switched to ampicillin the following day, with the dosage adjusted based on the preadmission CrCL. In our case, the CrCL on day 2 improved from 11.4 to 22.8 mL/min; therefore, the patient was diagnosed with reversible AKI and did not require readjustment of the antimicrobial dosage. Clinicians should pay attention to information on preadmission CrCL for appropriate antimicrobial administration.
A limitation is that SDSE was not directly proven as the causative organism of the iliopsoas abscess because drainage carried a high risk of hematoma.
In conclusion, SDSE can cause a solitary iliopsoas abscess. Clinicians should remain aware that an abscess caused by SDSE bacteremia can be treated with an appropriate dose of antimicrobials and may not require drainage.
A = alive, CRP = C-reactive protein, F = female, M = male, WBC = white blood cell.
Figure 1.(A and B) Computed tomography (CT) on admission revealed a solitary right iliac fossa abscess sized 1.9 cm × 6.0 cm × 5.2 cm (red arrow and arrowhead) (A, sacroiliac level; B, lower sacral level).(C and D) A follow-up CT on day 16 showed a shrinking abscess (yellow arrow and arrowhead) (A, sacroiliac level; B, lower sacral level). Table 1 Laboratory data. Table 2 Antimicrobial susceptibility test results. MIC = minimum inhibitory concentration, R = resistant, S = susceptible.* Interpreted according to the Clinical and Laboratory Standards Institute criteria (Documents M100-Ed32).
The Use of Canva Application on Material of Advertising Text

INTRODUCTION
The orientation of education is the realization of a generation with character, in response to the moral decline of the younger generation. The process of character education is to give children direction so that they develop good character and become people of strong character (Lestari & Fathiyah, 2023; Rahmatullah, 2020). To improve character, character education needs to instill the right moral values in daily life and in various other aspects of life. The integration of character into the teaching and learning process shows the importance of character in raising a high-quality generation (Buchori & Setyawati, 2015; Rahmatullah, 2020). Indeed, the implementation of education raises a high-quality, characterful, and moral generation. Education can be a solution to the various life problems that students will face in the future. Therefore, education plays an important role in creating a generation whose mindset, attitudes, and actions are aligned with the nation's identity (Dewi & Ulfiah, 2021; Normina, 2017). Education is an inseparable part of social life and continually develops and changes along with the changes around it. The technological developments that arrive every year can drive development and change in the field of education (Başaran, 2013; Supriatna, 2013). In the implementation of education, there are components that influence one another, such as students, teachers, materials, learning media, evaluation, and classroom environment and conditions.

Based on the results of observations and interviews at SMP Negeri 3 Kota Tangerang Selatan, students' ability to understand Bahasa Indonesia lessons, especially advertising texts, is still relatively low. Students had difficulty understanding the information contained in advertising texts when relying only on the books available at school, and the learning media currently accessible offered limited support for their understanding of advertising messages. The students' test results on advertising text material show the same thing: the average score was 69.5 on the structure of the advertising text and 71.6 on its linguistic rules. These low scores were caused by the students' difficulty in understanding the structure and linguistic rules of advertising texts that do not effectively attract the audience's attention. A solution can be found in providing creative and innovative learning media to support the learning process. During teaching and learning activities, attractive media are needed so that students stay interested and do not become bored easily. An innovative teacher is needed as a facilitator who can help students develop their abilities and gain new knowledge and experiences during the learning process. In this case, learning that is designed attractively will result in effective learning (Arianti, 2019; Sapriyah, 2019). According to a previous study, one of the factors determining the quality of learning is an attractive, systematically designed learning design (Rahmatullah, 2020). Learning involves a system, a collection of interrelated elements working toward the desired results. As a system, learning consists of components such as targets, materials, students, teachers, methods, conditions, and assessments (Muhlisin & Aeni, 2019; Papadakis et al., 2020).
In each lesson, whether Bahasa Indonesia or another subject, a supporting tool is needed in the learning process, namely learning media (Huerta et al., 2018; Ratminingsih, 2016). Learning media are concrete objects used to convey information, such as subject matter, to students. They include all materials or tools that can be used in the teaching and learning process to help students understand and master lesson material more easily and effectively (Blumberg & Fisch, 2013; Hamalik, 2019). Through these tools, the message is delivered effectively from the teacher to the students. In this study, the Canva application is used to create visual learning media. Previous research has shown that using Canva as a learning tool can improve students' writing skills through the visualization of their work (Utami & Djamdjuri, 2021; Yundayani et al., 2019). This helps improve students' creativity in today's digital era. Creativity is one of the most important skills for students if they are to succeed in a future full of change and complexity. Canva is one application from the world of technology that can be used by anyone. Twenty-first-century teachers are required to be proficient in digital media as a necessary form of literacy (Junaedi, 2021; Wardani et al., 2022). The Canva app provides a variety of design tools such as presentation templates, social media graphics, infographics, posters, resumes, and more (Pelangi, 2020; Zettira et al., 2022). The templates also come with several other options, such as education, design, technology, advertising, business presentations, and more.

Canva has several advantages over other applications in the context of its use as a learning medium for advertising text material. Canva was designed with an easy-to-use interface, allowing users with no graphic design experience to create engaging content with ease. This is especially beneficial for teachers and students who want to create learning materials quickly and efficiently. Canva provides a variety of well-designed templates for various purposes, including templates for advertisement design. With these templates, users can easily choose a design that suits their needs and use it as a basis for creating learning materials. Canva offers a variety of visual elements, such as images, icons, shapes, and backgrounds, that can easily be added to a design (Amrina et al., 2022; Supradaka, 2022). It allows users to create eye-catching display advertisements and to combine visual elements to convey messages more effectively. Canva also provides collaboration features that allow users to work together on design projects. In a learning context, this allows teachers and students to create works in teams, share ideas, and give each other feedback. Canva also has a feature that allows users to instantly share their designs on social media platforms, making it easy for teachers and students to publicise their creative works and expand their potential audience. With this combination of features, Canva becomes an effective tool for producing interesting and creative learning materials in the form of advertising texts (Fauziyah et al., 2016; Wahyuni et al., 2023). According to researchers, the Canva app can serve as a suitable learning medium, or an interesting alternative, and can increase teacher creativity in teaching Bahasa Indonesia, especially advertising texts (Pelangi, 2020).
Through the Canva app, the message delivered to students will be more effective. A previous study relevant to this research showed that the use of Canva learning media has a positive impact on students' engagement in learning (Lastari & Silvana, 2020). Other research findings show that the use of Canva can increase students' motivation to learn, activate students' enthusiasm and creativity, and reduce students' discomfort with the material delivered by the teacher (Purba & Harahap, 2022). Another study found that animated video media based on the Canva app, covering force and motion material, can increase motivation and achievement and is suitable for use in the learning process (Hapsari & Zulherman, 2021). From this research it can be concluded that the use of the Canva app as a learning medium can optimize student learning outcomes and can be an effective alternative learning medium. The aim of this research is to help teachers and students explore advertising text material and understand it more easily through the visual representations that can be built with the Canva application. This study helps students create better advertising texts through the templates and features the Canva application provides. Moreover, it gives students space to explore and show their creativity in understanding and presenting advertising text material. This research is expected to optimise the quality of learning on advertising text material and to ensure that the Canva application is used optimally as an effective learning medium.

METHODS
This research used a descriptive method with a qualitative approach. The data were qualitative, in the form of descriptions or words about the facts or phenomena being observed. The descriptive method is a method for researching a human group, an object, a condition, a system of thought, or a present-day event. Descriptive qualitative methods were used to develop theory built on data obtained from the field or research site; in this case, the researcher used descriptive data to build a better and more detailed account of the object under study. Qualitative research methods refer to a research approach based on the philosophy of postpositivism (Sugiyono, 2016). This method was used to study natural conditions with the researcher as the main instrument. Research subjects are the individuals or groups sampled in a study who provide information related to the research topic. In this study, the subjects were 12 students of class VIII at SMP Negeri 3 in South Tangerang City; the class was divided into three groups, so the number of participants was 12. To collect data, the researcher observed the learning process at SMP Negeri 3 in South Tangerang City, then interviewed the students directly to obtain information about advertisement text learning and the learning process so far. The researcher also used questionnaires given to students; the evaluation was conducted online through Google Forms and disseminated to class VIII students. The evaluation results showed several factors that hinder Indonesian language learning, especially on advertising texts, one of which was students' difficulty absorbing information and understanding advertising text lessons, resulting in low student understanding.
After conducting observations, interviews, and questionnaire distribution, the data were processed by reduction, so that only the important parts and related variables were retained. Reducing data means describing information concisely, focusing on important aspects, and looking for important patterns. In this way, the reduced data provide a clearer picture and make the data collection process easier for researchers. Data reduction was done by selecting data from the observations, interviews, and questionnaires and focusing all raw data so that it carried stronger meaning. Once reduced, the data were presented qualitatively in the form of descriptions from which conclusions could be drawn. The researcher used narrative text, short descriptions, and graphs to present the data.

RESULTS
This research discusses two things: the implementation of advertising text learning using the Canva application in class VIII at SMPN 3 Kota Tangerang Selatan, and the response of class VIII students at SMPN 3 Kota Tangerang Selatan to the use of the Canva app for advertising text learning. The results and discussion are presented in descriptive form, including a summary, graphs, and data description. The first stage consisted of observation, interviews, and a test at SMP Negeri 3 Kota Tangerang Selatan. The researchers found that students' understanding in learning Bahasa Indonesia, especially advertising texts, was still relatively low. The students had difficulty understanding and absorbing the information contained in advertising texts using only the books available at school, given the limited learning media currently available. These problems in the teaching and learning of advertising texts require the right solution: creative and innovative learning media to support the learning process. In teaching and learning activities, interesting media are needed to keep students engaged and not easily bored, and an innovative teacher is needed as a facilitator who can help students develop their abilities and gain new knowledge and experience. Learning designed in an interesting way creates effective learning.

In this study, researchers used the Canva app, taking advantage of several interesting features that can be applied to Bahasa Indonesia materials, including advertising texts, slogans, and posters. The Canva app serves several functions in learning: 1) stimulation, increasing interest or building a sense of attraction to the lesson so that students want to understand and explore it; 2) acting as a mediator or liaison between teachers and students; and 3) making it easier for teachers to display material during the learning process. If these functions are matched to the learning material, they are sufficient to help the teacher achieve learning success. The following stage was the application of the Canva app to learning Bahasa Indonesia advertisement texts in class VIII. In this stage, the teacher explained each advertising text structure and its language elements using an example of an advertising text created with the Canva app. In addition to explaining the advertising text, the teacher also provided information about each feature in the Canva app and how to use it.
In its use, the learning theme of advertising texts, slogans, and posters in the Canva application allows teachers to use the app as a presentation medium for students and to display interesting examples of advertising texts, slogans, and posters. The designs in the Canva app make it easy for users to create work as needed, because the app offers many interesting, free design options, and the placement of icons in a design can be arranged according to the user's taste. The Canva app can be described as follows: 1) Display: when opening the Canva application or website, there are many choices of formats to use, such as presentations, social media graphics, infographics, posters, resumes, office documents, CVs, and more. 2) Design: after choosing one of the formats, for example presentations, there are various further choices such as creative presentations, business presentations, speaking presentations, and more, including options for studying. 3) Icons and shapes: when creating or editing a design, there are many icons that can be adjusted to the user's needs, for example squares, circles, and more. 4) Photos: a large selection of free photos is provided in the Canva app to make designs more attractive and help deliver information. 5) Fonts: the Canva app has many typefaces that can be customized to user needs, ranging from informal to formal styles. In practice, teachers can use the "Presentation" template to create a medium for presenting the material; the template is easy to find because the various choices appear on the main display of the Canva app. Furthermore, if the teacher wants to create examples, they can use the "Resume" template to create examples of interesting advertising texts, slogans, and posters. Finally, if the teacher wants exercises or assignments that look interesting rather than monotonous, the templates in the Canva app can be used for these as well.

The use of the Canva app in learning advertising texts, slogans, and posters in class VIII at SMP Negeri 3 Tangerang Selatan began by selecting the "Marketing" template and then choosing brochures or posters, after which the teacher chose the design to be used for creating advertising texts, slogans, or posters. In the core activity, as described above, the teacher explained each advertising text structure and its language elements with an example created in Canva and introduced each feature of the app. During the lesson, the teacher directly showed the Canva app together with an example of an advertisement design made with it. Students were then asked to try the app's features on their own devices. In addition, the teacher gave students opportunities to ask questions about the structure and linguistic elements of advertisements, as well as about obstacles that arose when using the Canva app. The designs in the Canva application come in many variations according to users' needs and tastes; moreover, an advantage of the application is that it provides these attractive designs for free, although some designs are paid.
Not only teachers but also students can use the Canva application to create advertising text assignments, slogans, or posters and make them more attractive. The advantage of using the Canva app for learning is that, in addition to gaining knowledge, students also learn to be skilled, creative, and innovative in developing the lesson or material being taught. The assessment of students' work on advertising texts covers two aspects: the structure of the advertisement and its linguistic elements. The structure of the advertisement is assessed through three parts, orientation, body of the advertisement, and justification, with a focus on suitability, relevance, and uniqueness. The linguistic aspects of the advertisement are assessed through the uniqueness of the language used to promote the advertised product or service and the clarity of each language element in the advertisement. The linguistic aspect also emphasises advertisement language that uses literary elements such as rhymes, proverbs, and poems, to maintain literary values among students and society. According to the assessment data on students' advertisement designs, the final test in this study showed very good results: there were advertisement designs whose scores on the structure and linguistic-element aspects reached 90 and 87.5, respectively.

The final stage of this research was the distribution of student response questionnaires. The results were analyzed on four indicators: (1) the benefits of the Canva app for students, (2) students' interest in using Canva in Bahasa Indonesia lessons, especially advertising texts, (3) the effectiveness of learning advertising texts using the Canva app, and (4) the obstacles students face in using Canva in the learning process. The results of the response questionnaire are presented in Figures 1 through 4. Based on Figure 1, of the 12 students who served as research subjects, 100% agreed that the use of the Canva app can make the learning process more exciting and interesting. Based on Figure 2 (response sheet diagram), 100% of the students agreed that Canva offers many features that can be used for learning Indonesian advertising texts. Based on Figure 3, 100% of the students agreed that the use of Canva can make the learning process more effective. Based on Figure 4, 33.3% of the students said that using the Canva app requires an internet network, and another 8.3% reported a similar problem; a further 8.3% reported needing skill and creativity to operate the Canva app, while 50% of the students had no obstacles. Based on this analysis of the students' questionnaire responses, it can be concluded that the use of Canva app media in learning Indonesian advertising texts was considered effective and interesting by 100% of the students who served as research subjects.
However, some students experienced obstacles in using the Canva app, such as a limited internet network (33.3%) and limited skill and creativity in operating the app (8.3%), while 50% of the students had no obstacles at all. In addition, students appreciated various advantages of the Canva app in learning Indonesian advertising texts. Therefore, continued use of the Canva app in learning can be suggested, to optimise the effectiveness and attractiveness of the learning process.

DISCUSSION
The increase in students' scores was due in part to Canva's spelling and grammar help feature, which can help users recognise and correct grammatical errors in their advertising texts; this feature provides alerts or correction suggestions when errors are detected (Lastari & Silvana, 2020; Santiana et al., 2021). Canva also provides well-designed advertisement templates, including well-formatted text that follows grammar rules, so users can learn from example texts that are already linguistically correct. The score on linguistic elements was obtained by attending to the completeness of the linguistic elements needed to promote the advertisement, the use of unique language, and the use of language with literary elements. These factors give an advertisement a unique appeal and increase interest in the advertised product (Amrina et al., 2022; Tanjung & Faiza, 2019; Titiyanti et al., 2022). The data presented show that students were able to use various features of the Canva application, such as text, templates, elements, the gallery, and others; the scores came from the advertising texts the students created. This indicates that teaching the use of the Canva app in composing advertising texts cannot, by itself, be considered an absolute guarantee of student motivation (Lastari & Silvana, 2020; Nurviyani et al., 2020). Nevertheless, the Canva application was very helpful for students in the process of learning advertising texts, and the product test data indicate that the app's features greatly helped students design advertisements.

This study shows that the use of Canva as a learning medium can help students understand the material presented. A previous study states that Canva offers various features and templates that can clarify and present information in an interesting way, making it easier for students to understand Indonesian language concepts (Monoarfa & Haling, 2021). Students could easily understand advertising texts in the learning materials after using Canva. The use of Canva as a learning medium also increases students' involvement in the learning process: the interactive features in Canva motivate students to participate actively and to create interesting work, which increases students' interest in and motivation for Indonesian language learning. Another study states that Canva allows users to create interesting and creative visualisations (Christiana & Anwar, 2021). In Indonesian language learning, the use of visual elements such as images, diagrams, and infographics helps students understand and remember information better; with Canva, students can combine text, images, and other design elements to create more visually appealing learning materials. The implication of this research is that, by using Canva, students can develop their graphic design skills and creativity when creating advertising texts.
As a result, they may be more engaged in learning and may produce more visually appealing work. Using the Canva app in an educational context can help convey information in a more engaging and easy-to-understand way, which can increase students' absorption of advertising material and strengthen their understanding. The results may not be fully generalizable, however, especially since the research was carried out in a limited educational environment with a specific group of students; they need to be confirmed through further studies with larger and more representative samples.

CONCLUSION
Based on the results and explanation above, the researchers conclude that the Canva app can help in education, especially in the Indonesian language learning process. Canva learning media make it easier for students to understand the material delivered through technology, and the app attracts students' attention and interest in learning because its features are not boring. To identify other benefits of using the Canva app in Bahasa Indonesia learning, further research is needed to support the effective use of Bahasa Indonesia learning media.
Endometrial stromal nodule of the vaginal wall with a review of vulvovaginal endometrial stromal neoplasms

Highlights
• We report the first endometrial stromal nodule (ESN) in the vagina.
• This ESN is exceptional because it was not associated with endometriosis.
• It was successfully treated by local resection.
• Primary vulvovaginal endometrial stromal neoplasms are rare (only 5 reported).

Introduction
Endometrial stromal tumors (ESTs) are usually found in both the uterus and the ovary. Histologically, they can exhibit a wide range of differentiation (Chew and Oliva, 2010), and their malignant potential is often defined by their mitotic activity and the presence of invasive borders and, eventually, lymphovascular invasion. Low-grade sarcomas represent the more frequent types, while benign endometrial stromal nodules (ESNs), which are non-invasive, are less common (Chew and Oliva, 2010). We performed a systematic review on PubMed with the key words "extrauterine endometrial stromal tumor"; 77 results were found and 15 were selected (4 of them are included in our references). We also read 11 articles regarding other extrauterine locations. Extrauterine locations are indeed rare but not exceptional (Chang et al., 1993), with most cases originating in endometriotic foci. Thus, primary cases have been reported in the colon (Chen et al., 2007) and the rectovaginal septum (Bosincu et al., 2001), and one was even implanted in the placenta of a newborn (Karpf et al., 2007). Vulvovaginal involvement is much more exceptional, and no article has summarized these cases; this is the reason we reviewed the cases reported at this location. We searched for "vulvovaginal endometrial stromal tumor/sarcoma/nodule," "vaginal endometrial stromal tumor/sarcoma/nodule," and "vulvar endometrial stromal tumor/sarcoma/nodule" and found only ten reported cases of sarcomas in the vagina and two in the vulva (Androulaki et al., 2007; Berkowitz et al., 1978; Corpa et al., 2004; Irvin et al., 1998; Kondi-Paphitis et al., 1998; Liu et al., 2013; Masand et al., 2013), half of them metastases from other sites. Bibliographical analysis demonstrated various types of endometrial stromal sarcoma, often associated with endometriosis, but no cases of stromal nodules like the one we are submitting. This paper reports, for the first time, the occurrence of an asymptomatic, non-recurring, polypoid primary endometrial stromal nodule (ESN) in the vagina of a 47-year-old female; it was not associated with endometriosis and was successfully treated by local resection. We also review the available cases of vulvovaginal ESTs.

Case report
A 47-year-old nulliparous patient, with an otherwise unexceptional gynecological history, consulted for a sensation of a foreign body in the vagina. On clinical examination the vulva was unremarkable, and a polypoid, pediculated mass of approximately 2 cm was found on the posterior aspect of the vagina, practically at the introitus. The rest of the female genital tract was unexceptional. The nodule was completely resected. No other lesions were detected in either the vagina or the vulva. Hysteroscopy and abdomino-pelvic MRI performed after resection were normal. CA125 serum levels were within the normal range. The patient has been followed regularly for a period of six years without recurrence.

Pathology
On gross examination, the 2 cm round, white-yellow mass was homogeneous and elastic.
Microscopically, vaginal epithelium lined the external circumference of the nodule, which enclosed a homogeneous proliferation of endometrial stromal-type cells with clear-cut linear borders. No invasion of the pedicle or any surrounding vessels was seen (Fig. 1A). Cells without atypia and lacking mitoses grew in diffuse sheets, with abundant collagen tracts and extensive hyalinization of perivascular distribution (Fig. 1B). Minor foci of calcification and foamy macrophages were present. Immunohistochemistry confirmed the endometrial stromal nature of the tumor by its conspicuous coexpression of CD10 and estrogen and progesterone receptors (Fig. 2A-B). Smooth muscle markers such as h-caldesmon and desmin were negative.

Discussion
We report a case of an endometrial stromal nodule in the vagina of a premenopausal patient. Our case is unique, since no instances of this variant of EST, a benign endometrial stromal lesion, have previously been reported outside the uterus. Histologically, ESNs have linear, smooth, pushing margins, minimal atypia or mitoses, and often a yellow color, possibly due to the presence of foamy macrophages. Characteristically, they are non-invasive and do not permeate the capsule or adjacent vessels. The present case exemplifies such a lesion in a highly unusual location. Its benign nature is confirmed by the absence of local recurrence after a long follow-up; however, it must be borne in mind that some ESSs may experience very late recurrences (Chew and Oliva, 2010). In contrast, all ESTs previously reported in the vulvovaginal region (Table 1) have corresponded to endometrial stromal sarcomas of various grades of differentiation. In at least five cases (Androulaki et al., 2007; Berkowitz et al., 1978; Corpa et al., 2004; Kondi-Paphitis et al., 1998; Liu et al., 2013; Masand et al., 2013) the tumors appeared to be primary neoplasms, while the remaining ones were likely metastatic in nature (Irvin et al., 1998; Kondi-Paphitis et al., 1998; Liu et al., 2013; Masand et al., 2013), since tumor was also found in pelvic locations, the colon, and the lung. Association with endometriosis, locally or elsewhere, was found in 6 cases. In the present case, a metastasis from another site was ruled out, since there was no evidence of a utero-ovarian primary. Furthermore, an origin from endometriosis is unlikely, since no such areas were detected in the vicinity of the neoplasm or elsewhere. This supports the possibility of a locally originated endometrial stromal neoplasm that could represent a type of Müllerian differentiation in the vagina, an organ whose partly Müllerian origin is well known (Sanchez-Ferrer et al., 2006). Histopathological diagnosis was relevant to the management of this case, with characterization of its endometrial stromal cellularity based on both cell morphology and coexpression of characteristic markers such as CD10 and estrogen and progesterone receptors. In the differential diagnosis, the immunohistochemical absence of smooth muscle markers excluded the more commonly found leiomyoma of the vagina (Imai et al., 2008), a tumor whose cellular variants, especially, may resemble ESN. An accurate diagnosis of ESN is important, since local conservative surgery is curative.
Core biopsy is not indicated, since the diagnostic features of a pushing, non-invasive margin cannot be assessed with this procedure.

Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Associations of Telehealth Care Delivery with Pediatric Health Care Provider Well-Being

Background  The rapid, large-scale deployment of new health technologies can introduce challenges to clinicians who are already under stress. The novel coronavirus disease 19 (COVID-19) pandemic transformed health care in the United States to include a telehealth model of care delivery. Clarifying paths through which telehealth technology use is associated with change in provider well-being and interest in sustaining virtual care delivery can inform planning and optimization efforts.
Objective  This study aimed to characterize provider-reported changes in well-being and daily work associated with the pandemic-accelerated expansion of telehealth and assess the relationship of provider perceptions of telehealth effectiveness, efficiency, and work–life balance with desire for future telehealth.
Methods  A cross-sectional survey study was conducted October through November 2020, 6 months after the outbreak of COVID-19 at three children's hospitals. Factor analysis and structural equation modeling (SEM) were used to examine telehealth factors associated with reported change in well-being and desire for future telehealth.
Results  A total of 947 nontrainee physicians, advanced practice providers, and psychologists were surveyed. Of them, 502 (53.0%) providers responded and 467 (49.3%) met inclusion criteria of telehealth use during the study period. Of these, 325 (69.6%) were female, 301 (65.6%) were physicians, and 220 (47.1%) were medical subspecialists. Providers were 4.77 times as likely (95% confidence interval [CI]: 3.29–7.06) to report improved versus worsened well-being associated with telehealth. Also, 95.5% of providers (95% CI: 93.2–97.2%) wish to continue performing telehealth postpandemic. Our model explains 66% of the variance in telehealth-attributed provider well-being and 59% of the variance for future telehealth preference and suggests telehealth resources significantly influence provider-perceived telehealth care effectiveness, which in turn significantly influences provider well-being and desire to perform telehealth.
Conclusion  Telehealth has potential to promote provider well-being; telehealth-related changes in provider well-being are associated with both provider-perceived effectiveness of telemedicine for patients and adequacy of telehealth resources.

Background and Significance
The novel coronavirus disease 19 (COVID-19) pandemic created opportunities and imperatives to introduce and expand telehealth as quickly as possible. Health services across the United States were swiftly transformed to virtual care delivery, often within a few weeks, encouraged by federal and state emergency actions. 1 Prepandemic, 15% of physicians offered telehealth services and 10% of patients had participated in a telehealth encounter. Between spring and summer 2020, 85% of physicians and 46% of patients reported use of telehealth. 2 Physician stress and burnout, particularly associated with technology use, were issues of considerable interest prepandemic. [3][4][5][6][7] In the context of a public health crisis that placed extraordinary responsibilities on health care professionals and contributed to deterioration of work-life happiness, clinician well-being has become an even greater concern. 8,9 As telehealth adoption and availability grow, so does the need to understand providers' perceptions of its benefits, burdens, and barriers.
Few studies have examined the impact of telehealth on provider well-being. Limited evidence from the pre-COVID-19 era on telehealth user and provider wellness and attitudes, largely focused on hospital-based providers, found telehealth decreased the on-call travel burdens of physicians and improved their perceptions of teamwork and safety climate. 10,11 Other studies highlighted the salience of usability and effectiveness for physicians' willingness to adopt telehealth and suggested variability in attitudes between telehealth-experienced and telehealth-inexperienced providers. 12,13 This study was guided by a National Academies of Medicine (NAM) conceptual model that links cross-disciplinary clinician resilience and well-being with outcomes for health care professionals, patients, and the broader health system. 14 Like other frameworks of workplace well-being, this model incorporates individual and environmental inputs. It proposes that clinician well-being is determined by a balance between demands and resources, with external system factors more influential than internal individual ones. The model consciously situates the health care professional-patient relationship at its center. The terms "telemedicine" and "telehealth" have been defined in many ways. 15 For this manuscript, we employ the term "telehealth" to refer to synchronous encounters between providers and patients, inclusive of video plus audio as well as audio-only (telephonic) interactions.

Objective
This study sought to describe provider-reported changes in daily work and well-being accompanying the rapid expansion of telehealth for health care delivery. Provider perceptions of telehealth effectiveness, efficiency, and work-life balance were examined to identify factors associated with improved provider well-being and provider desire for future telehealth.

Methods
Setting
A cross-sectional observational survey study was conducted in fall 2020 by three pediatric health care systems in the United States: Connecticut Children's (CC) Hospital, Nationwide Children's Hospital (NCH), and East Tennessee Children's Hospital (ETCH). These institutions represent two academic tertiary care medical centers (CC: 187 beds and 400,000 annual outpatient visits; NCH: 625 beds and 1.3 million annual outpatient visits) and one community hospital (ETCH: 152 beds and 120,000 annual outpatient visits). The two larger institutions use an enterprise Epic Electronic Health Record (EHR) platform (Epic Systems Corporation, Verona, Wisconsin, United States) with integrated Zoom video functionality (Zoom Video Communications, San Jose, California, United States); the smallest institution uses a Meditech EHR system (Medical Information Technology Inc., Westwood, Massachusetts, United States) with a nonintegrated Zoom video platform. The study protocol was reviewed by the Institutional Review Boards of all participating organizations.
Survey Design and Sampling
Since most existing telemedicine instruments are intended to assess the patient experience, a brief platform-agnostic provider experience questionnaire was developed by the collaborative research team to inform organizational strategic plans for continuous improvement in telehealth. 16 The questionnaire, which is freely available, 17 drew upon findings of a 2016 American Academy of Pediatrics survey of pediatrician telehealth attitudes and experience. 18,19 Items were adapted from existing validated clinician technology user and patient telemedicine experience instruments, including the KLAS Electronic Medical Records User Experience Survey, the Telemedicine Satisfaction Questionnaire, and the Telehealth Usability Questionnaire. [20][21][22] Novel items were created in the domains of telehealth demands, resources, benefits, and barriers. The questionnaire collected demographic information and a self-reported burnout measure previously shown to correlate with the Maslach Burnout Inventory's emotional exhaustion subscale. 23 The survey was pilot tested with 10 providers who suggested edits and confirmed the time burden and comprehensibility of the final version. The survey was deployed using REDCap version 10.6.10 software (Vanderbilt University, Nashville, Tennessee, United States) via e-mail to a list-based population of 947 primary care and subspecialty health providers, excluding trainees, at the three children's hospitals. NCH chose to include psychology providers, while CC and ETCH did not. Consent to participate was implied by survey completion. The provider population for this analysis was limited to physicians, advanced practice nurses, physician assistants, and psychologists who reported active engagement in telehealth between March 1, 2020, and October 31, 2020. Statistical analysis was completed in SPSS version 27 (IBM Corporation) and in R 4.0.2 (R Core Team), using the polychor, lavaan, and psych packages. [24][25][26] Provider telehealth-associated well-being was operationalized as the 6-point Likert scale response to "How has the introduction/expansion of telemedicine affected your professional well-being?", with responses of "improved" or "greatly improved" considered positive attributed change. Preference for future telehealth was captured as the 5-point Likert scale response to "In an ideal world, how much telemedicine would you provide after COVID-19?" Burnout was determined via a five-choice, single-item validated measure that has been extensively described in previous literature. 27,28

Statistical Analysis
Survey responses were analyzed as standard summary statistics, overall and by respondent organization, to assess for institutional effect.
Logistic regression was used to estimate the odds of provider-reported improved telehealth-attributed well-being and the odds of provider-reported desire for future telehealth from demographic factors and from responses regarding telehealth resources, demands, benefits, and barriers. Analyses made maximal use of available data by omitting missing values through pairwise deletion. Factor analysis and structural equation modeling were performed to incorporate multiple independent variables, extract latent constructs, and simultaneously analyze interrelationships with multiple outcomes of interest. Well-being, future telehealth preference, and burnout were treated as binary and as ordered categorical variables, without substantive difference between the two approaches. Narrative comments were examined inductively and coded by members of the study team to generate leading qualitative themes.
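To make the regression step concrete, the sketch below shows how the dichotomization of the well-being item and a univariate logistic regression yielding odds ratios could be coded in R, the environment named above. This is an illustrative reconstruction, not the authors' code: the data frame `survey` and its column names are hypothetical, and Wald confidence intervals are used for simplicity.

```r
# Hypothetical sketch of the analysis described above; 'survey', 'wellbeing',
# and 'burnout' are assumed names, not the study's actual variables.
survey$improved <- survey$wellbeing %in% c("improved", "greatly improved")

# Univariate logistic regression of improved well-being on burnout status.
fit <- glm(improved ~ burnout, data = survey, family = binomial)

# Exponentiate coefficients to obtain odds ratios with Wald 95% CIs.
exp(cbind(OR = coef(fit), confint.default(fit)))

# A reported proportion and its 95% CI can be computed the same way,
# e.g., an exact binomial CI for a hypothetical 446 of 467 providers (~95.5%):
binom.test(446, 467)$conf.int
```

Repeating the `glm` call across the demographic and telehealth-experience predictors yields a univariate odds-ratio table of the kind summarized in Table 3.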
Results
Survey response rate was 53.0%. Of 502 respondents, 467 (93.0%) met inclusion criteria by reporting active engagement in telehealth between March and October 2020. Responding providers were not statistically different from nonresponding providers in terms of gender or role. Also, 458 respondents (98.1%) answered at least 9 of 10 designated "core" survey questions, and 105 (22.5%) supplied additional narrative comments. Core survey questions included items about telehealth use, change in well-being attributed to telehealth, and information about role, gender, specialty, career stage, and clinical practice setting. Providers volunteered 69 free-text comments regarding telehealth benefits and 57 comments on telehealth barriers and burdens. Respondent demographics are summarized in Table 1.

Telehealth Usage
Most survey respondents were telehealth-naïve, having had no telehealth experience prior to COVID-19. For all institutions, median provider telehealth experience at baseline was 0% (95% confidence interval [CI]: 0.0-0.0%). Providers experienced dramatic changes in daily work as they rapidly adopted telehealth; median peak telehealth use between the outbreak of COVID-19 and the survey across all institutions was 77% (95% CI: 75.0-80.0%) of patient interactions. At ETCH, providers did not rely as heavily on telehealth as at the larger institutions; median peak usage there was 7.0% (95% CI: 5.0-22.5%). The telehealth transformation at participating institutions was largely focused on ambulatory settings. NCH was not using telehealth in its emergency or inpatient areas at the time of the survey. Median peak telehealth use for exclusively hospital-based acute care physicians was 50.0% (95% CI: 2.0-79.0%) of patient encounters.

Provider Well-Being and Desire for Future Telehealth
A total of 378 of 388 (83.1%) providers characterized the shift to telehealth as impactful to well-being. Over one-third of respondents, 35.8% (167/388), described "improved or greatly improved" telehealth-associated well-being. Respondents were 4.77 times as likely (95% CI: 3.44-7.26) to characterize telehealth-related changes in well-being as positive versus negative. Gender, role, career stage, practice setting, and institution were not significantly related to positive well-being ratings. Surgical providers were significantly less likely than behavioral health specialty providers to report improved well-being (odds ratio [OR] = 0.46; 95% CI: 0.22-0.96). Providers with any degree of burnout were significantly less likely to report improved telehealth-attributed well-being (OR = 0.53; 95% CI: 0.34-0.81). Absent burnout symptoms, there was no statistically significant difference in likelihood of improved telehealth-associated well-being between providers who denied and those who endorsed professional stress. Also, 411 of 467 (95.5%) providers wished to continue telehealth post-COVID-19, with most (317/467 [67.9%]) preferring to perform at least "some" telehealth. Provider role and specialty were significantly associated with desire for postpandemic telehealth (Table 2). Providers with improved telehealth-attributed well-being were 9.96 times as likely (95% CI: 5.41-18.32) to prefer some or more telehealth going forward.

Telehealth Successes from a Provider Perspective
There was widespread provider consensus on the usefulness of telehealth for clinical care delivery. Furthermore, 60.7% (283/467) described themselves as "mostly" or "completely" able to deliver high-quality care through telehealth. Only 4 of 467 respondents (0.9%) reported telehealth is "not at all able" to meet their patients' needs. Also, 67.5% (315/467) reported they can "mostly" or "completely" meet patient needs through telehealth, and 73.0% (341/467) felt "mostly" or "completely" able to engage meaningfully with patients via telehealth. Moreover, 84.8% (396/467) did not believe virtual care undermines the provider-patient relationship, reporting telehealth had a positive or net-neutral influence on that relationship. Prompted to describe the benefits of telehealth versus in-person care "to you as a provider," respondents most commonly endorsed ways in which telehealth enables them to expand access to patients (341/467 [73.0%]) and renews their focus on patient-centric care (228/467 [48.8%]). Providers also indicated their own work-life balance (flexibility or control) was a leading benefit (204/467 [43.7%]). Providers' free-text comments described benefits that mapped to themes of improved patient-centered care, improved quality and safety, decreased no-show rates, and personal convenience.

Provider-Identified Telehealth Stressors and Shortcomings
Across the three organizations, provider respondents identified emergent telehealth demands not matched by available telehealth resources, as presented in Fig. 1. Despite differences in platforms and implementation at participating hospitals, 85% of providers (397/467) named patient and/or provider technology problems (devices, connectivity, sound/image quality, and patient portal issues) as a barrier to effective use of telehealth. Of these, 72.6% (339/467) identified novel technology burdens for providers or patients, and 52.7% (245/465) reported inadequate technical support for themselves and their patients. At all organizations, training was a universal unmet need; regardless of institutional curriculum or teaching modality, 56.8% of providers (264/465) rated telehealth training less than "good." Further, 43.1% of providers (200/464) noted inadequate clinical support for telehealth encounters, and 32.5% (152/467) described inefficiencies in telehealth clinical processes. Only 9.6% (45/467) of respondents agreed "collaboration" was a benefit of telehealth, suggesting telehealth workflows did not yet readily support teamwork. Providers also offered additional details in their narrative comments regarding leading telehealth stressors, which indicated workflow inefficiencies, technical problems, and challenges regarding the appropriateness of telehealth for particular patients. Provider-reported telehealth benefits and barriers/burdens are presented in Fig. 2.
Factors Associated with Improved Well-Being and Desire to Continue Telehealth
The results of univariate logistic regression modeling of positive telehealth-attributed well-being and of preference for substantial future telehealth against provider-reported benefits, barriers, and burdens are presented in Table 3. To better understand the interplays among telehealth characteristics and the outcomes of interest, we performed factor analysis and structural equation modeling to reduce data dimensionality while preserving variable information and accounting for correlations. Six latent factors were initially identified through exploratory factor analysis, and three were retained after confirmatory analysis. "Telehealth Patient Effectiveness" includes three variables: the abilities to (1) engage with patients meaningfully, (2) serve patients' needs, and (3) deliver high-quality care. "Telehealth Provider Satisfaction" includes three variables that capture providers' opinions of telehealth's impact on their own well-being and on the provider-patient relationship, as well as the desire for continued telehealth. "Telehealth Resources" includes five variables related to the adequacy of infrastructure: (1) training, (2) space/equipment, (3) video software, (4) clinical support, and (5) technical support. These three factors account for 66% of the variance in provider well-being and 59% of the variance in provider desire for future telehealth. Each factor has high internal consistency, with composite reliabilities of 0.91, 0.89, and 0.82, exceeding the recommended level of 0.70, 29 and average variance extracted (AVE) of 0.78, 0.62, and 0.61, exceeding the recommended level of 0.50, 30 respectively, for patient effectiveness, resources, and provider satisfaction. The absence of variable cross-loadings and the moderate between-factor correlations of 0.66 ("Provider Satisfaction" and "Patient Effectiveness"), 0.41 ("Patient Effectiveness" and "Resources"), and 0.37 ("Provider Satisfaction" and "Resources") suggest each factor is distinct enough to be a separate construct. Structural equation modeling revealed significant relationships between "Resources" and "Patient Effectiveness" and between "Provider Satisfaction" and "Patient Effectiveness." "Resources" are related to "Provider Satisfaction" in a statistically significant way, but the direct relationship is much weaker than the indirect one through "Patient Effectiveness" (i.e., "Resources" to "Patient Effectiveness" and then to "Provider Satisfaction"). Figs. 3 and 4 show the structural model results with estimated path coefficients and residual variances. Path analysis revealed an indirect relationship between "Resources" and "Provider Desire for Future Telehealth." There was a statistically significant direct relationship between "Resources" and "Provider Burnout," but no statistically significant relationship between perceived "Patient Effectiveness" and "Provider Burnout."
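For readers unfamiliar with how such a model is specified, the sketch below expresses the three retained factors and the reported structural paths in lavaan, the R package named in the Methods. The indicator names and the `survey` data frame are hypothetical placeholders rather than the study's actual item names, and treating the Likert indicators as ordered with WLSMV estimation is a common convention, not a detail confirmed by the paper. The helper at the end shows how composite reliability and AVE follow from standardized loadings.

```r
library(lavaan)

# Hypothetical indicator names; the study's real survey variables are not shown here.
model <- '
  # Measurement model: the three factors retained after confirmatory analysis
  effectiveness =~ engage + meet_needs + quality
  resources     =~ training + space_equip + video_sw + clin_support + tech_support
  satisfaction  =~ wellbeing + relationship + future_desire

  # Structural paths: resources -> effectiveness -> satisfaction,
  # plus the weaker direct resources -> satisfaction path
  effectiveness ~ resources
  satisfaction  ~ effectiveness + resources
'

fit <- sem(model, data = survey, ordered = TRUE, estimator = "WLSMV")
summary(fit, standardized = TRUE, fit.measures = TRUE)

# Composite reliability (CR) and average variance extracted (AVE)
# computed from a factor's standardized loadings.
cr_ave <- function(lambda) {
  c(CR  = sum(lambda)^2 / (sum(lambda)^2 + sum(1 - lambda^2)),
    AVE = mean(lambda^2))
}
cr_ave(c(0.90, 0.88, 0.87))  # loadings like these give CR ~ 0.91 and AVE ~ 0.78
```

The mediation of the resources-to-satisfaction effect through patient effectiveness, as reported above, would be read off the product of the `resources -> effectiveness` and `effectiveness -> satisfaction` path estimates.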
Discussion
The Pediatric Provider Telehealth Work and Well-being Survey offers one of the first large-scale assessments to focus on the provider telehealth experience and on associations between telehealth use and provider well-being. Our findings identify potential benefits of telehealth for provider well-being across roles and specialties. Structural equation modeling demonstrates that provider perception of patient telehealth effectiveness is significantly associated with a provider's telehealth-attributable well-being. Consistent with "Patient Well-Being" and the "Clinician-Patient Relationship" as central constructs in the NAM model, our findings indicate that telehealth's capability to engage patients, meet their needs, and enable delivery of high-quality care contributes importantly to provider satisfaction with telehealth. Path analysis reveals close ties between the patient-provider relationship and provider professional satisfaction. Notably, the importance of telehealth resources to provider well-being appears to be largely indirect, mediated by the relationship between resources and perceived telehealth patient effectiveness.

Our study results confirm worrisome widespread stress among pediatric providers, as well as prevalent indications of early burnout. Though pre-COVID-19 burnout estimates are not available for our sample population, the overall rate of burnout (33%) is compatible with the existing literature. 31 The single-item burnout measure used in this study, incorporated based on its validation as a stand-alone measure, its inclusion in several major studies, and its minimal respondent burden, is useful for clinical assessment but tends to under-identify burnout. 32 Providers reporting burnout in this study were not ascribing such symptoms to telehealth; rather, they were specifically estimating the change in well-being attributable to the rapid shift to telehealth. In our analysis, provider burnout was not directly related to telehealth effectiveness but was instead directly related to telehealth resources. We interpret this to reflect burnout arising from perceived demands that chronically outweigh perceived resources. 33 The high residual variance in burnout (0.92) in our structural model indicates that there are other significant burnout-impacting factors that this survey did not capture. Our model also underscores that clinician well-being is not merely the absence of burnout, and that each is sensitive to specific determinants.

Health care organizations were obliged to accelerate implementations of telehealth solutions during the public health crisis, without the planning and stakeholder involvement that typically characterize such operational endeavors. Our findings highlight opportunities with actionable potential to sustain use of telehealth and advance provider well-being. First, health care systems are likely to benefit from investing in robust telehealth training for provider and patient users to promote skills acquisition and enhancement. The National Consortium of Telehealth Resource Centers and groups including the American Board of Pediatrics and the American Association of Medical Colleges are developing telehealth competency standards and curricula for health professionals and students. 34 Further research is needed to evaluate telehealth content and instructional modes such as e-learning and simulation in meeting the needs of adult learners. Patient-directed education efforts are also needed to build digital literacy and prepare patients and families to navigate virtual care. As virtual service offerings evolve, telehealth usability testing with at-risk patient populations should be considered. Second, our data suggest both proactive and on-demand technical supports are important to successful telehealth programs. Connectivity and audio/visual performance issues are prevalent, pernicious challenges to telehealth success. Designing clinical processes to anticipate and troubleshoot technical issues, and making staff available to resolve problems, has potential to minimize disruption to care.
Health care organizations must assess and respond to broadband access and other prerequisite challenges in the communities they serve to support initial and ongoing telehealth use. Third, providers indicated clinical workflows require reengineering for telehealth-only, as well as hybrid telehealth/in-person, care models. While telehealth is at a disadvantage in detecting many physical examination findings and performing procedural interventions, management of diverse acute and chronic conditions can leverage a combination of in-person and virtual services, and providers are interested in accommodating both. In narrative comments, providers reported challenges with follow-up of tasks emerging from video visits, likely reflecting a need to optimize immature teamwork processes specific to telehealth. Providers also described challenges maintaining clinic momentum when telehealth and in-person visits are intermixed, noting the different pacing of appointment types complicates transitions between patients. Attention should be given to decision tool development to guide modality choice, implementation of appropriate provider schedules and support staffing ratios to permit smooth throughput, and clear lines of health team communication and coordination during and after video visits. Providers reported prolonged performance of telehealth led to eye strain and screen fatigue. Virtual visit "bookending," scheduling telehealth appointments at the start or end of the day, might enable short blocks of consecutive video visits to achieve efficiency and productivity, as well as work-life flexibility.

Limitations

Our study findings should be interpreted considering key limitations. Statistical modeling was employed to explore plausible pathway associations, but experimental studies are required to establish causality among constructs. Certain survey variables were excluded from structural modeling because dichotomization produced low-frequency items that the analysis could not support. This study was conducted during the height of the COVID-19 pandemic. Telehealth was at times the only available modality for nonemergent care, possibly inflating ratings for service delivery and patient engagement (compared with an implicit alternative of no/foregone care). Our sample did not permit meaningful subanalysis by extent of telehealth use. Self-reported outcome measures create potential for social desirability bias and recall error. Additionally, voluntary samples introduce self-selection bias. The majority of respondents practiced in multiple environments; however, we did not specifically assess whether telehealth was available or leveraged in all practice settings. Our results are most applicable to ambulatory pediatric providers and may not be generalizable to nonpediatric providers or other clinical settings. Finally, the pandemic disrupted providers' personal, as well as professional, lives, although the former was not specifically investigated in our analysis.

Conclusion

System-based commitments to providers' well-being create resilient organizations. Evaluations of the recent telehealth surge and its lessons are beginning to accumulate and should inform data-driven and evidence-based best-practice improvement efforts. This study offers evidence that telehealth can promote rather than detract from professional well-being for providers by enabling provision of high-quality, patient-centric care and offering added flexibility and work-life balance to providers.
We found a widespread desire among providers to continue telehealth, related to a positive perception of how effectively the medium met their patients' needs, which, in turn, was influenced by how well an institution provided the necessary technical and clinical resources for providers to do their jobs well. Optimizing telehealth structures and workflows to improve reliability, efficiency, and clinical excellence will benefit individuals and health care institutions.

Clinical Relevance Statement

Providers who transitioned to telehealth during the novel coronavirus disease 2019 (COVID-19) pandemic have a desire to continue telehealth. Associated factors for provider well-being in a telehealth care delivery model include provider-perceived telehealth effectiveness and adequacy of telehealth resources. Health systems can use these findings to optimize telehealth structures and workflow while planning for a postpandemic care delivery model.

Multiple Choice Questions

2. The likelihood of improved telehealth-associated well-being was
a. Highest among providers who also reported one or more symptoms of burnout
b. Not statistically different between providers who endorsed and denied professional stress
c. Lower than worsened telehealth-associated well-being
d. Lower among providers of advanced career stage

Correct Answer: The correct answer is option b. Absent symptoms of burnout, there was no statistically significant difference in likelihood of improved telehealth-associated well-being between providers who denied and those who endorsed professional stress. Providers with any degree of burnout were significantly less likely to report improved telehealth-attributed well-being. Career stage was not associated with telehealth-associated well-being.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the Institutional Review Boards of all participating organizations: Connecticut Children's (registration no.: IRB00000703), Nationwide Children's Hospital (registration no.: IRB00000568), and East Tennessee Children's Hospital (registration no.: IRB00002221).

Funding

None.

Conflict of Interest

None declared.
A Spirituality Discourse in Treating Substance Use Disorders with Marginalised Persons

A spirituality discourse in substance abuse treatment offers useful unconventional constructs in treatment services to ethnic minority groups with substance use disorders (SUDs). It is important to locate spirituality within culture, place, and history in order to understand the spiritual needs of persons from minority groups with SUDs. There are many studies that merit a spiritual approach in treatment for ethnic minority groups with SUDs. However, spirituality is a broad concept that means different things to different people. Therefore, such an unconventional approach should be approached critically and cautiously. This chapter looks at the utilisation of an integrated eclectic approach with a focus on the inclusion of spirituality in treatment services from a biopsychosocial-spiritual perspective. Tapping into the spiritual needs and the meaning that people ascribe to spirituality and religion (S&R) in treatment services is often more valued than conventional secular treatment services. Also, the client's spirituality is generally overlooked by professionals offering such services simply because it is so controversial. This chapter proposes an integrated eclectic methodology calling for a biopsychosocial-spiritual perspective to address the needs and well-being of ethnic minority groups with SUDs as a comprehensive person-centred and holistic approach, utilising mindfulness techniques.

Introduction

Similar to international trends, substance abuse is a huge concern and a growing phenomenon in South Africa. Similarly, more and more persons seeking treatment for substance use disorders (SUDs) present with dual diagnoses, of which mental health problems are significantly high [1]. Treatment for SUDs has thus become more complex in that the diagnoses and expectations of clients are multifaceted [1,2]. Equally complex is treating clients with SUDs while paying due diligence to their ethnic, cultural and religious orientation. This chapter presents examples from a research study conducted in South Africa that supports the urgency for an integrated approach that incorporates spirituality in treating coloured persons with SUDs. It needs to be pointed out that coloured people are not part of a minority group as described in conventional terms. However, they are marginalised and at a disadvantage in terms of their low socioeconomic status, largely due to the remnants of apartheid, 1 laws under which the wealth and resources of the country were qualitatively and quantitatively unequally distributed to benefit a minority group in the country at the time [5]. The narratives that will be shared in this chapter are those of persons whose families were forcefully removed from urbanised towns to the outskirts of these towns, to areas known as townships, in terms of the then South African Group Areas Act of 1950 [3,4]. While these township communities were characterised by social cohesion and a spirit of Ubuntu 2 [6], often still prevalent today, there were high crime rates and excessive use of alcohol and other drugs (AOD). Today, 25 years after apartheid was abolished in South Africa, townships continue to be plagued by poverty, unemployment and a high prevalence of SUDs [4,7,8].
As such, treatment services offered by community-based non-profit organisations (NPOs) to coloured persons with SUDs are complex: firstly, because people have deep psychosocial problems (such as previous disadvantage and a high prevalence of poverty), and secondly, because these township communities lack resources (the clinics and hospitals that serve them are overcrowded). Adding to these challenges is the fact that social workers rarely delve into the spiritual and religious (S&R) needs and well-being of clients, leaving a gap in addressing the holistic needs of clients [8]. However, there is a growing body of literature that supports working with clients' spirituality as an effective strategy in treating SUDs [8-13]. The literature also indicates an acknowledgement of the need to facilitate ethnic minority groups' inclusion of their spirituality in treating SUDs [8,14-16]. Such an approach calls for a broader perspective that pays equal attention to the biological, psychological, social and spiritual needs of people with SUDs. Furthermore, one cannot look at such an integrated approach without considering mindfulness techniques, which are an integral part of such spiritual conventions. It is against this backdrop that this chapter is presented.

An integrated eclectic approach for treating SUDs

There is no singular practice model in social work and/or SUD treatment that can be applied in all contexts. As such, the intervention model that counsellors, such as social workers, select is unique to the setting and client, often culminating in an eclectic approach [17]. Eclecticism is commonly used in social work and SUD treatment. It is the use of a wide range of theories and techniques regardless of their theoretical origins and orientation, as long as the needs of the client, a group or a community are met [18,19]. In other words, a social worker drawing on eclectic knowledge uses a wide range of theories and techniques that are appropriate for a particular case. Similarly, an integrated approach is also the use of a combination of theories and techniques to address the complex needs of individuals, groups and communities. The difference is that eclecticism does not necessarily result in the emergence of a new theory or model, while this is certainly the case with an integrated approach [18,20]. There are approximately 400 new approaches that have evolved as a result of the integrationist movement, referred to as 'an ubiquitous process of conjunction that comes from relationship and conflict' [20]. Two perspectives that are important to understanding an integrated approach are chaos and complexity science and contiguous integration. Chaos and complexity science is driven by relational dynamics and systems theory, while contiguous integration holds that a person is understood in relation to larger groups, organisations or society, a perspective based on a metasystem view of the integration phenomenon [20].

1 Apartheid was the institutionalised laws and policies of the National Party in South Africa, instituted from 1945, which separated people on the basis of skin colour during 1945-1994 [3,4].
2 Ubuntu is a Zulu word that refers to the humanity of an individual and/or society and principles of respect for the worth, dignity and humanity of self and others [6].
This phenomenon is similar to the concept of Ubuntu in South Africa, which holds that a person's worth and dignity are engrained and embedded in relation to others; hence the saying in Ubuntu: 'I am because you are' [6]. The term integrated approach seems to have replaced the term eclectic approach [18,19]. However, the two should not be confused and used interchangeably, because they mean different things. While an eclectic approach is more focused on the techniques used, an integrated approach focuses on the theories and techniques and the emergence of a new theory [18]. There is, however, 'no technical eclectic that can totally disregard theory and no theoretical integrationist can totally ignore technique' [18,20]. For example, social workers often use a biopsychosocial approach to treating SUDs. When social workers are confronted by clients asking them to integrate the clients' S&R, social workers are required to explore clients' S&R. Hence a fourth dimension, namely spirituality, is necessary in an existing model or approach such as the biopsychosocial model. This would involve not only knowledge of diverse S&R but also interrogating one's own S&R as a therapist. It is appropriate to link what has been said thus far to the integrated relational model, which places emphasis on the relationship of the client/patient and counsellor/doctor. Clients' response to treatment is the best indicator of treatment outcomes. An empathic counsellor/doctor improves and enhances treatment outcomes [21], especially when a strengths-based and problem-solving approach is adopted [22]. The integrated relational approach combines, and has several principles in common with, patient-centred, person-centred and problem-solving approaches. A patient-centred approach is a commonly used approach in health settings and takes into consideration the patient's choices and decisions for accepting or declining medical care or procedures [23,24]. Often a patient's S&R determines and affects his/her choices and decisions regarding medical intervention. Patient-centredness is often used interchangeably with person-centredness; however, the two are different [24,25]. Similar to the patient-centred approach, the person-centred approach places the focus of intervention on developing relationships and care plans based on the client's preference [24,26,27]. However, a person-centred approach goes further and takes into consideration the ethical and legal rights of clients as important factors when providing a holistic service [27]. A person-centred approach is therefore more holistic than a patient-centred approach [28]. Adding to the two aforementioned approaches is the problem-solving approach, a generalist approach in social work that consists of distinct steps for effective problem-solving [27,28]. In short, the steps that a social worker will follow are: (1) Determining the exact nature of the problem: if the problem seems too complex, break it up into smaller manageable parts that can be managed one at a time. (2) Finding as many solutions to the problem as possible: ask for input from clients and colleagues. (3) Narrowing down solutions: anticipate possible outcomes of each choice, both negative and positive, and list anticipated consequences. (4) Making a decision: mutually decide with the client on what to do. (5) Implementing the plan: be cognizant of the outcome and use the successes and challenges to improve and reassess the intervention goals [17].
The three approaches share similar principles worth exploring when working with persons having SUDs. The common principles in the three approaches are summed up in Table 1, which presents the principles embedded in patient-centred [24,25], person-centred [27,28] and problem-solving [17] approaches. For the purpose of this chapter, and since references to persons with SUDs do not only refer to such persons in medical or health settings but also in community-based settings, I will use the terms client (as in service user) and worker (as in service provider), whether in a health or community-based setting. The client-worker relationship forms the foundation of the treatment process [17,24,27,28]. The responsibilities and commitment of the worker are the second layer in the treatment process, because an empathic worker encourages clients' motivation for change. The last layer is the client's role and responsibility as the 'expert' in his/her own life, with the freedom to make his/her own choices. The principles are not static but interlinked and fluid. The similarities of the principles in these approaches complement an integrated eclectic model. While the principles may not specifically address spirituality per se, the inferences of respect for the client's worldview, beginning where the client is at, homing in on the client's strengths and working in the client's frame of reference can be linked to clients' S&R.

Biopsychosocial model for treating SUDs

Originally developed for the medical sciences, the biopsychosocial model was first introduced in the health sciences by George Engel (1913-1999). Engel [23] laid the foundation for a biopsychosocial model in healthcare. He argued that there is a distinctive interaction between the biological, psychological and social needs of patients that determines the cause, effects and outcomes of disease and well-being [23]. The concept of the biopsychosocial model is eloquently described by Borrell-Carrió et al. [29], who propose that the model is a philosophy of clinical care and a practical guide for health practitioners. These authors argue that the model is philosophical on the one hand because it is a means of understanding how multiple levels of organisation, from the societal to the molecular, affect disease, illness and suffering. They further contend that the biopsychosocial model is practical in that it is a way of understanding the patient's subjective experience as an essential contributing factor to accurate diagnosis, health and humane care [29]. White, Williams and Greenberg [30] took this approach even further by introducing an ecological model of care that added the person-in-environment context. White et al.'s model thus proposes that the biological, psychological and interpersonal relationships that surround a person require equal attention to achieve a state of health and well-being [30]. The two models, however, did not address the person's spirituality. The addition of a spiritual dimension to the definition of health was tabled at the 52nd Assembly of the World Health Organization (WHO). The 1948 WHO definition is: 'Health is a state of complete physical, mental and social well-being, and not merely the absence of disease or infirmity'.
Thus the proposed definition would be: 'Health is a dynamic state of complete physical, mental, spiritual and social well-being and not merely the absence of disease or infirmity'. Despite being approved at the 52nd WHO Assembly in 1999, the proposed definition was not implemented [30]. Katerndahl [12] and Sulmasy [31], while in favour of adding the spiritual dimension to the biopsychosocial model, warn that spirituality is a complex phenomenon and should therefore be approached critically when practitioners adopt a biopsychosocial-spiritual model in any context.

Biopsychosocial-spiritual model for treating SUDs

In the current milieu of treating ethnic minorities with SUDs, the reductionist scientific model is inadequate to meet the holistic needs of clients [32]. Therefore, a biopsychosocial-spiritual model for treating ethnic minorities with SUDs which utilises mindfulness techniques is proposed. Mindfulness approaches in treating persons with mental health and related conditions are rooted in Buddhist Vipassana meditation, introduced by Kabat-Zinn in 1979 [33]. Mindfulness approaches involve 'paying attention in a particular way; on purpose in the present moment' in a non-judgemental way [34,35]. They involve being aware and accepting of thoughts, and acknowledging and accepting lived experiences, thoughts and feelings instead of modifying and/or suppressing such experiences, thoughts and feelings [35]. In other words, clients are encouraged to practise 'reperceiving' (e.g. thinking of the SUD differently, as an issue externalised rather than internalised) and 'attentional control' (how to externalise the SUD), which could facilitate a more mindful response to the SUD [33]. So, mindfulness is the practice and process of beginning where the client is at, being cognizant of the 'here and now' and 'being in the present moment' [33-36]. Furthermore, focusing on the here and now could help the client to enhance and improve focus, have a greater awareness and gain perspective regarding the SUD and the adverse consequences associated with it. Using mindfulness techniques could assist the client in recognising risks associated with relapse and could thus assist in avoiding relapse [36]. Skills to facilitate mindfulness techniques can be taught to diverse people regardless of cultural or S&R backgrounds and can be used in a variety of intervention approaches such as biopsychosocial-spiritual models [34,35,37]. While the need for a biopsychosocial-spiritual model utilising mindfulness techniques in SUD treatment has been well established [34-37], it is not clear how this new model can be integrated within the reductionist scientific conception of the client. Several empirical studies and systematic literature reviews [9,20,29,31] are drawn on in explaining how a biopsychosocial-spiritual model for treatment of SUDs is worth pursuing as a feasible approach for working with ethnic minorities. But first it is imperative to explain the distinction between spirituality and religion.

Spirituality and religion

Spirituality and religion (S&R) are often referred to interchangeably as if they mean the same thing. What is more, there is not a universal definition for either, mainly because the two respective constructs are so diverse [8,38-40]. S&R have to do with one's beliefs, emotional state of mind, experiences and conduct associated with the search for the sacred [10,39].
At the same time, it can be described as a worldview that places emphasis on the divine, a higher power or being, whose followers promote spiritual and human well-being in which care and compassion for others take centre stage as opposed to self-centred materialistic gains [13,39,41]. In this chapter, however, I briefly differentiate between spirituality and religion (see Table 2). With reference to Table 2, religion for the most part is about a set of beliefs about the moral code governing human conduct, while spirituality is not constrained by theological barriers and/or any particular ideology [38]. Rather, it is characterised as the quest to understand and find answers to definitive questions about life, about meaning and about the relationship to the divine, sacred or God, and may (or may not) emanate from or lead to the development of religious rituals and rules [39]. Spirituality is thus a more holistic and inclusive approach as opposed to religion. Spirituality is rooted in multiculturalism and is therefore diverse in terms of cultures and beliefs [38]. The search for meaning, purpose and morality, and fulfilling relations with self, others, the universe and ultimately with reality, are central to spirituality [40,41]. Ubuntu shares similar principles. Spirituality (however a person understands it) has always been part of indigenous and culturally sensitive substance abuse counselling [29]; consider, for example, Alcoholics Anonymous (AA) programmes. I propose that the spiritual dimension (which includes religiosity) should be recognised and incorporated in treatment models regardless of the field of practice.

Understanding the spiritual needs of people with SUDs

"What treatment, by whom, is more effective for this individual with that specific problem and under which set of circumstances?" [42]

Social workers treating ethnic minorities with SUDs are confronted with the complex challenges experienced by clients. Never has Paul's [42] provocative question been more valid than in the current milieu of SUDs. Understanding the spiritual needs of persons with SUDs is important if we are truly holistic in our approach to service delivery, and should thus not be perceived as separate from attending to biopsychosocial needs. To holistically assess people with complex challenges associated with SUDs, knowledge about their spirituality is important [43-45] and can thus not be avoided, especially when clients themselves raise the need to delve into their S&R. Several qualitative studies [14,38,40,43,44,46] that investigated religious coping and spirituality in relation to SUDs indicate that positive religious coping and dimensions of spirituality protect against SUDs. In a qualitative study [47] that focused on barriers and facilitators to successful transition from long-term residential substance abuse treatment, the researchers found that clients' faith in the Divine played a pivotal role in facilitating the transition from in-patient treatment to reintegration back into the family and community. Various studies [8,45-48] indicate the value of addressing S&R as a factor that enhances and aids treatment for SUDs. However, several studies with ethnic minorities [8,46,48,49] have found that the S&R needs of clients are not generally addressed by counsellors (such as social workers), and instead this role is more likely to be facilitated by clergy and/or recovering addicts.
This raises the question, though: who should be facilitating this role if clients make explicit their need for S&R well-being? If such high value is placed on the S&R of clients, a counsellor will require some understanding of S&R, albeit at a theoretical level, in order to provide treatment services effectively. It stands to reason, therefore, that a biopsychosocial-spiritual approach requires the counsellor to reflect on and interrogate his/her own S&R, as well as his/her own ambivalence about venturing into clients' S&R.

A spiritual dimension in treating SUDs

Employing a spiritual dimension in treating SUDs is not a new phenomenon. The complexity of dual diagnoses and the multifaceted challenges associated with SUDs necessitated intervention beyond the biological, psychological and social needs of clients with SUDs [1,2,8,10,15].

Spirituality and religious coping

South Africans are not averse to managing complex life challenges through prayer, meditation and rituals as coping measures. Such practices are often embedded in people's spirituality, which is commonly rooted in their religion and/or culture [6,9,16]. The reflections shared next are those of participants in a recent qualitative research study conducted in the Western Cape of South Africa on the experiences of coloured adults seeking substance abuse services at non-profit organisations (NPOs) [8]. John*, 32, has been in and out of rehab since he was 14 years old. Most of the rehabs he has been to in the past were what he refers to as secular rehabs, meaning that they did not have a spiritual component in their treatment model. According to the facility manager, this is the longest that John has been sober, which is attributed to the fact that John is in a faith-based rehab that employs a biopsychosocial-spiritual model. This is John's narrative of his experience of S&R coping: I started using when I was very young, I must have been like fourteen years old. I went to an organisation that was an out-patient programme where they basically counseled me on a weekly basis. This time is different because there is a strong focus on the spiritual side of the addict. Because of my religious background I am more at home at this rehab and I know if I keep to the programme I will stay sober. John's situation is indeed complex, as SUD cases generally are. Apart from the SUD, he experienced marital problems and homelessness, and his estranged wife refused to grant him visitation with their children. The complexity of SUDs often leaves people discouraged, and many, as in the case of John, acknowledged drawing strength from God, a higher power, and from being part of a religious group that meets on a weekly basis [49]. Like many people with SUDs, John felt that delving into the S&R of clients provides a more holistic treatment approach than secular approaches that avoid S&R completely [49-52].

Spiritual mindfulness

While mindfulness theories originate from Buddhism, people of different religious affiliations have become more open to using these techniques because of their usefulness, especially in treating SUDs. The use of meditation is a common practice in mindfulness techniques [34,35,37,48]. A case example of a client with dual diagnoses explains the use of prayer as a form of mindfulness technique. James*, 26, admits being addicted to drugs and sex. He says the sex craving started after he was rehabilitated, on completion of his first treatment programme.
He believed that when he gave up methamphetamine, the craving for smoking cigarettes started. While in the programme, which was an in-patient treatment programme for adults with SUDs, James gave up cigarettes and methamphetamine. However, when he reintegrated back into his community, his cravings for sex started, something that, according to him, was never an issue in his life before being treated for SUDs. He related that he started smoking after having sex and then later reverted to using methamphetamine, to the point where he felt that he could not cope without using methamphetamine on a daily basis. He explained that he felt that his cravings for methamphetamine were worse than before because he had failed God in falling back into drugs. He explains: 'The righteous will fall seven times…', a Bible verse quoted from Proverbs 24:16-18. Making reference to the quoted scripture, he explained his relapse as follows: …I didn't believe at first but I have experienced it firsthand. I was worse off in a space of a few months after beating my addiction to meth. … I first started stealing. I sold my personal belongings. In a space of a few months I lost so much weight. I knew where it was going to end, because my mind was constantly on how to get my next fix. My family did not confront me, but they could clearly see that I was back on drugs again. I told my sister that the addiction was out of my control and that I wanted to go back to the rehab. James was reflective and had a greater sense of spiritual mindfulness regarding the SUD and relapse. He attributes his recovery to his spiritual awakening more than to the intervention by social workers. It is not uncommon that the need for close relationships with others and/or an encounter with a divine being or higher power is a motivating factor for maintained sobriety in people with SUDs [53]. Whatever the client's reason for wanting to maintain sobriety, the social worker should tap into the motivating factors and amplify them as strengths [54,55]. Motivation is a state of readiness to change in which a predictable course is followed. This is where client-centred approaches such as motivational interviewing (MI) and motivational enhancement therapy (MET) are appropriate models, because they are aimed at bringing about and enhancing change in the problem situation. These methods emphasise resolving clients' ambivalence [54,55]. Homing in on clients' motivating factors, such as restoring relationships with significant others, is important in enhancing motivation and resolving ambivalence. When clients are treated as partners, they are more likely to respond to the counsellor. MI and MET do not represent any particular theoretical perspective and are thus useful to contextualise in terms of an integrated eclectic approach. Furthermore, MI and MET are brief treatment strategies that can be as short as four sessions but can be prolonged depending on the client's level of motivation [53-55]. Thus intervention is time-limited and goal-directed, ending when the client reaches a level of high motivation where he/she is able to take responsibility for his/her own recovery.

Spirituality as a component in treatment programmes

Many community-based organisations in South Africa offer a dual focus, meaning the treatment service includes both a secular social work intervention approach and a spiritual approach.
However, the spiritual component is mostly offered by volunteers who are religious leaders in the communities where the organisations are situated. During the course of the day, most of the programmes make provision for meditation and prayer. Clients gather in groups in separate venues, while those who prefer to meditate on their own find a private space in the organisation to engage in prayer and meditation [8]. Strategies employed in self-help groups [56,57] that focus on the cognitive, spiritual and behavioural changes of persons with SUDs are more accessible because they are found across communities and are free of charge [52]. However, organisations should be cautious about who conducts such rituals and what they entail, so as not to exploit clients and/or impose religion or spirituality on them. Therefore general training should be available for all people involved in substance abuse services, including volunteers. It is imperative for such persons to have basic standards and knowledge for practice to avoid possible harm to clients. With the review and implementation of the current White Paper on Health (NHI) [58] and the norms and standards for social welfare services in South Africa, these types and methods of intervention are worth pursuing as services become more expensive and therefore inaccessible to clients who come from disadvantaged communities, are unemployed and have low incomes.

Spirituality of the counsellor

Social workers rarely set out to gauge clients' spirituality. However, this topic more often than not emerges during interviews, and thus requires social workers to be knowledgeable, not necessarily about every spiritual and/or religious practice out there, but at least able to engage with a client's expression of his/her S&R needs [59,60]. This unconventional way of looking at treating SUDs is particularly important in the South African context, where spirituality is ingrained in the culture and value systems of many South Africans, and more so in the light of current policy and legislation in South Africa calling for evidence-based, culturally sensitive and indigenous practice and research [8]. As counsellors treating ethnic minorities with SUDs, social workers are encouraged to interrogate their own spirituality, as clients more often than not express their own spiritual needs during treatment services [61,62]. It is not uncommon that group work offered by NPOs is generally facilitated by laypersons such as spiritual counsellors [8,63,64]. In some instances these would be trained clergy [63]; however, in most cases these would be recovering addicts [8,51,57] who have had some 'supernatural' experience. This is similar to approaches used in self-help groups such as Alcoholics Anonymous. In such programmes, most of the group work interventions focus on spiritual growth and life skills [8,65]. For example, in a study conducted by Carelse [8] that focused on ethnic minority groups in treatment for methamphetamine, all the participants reported on the important role played by religious clergy and recovering addicts. This is what they had to say: We have two spiritual counsellors …They focus more on the spiritual things like the Christian principles. And then we have a lot of ministers and pastors, and … priests, since the organisation is a faith-based organisation. We do spiritual growth which is run by pastors.
These narratives concur with studies [50,57,63] that focused on professionals' and laypersons' contention with issues of power, oppression and privilege in service delivery. These authors conclude that the functions of professionals, such as social workers, and laypersons, such as recovering addicts, must be differentiated in the helping relationship. As a general rule, training should be available for all people involved in services to people with SUDs, including volunteers, in particular training on how to engage the service user's spirituality [60,61]. Therefore it is imperative for clergy and recovering addicts involved in treatment for SUDs to have basic standards and knowledge with regard to spiritual intervention to avoid possible harm to clients. More importantly, service providers will have to interrogate their own spirituality (however they perceive it) in order to engage meaningfully with the spirituality of others.

Spirituality and maintained sobriety

In pursuing a state of equilibrium, clients felt that it was important for them to take the first step of the 12-step programme [57,62] and admit that they were powerless over SUDs and that their lives had become unmanageable. In particular, clients in the 12-step programmes believed that a power greater than themselves could restore their emotional and spiritual well-being [62], where spirituality and a connection to a higher power are pivotal to the recovery process. These are some of their perspectives in this regard, from a study on the coping resources of a minority group of adults in a low socioeconomic community on the outskirts of Cape Town, South Africa [8]: It brought me closer to my higher father and relying on him and to acknowledge that he took me out of, how can I say, I was lost totally. I believe it's prayer that God is opening for me. And I never prayed when I was using … my mind was all over the place but now I pray with sincerity and without any mind-altering. The clients' lived experiences are confirmed in the study by Miller and Rollnick [55] that explored the role of spirituality in intervention outcomes after a 12-step programme. As Miller and Rollnick [55] point out, clients experienced an increased spiritual awareness and growth after completing the treatment programme. The findings also suggest that spirituality may have a positive effect on maintained sobriety if the person continues to engage in mindfulness strategies. In a study by Amaro and Black on the role of spirituality in the healing and recovery of Black women with histories of trauma and substance abuse, participants also expressed more hope and motivation to maintain their sobriety because of the spiritual awareness and growth that followed their involvement in a programme incorporating mindfulness strategies linked to spirituality during treatment [35].

Spiritual complementary therapies

Treatment services for SUDs sometimes involve alternative therapies too [34,37,62]. For ethnic minorities in low socioeconomic communities, alternative therapies such as reflexology are not common [8]. Participants could experience being overwhelmed by such unfamiliar strategies, as one client indicated [8]: 'I could not believe that someone so decent touched me. Me, I am an addict. People normally treat us as dirty and filth, the low lives of society'. Therefore clients must be introduced to such new methods in a client-centred manner that respects their S&R and cultural beliefs.
The social worker's personal religiosity, training and sensitivity to the client's spirituality help in using an integrative approach that includes clients' alternative therapies [10,20,23,64]. Therefore, educating and sensitising social workers in terms of S&R and alternative therapies is of paramount importance [23,40,44,50,55,63,65]. Ongoing training and intrinsic spirituality on the part of the social worker offering services to people with SUDs could be a catalyst when using an integrated approach [60,63-65]. There is a growing body of literature on spiritual complementary or alternative therapies; however, there is a dearth of research on their efficacy for treating ethnic minorities with SUDs [?]. Still, an integrated body, mind and spirit approach to intervention in social work practice that is researched, and therefore evidence-based, can be an advantage in treating such groups [62]. A combination of Eastern and Western philosophies, together with current research in an integrated practice approach offering guidelines for assessment and intervention not limited to spiritual beliefs, appears to be a viable approach.

Spirituality as a foundation for restoring human dignity

The role of the social worker in any setting is to provide support and guidance. Participants in Carelse's study [8] reported that the counselling, interest and compassion from the social workers motivated them to stay in the programme and to pursue their recovery goals. They described the service provided as very good work, noting that all the things they had learnt would help to prevent relapse, to stay positive and to keep their focus on their recovery goals. Participants' views about the benefits of utilising social work services provided by the NPOs offering treatment to adults with SUDs from low socioeconomic backgrounds can be summed up thus: the benefits of prayer and meditation seem to have developed the clients' problem-solving techniques and efforts to manage triggers for relapse. Coping with the stress and stressors associated with SUDs involves deliberate efforts, such as mindfulness strategies, which in this study took the form of prayer and meditation to combat and deter SUDs [34,35,37,62]. Therefore it can be deduced that maintained sobriety is largely dependent on the nature and scope of the treatment programme, and more so when mindfulness strategies in a biopsychosocial-spiritual approach are embedded in treatment. Incorporating spirituality in treatment programmes challenges clients to reflect on their quality of life; for example, learning new ways of dealing with SUDs enhances their self-worth and dignity as they gain higher levels of person-environment fit, or a state of equilibrium.

Conclusion

There is a growing demand for integrated approaches to treating minority and marginalised people with SUDs. Such treatment requires a continuous process of interrogating theories and related approaches to suit clients' needs. Social work services have evolved from a generalist approach to a person-centred approach over the past 20 years. In this process, the spiritual dimension of work with persons with SUDs gained progressively more prominence. Currently, a biopsychosocial-spiritual approach is of paramount importance for offering integrated and holistic treatment for ethnic minority persons with SUDs.
Thus a biopsychosocial-spiritual approach is proposed, particularly in the South African context, where spirituality is ingrained in the culture and value systems of coloured people. This chapter highlighted the importance of an integrated eclectic approach and the feasibility of a biopsychosocial-spiritual model in treating SUDs in marginalised communities in South Africa. Lessons could be learnt from the experiences shared for integrating spirituality into SUD treatment in similar contexts. What is clear is that the value of a biopsychosocial-spiritual approach in substance abuse treatment in South Africa cannot be ignored. This is by no means the only model that can be used with marginalised communities, but it is one that is emerging strongly in treating SUDs when working with such ethnic minorities. The value of spirituality as it relates to person-centredness in treating SUDs in minority groups is a topic worth pursuing in future research. The inclusion of a person-centred approach and mindfulness strategies in treating SUDs should also be further investigated. Similarly, Bachelor of Social Work (BSW) education as well as continuing professional development training should incorporate aspects of students' and practitioners' personal spiritual beliefs, the role these play in the professional relationship, as well as the impact of spirituality on the intervention process. Therefore it will be imperative that due diligence is given to the personal S&R beliefs of students and practitioners (whether they believe in the transcendent, a higher power or not), because these are as important as those of the clients they serve.
Optimal Measurements for Tests of EPR-Steering with No Detection Loophole using Two-Qubit Werner States

It has been shown in earlier works that the vertices of Platonic solids are good measurement choices for tests of EPR-steering using isotropically entangled pairs of qubits. Such measurements are regularly spaced, and measurement diversity is a good feature for making EPR-steering inequalities easier to violate in the presence of experimental imperfections. However, such measurements are provably suboptimal. Here, we develop a method for devising optimal strategies for tests of EPR-steering, in the sense of being most robust to mixture and inefficiency (while still closing the detection loophole, of course), for a given number $n$ of measurement settings. We allow for arbitrary measurement directions, and arbitrary weightings of the outcomes in the EPR-steering inequality. This is a difficult optimization problem for large $n$, so we also consider more practical ways of constructing near-optimal EPR-steering inequalities in this limit.

I. INTRODUCTION

It is one of the most well-known and unintuitive features of quantum mechanics that entangled quantum systems can, in a way that disturbed Einstein, instantaneously affect each other. Specifically, the famous Einstein, Podolsky, and Rosen (EPR) paper of 1935 [1], which made the first prediction of this feature, used it to argue that quantum mechanics itself must be incomplete. The EPR paper presents a thought experiment involving a maximally entangled state of two systems, for which measurement of the first (Alice's) system forces the second (Bob's) system into one of a set of basis states, with the basis depending on the choice of measurement made upon the first. That is, Alice's choice of measurement determines which of Bob's observables is predictable by her. But EPR implicitly rule out instantaneous action-at-a-distance, assuming that "no real change can take place in the second system in consequence of anything that may be done to the first system" (that is, Bob's system is not disturbed [2], explaining why Einstein was). Hence they conclude that these different observables must have well-defined values regardless of Alice's choice of measurement. But quantum mechanics forbids simultaneous values for non-commuting observables. Thus, they say, "the wave function does not provide a complete description of the physical reality." Contrary to EPR, Schrödinger argued, in the same year [3], that quantum mechanics was not incomplete, but idealised. He used the term "steering" for the effect EPR identified, namely that "as a consequence of two different measurements performed upon the first system, the second system may be left in states with two different [types of] wavefunctions." But he thought this was unrealistic when describing systems that are spatially distant, because some sort of decoherence would prevent the entanglement from being established in such situations. In this way, he, too, thought that instantaneous action at a distance could be kept out of the most fundamental description of reality. The EPR paper advocated the possibility of local hidden variables (LHVs) in quantum systems, which would account for the illusory (in their view) nonlocality in the theory [4,5]. However, it was proved by Bell in 1964 [6] that there exist predictions of quantum mechanics for which no possible LHV model could account. Finally, in 1982, examples of Bell nonlocality were experimentally realised [7].
Even without a loophole-free test of Bell nonlocality, it has become widely accepted that (contrary to Schrödinger's hope) entanglement can exist over long distances, and that Bell nonlocality is real. Entanglement and Bell nonlocality have been rigorously defined for decades; however, it was not until relatively recently (2007 [8,9]) that the particular class of nonlocality described in the EPR paper was actually formalised. The ability of an entangled quantum state to nonlocally affect another (though not necessarily vice versa [10-12]; see also [27]) has come to be known as EPR-steering [13-16]. The nonlocality described in the EPR paper had been studied mainly in the context of their position-momentum example (see, e.g., [17,18]), but the formal notion introduced in Ref. [9] has opened the door to a series of new experiments. Following the first demonstration of this general notion of EPR-steering in [14], three experiments have each closed the detection loophole in tests of EPR-steering. One did so while also closing the locality loophole over 48 m [19] (thus definitively disproving Schrödinger's suggested resolution of the EPR paradox). Another closed the detection loophole with only two different measurements (as in the original EPR scenario) by employing state-of-the-art transition edge detectors [20]. The remaining paper closed the detection loophole using commonplace photon detectors while also enduring the losses of transmitting the measured photons through an extra kilometre of fibre-optic cable [15]. The accomplishments of this third paper are due to the highly loss-tolerant EPR-steering criteria that it employed to rigorously close the detection loophole. Reference [21] describes the formulation of these criteria in more detail, also showing them to be more loss-tolerant than another class of EPR-steering criteria (which includes those used in Refs. [19,20]). In this paper, we reconsider those criteria and reveal that they are actually not optimally loss-tolerant EPR-steering criteria. In doing so, we demonstrate a method for optimising similar tests of EPR-steering, and show that the optimal measurement strategies for such an experiment are just as practicable, significantly more loss-tolerant in some regimes, and (unlike those used in Ref. [15]) applicable for an arbitrary number of different measurements by Alice. In Sec. II of this paper, we briefly review the operational definition of EPR-steering and the family of states we consider in this paper. In Sec. III we review linear EPR-steering criteria, including postselection, then identify and close the inefficient detection loophole this potentially incurs [22,23]. We then review, in Sec. IV, the EPR-steering criteria obtained when using Platonic solid measurement strategies. We discuss the limitations of Platonic solid strategies, including their inherent restrictions in measurement number n (i.e., n ≤ 10), and consider geodesic solid strategies (introduced for n = 16 in Ref. [15]), which circumvent this restriction. Going from Platonic solids to geodesic solids is a more radical step than it may first appear. Because it is no longer the case that every vertex is equivalent to every other, a non-trivial constraint can be used to obtain stronger criteria (than those in Ref. [15]): that, when post-selection by Alice is allowed, the probability of a null result be independent of Alice's measurement choice.
Moreover, there is no longer any symmetry-based justification for all vertices to be equally weighted; for a geodesic solid comprising two dual Platonic solids (such as the n = 16 of Ref. [15], and n = 7 here), even tighter criteria will result from weighting the two sets differently. All this is introduced in Sec. IV, and serves as a springboard to the completely general consideration in Sec. V. There, we allow arbitrary arrangements of n vertices, with arbitrary weighting of each vertex, and find still tighter criteria for n ranging from 4 to 8. For the states we consider, these are the most loss-tolerant EPR-steering criteria possible for any chosen number of measurements, n. We conclude in Sec. VI with a discussion of experimental practicalities and future work. Therein, we address the benefits and difficulties presented by the most optimal measurement strategies for each n, and consider whether optimality alone necessarily makes these the best possible choices for constructing experimental tests of EPR-steering.

II. TESTS OF EPR-STEERING

The operational definition of EPR-steering that we employ in this paper is such that one experimental party, Bob, possesses a quantum state, and another party, Alice, claims to possess a state that is entangled with Bob's. Bob asks Alice to make one out of a pre-specified set of measurements on her state, and inform him of her results. Using both Alice's results and the results of his own measurements on his system, Bob then calculates the value of some EPR-steering parameter and is only convinced that Alice is telling the truth if there is no local hidden state (LHS) model which could attain the same value. LHS models assume that Bob's quantum state is preexisting, and can only depend on Alice's results as much as can be explained by some local (to Alice) hidden variable that may be correlated with Bob's state. This is used to define EPR-steering bounds by constructing a theoretical limit on some property of Bob's system, based on the assumption that Bob's quantum system cannot be nonlocally affected by Alice's measurements. Thus, a violation of this limit demonstrates EPR-steering. The EPR-steering criteria that we will use are based upon measurements of qubit observables (typically photon polarisation, but we will also use the terminology of spin). Moreover, we specialise to criteria suitable for two-photon entangled states that are Werner states:

$$ \rho^{\alpha\beta} = \mu\,|\psi_s\rangle\langle\psi_s| + (1-\mu)\,\frac{\hat{I}^{\alpha}\otimes\hat{I}^{\beta}}{4}, \qquad (1) $$

where |ψ_s⟩ represents the spin singlet state:

$$ |\psi_s\rangle = \frac{1}{\sqrt{2}}\left( |{\uparrow}\rangle^{\alpha}|{\downarrow}\rangle^{\beta} - |{\downarrow}\rangle^{\alpha}|{\uparrow}\rangle^{\beta} \right). $$

The α and β superscripts respectively denote properties of Alice's and Bob's subsystems. The second term represents pairs of qubits that are uncorrelated, and the first term represents qubits that are maximally entangled. Thus the purity parameter µ ≤ 1 determines the degree of entanglement in the ensemble ρ^{αβ}, with entanglement being present for µ > 1/3 [24].
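As an editorial aside, the µ > 1/3 threshold is easy to verify numerically. The following minimal sketch (not part of the original analysis) builds the Werner state of Eq. (1) and checks entanglement via the partial-transpose criterion, which is necessary and sufficient for two qubits:

```python
# Minimal sketch: construct the Werner state of Eq. (1) and test the
# entanglement threshold mu > 1/3 via the partial transpose (PPT) criterion.
import numpy as np

def werner(mu):
    psi_s = np.array([0, 1, -1, 0]) / np.sqrt(2)      # singlet (|01> - |10>)/sqrt(2)
    return mu * np.outer(psi_s, psi_s) + (1 - mu) * np.eye(4) / 4

def partial_transpose(rho):
    """Transpose Bob's subsystem of a two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: a, b, a', b'
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

for mu in [0.2, 1/3, 0.34, 0.5, 1.0]:
    min_eig = np.linalg.eigvalsh(partial_transpose(werner(mu))).min()
    verdict = 'entangled' if min_eig < -1e-12 else 'separable (PPT)'
    print(f"mu = {mu:.3f}: min PT eigenvalue = {min_eig:+.4f} ({verdict})")
```

The minimum partial-transpose eigenvalue is (1 - 3µ)/4, which goes negative exactly at µ = 1/3, reproducing the threshold quoted above.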
III. LINEAR CONVEX CRITERIA

We will consider EPR-steering criteria that are analogous to (linear) entanglement witnesses [25]. That is, the expectation value of a correlation function between Alice and Bob's spin measurements, summed over the measurement settings. Since in tests of EPR-steering we cannot trust Alice's detectors or the results she states [8,9], this correlation function must be defined generally as a classical expectation value over Alice's reported result A_r, denoted by E_{A_r}, as follows:

$$ S_n := -\frac{1}{n}\sum_{r=1}^{n} E_{A_r}\!\left\langle A_r\,\hat{\sigma}^{\beta}_r \right\rangle, \qquad (2) $$

where each r denotes a particular measurement setting on the Bloch sphere, and n denotes the total number of such settings. Bob's qubit observable is σ̂^β_r, and A_r ∈ {−1, 1} is the result Alice submits for her measurement. We can restrict Alice's results to these values of equal magnitude because of the symmetry of the Werner state. If Alice, and her detectors, were trustworthy, then the result A_r would correspond to a measurement of her qubit observable σ̂^α_r. Then the correlation function between Alice and Bob's results can be written as

$$ S_n = -\frac{1}{n}\sum_{r=1}^{n} E_{A_r}\!\left[ A_r \left\langle \hat{\sigma}^{\beta}_r \right\rangle_{\rho^{\beta}_{A_r}} \right], \qquad (4) $$

where ρ^β_{A_r} is the state of Bob's system, conditioned upon A_r being the result of Alice's measurement. If Alice and Bob share an entangled state as in Eq. (1), and σ̂^α_r = σ̂^β_r, then the value of this function is easily shown to be µ. However, Bob must consider that Alice might not share an entangled state with him, and could be employing an LHS model, in which case S_n would be calculated from

$$ S_n = -\frac{1}{n}\sum_{r=1}^{n} \sum_{\xi} P(\xi)\, A_{r,\xi} \left\langle \hat{\sigma}^{\beta}_r \right\rangle_{\rho^{\beta}_{\xi}}, \qquad (5) $$

where ξ represents the local hidden variable(s) inherent to Bob's system, upon which Alice bases her knowledge of Bob's state. In this scenario, Bob receives each state ρ^β_ξ with probability P(ξ), and Alice submits results A_{r,ξ} dependent upon both r and ξ. This expression relies on the assumption that there is an LHS model of Bob's system, the existence of which means that there is a bound upon Eq. (5) that is not present in a quantum mechanical system [13]. In order to ensure that this is as rigorous a test as possible, in defining our EPR-steering bound we will assume that Alice controls anything that depends upon the hidden variable(s) ξ; namely, P(ξ), ρ^β_ξ, and A_{r,ξ}. Note that the only thing that does not have any dependence upon ξ is Bob's choice of measurement. The assumption of locality in this LHS model is manifested in Alice's inability to influence or predict Bob's measurement choice. To this end, we must assume that Bob randomises the order in which he performs each of his measurements, and that Alice does not have foreknowledge of, or access to, his random number generation (this is referred to in other works as the Free Will Assumption [6], which we will not be further addressing). Under the above conditions, it is apparent that −Σ_r A_{r,ξ} ⟨σ̂^β_r⟩_{ρ^β_ξ} is bounded above by Σ_r |⟨σ̂^β_r⟩_{ρ^β_ξ}|, which is always achievable by choosing a suitable sign for A_{r,ξ}. A proof of this, and of which ensembles of states a cheating Alice can use to attain this optimal value, are given in Ref. [21]. But if the only concern is to maximise S_n (an assumption to which we will return in Sec. V) then this can clearly be achieved for a single state ρ^β_ξ. Even if there were more than one state that maximised S_n, there is no reason (at this stage) for Alice to use more than one. Therefore, we can take P(ξ) = 1 for that state, and ξ will now denote any choice that maximises S_n. The A_{r,ξ} values corresponding to this choice are obviously A_{r,ξ} = −sign(⟨σ̂^β_r⟩_{ρ^β_ξ}). However, to evaluate the bound on S_n it is more convenient to keep A_r, writing

$$ S_n = -\frac{1}{n}\sum_{r=1}^{n} A_r \left\langle \hat{\sigma}^{\beta}_r \right\rangle_{\rho^{\beta}_{\xi}} = \left\langle -\frac{1}{n}\sum_{r=1}^{n} A_r\,\hat{\sigma}^{\beta}_r \right\rangle_{\rho^{\beta}_{\xi}}, $$

with the representation on the right being included to highlight that this entire value can be considered as the expectation value of an operator. To seek out the largest possible value of this expression, we will use the fact that the largest possible expectation value of any operator is equal to the largest eigenvalue of that operator. Therefore, the EPR-steering bound we can derive for S_n is

$$ S_n \le k_n := \max_{\{A_r\}} \lambda_{\max}\!\left[ -\frac{1}{n}\sum_{r=1}^{n} A_r\,\hat{\sigma}^{\beta}_r \right], \qquad (6) $$

where λ_max denotes the maximum eigenvalue of this operator, and the other maximisation is over the n values of A_r.
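A brute-force evaluation of the bound in Eq. (6) is straightforward for small n, since for a qubit the maximum eigenvalue of v·σ̂ is simply |v|. The sketch below (an editorial illustration; the example directions are not the paper's optimised settings) searches over all 2^n sign assignments:

```python
# Illustrative evaluation of k_n from Eq. (6): maximise over all sign choices
# A_r of the largest eigenvalue of (1/n) sum_r A_r u_r . sigma, which for a
# qubit equals the norm of the Bloch vector (1/n) sum_r A_r u_r.
import itertools
import numpy as np

def k_n(directions):
    """directions: (n, 3) array of unit Bloch vectors for Bob's settings."""
    u = np.asarray(directions, dtype=float)
    n = len(u)
    best = 0.0
    for signs in itertools.product([-1.0, 1.0], repeat=n):
        v = (np.array(signs)[:, None] * u).sum(axis=0) / n
        best = max(best, float(np.linalg.norm(v)))
    return best

# Example: three orthogonal axes (the n = 3 regularly spaced arrangement)
print(k_n(np.eye(3)))   # sqrt(3)/3 ~ 0.577, the known n = 3 bound
```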
This restricts the values of S_n to the same range for any n-value, allowing meaningful comparison between them. While it seems logical to weight each measurement result equally, by applying 1/n to each term or to the whole sum, we will re-evaluate this assumption in Sec. IV C.

A. The Inefficient Detection Loophole

In keeping with our assumption of locality, any null results that Bob obtains for his measurements cannot be predicted by, or used to any advantage by, a cheating Alice in an LHS model. Because we trust that Bob's state, and his measurement thereof, is governed by our quantum mechanical model of it, we can assume that Bob's probability of missing any result is independent of the value that result would have taken (had it not been null). Therefore, we will assume that the probability distribution of the results Bob did not obtain would have been the same as the probability distribution of Bob's measured results. This is known as a fair sampling assumption (FSA), and is generally valid for quantum systems as it is based upon the principles of quantum mechanics (in the behaviour of detectors). However, since we cannot assume that Alice's results are generated through measurement of a quantum state, we cannot apply any FSA to her results in any test of EPR-steering (which is, in part, a test of quantum mechanics itself). To simply postselect out any of Alice's null results would open an inefficient detection loophole in our test.

B. Inequalities Allowing Post-selection

Even though the FSA cannot be made for Alice, this does not mean that it is not permissible to postselect on Alice getting (or claiming to get) a non-null result. This postselection is permissible as long as the bound k_n in the inequality Eq. (6) is adjusted (to a higher value, naturally), to take into account the extra flexibility offered to a dishonest Alice if she is allowed to submit null results with a certain probability 1 − ε. Since Bob has no way of knowing whether this probability is due to genuine inefficiencies or not, we refer to ε (such that 0 ≤ ε ≤ 1) as Alice's apparent efficiency. Alice's optimal cheating strategies, which give us the new bounds k_n(ε) for the post-selected correlation function, were derived in Ref. [15], with more details in Ref. [21]. The analysis in the remainder of the present paper builds on this, so we briefly review it here. If Alice chooses to submit non-null results only for a predetermined set of m measurement settings, with m ≤ n, her optimal ρ^β_ξ is defined by the values of these m settings. Such a strategy can be referred to as a deterministic strategy, and the maximal S_n values obtainable with such a strategy are calculated to be

S_n^det(ε_m) = max_{{A_r}} λ_max[−(1/n) ∑_{r=1}^{n} A_r σ̂_r^β],  A_r ∈ {−1, 0, +1} with ∑_r |A_r| = m,   (7)

where ε_m = m/n is the apparent efficiency associated with any such strategy, which is necessarily constrained to be ε_m ∈ {1/n, 2/n, . . . , (n − 1)/n, 1}. The sum in the above expression can be over either n or m settings, since the maximisation over {A_r} is constrained such that a portion ε_m n = m of the A_r values will be nonzero. An experimental determination of S_n would require many repetitions for each of the n settings, and Alice is not constrained to choose the same measurements to be null in every iteration, nor even to choose the same number of nulls in every iteration.
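In the same Bloch-vector picture, Eq. (7) only enlarges the search space to A_r ∈ {−1, 0, +1} with exactly m nonzero entries. A sketch of ours, reusing the conventions of the previous snippet:

```python
def deterministic_bounds(axes):
    """S_n^det(eps_m) of Eq. (7) for each m = 0, ..., n (eps_m = m/n):
    max over A_r in {-1, 0, +1}, m of them nonzero, of |sum_r A_r n_r| / n."""
    n = len(axes)
    best = [0.0] * (n + 1)
    for A in itertools.product([-1, 0, 1], repeat=n):
        m = sum(abs(a) for a in A)
        best[m] = max(best[m], np.linalg.norm(np.dot(A, axes)) / n)
    return best

print(deterministic_bounds(np.eye(3)))   # m = 0..3 for the octahedron
```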
If Alice uses a combination of deterministic strategies (a nondeterministic strategy), she is also able to avoid constraining her apparent efficiency to be in {ε_m}. If using a nondeterministic strategy, the maximal S_n value attainable for any apparent efficiency ε is

K_n(ε) = max_{{w_m}} ∑_{m=1}^{n} w_m S_n^det(ε_m),   (8)

where w_m defines the weighting with which Alice uses each deterministic strategy, each of which is defined by its apparent efficiency, ε_m. Thus, the sum over m indexes all optimal deterministic strategies Alice could use (there is no benefit for Alice to ever use suboptimal deterministic strategies, so they are not considered). The weightings w_m are normalised by ∑_{m=1}^{n} w_m = 1, and constrained such that ∑_{m=1}^{n} w_m ε_m = ε. It can be seen from the form of Eq. (8) that K_n(ε) is piecewise linear in ε, interpolating between the values attained by the optimal deterministic strategies. The above construction gives the bound a dishonest Alice can achieve for the non-postselected correlation function. Since she declares non-null results with probability ε (which is a quantity Bob directly calculates from the statistics of her declared results), the bound on the postselected function S_n will be

k_n(ε) = K_n(ε)/ε.   (9)

IV. BOB'S MEASUREMENT STRATEGIES

Linear EPR-steering criteria of the above form have been studied before, both with the FSA for Alice [14] and without (i.e., closing the detection loophole) [15,21]. In all of these works, measurement orientations that are regularly spaced about the Bloch sphere were used. That is, the spacing between vertices is the same for any pair of nearest neighbours. The only such arrangements that exist are those with 2, 3, 4, 6, or 10 different measurement axes, which correspond to the vertices of the three-dimensional Platonic solids (with the exception of n = 2, for which the tetrahedron, whose vertices do not come in antipodal pairs, was replaced by the square). Regularly spaced measurements are as far apart as it is possible to be from their nearest neighbours on the Bloch sphere, and in this sense are as different as possible. This minimises the ability of Alice to choose a state ρ^β_ξ that leads to high values of ⟨σ̂_r^β⟩_{ρ^β_ξ} for many σ̂_r^β. Intuitively, this seems like a good choice for making it as hard as possible for a cheating Alice to obtain high S_n values, thereby making the rigorous EPR-steering bounds as low as possible, and thus making it as easy as possible for an honest Alice to violate the bound. It should be noted that this reasoning would not necessarily apply for all kinds of photon polarisation states, as it relies on the symmetry of Werner states, which are invariant under identical unitary transformations performed on both sides. Figure 1 displays the EPR-steering bounds calculated from Eq. (9) with measurement orientations defined by Platonic solid vertices. Looking closely at this graph, one can observe that the Platonic solid measurements for n = 4 are clearly not optimal in general, since they give a bound above that for n = 3 for 0.48 ≲ ε ≲ 0.58. An optimal set of four measurements would never require a higher degree of correlation to demonstrate EPR-steering than any set of three measurements. We will return to this issue in Sec. V. Recall that for a Werner state the degree of postselected correlation S_n is μ, which can approach unity. Thus we see that for μ close to one, the bounds k_n(ε) are quite loss-tolerant, especially as n increases. Indeed, if μ = 1, EPR-steering is demonstrable so long as ε > 1/n. Moreover, in almost all places, use of more measurements results in EPR-steering bounds that are more loss-tolerant.
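Equations (8) and (9) amount to a small linear program: the best mixture of deterministic strategies with mean efficiency ε, divided by ε. A sketch of ours using scipy, continuing from the previous snippets:

```python
from scipy.optimize import linprog

def k_n_eps(axes, eps):
    """Post-selected bound of Eq. (9): k_n(eps) = K_n(eps)/eps, with
    K_n(eps) from Eq. (8) as the best mixture of deterministic strategies."""
    n = len(axes)
    d = np.array(deterministic_bounds(axes))          # values at eps_m = m/n
    res = linprog(-d,                                  # maximise sum_m w_m d_m
                  A_eq=[np.ones(n + 1), np.arange(n + 1) / n],
                  b_eq=[1.0, eps],
                  bounds=[(0, 1)] * (n + 1))
    return -res.fun / eps

print(k_n_eps(np.eye(3), 0.5))   # octahedron bound at eps = 0.5
```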
However, regularly spaced measurement sets do not exist for any n above 10, so we must abandon our scheme of using regularly spaced measurements if we wish to use n > 10. But on the other hand, our restriction to regularly spaced measurements was based upon the intuition that they were the best choice for their respective numbers of measurements, whereas this is demonstrably not true everywhere, as discussed above. Therefore, there may be little reason to continue imposing this condition, and little reason to thusly limit our measurement number.

A. Geodesic Solids

The reader may notice that Fig. 1 includes not only the Platonic solid bounds mentioned above, but also includes a bound for n = 16 measurements, which cannot correspond to any Platonic solid. This was derived, and employed experimentally, in Ref. [15]. The measurement orientations used to obtain this bound correspond to the vertices of a shape that incorporates the vertices of the icosahedron (n = 6) and the dodecahedron (n = 10), face-centred on one another (as these two shapes are a dual pair). The resulting arrangement of vertices creates a shape that is a geodesic solid: each face is an isosceles triangle, so a vertex's neighbours are not regularly spaced, but are quite close to being so. This characteristic is true of any geodesic solid, so given the obvious benefits of using this n = 16 arrangement, it would seem that geodesic solids are one possible solution for obtaining high-n measurement sets with robust bounds. Construction of a geodesic solid does not require two Platonic solids to be superimposed, but only requires vertices to be added to the face centres of a Platonic solid, or another geodesic solid. Thus, they cannot be constructed with arbitrary numbers of vertices, but there does not exist any upper bound upon the number of vertices that can be used to construct one. Having seen that the Platonic solids are not necessarily optimal anyway, the fact that the vertices of a geodesic solid are not regularly spaced is not really much of a drawback. Indeed, the viability of geodesic solids may even raise the question of whether a little asymmetry may be more optimal than regularly spaced measurements even for small n. This will be fully explored in Sec. V. Meanwhile, we will use the geodesic solids as a first investigation into the way asymmetry can affect the derivation of EPR-steering bounds, enabling more loss-tolerant tests than any previously calculated.

B. Measurement-independent null result rates

When a cheating Alice suspects that Bob is keeping track of her null result distribution, her foremost consideration in optimising S_n will be to ensure that this distribution reflects the same profile as that of an honest Alice. This means that Alice should ensure the probability of her reporting a null result on any given measurement is equal to the probability of her reporting a null result on any other measurement. She must do this even if submitting nulls more often for some measurements would allow her to obtain a higher S_n value. In other words, if Bob does verify that the null rate is independent of Alice's supposed setting, then he will be convinced of the reality of EPR-steering for a lower S_n value than without this verification, thereby making the test more loss-tolerant. The uniform spacing of the Platonic solids' vertices grants them large symmetry groups; the group of all transformations which leave the polyhedron invariant.
In particular, all vertices are equivalent under the action of each solid's symmetry group. Therefore any cheating strategy Alice adopts performs precisely as well if it is symmetrised by application of the symmetry group, and this ensures that the null rate can be made independent of Alice's supposed setting. For example, when m = 2 for any of the Platonic measurement sets, Alice's optimal choice of ρ^β_ξ is any state with its spin axis centred on an edge of the Platonic solid (i.e., equidistant between any pair of adjacent vertices). Such a strategy is equally optimal regardless of which adjacent vertex pair is chosen, because all edges are the same length. But for any geodesic solid, not all edges are the same length, so (considering m = 2 again) not all edge-centres correspond to optimal strategies. Thus, it may not necessarily be possible to use a nondeterministic strategy that both attains the maximal S_n value and keeps Alice's null probabilities equal. Such limitations would be expected to become even more important for the more complicated cheating strategies [which would be strategies near ε = (1 + n)/(2n): the middle of each curve]. For illustration, let us consider the geodesic solid that is constructed by combining the n = 3 and n = 4 Platonic solid vertices (because they are dual to one another), to obtain n = 7. To simply maximise the numerical value of S_n(ε), without constraining her null probabilities for each measurement to be equal, Alice can obtain the "asymmetric" bound in Fig. 2. If a cheating Alice takes care to obey this symmetry condition, then the maximum S_n(ε) she can attain is the "symmetric" bound in Fig. 2. The difference is negligible for most efficiencies, and is most significant near ε = 1/2. A clearer plot of the numerical difference between these two bounds is shown later. Figure 3 shows how a cheating Alice must depart from her reasonably simple asymmetric strategy in order to attain the maximum bound under the symmetry condition. The partitions in this figure show the optimal mixture of deterministic strategies by Alice, for each possible ε-value. The height of each partition represents the weighting with which Alice must send Bob each of the ensembles displayed on the shape within that section, in order to attain the maximal value of S_n. For example, Alice's optimal symmetric strategy for ε = 0.3 requires Alice to choose Bob's states ρ^β_ξ such that 10% come from the ensemble shown on the solid labelled (0,1), 70% from the (1,1) solid, and 20% from the (1,2) solid. The states in each ensemble must also be submitted equally frequently; e.g., in this strategy, the eight states on the (0,1) solid must each be submitted 10%/8 = 1.25% of the time, in total. The bracketed numbers (m_3, m_4) that label each solid in Fig. 3 respectively represent the number of non-null responses, for the associated deterministic strategy, to Bob's n = 3 and n = 4 measurements (that make up the n = 7 set). For a deterministic strategy i, identified with the pair (m_3^i, m_4^i), we calculate the deterministic bound quite similarly to before, as

D_n(i) = max λ_max[−(1/n)(∑_{r=1}^{n_3} A_r σ̂_r^β + ∑_{r=n_3+1}^{n_3+n_4} A_r σ̂_r^β)],   (10)

where σ̂_r^β corresponds to Bob's n = 3 measurements for 1 ≤ r ≤ n_3 ≡ 3 (the first sum), and to the n = 4 measurements for n_3 < r ≤ n_3 + n_4 = 7 (the second sum), with n_4 ≡ 4; the maximisation is over the values of {A_r} with exactly m_3^i nonzero terms in the first sum and m_4^i in the second. The index i is over all possible combinations of (m_3^i, m_4^i) and thus the maximisation considers the optimal deterministic ensembles for every such combination [there will be (n_3 + 1)(n_4 + 1) of these].
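Equation (10) is again a finite search, now with the non-null counts fixed separately on the two vertex subsets. A sketch of ours for the n = 7 octahedron-plus-cube set, in the same Bloch-vector picture as the earlier snippets:

```python
oct_axes = np.eye(3)                                          # n3 = 3
cube_axes = np.array([[1, 1, 1], [1, 1, -1],
                      [1, -1, 1], [1, -1, -1]]) / np.sqrt(3)  # n4 = 4

def D7(m3, m4):
    """Deterministic bound of Eq. (10) for the strategy (m3, m4)."""
    best = 0.0
    for A3 in itertools.product([-1, 0, 1], repeat=3):
        if sum(map(abs, A3)) != m3:
            continue
        for A4 in itertools.product([-1, 0, 1], repeat=4):
            if sum(map(abs, A4)) != m4:
                continue
            vec = np.dot(A3, oct_axes) + np.dot(A4, cube_axes)
            best = max(best, np.linalg.norm(vec) / 7)
    return best

print(D7(1, 4))   # the single strategy at eps = 5/7 discussed in the text
```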
An optimal nondeterministic strategy is composed of these D_n(i) as

K_n(ε) = max_{{w_i}} ∑_i w_i D_n(i),   (11)

where w_i is the weighting of each deterministic strategy, D_n(i), and is constrained such that ∑_i w_i = 1, and such that the apparent efficiency of the strategy is ∑_i w_i ε_i = ε, where ε_i = (m_3^i + m_4^i)/n. Although constructed slightly differently, these are the same relations as given in Sec. III B. In order for K_n(ε) to give the optimal symmetric nondeterministic bound, we must also constrain Alice's null probability to be independent of Bob's measurement orientation. This can be done by constraining w_i such that the mixing of strategies must be in proportions where, over the entire nondeterministic strategy, the null probability for n = 3 is equal to that for n = 4. Therefore, w_i must also satisfy

(1/n_x) ∑_i w_i m_x^i = ε,   (12)

for both x = 3 and x = 4. Without this constraint, the optimal cheating strategies for Alice [those shown in Fig. 3(a)] would lead to very asymmetric reporting of null results. For example, at ε = 5/7 ≈ 0.714 there is a single deterministic strategy: the (1, 4) strategy, with an apparent efficiency of ε = 5/7. This strategy requires ε_3 = 1/3 and ε_4 = 1, which means Alice would never report a null result for one of Bob's measurements drawn from the cube (n = 4) but would report a null result 2/3 of the time for one drawn from the octahedron (n = 3).

C. Weighting for the different types of vertices

We have seen that for n = 7, a cheating Alice is able to attain a symmetric bound almost always as high as her asymmetric bound, but only if she employs more elaborate mixings of her deterministic strategies. Indeed, at almost every ε-value, the optimal symmetric mixings include more deterministic strategies than just the strategies used to attain the optimal asymmetric bounds at that ε-value. Clearly, only when using two (or more) geometrically inequivalent subsets of measurement directions (as in geodesic solids) could any cheating strategy attain a higher S_n(ε) with an asymmetric null distribution than is possible with a symmetric null distribution. Thus, only when using such inequivalent measurement subsets can a symmetry condition be used to improve our EPR-steering bounds (as we observed for n = 7). From this observation, one may come to suspect a further advantage that may be gained in this situation, as follows. Say an optimal asymmetric cheating strategy involves Alice reporting more null results for one of the measurement sets (e.g., the n_3 set). This suggests that a cheating Alice would prefer not to have to report outcomes for this set at all. Therefore, if Alice were not only forced to report results for these measurements equally often, but actually more often than other measurements, this would, intuitively, make it harder for a cheating Alice to achieve a high correlation S_n, averaged over all reported results, especially when we impose the restriction that the cheating strategy be symmetric. Thus, using different weights for different measurement sets in the expression for the EPR-steering correlation function could conceivably lower our EPR-steering bounds even further. Like the symmetry condition, such an advantage would clearly only be available to Bob if the set of measurements he employs are not regularly spaced. To make use of this, we should recall that each measurement was equally weighted in all of our previous calculations.
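The symmetric bound of Eqs. (11) and (12) is then a linear program over the 20 strategies (m_3, m_4), with one efficiency constraint per subset. A sketch of ours, continuing from the previous snippet:

```python
def K7_symmetric(eps):
    """Best symmetric mixture, Eqs. (11)-(12): equal null probability on
    the octahedron (x = 3) and cube (x = 4) measurement subsets."""
    strategies = [(m3, m4) for m3 in range(4) for m4 in range(5)]
    d = np.array([D7(m3, m4) for m3, m4 in strategies])
    A_eq = [np.ones(len(strategies)),                 # sum_i w_i = 1
            [m3 / 3 for m3, _ in strategies],         # Eq. (12), x = 3
            [m4 / 4 for _, m4 in strategies]]         # Eq. (12), x = 4
    res = linprog(-d, A_eq=A_eq, b_eq=[1.0, eps, eps],
                  bounds=[(0, 1)] * len(strategies))
    return -res.fun

print(K7_symmetric(0.5) / 0.5)   # post-selected symmetric bound at eps = 0.5
```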
Indeed, for any of the Platonic solid measurements, unequal weightings could predictably lead to higher bounds (attainable by a dishonest Alice by aligning Bob's LHS closer to the more highly weighted measurements), but offer no prospect of lower bounds. The only goal for any choice of weighting (or, indeed, any choice of measurement set) is to limit the values of S_n that can possibly be obtained with any cheating strategies. For an honest Alice, S_n will be solely dependent upon her state's entanglement parameter μ, and her efficiency. Thus, the only way in which measurement weightings can affect an honest Alice's capabilities is if a change in weightings changes our bounds. That is to say, if unequal weightings can lower our EPR-steering bounds, we can be certain that this is the only consequence they will effect. To investigate how the n = 7 EPR-steering bound is affected when our measurement weightings are not necessarily equal, we will designate the measurement weighting for the octahedral (n = 3) measurements as p_3, and for the cubic (n = 4) measurements as p_4. Our previous expressions for D_n(i) used equal weightings for all measurements, so removing this restriction from Eq. (10) to include a dependence upon p_3 and p_4, we obtain

D_n(i) = max λ_max[−((p_3/n_3) ∑_{r=1}^{n_3} A_r σ̂_r^β + (p_4/n_4) ∑_{r=n_3+1}^{n_3+n_4} A_r σ̂_r^β)],   (13)

with p_3 + p_4 = 1. Note (from this expression) that p_3 = p_4 = 0.5 does not define equal measurement weightings, because there are three octahedron (n = 3) measurements and four cube (n = 4) measurements in our set of seven. Therefore, each measurement is chosen equally often with the balanced weightings p_3 = 3/7, p_4 = 4/7. When Bob chooses unbalanced weightings (which we will refer to as p_x in the general case), Alice's optimal deterministic strategies will likely change. However, even with Eq. (10) replaced by Eq. (13), Alice's optimal nondeterministic strategies are still described by Eq. (11), and we will still constrain Alice to satisfy the symmetry condition, Eq. (12). Upon calculating the values of these bounds as a function of p_x, we find that Bob can indeed alter his p_x values to lower the EPR-steering bound for almost all ε-values. Figure 4 plots the values of p_3 and p_4 that yield the lowest possible EPR-steering bounds for our n = 7 geodesic measurements. From this figure, it is clear that EPR-steering can be more easily demonstrated by using unbalanced measurement weightings. Moreover, the optimal way to unbalance the correlation function is in line with the intuitive argument we used to motivate this unbalancing at the beginning of this section: to more heavily weight the measurements which give lower results in Alice's cheating strategies. Appendix A gives more detail as to how this is shown by the behaviour of Fig. 4. However, at most ε-values, the magnitude of the improvement we obtain in k_7(ε) by using the optimal weightings shown in Fig. 4 is on the same scale as the difference between the two n = 7 bounds in Fig. 2; so it would not be very useful to plot the postselected values for these bounds. Instead, we have shown, in Fig. 5, the difference between the optimally weighted bounds and the original asymmetric bounds for n = 7 (as function "B"). This figure also includes the difference between the asymmetric bounds and the symmetric bounds (function "A"), so the spacing between these two functions is the degree of improvement that the optimally weighted bounds offer over the symmetric bounds. On this figure, which is approximately one-fourteenth the vertical scale of Fig. 2, we can observe that the optimally weighted bounds offer improvement at almost every ε-value, but given the scale upon which this change is visible, it can be said that there is not a significant improvement anywhere except near ε = 1.
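Only the operator in the deterministic search changes under Eq. (13): per-measurement weights p_3/n_3 and p_4/n_4 replace the uniform 1/7. A sketch of ours (note that the balanced choice p_3 = 3/7 recovers the earlier D7):

```python
def D7_weighted(m3, m4, p3):
    """Deterministic bound of Eq. (13), with subset weights p3 and p4 = 1 - p3."""
    p4 = 1.0 - p3
    best = 0.0
    for A3 in itertools.product([-1, 0, 1], repeat=3):
        if sum(map(abs, A3)) != m3:
            continue
        for A4 in itertools.product([-1, 0, 1], repeat=4):
            if sum(map(abs, A4)) != m4:
                continue
            vec = (p3 / 3) * np.dot(A3, oct_axes) + (p4 / 4) * np.dot(A4, cube_axes)
            best = max(best, np.linalg.norm(vec))
    return best

assert abs(D7_weighted(1, 4, 3 / 7) - D7(1, 4)) < 1e-12   # balanced weights
```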
The function "C" in this graph shows the improvement gained from further types of optimisation that we discuss in the next section.

V. OPTIMISED MEASUREMENT STRATEGIES

Our choices of measurement sets thus far have all been built upon the idea that regularly spaced measurement orientations should be of the most benefit for a rigorous test of EPR-steering. However, in Fig. 1 we saw that regularly spaced measurements for n = 4 are definitely not optimal, and in Sec. IV we have observed several distinct advantages that only exist for measurements that are not regularly spaced (because they combine two Platonic solids). Including these advantages for the n = 7 geodesic solid gives bounds better than the Platonic bounds for n = 6 around ε ≈ 0.5. However, they are actually worse than the Platonic n = 6 bounds for ε ∈ [0.24, 0.44] ∪ [0.52, 0.82], meaning that even this scheme cannot be optimal for n = 7. These observations motivate considering the even more general case, where we do not have two (or more) sets of measurements, but rather where we treat each measurement setting independently. That is, we fix only the number of settings n, and, for each ε, optimise the n directions defining the n measurements, and the n weightings defining the correlation function. To investigate this, we must return to our definition of S_n, redefining it as generally as possible. Our use of {σ̂_r^β} already allows arbitrary measurement directions, so we need only define a weight for each r-term, which we will denote p_r, normalised according to ∑_{r=1}^{n} p_r = 1. Thus, the (non-postselected) form of S_n that we consider is

S_n = −∑_{r=1}^{n} p_r E_{A_r}[A_r ⟨σ̂_r^β⟩_{A_r}].   (14)

In this scenario (just as in Sec. IV C), it is not actually necessary for Bob to experimentally choose measurement setting r with probability p_r in order to calculate Eq. (14); he can choose different settings with arbitrary frequency and merely weight each term appropriately in his calculation of S_n. To obtain the strongest bounds for variable measurement sets, it will clearly be necessary to employ our symmetry condition. In a form which is independent of measurement orientations (or relationships thereof), the condition a cheating Alice must meet for her null probabilities to be independent of measurement orientation is

∑_i w_i |A_r^i| = ε  ∀ r,   (15)

where, for a given deterministic strategy i, A_r^i is the result she reports when Bob measured with setting r, and w_i is the probability with which she chooses each strategy. Note that |A_r^i| ∈ {0, 1} is the efficiency for measurement r under strategy i. Alice's optimal deterministic strategies are those which attain the maximum in the expression

D_n(i) = max λ_max[−∑_{r=1}^{n} p_r A_r^i σ̂_r^β].   (16)

This looks markedly more similar to Eq. (7) than it does to Eq. (10) or Eq. (13), but its only deviation from any of these equations is that, to define it with generality, we must take the i index to denote the optimal deterministic strategies for each possible permutation of null/non-null values for all measurements. That is, for n measurements, where we now label m_r^i = |A_r^i| ∈ {1, 0}, we must consider the optimal deterministic strategies for all 2^n possible values of the list (m_1^i, m_2^i, . . . , m_{n−1}^i, m_n^i). Thus, to employ our generalised symmetry constraint, the maximal nondeterministic bound on S_n cannot be defined by Eq. (8), which is not compatible with Eq. (15), but must be defined as

K_n(ε) = max_{{w_i}} ∑_i w_i D_n(i), subject to Eq. (15),   (17)

where, most generally, i indexes the set of all possible deterministic strategies, {D_n(i)}.
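With arbitrary directions and weights, the 2^n null patterns and the constraint of Eq. (15) can be handled directly. A sketch of ours of Eqs. (15)-(17), feasible only for moderate n, continuing the earlier conventions:

```python
def general_bounds(axes, p):
    """D_n(i) of Eq. (16) for every null pattern i in {0, 1}^n, with
    per-measurement weights p_r."""
    n = len(axes)
    D = {}
    for pattern in itertools.product([0, 1], repeat=n):
        best = 0.0
        for signs in itertools.product([-1, 1], repeat=n):
            vec = np.dot([s * m * w for s, m, w in zip(signs, pattern, p)], axes)
            best = max(best, np.linalg.norm(vec))
        D[pattern] = best
    return D

def K_symmetric(axes, p, eps):
    """K_n(eps) of Eq. (17), subject to the symmetry condition Eq. (15)."""
    D = general_bounds(axes, p)
    patterns = list(D)
    d = np.array([D[pat] for pat in patterns])
    A_eq = [np.ones(len(patterns))] + \
           [[float(pat[r]) for pat in patterns] for r in range(len(axes))]
    res = linprog(-d, A_eq=A_eq, b_eq=[1.0] + [eps] * len(axes),
                  bounds=[(0, 1)] * len(patterns))
    return -res.fun
```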
This is because if Alice's numerically optimal nondeterministic strategies cannot be arranged to satisfy the symmetry condition, she will need to use some suboptimal deterministic strategies in order to satisfy this condition (and to maintain a reasonably high value for S_n). While Bob's choice and implementation of {p_r} are of no consequence to an honest Alice (except in their capacity to lower the EPR-steering bound), it merits brief observation that a cheating Alice cannot attain the bounds K_n(ε) on S_n(ε) without knowing what {p_r} will be, since the optimal deterministic strategies defined by Eq. (16) involve p_r. (The same is true of the p_x in Sec. IV C.) But, as described above, Bob's only priority in choosing {p_r} is to make the EPR-steering bound as low as possible. So, given some measurement set {σ̂_r^β} and set of weights {p_r}, we can calculate K_n(ε) from Eq. (17). Thus, in terms of the post-selected S_n(ε), as we have been using, the EPR-steering bound is k_n(ε) = K_n(ε)/ε. Calculating which measurements and weightings minimise k_n(ε) requires searching simultaneously over all σ̂_r^β and p_r variables. Thus, we can only define the optimal value of k_n(ε) as

c_n(ε) = min_{{σ̂_r^β},{p_r}} k_n(ε).   (18)

We can minimise the dimensionality of this problem by holding static the direction of the first σ̂_r^β and the plane of the second, and defining one p_r from the other n − 1 of them (using their normalisation relation), but this still leaves a search space of 3n − 4 scalar variables. Moreover, such an optimisation is required for every different ε value. Performing such optimisations numerically does not require unreasonable amounts of computational power for moderate n [28]. The sets {σ̂_r^β} and {p_r} that achieve the minimum bound, c_n(ε), define the optimal steering experiment using Werner states, n measurement settings, and an apparent efficiency of ε.

A. Optimal EPR-steering bounds for n = 4

We observed earlier in Fig. 1 that for 0.48 ≲ ε ≲ 0.58, the Platonic solid EPR-steering bound for n = 4 (cube) was not as loss-tolerant as the n = 3 (octahedron) bound, which would not be possible were it an optimal set of four measurements. This makes n = 4 the obvious place to start for our optimisation. This optimisation was performed for n = 4 at 18 different ε values, with spacing ∆ε = 0.75/18 ≈ 0.042 between each value. The EPR-steering bounds c_n(ε) yielded by each optimised measurement strategy are shown in Fig. 6. For comparison, this figure also displays the Platonic solid bounds for n = 3 and n = 4. One might expect us to show also the n = 3 optimised bounds, but it turns out that the octahedral measurement strategy for n = 3 is already an optimal measurement strategy for every ε. (At least this is what we found after performing the optimisation for n = 3 over a large range of ε values.) It was concluded that the same is true of the square strategy for n = 2. In Fig. 6, the points on the Platonic solid curves are optimal deterministic strategies, and the lines are the nondeterministic strategies corresponding to the optimal bounds for these measurement sets, as usual. But on the optimised measurement curve, the only bounds which are definitely optimal are the data points, as these are the only ε values for which optimisations have been performed. The curve connecting these points is calculated from nondeterministic mixings of these optimised bounds.
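Finally, Eq. (18) wraps the above in a derivative-free search over the measurement directions (two spherical angles each) and the weights. A rough sketch of ours, reusing K_symmetric from the snippet above (a toy version of the optimisation described in the text, not the authors' Matlab code):

```python
from scipy.optimize import minimize

def c_n(eps, n, restarts=5, rng=np.random.default_rng(0)):
    """Crude numerical version of Eq. (18): minimise k_n(eps) = K_n(eps)/eps
    over n directions (theta_r, phi_r) and n weights p_r."""
    def objective(x):
        theta, phi = x[:n], x[n:2 * n]
        axes = np.stack([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)], axis=1)
        p = np.abs(x[2 * n:]) + 1e-9
        p = p / p.sum()                      # weights normalised to 1
        return K_symmetric(axes, p, eps) / eps
    return min(minimize(objective, rng.uniform(0, np.pi, 3 * n),
                        method='Nelder-Mead').fun
               for _ in range(restarts))
```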
However, analysis of these data points indicates that the optimal values of {p_r} and {σ̂_r^β} vary quite slowly relative to ∆ε, so this curve almost certainly closely approximates the intermediate optimal bounds. As we can see, the optimal bounds for n = 4 are lower than the n = 3 bounds in all places, which more than fulfils our motivating requirement that optimal n = 4 bounds should have c_4(ε) ≤ c_3(ε) ∀ ε.

FIG. 7: The solid representing the optimal measurement arrangements for n = 4 when ε = 0.5, from two different angles. The two vertices at the top of (a) are the same two vertices in the centre of (b).

Indeed, the optimised bounds are also visibly lower than the n = 4 Platonic solid bounds for ε ≳ 0.42, but converge with the Platonic bounds as ε decreases towards 0.25. Performing the minimisation in Eq. (18) for n = 4, with a large number of ε-values, reveals that the optimal measurement strategy for n = 4 is still to use equally weighted cubic vertices for ε ≲ 0.42, but as ε increases, the optimal measurement strategy deviates from the cube (as a seemingly continuous function of ε), approaching the spatial configuration shown in Fig. 7, which represents the optimal measurement strategy for ε = 0.5. The optimal values of {p_r} at this point are such that the two measurements in the same plane (the ones that define the square visible in Fig. 7(b)) have weightings of p_r = 1/3, and the other two measurements have weightings of p_r = 1/6. As ε increases above ε = 0.5, this optimal measurement strategy undergoes another continuous transition, and at ε = 1, the optimal arrangement becomes that shown in Fig. 8(a): three measurements almost (but not quite) equally spaced in the same plane (the optimal lengths of their edges seem to be around 1.03, 1.00, and 0.97, and this performs better than exactly equally spaced measurements) and a fourth perpendicular to them. The weightings associated with these measurements are p_r ≈ 0.23 for the three planar measurements, and p_r ≈ 0.31 for the extraplanar measurement.

B. Optimal EPR-steering Bounds for n ≥ 4

Although we found the Platonic solid bound for n = 3 to be an optimal bound, the clear improvement of the optimal n = 4 bounds over their Platonic solid counterparts strongly suggests that there may be room for improvement in the other Platonic solid measurement strategies. Upon calculating a series of optimal strategies for n = 5, this suggestion becomes an insistence, since we find that the optimal bounds for n = 5 are again better than the Platonic bounds for n = 6 in a range near ε = 0.5 (we plot the n = 5 curve later, in Fig. 11). Calculating optimal measurement strategies for n = 6 gives bounds that are, as expected, equal to the Platonic bounds in some places, but slightly better in most. In Fig. 9, we have plotted the quantitative improvement that the optimised bounds offer over the Platonic bounds for n = 4 and n = 6 (more visibly displayed than the form of Fig. 6 allows). The maxima and minima in Fig. 9 are indicative of the advantages that optimised measurement strategies offer over Platonic measurements, so we explain in Appendix B what causes them to occur. At each point, it seems that the most beneficial measurement sets should generally be reasonably close to being regularly spaced, but not quite. The most beneficial {p_r} sets merely augment these properties, with most p_r being close to equal, but slightly higher for measurements that are the most outlying.
Based on an exploration of optimised strategies for 4 ≤ n ≤ 10 (though less comprehensively for n = 9 and n = 10), similar behaviours seem to be generally applicable to the optimal strategies for any n. Indeed, the optimal measurement arrangements for n = 5, 6, and 7 have obvious traits in common with those for n = 4. If we define the vertices of a solid from our optimised measurement orientations, we obtain solids for n = 5 and n = 6 that have almost the same arrangement of three equatorial vertex pairs that n = 4 elicits. For n = 5 and n = 6, the only substantial difference from the n = 4 case is that the single vertex at the top of that figure is replaced by a pair of vertices for n = 5, and a (scalene, but nearly equilateral) triangle of vertices for n = 6. This property is made as visible as possible in Figs. 8(b) and 8(c), with their three planar vertex pairs being the six outermost vertices visible on both of those images. The optimal solid for n = 7, on the other hand, breaks with this pattern, but still shows a noticeable similarity to the n = 4 shape. Shown in Fig. 8(d), this solid has the same top-down profile as the n = 4 solid, centred on a "top-bottom" vertex pair. Unlike n = 4, the remaining vertices are not arranged in a single plane, but are arranged in two parallel planes with three vertex pairs defining each one, which is the source of the similarity between our n = 4 and n = 7 shapes. The optimal n = 8 solid shown in Fig. 10 does not bear an immediate resemblance to any of the other optimal solids in Figs. 7 or 8, but can easily be seen to approximate two parallel planes of vertices, with another vertex orthogonal to them. The solids shown in Figs. 8 and 10 were all generated from ε = 1 optimisations, and do change slightly with ε, but retain the same general arrangements at all points. In addition to this, the optimal EPR-steering bounds for n = 4, 5, 6, 7, and 8 all seem to adopt the same general behaviour that we have observed in our analysis of the above bounds. If we return to Fig. 5, we can see that the improvement of the optimised n = 7 bounds over the other examples does follow a similar pattern to that observed in Fig. 9 for n = 4 and 6. Around ε ≈ 0.3 and ε ≈ 0.75, our optimised n = 7 bounds offer little improvement upon their more regularly spaced counterparts, for the same reasons described above. However, Fig. 5 shows that for n = 7 (at least), the improvements of the optimised bounds at ε ≈ 1 are largely due to the advantage of unequal measurement weightings. Returning to Fig. 9, a final trend to discuss is that the improvement in n = 4 bounds, at all points, exceeds the improvement in n = 6 bounds. Seeing as the Platonic n = 4 bounds were the only ones to be outperformed by another Platonic solid at any point, this is not surprising. However, perhaps a better framing of the reasons for this can be seen in the tendency of higher n-values to yield bounds ever closer to the infinite-measurement limit (the lowest possible values that EPR-steering bounds can take, regardless of measurement number) analytically calculated in Ref. [8]. This limit can be expressed as a diagonal line on our graphs, and is shown in Fig. 11. As n increases, the Platonic bounds (in Fig. 1) approach this diagonal, but with every step towards it being smaller than the last (with respect to their increases in n). We would expect that optimised EPR-steering bounds should also approach this limit in a similarly asymptotic manner, albeit more swiftly than suboptimal bounds.
Therefore, it should be reasonable to expect that the closer a Platonic bound is to the n = ∞ line, the smaller the advantage conferred by optimising it, just as with the advantage conferred by increasing measurement number. As expected, we find that the optimised EPR-steering bounds do approach the n = ∞ bound more quickly (with respect to n) than the Platonic bounds do. The optimised bounds for n = 2, 3, 5, and 8 are shown in Fig. 11, and at almost every ε-value, the optimised n = 8 bounds seen here are actually closer to the diagonal than the Platonic n = 10 bounds are (especially around ε = 0.5, where the n = 10 bounds are inferior to every optimised bound with n > 4). Indeed, the proximity of the bounds in Fig. 11 to the diagonal limit shows that with n = 8, these measurement strategies are considerably loss-tolerant, and have very little room for improvement. However, any optimised strategy with n > 8 is guaranteed to be at least as loss-tolerant, and at least as close to the diagonal as the best bounds in Fig. 11 (and will necessarily be incrementally closer for at least some range near ε = 1/n). In Fig. 11, this trend is easily observed, but here we can also see the relevance of regularly spaced measurements being close to optimal around ε ≈ 0.3 and ε ≈ 0.75: around these places, the Platonic bounds (for n = 6 and 10, at least) were already reasonably close to their graph's diagonal. Thus, it stands to reason that these would be ε values where the possible advantages of any other measurement strategies would generally be most limited. This also offers insight as to why the greatest advantages for our optimised bounds were around ε ≈ 0.5 and ε ≈ 1. Such behaviour is reassuring to see in optimised bounds, since it is reasonable that only with optimal bounds can we see higher n-values necessarily leading to bounds that are incrementally closer to a diagonal line each time.

VI. CONCLUSION

In our consideration of EPR-steering tests for two-qubit Werner states, we have confirmed our earlier conclusion [21] that the detection loophole in these tests can be closed without necessarily placing any particularly demanding experimental constraints upon one's detection efficiency. This can be accomplished by employing a large number n of measurements in each test. However, we have also shown, contrary to the assumptions of previous experiments, that measurement sets based upon Platonic solids are, in general, suboptimal. Of course, Platonic solids are suboptimal in that they are restricted to n ≤ 10, but this limit can be overcome by combining Platonic solids to make geodesic solids (which can be defined for arbitrarily large n if desired). The more interesting point is that Platonic solids are demonstrably suboptimal even for n as small as 4. Specifically, for some values of Alice's efficiency, there are Werner states which do not violate the EPR-steering inequality for the n = 4 Platonic solid, when we know that EPR-steering can be demonstrated even with n = 3. Considering geodesic solids and how to test Alice's steering ability most rigorously pointed the way to defining the optimal steering tests for any n, even those for which there exist no Platonic solid or geodesic solid. This means that more measurements can always yield more loss-tolerant tests of EPR-steering. More importantly, it means that even with n relatively small, tests of EPR-steering can be much more loss-tolerant than with Platonic solids, or any other merely intuitive strategies.
We calculated and explored the optimal measurement strategies for measurement numbers of n = 3, 4, 5, 6, 7, and 8, but were prevented from easily exploring the optimal strategies for n ≥ 9 by the computational demands of their numerical derivations. For this reason, we should conclude that geodesic measurement sets may be a more practical alternative than truly optimal measurements for loss-tolerant experimental tests of EPR-steering for large n. Optimising the EPR-steering inequalities for geodesic measurements does require numerical minimisation, but the number of parameters scales only logarithmically with the number of settings n. This is significantly less demanding than generating a fully optimal measurement strategy and inequality for n settings, which has 3n − 4 free parameters. We note that a recent paper [26] has suggested an alternate method for demonstrating steering with large numbers of measurements, by using random bases, although without consideration of inefficiency or loss. Finally, we note that further work would be required to turn the EPR-steering inequalities we have derived here into truly experimentally applicable inequalities. There are two reasons for this. First, we have assumed that Bob's detectors are completely characterised, with no unknown systematic errors. Second, we have allowed Bob to place restrictions on Alice's reported results (that the frequencies of nulls are independent of his setting) which cannot be exactly verified from any finite data set. A completely rigorous experimental test would have to include the (very small) increase in the ability of an untrusted Alice to cheat by exploiting the imperfections of Bob's measurement apparatus, and any allowed deviation of her null-rates from the average.

a. Little or no advantage at ε ≈ 0.3. At low ε, Alice's most effective cheating strategies assign nulls to most measurements and maximise Bob's results for a small non-null minority [29]; her success is then limited only by how close together those few non-null measurement directions lie. Since regularly spaced measurements are already as far from their nearest neighbours as possible, no rearrangement can make these small non-null subsets less favourable to Alice, which is why the Platonic measurements are optimal or quite nearly optimal in this range.

b. Little or no advantage at ε ≈ 0.75. A similar principle applies for the minima to the right of the central peak in Fig. 9, when Alice must choose most of her results to be non-null. With non-regularly spaced measurements, Alice can often compose her deterministic strategies for ε_m ≈ 0.75 to have non-null arrangements that are more closely spaced than Platonic measurements allow (in a non-regular set of orientations, it is easy to find one or two that are more isolated from the others, whereas in a regular set, this is impossible). The symmetry condition curbs this ability somewhat, since it requires each measurement to have the same average non-null probability, but when Alice has several ε_m-values between ε ≈ 0.5 and ε = 1 (i.e., when n is large), it becomes easier for her to mix these closely spaced high-m (ε_m > 0.75) strategies with low-m strategies that give higher expectation values for the outlying measurements. This is why the Platonic strategies are close to optimal around ε ≈ 0.75, and more so for n = 6 than n = 4.

c. Large advantage at ε ≈ 0.5. In this regime, Alice's optimal strategy is to choose roughly the same number of nulls and non-nulls in each deterministic strategy [29]. To do this, Alice would need to find closely spaced configurations of m ≈ n/2 measurements to be non-null, and must find such configurations in as many directions as possible. This task is trivial with Platonic measurements, as their symmetry groups are such that a configuration of m nearest-neighbour measurements is the same configuration for any m nearest neighbours. Therefore, choosing a set of measurements that are not regularly spaced offers an advantage in this region.
For the n = 4 solid in Fig. 7, for example, every measurement pair is farther apart than the measurement pairs in the cubic arrangement, with the exception of the pair of lowest-weighted measurements. Thus, there is only one pair of measurements that can offer deterministic strategies lower than the cube's, and they have very low weightings. It is in this way that non-regularity of the optimal measurement sets is easily used to outperform Platonic measurement sets.

d. Large advantage at ε = 1. Alice's strategies are most strongly restricted at ε = 1, where a cheating Alice's optimal strategy is to align Bob's state with the spatial average of all of his measurement axes, with a suitable choice of sign. For the Platonic solids, this means choosing ensembles that are either face-centred or vertex-centred all about the Platonic solid. However, optimised measurement strategies for ε ≈ 1 tend to have (up to n = 8, at least) most of their measurements defining a single plane (or two parallel planes) of vertices, and the rest clustered near the directions perpendicular to this plane. The benefit this offers is that the spatial average(s) of all of these measurements will be farther from most of them than the Platonic averages are from their constituent measurements.

[27] … the qubit state by the vacuum state |v⟩ with probability p > 2μ − 1 > 0. This creates the qutrit-qubit state (1 − p)[μ|ψ_s⟩⟨ψ_s| + (1 − μ)I^{αβ}/4] + p|v⟩⟨v| ⊗ I^β/2, where here |ψ_s⟩ is a singlet state in the two-qubit subspace, and I^{αβ} is understood to act only on this subspace. By construction, Alice cannot steer Bob, but Bob can steer Alice, because Alice (now considered trusted) can simply consider steering in her qubit subspace.

[28] The solution time for n = 4 takes about an hour per data point in Matlab on a standard personal computer. However, every time n increases by 1, the variable space requires three more dimensions. Even with efficient optimisation algorithms, our solving time still increases exponentially with n, almost doubling with every increase in n, being approximately proportional to 1.85^n.

[29] Alice's other option would be to mix high- and low-ε strategies. However, this would combine the weaknesses of the low- and high-ε strategies without optimally employing their strengths. Alice's most effective cheating strategies at low ε are those which maximise Bob's results for a minority of measurements by disregarding his results for the majority (which are assigned nulls). At high ε, Alice's most effective strategies are those which maximise Bob's results for as many measurements as possible, with her ability to do so being limited by how many measurements she can afford to assign nulls for (and therefore not care about their values). Choosing roughly the same number of nulls and non-nulls in each deterministic strategy allows her to most effectively prioritise the maximisation of half of Bob's measurements over the other half, which she can afford to not care about (and submit nulls for) in each deterministic strategy.
United airway disease: current perspectives

Upper and lower airways are considered a unified morphological and functional unit, and the connection existing between them has been observed for many years, both in health and in disease. There is strong epidemiologic, pathophysiologic, and clinical evidence supporting an integrated view of rhinitis and asthma: united airway disease in the present review. The term "united airway disease" is opportune, because rhinitis and asthma are chronic inflammatory diseases of the upper and lower airways, which can be induced by allergic or nonallergic reproducible mechanisms, and present several phenotypes. Management of rhinitis and asthma must be jointly carried out, leading to better control of both diseases, and the lessons of the Allergic Rhinitis and Its Impact on Asthma initiative cannot be forgotten.

Introduction

Upper and lower airways are considered a unified morphological and functional unit, and the connection existing between them has been observed for many years, both in health and in disease. 1,2 More than 2,000 years ago, Claudius Galenus studied the upper airway and paranasal sinuses as integral parts of the respiratory tract, and he assumed that rhinitis and asthma were caused by secretions dripping from the brain into the nose and lung. 3 More recently, the concept of united airway disease (UAD) was suggested. 4-6 The nose is situated at the entrance of the airway and protects the lower airway from the harmful effects of the inspired air by acting as an efficient air conditioner. The nose warms, filters, and humidifies the inspired air so that clean air that is fully saturated with water vapor at a temperature of 37°C is delivered to the lungs. During nose breathing, the majority of particles with an aerodynamic equivalent diameter (AED) >15 µm are deposited in the upper respiratory tract. Particles with AEDs between 2.5 and 15 µm are primarily deposited in the trachea and bronchi, whereas those with lower AEDs penetrate into the gas-exchange region of the lungs. 7 The nasal and bronchial mucosa present similarities, and one of the most important concepts regarding nose-lung interactions is functional complementarity, which assigns to the nose the role of protecting the lungs. 4 However, the functions of the upper airway and their interactions with the lower airway are much broader than merely air-conditioning. The Allergic Rhinitis and Its Impact on Asthma (ARIA) guidelines published in 2001 1 achieved some goals: 1) development of a guideline proposing a standardized management plan for allergic rhinitis (AR), 2) establishment of the ARIA concept, 3) spreading of the guideline to general and specialist physicians, and 4) establishment of a multiprofessional forum to study rhinitis and asthma. There is strong epidemiologic, pathophysiologic, and clinical evidence supporting an integrated view of rhinitis and asthma: UAD in the present review. We can also consider UAD an airway-hypersensitivity syndrome, because rhinitis and asthma are chronic inflammatory diseases of the upper and lower airways, which are induced and reproduced by allergic or nonallergic hypersensitivity reactions, and present several phenotypes (Table 1).

United airway disease: epidemiologic evidence

AR is the most common of all atopic diseases, and although it can develop at any age, most patients report the onset of symptoms before 30 years of age, making it the most common chronic disorder in children. 6
AR can be considered a major public health problem, due to its prevalence and its impact on patients' quality of life, work/school performance and productivity, and economic burden. 4,6 It is characterized by the classic symptoms of nasal itching, sneezing, rhinorrhea, and nasal obstruction. In addition, AR is associated with a variety of comorbidities, such as atopic dermatitis, sleep-disordered breathing, conjunctivitis, rhinosinusitis, otitis media, asthma, and emotional problems. 6,8 At the same time, AR is a disease that is underdiagnosed and overlooked by patients and physicians. 9,10 AR is considered a risk factor for developing asthma. 4,6 Asthma is a heterogeneous disease characterized by chronic airway inflammation and hyperresponsiveness (AHR) to direct or indirect stimuli, which can persist even when symptoms are absent or lung function is normal but may normalize with treatment. 11 Asthma is defined by the history of episodic respiratory symptoms, such as wheeze, shortness of breath, chest tightness, and cough, and is associated with variable expiratory airflow limitation. 11 Allergic asthma is the most prevalent disease phenotype, which often begins in childhood and is associated with a personal and/or family history of allergic diseases, such as eczema and AR. 11 In the same way as the allergic phenotype, patients with non-AR (NAR) are at increased risk of developing nonallergic asthma. NAR presents later in life than AR and is not a single disorder but is composed of a heterogeneous group of diseases. 12 According to the International Study on Asthma and Allergy in Childhood, the prevalence of AR in Europe was found to be ∼25% and in Brazil ∼15%-20%. The prevalence of asthma worldwide was observed to be ∼20% (Global Initiative for Asthma) and 10%-20% in Brazil (Global Initiative for Asthma, ARIA). Countries with a very high prevalence of rhinitis had asthma prevalence ranging from 10% to 25%. 4,11,13 The management of asthma should include assessment of asthma control, future risks, and any comorbidity that could contribute to symptom burden and poor quality of life. The main associated comorbidities are rhinitis, rhinosinusitis, gastroesophageal reflux, obesity, obstructive sleep apnea, depression, and anxiety. 11 We evaluated the prevalence of comorbidities in patients with severe asthma and observed that rhinitis and gastroesophageal reflux disease were the most common, rhinitis being observed in 91% and gastroesophageal reflux disease in 71% of the asthmatic patients. 14 Interactions between the lower and the upper airways are well known and have been extensively studied since 1990. Over 80% of asthmatics have rhinitis, and 10%-40% of patients with rhinitis have asthma, suggesting the concept of "one airway, one disease". 4 Rhinitis symptoms have been reported in 98.9% of allergic asthmatics and in 78.4% of nonallergic asthmatics. Furthermore, ∼30% of patients with only AR who do not have asthma present hyperresponsiveness to methacholine or histamine. 2,4,15 However, there are large differences in the magnitude of airway reactivity between patients with rhinitis and asthma. Patients with perennial rhinitis have greater bronchial reactivity than those with seasonal rhinitis, in whom the presence of hyperresponsiveness was observed especially during the pollen season. 4,15,16 AHR, which is a paramount feature of asthma, is a strong risk factor for the onset of asthma in patients presenting with AR. 6
Several studies suggest that AR and NAR are risk factors for new onset of asthma and persistence of asthma. 17,18 In a cohort of 690 individuals with a follow-up of 23 years, it was observed that the incidence of asthma was 10.5% in subjects with rhinitis and 3.6% in those without rhinitis. Therefore, the development of asthma was tripled in rhinitis patients compared to those without rhinitis. 19 In the Tucson Epidemiologic Study of Obstructive Lung Diseases, the odds ratio for developing asthma was 2.59 (95% confidence interval 1.54-4.34) if rhinitis was present and 6.28 (95% confidence interval 4.01-9.82) in the presence of rhinitis plus sinusitis. 20 A European survey confirmed the presence of perennial rhinitis as a major risk factor for asthma, with odds ratios of 11 for the atopic and 17 for the nonatopic phenotype. 21 Asthma and rhinitis share common risk factors and present common susceptibility to different agents, such as allergens (atopy) and infections. 4,6 The presence of AHR and concomitant atopic manifestations in childhood increases the risk of developing asthma and should be recognized as a marker of prognostic significance, whereas the absence of these manifestations predicts a very low risk of future asthma. 22

United airway disease: pathophysiological evidence

The upper and lower respiratory tracts form a continuum, allowing the passage of air into and out of the lungs and sharing many anatomical and histological properties. 2 They share common structures, including the ciliary epithelium, basement membrane, lamina propria, glands, and goblet cells, forming the so-called united airway. 23 On the other hand, differences between the upper and lower airways do exist. Nasal mucosa, which is attached to bone, is enriched with vessels, whereas bronchial mucosa, which is attached to cartilage, is enriched with smooth-muscle cells. 24 Therefore, the major cause of airway obstruction, especially in the early phase of the allergic response, is different: upper airway obstruction is caused by vasodilation and edema, whereas lower airway obstruction arises from smooth-muscle constriction. 24 It is reasonable to think that, for anatomic reasons, the upper airway constitutes the first target for allergens and for physical and chemical environmental stimuli; therefore, it tends to be the first to be affected by allergic airway disease, and if the intensity of this disease is low, the upper airway may be the only part of the respiratory tract that is affected. However, when the entire respiratory tract is involved, rhinosinusitis and asthma follow a parallel course. 25 Unfortunately, systematic research in this field has not been performed, and the evidence supporting these postulates is scarce. 25 UAD presents two main phenotypes: allergic (atopic or extrinsic) and nonallergic (nonatopic or intrinsic). With regard to asthma, most children and at least 50% of adults have the allergic phenotype, in which the disease is associated with allergic sensitization defined by the presence of serum-specific immunoglobulin (Ig)E antibodies and/or positive skin tests to the proteins of common inhaled allergens, such as house dust mites, animal dander, fungal spores, pollens, and cockroaches. 26 On the other hand, in nonallergic asthma, we do not observe IgE reactivity to allergens. 26 In the same way, there are two important phenotypes of rhinitis, allergic and nonallergic, both of them associated with increased prevalence of asthma. 27 We focus on the allergic pathophysiology of UAD.
AR and atopic asthma result from an IgE-mediated allergic reaction associated with airway inflammation of variable intensity. 4 Since the first class of Ig presented on the surface of B-cells is IgM, it is necessary that IgM is switched to IgE so that allergic inflammation can develop. Isotype switching to IgE requires antigen presentation and two other signals. 28 Signal one is provided by interleukin (IL)-4 and/or IL-13, acting through IL-4R and IL-13R via STAT6, which activate transcription of the IgE isotype. Signal two is provided mainly by ligation of CD40 on B-cells to CD40L on T-cells, which activates DNA-switch recombination. 28 The IgE-mediated immune response is initiated when the allergens are taken up by antigen-presenting cells via the cell-surface Ig receptor. Processed fragments are then presented in the context of major histocompatibility complex class II to T-helper (TH) cells, which recognize the allergen-major histocompatibility complex II composite and are activated. The allergen-specific TH2 cells produce IL-4 and IL-13, and express CD154, leading to IgE class switching. 26,28 Although class switching is generally thought to occur in the germinal center of lymphoid tissues, it has also been reported to occur in the respiratory mucosa of patients with AR and atopic asthma and in the gastrointestinal tract in patients with food allergy. 28 Once IgE is produced by B-cells, the Ig will bind to the high-affinity receptor FcεRI on mast cells and basophils. 28 In future contacts with the polyvalent sensitizing allergen, these cells will be activated through FcεRI, initiating an immediate hypersensitivity reaction that is central in the pathogenesis of AR and allergic asthma. 28 The reaction has an immediate phase that is induced by the release of preformed and rapidly synthesized mediators from mast cells and basophils, resulting in erythema, edema, and itching in the skin; sneezing and rhinorrhea in the upper respiratory tract; and cough, bronchospasm, edema, and mucous secretion in the lower respiratory tract. 28 A late phase mediated by cytokines and chemokines and characterized by edema and leukocytic influx can occur 6-24 hours after the immediate phase. Eosinophils, recruited mainly by IL-5 produced by TH2 cells, stand out and are essential to maintain the chronic inflammatory process and tissue damage. 28 Eosinophil activation directly contributes to vasodilation, edema, mucous production, bronchoconstriction, and dysfunctional remodeling of the airway. 29 These processes are mainly induced by eosinophil-derived products, such as eosinophil peroxidase, which causes AHR and activates dendritic cells. 26 Murine studies have shown that eosinophils also contribute to airway-wall remodeling and subepithelial membrane thickening via the release of TGFβ. 26 Finally, similarly to neutrophils, upon activation, eosinophils undergo cytolysis and release mediators from the eosinophilic granules, such as eosinophil-derived neurotoxin, cationic proteins (eosinophil peroxidase), and major basic protein, which can damage structural cells of the airway. 26 It has been demonstrated that humans who died from asthma presented eosinophilic inflammation all over the respiratory tract, from nasal mucosa to lung tissue, showing that the airways really behave as a single unit, even in pathologic conditions. 30
30 Therefore, AR and asthma share immunopathological features, including a T H 2-type immune response, thickening of the basement membrane, and goblet-cell hyperplasia. 24 In contrast to allergic UAD, the pathophysiology of which is well characterized, the etiology of and mechanisms involved in nonallergic UAD remain unclear. Some of the possibilities include allergy triggered by unknown antigens (fungi), persistent infection (caused by Chlamydia trachomatis, Mycoplasma spp., or viruses), and autoimmunity. A central concept of UAD is the influence of the upper airway on the function of the lower airway, which is particularly evident and relevant in the allergic phenotype. 25 The pathological interactions between the upper and lower airways are summarized in Figure 1 and can be divided into:
• air-conditioning
• inflammation
• neural reflexes.

Air-conditioning

Galen was the first to offer insights on the function of the nose as protector of the lower airway through its ability to clean, warm, and humidify inhaled air. 5 In addition, the nasal mucosa, with its abundant submucosal glands, takes part in the innate and adaptive immune defense by releasing antibacterial proteins, such as lysozyme and lactoferrin, chemical defenses, antioxidants, and secretory IgA, which can protect the lower airway from pathogens and allergens. 25 Patients with AR present partial or complete loss of function of the nose due to mucosal congestion, and the nasal airways are bypassed during oral breathing. 25 In this situation, inhalation of cold and dry air may directly induce bronchoconstriction. The lower airway would therefore be quite "open" to the entrance of allergens and pathogens, increasing the risk of asthma exacerbation.

Inflammation

Propagation of inflammation from the upper airway to the lower airway may occur via postnasal drip and systemic circulation. The concept that inflammatory secretions from the upper airway of patients with rhinosinusitis or even with rhinitis are aspirated into the lower airway with adverse consequences has been viewed as one of the principal mechanisms for lower airway symptoms, especially after an upper respiratory infection. 25 It is quite possible that early morning coughing in individuals with rhinitis is associated with accumulation of secretions in the lower pharyngeal area stimulating irritant receptors. 25 It is questionable, however, whether these secretions can reach the intrathoracic lower airway in quantities adequate to alter its physiology and to generate exacerbations or chronically worsen lower airway function in patients with asthma. 25 The development of sinusitis in rabbits is associated with hyperresponsiveness of the lower airway, even after eliminating upper-lower airway communication with the use of an inflated endotracheal tube cuff. 31 On the other hand, there is good evidence that allergic inflammation developing in the respiratory mucosa may result in systemic inflammatory events. 25 Blood eosinophilia may be observed in patients with allergic asthma and can be considered a biomarker of inflammation of the lower airway. There is less evidence that upper airway inflammation can lead to an increase in eosinophil blood count. 25,32 Moreover, there is no experimental information indicating that nasal inflammation leads to systemic inflammatory signals that induce changes in lower airway physiology, 25 even though it seems reasonable to speculate that cytokines released in nasal mucosa could activate bone marrow, with chemotaxis of white blood cells to both upper and lower airways.
25 The opposite direction of propagation of the inflammatory process, beginning in the lower airway and reaching the upper airway, has also been postulated. A study showed that segmental bronchial allergen provocation in patients with nonasthmatic AR can induce nasal inflammation, nasal and bronchial symptoms, and reduction in pulmonary and nasal function. 33 However, there are recent data suggesting that this lung-nasal propagation of inflammation might not be relevant. In a very elegant murine model of allergic respiratory inflammation induced by ovalbumin, Balb/c mice were submitted to intratracheal challenge after sensitization by an intraperitoneal route. This provocation induced lung inflammation and AHR, but no signs of inflammation were found in the nose. 34

Neural reflexes

The existence of a nasobronchial reflex that originates from the sensory nerve endings in the nose, travels to the central nervous system through the trigeminal nerve, and follows an efferent pathway through the vagus nerve to produce airway smooth-muscle contraction has been under debate for years. 25 Despite being well documented in animal models, its existence and relevance in humans are still controversial. Some studies performed in healthy individuals and asthmatics have demonstrated that lower airway resistance increased after nasal inhalation of cold and dry air. 35,36 Another important study showed an increase in AHR after a nasal allergen provocation in asthmatics who had reported worsening of asthma symptoms following seasonal exacerbations of rhinitis. 37 The authors observed that none of the solutions delivered to the nose during allergen provocation could be detected in the lower airway, showing that the increase in AHR was not due to inadvertent inhalation of the allergen. 37 It is important to point out that the classic nasobronchial reflex is a component of the diving reflex. 38 Immersion of the head into cold water leads to immediate suppression of respiration (apnea), laryngospasm, and bronchoconstriction, in order to protect the lower airway during diving. 38 Nasal inhalation of dust, pollutants, and irritants can induce immediate bronchoconstriction with cessation of respiration in the expiratory phase, due to relaxation of inspiratory muscles. 38 Therefore, in individuals with allergic respiratory disease, this reflex could lead to an increase in asthma symptoms after nasal injury. There is less evidence showing the occurrence of a bronchonasal reflex. It has been demonstrated that inhalation of ultrasonically nebulized distilled water increased nasal airway resistance in patients with AR, without the involvement of parasympathetic efferent reflexes, since patients did not present sneezing or rhinorrhea. 39 The clinical relevance of this bronchonasal reflex has yet to be demonstrated. In conclusion, the upper and lower airways seem to constitute a unique system, named the "united airway", that shares similarities in terms of histology, physiology, and pathology. UAD is triggered by a T H 2 immune response of the airway, leading to an extended inflammatory process that begins in the nasal mucosa and ends in the bronchioles and alveoli, particularly in symptomatic asthmatics.

United airway disease: clinical evidence

There is also clinical evidence supporting the concept of UAD. Studies have demonstrated that the presence of severe rhinitis is associated with an increased risk of asthma 20 and, in patients with asthma, with a less favorable evolution.
[40][41][42] It has also been shown that the treatment of rhinitis can be beneficial to the lower airway, reducing symptoms, emergency room visits, and hospitalizations, as well as the severity of bronchial hyperresponsiveness. [43][44][45][46][47] In protocols for difficult-to-control asthma, rhinitis was included as one of the main comorbidities to be assessed and treated. 48 Therapy for UAD includes avoidance of relevant allergens and irritants, pharmacotherapy, and allergen-specific immunotherapy (SIT). Allergen avoidance has been suggested not only to prevent UAD onset and progression but also to reduce its burden, improving symptoms and quality of life. However, there is a lack of evidence supporting the effectiveness of environmental control. 4 The pharmacologic approach to AR includes antihistamines, oral leukotriene antagonists, and intranasal corticosteroids, the last being considered the most efficacious drug. 4 Agondi et al reported a decrease in asthma symptoms and AHR after intranasal corticosteroid treatment of rhinitis. 43 A recent meta-analysis confirmed the beneficial effect of intranasal steroids on AHR. 44 Oral and intranasal antihistamines, as well as leukotriene antagonists, are less effective than intranasal corticosteroids in improving the symptoms of AR. [49][50][51][52] A protective effect of cetirizine against AHR, measured 6 hours after nasal allergen challenge in patients with AR, has also been shown. 53 Allergen-SIT is defined as a procedure to administer increasing amounts of specific allergens to patients diagnosed with IgE-mediated disease, in order to induce immune tolerance. 8,54 Subcutaneous and sublingual IT can reduce symptoms of AR and the need for reliever medication, as well as improve the control of comorbid conditions, such as asthma and conjunctivitis. 8,55 Allergen-SIT is indicated in moderate/severe AR for which the response to pharmacotherapy is inadequate. Other potential indications are adverse effects of medications, coexisting allergic asthma, poor adherence to therapy, and patient preference for IT instead of pharmacotherapy. 8,56 Furthermore, SIT has been positioned as the only treatment that can modify the natural course of allergic diseases, including prevention of new sensitizations and reduction of the risk of developing asthma in subjects with AR, even after termination of treatment. [57][58][59][60][61] The Preventive Allergy Treatment study was a randomized controlled trial that showed clinical benefits and a preventive effect on asthma development in children suffering from seasonal rhinoconjunctivitis undergoing subcutaneous IT with grass- and/or birch-allergen extracts for 3 years. This positive effect of SIT in preventing the progression from rhinitis to asthma was observed to persist in the same patients for 7 years after the termination of the treatment. [57][58][59] Another study found that after 3 years of sublingual grass-pollen IT in children with AR, eight of 45 actively treated subjects and 18 of 44 controls developed asthma, with 3.8-fold more frequent development of asthma in the untreated patients. 62 Some studies have shown that IT was able to prevent new sensitizations in monosensitized individuals. 63 A study assessing the effects of subcutaneous IT in 147 house dust mite-monosensitized children over 5 years found similar results: 75.3% of the treated group and 46.7% of the control group had no new sensitizations.
64 A randomized controlled study involving 216 children with AR (with or without intermittent asthma) receiving drugs alone or drugs plus sublingual IT for 3 years showed new sensitizations in 34.8% of controls and in 3.1% of the IT group. Moreover, it demonstrated that this protective effect extended to AHR, which significantly decreased in the IT group. 65 Novel targeted therapeutic approaches using biological agents have been studied in the treatment of AR and allergic asthma, especially for the management of severe uncontrolled phenotypes. Among these, omalizumab, a humanized monoclonal antibody that binds circulating IgE and prevents its attachment to high-affinity IgE receptors, is available worldwide. Omalizumab improves both upper and lower airway diseases, reducing nasal and asthma symptoms, decreasing exacerbations, and improving quality of life. 66 Mepolizumab, a monoclonal antibody that blocks the binding of IL-5 to eosinophils, has also shown a beneficial effect on severe eosinophilic airway diseases, such as asthma and nasal polyposis in adults. [67][68][69] Because these treatments have systemic effects, it is not possible to design a study to assess how much of the improvement in asthma is due to direct effects versus indirect effects associated with rhinitis improvement. The management of rhinitis may promote better adherence to therapy. It should consider the severity and duration of the disease and patient preference, as well as the efficacy, availability, and cost of medications. Therefore, management of rhinitis and asthma must be carried out jointly, including environmental control, pharmacotherapy, and SIT.

Conclusion

The treatment of rhinitis is indispensable in patients with asthma, since it leads to better control of both diseases, and the lessons of the ARIA initiative cannot be forgotten. Further studies regarding UAD are needed to better understand the interactions between the upper and lower airways, but there is no doubt that rhinitis and asthma have to be studied and managed in an integrated manner.
Inclusive Production Through AdS/CFT

It has been shown that AdS/CFT calculations can reproduce certain exclusive 2 → 2 cross sections in QCD at high energy, both for near-forward and for fixed-angle scattering. In this paper, we extend prior treatments by using AdS/CFT to calculate the inclusive single-particle production cross section in QCD at high center-of-mass energy. We find that conformal invariance in the UV restricts the cross section to have a characteristic power-law falloff in the transverse momentum of the produced particle, with the exponent given by twice the conformal dimension of the produced particle, independent of incoming particle types. We conclude by comparing our findings to recent LHC experimental data from ATLAS and ALICE, and find good agreement.

Introduction

It has been suspected for many years that large-N c QCD admits an alternate description as a string theory 1 . Early developments were inspired by the realization that string scattering amplitudes obey Regge behavior and crossing symmetry. This conjecture was greatly spurred on by the observation that, in the limit of large N c with λ = g 2 YM N c fixed, the QCD perturbation series can be made to resemble the genus expansion of worldsheet string theory [2]. With the advent of the AdS/CFT correspondence [3][4][5][6][7][8], or equivalently gauge-string duality, the theoretical landscape has taken a dramatic step forward, and a string realization of QCD has again become a serious goal for current studies. In this paper, we explore the consequences of conformal symmetry in high energy scattering experiments. In particular, we will use the AdS/CFT correspondence to examine inclusive production. Although strictly speaking QCD itself is not a CFT, it is closely related to N = 4 super Yang-Mills, which is conformal, and the two theories are similar enough that a great deal can be learned from the conformal limit [9][10][11][12]. The effects of conformal symmetry on QCD have previously been studied in inclusive scattering in both the fixed-angle [13,14] and the near-forward limits [15][16][17][18][19][20][21][22][23][24][25]. Here, we will focus on central production at the LHC. Inclusive processes unavoidably involve near-forward particle production. The relevant physics is intrinsically non-perturbative, and cannot be reduced simply to purely partonic scattering. With AdS/CFT, one is able to address both perturbative and non-perturbative aspects of inclusive production at high energy in a unified setting. Indeed, holographic techniques based on a t-channel OPE [26][27][28][29][30][31][32][33] have been used as a complement to more traditional weak coupling methods [34][35][36][37][38][39] to study HERA data for the deep inelastic scattering (DIS) cross section at large s and small x = Q^2/s. Early interest in inclusive production can be traced back to the work of Feynman [40], Yang [41], Wilson [42], and others, focusing particularly on the scaling properties of particle distributions. Studies of inclusive production in a CFT context began with the works of Strassler [43], Hofman and Maldacena [44], and Belitsky et al. [45][46][47]. Instead of focusing on the final-state particle distribution, which is ill-defined in the strict CFT limit, the emphasis has been on infrared safety [48,49], e.g., on energy flows, leading to vacuum expectation values of the schematic form

⟨O†(p) D[w] O(p)⟩. (1.1)

The operator product D[w] in the above expression is not time-ordered, and thus the appropriate Lorentzian correlation functions are Wightman functions.
Momentum space Wightman functions lead to amplitude discontinuities, so it is necessary to deal with Landau-Cutkosky singularities 2 . The treatments in [43][44][45][46][47] have mainly focused on processes where the source involves a single local operator, such as e+e− → γ* → X, where X represents all allowed final states, which are implicitly summed over. Our discussion in this paper will deal primarily with scattering processes where the initial source is non-local, and will be carried out in a momentum representation. The simplest inclusive scattering process is

a + b → X, (1.2)

where again X implies a sum over all possible final states. After summing over contributions from all possible final states, the completeness relation Σ_X |X⟩⟨X| = I leads to the usual optical theorem, which states that the total cross section σ^{ab}_total(s) of such a process is given by the imaginary part of the elastic amplitude in the forward limit 3 ,

σ^{ab}_total(s) ≃ (1/s) Im T(s, t = 0). (1.3)

The next simplest inclusive process is single-particle production,

a + b → c + X, (1.4)

where again X implies a sum over all possible final states, leading to a differential production cross section, dσ^{ab→c+X}/d^3 p_c. Kinematically, single-particle production can be treated as a 2-to-2 process, with X having a variable mass M_X^2 = (p_a + p_b − p_c)^2, often referred to as the "missing mass"; for simplicity, we will simply call this M^2. The invariant differential cross section dσ^{ab→c+X}/(d^3 p_c /E_c) therefore depends on three Lorentz invariants instead of the usual two for exclusive scattering. The usual Mandelstam variables s, t, and u can be used, but it is frequently more convenient to work with s, t, and M^2; it is easy to see that s + t + u = m_a^2 + m_b^2 + m_c^2 + M^2, so these two sets of variables encode the same information. In a momentum space treatment, inclusive cross sections can always be identified as discontinuities of appropriate forward amplitudes through the use of generalized optical theorems. The differential cross section dσ^{ab→c+X}/(d^3 p_c /E_c) of the process ab → c + X can be identified as the discontinuity in M^2 of the amplitude for the six-point process abc̄ → a b c̄; symbolically, we have

dσ^{ab→c+X}/(d^3 p_c /E_c) ≃ (1/2is) Disc_{M^2} T_{abc̄→abc̄}. (1.5)

2 The Landau-Cutkosky singularities for Lorentzian correlation functions in CFTs with a gravity dual have recently been addressed in [50]. 3 Here we use canonically defined Mandelstam invariants. The elastic scattering amplitude T(s, t) is parameterized by the usual center of mass energy squared s and the momentum transfer squared t. The main goal of this paper is to explore the consequences of conformality on inclusive central production in proton-proton and proton-lead scattering. We examine the use of the t-channel OPE [15,51] for high energy scattering, elucidate subtleties involved in using generalized optical theorems, and pay special attention to non-perturbative issues. In particular, we show that aspects of conformal invariance can be explored in a "gluon-rich" environment 4 by treating central inclusive particle production of the form

a + b → X_1 + c + X_2, (1.6)

where X_1 and X_2 represent left- and right-moving "lumps" in the CM frame. Our discussion can be divided into several parts. We first focus on the more formal question of how to treat CFT inclusive shape distributions as weighted discontinuities of multiparticle momentum space amplitudes T_{ab12··· → a'b'12···}, generalizing earlier treatments.
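Since the remainder of the paper leans on this (s, t, M^2) bookkeeping, a minimal numerical sanity check may be useful. The sketch below verifies the constraint s + t + u = m_a^2 + m_b^2 + m_c^2 + M^2 on a set of placeholder momenta (the numbers are illustrative, not data):

```python
# Check of the inclusive-kinematics constraint for a + b -> c + X,
# treating X as a single system with "missing mass" squared M^2.
import numpy as np

def minkowski_sq(p):
    """Invariant p^2 = E^2 - |p_vec|^2, metric (+,-,-,-)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

# Placeholder momenta (GeV): a, b head-on along z; c a light hadron.
m, pz = 0.938, 50.0
E = np.hypot(pz, m)
pa = np.array([E, 0.0, 0.0,  pz])
pb = np.array([E, 0.0, 0.0, -pz])
pc = np.array([np.hypot(2.0, 0.14), 2.0, 0.0, 0.0])

s  = minkowski_sq(pa + pb)
t  = minkowski_sq(pa - pc)
u  = minkowski_sq(pb - pc)
M2 = minkowski_sq(pa + pb - pc)          # missing mass squared

lhs = s + t + u
rhs = minkowski_sq(pa) + minkowski_sq(pb) + minkowski_sq(pc) + M2
print(lhs, rhs)                          # agree to machine precision
```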
This treatment is carried out necessarily in a Minkowski setting, with the discontinuity in the generalized missing mass, M^2 = (p_a + p_b − Σ_i p_i)^2, taken in the forward limit. This procedure applies both to events initially sourced by a single local operator, as in Eq. (1.1), and to scattering processes at high energy, as in Eqs. (1.3) and (1.5). By multiparticle amplitudes here we simply refer to the usual Euclidean CFT correlation functions, ⟨ϕ(x_1)ϕ(x_2)···⟩, continued to Lorentzian signature; these lead to vacuum expectation values for time-ordered (or T-product) conformal primaries, ⟨0|T{ϕ(x_1)ϕ(x_2)···}|0⟩. We assume a standard Hilbert space structure (e.g. a state space spanned by states associated with conformal primaries) which allows us to use completeness relations. Although our emphasis is on purely conformal characteristics, we are mainly concerned with theories that allow an IR confinement deformation, so one can interpret the results in terms of canonically defined scattering amplitudes. We will next discuss inclusive production for scattering processes and explore in particular the consequences of AdS/CFT and conformal invariance at finite 't Hooft coupling, λ large but finite. Here we review the bare necessities of how to move beyond the supergravity limit by including string corrections, so that we are effectively dealing with string amplitudes on an AdS background. Historically, the greatest obstacle to a stringy description of QCD phenomenology has been the requirement of hard partonic behavior at short distances. AdS/CFT provides a framework to resolve these phenomenological difficulties. Polchinski and Strassler [13] identified an approximation regime in which the warped geometry of the dual AdS theory provides a power-law falloff for wide-angle scattering in QCD. This argument has been extended to near-forward QCD scattering in AdS/CFT. We follow a similar approach as first described in [15,52]. In [15] it was shown that Pomeron exchange, i.e. the leading Regge singularity with the quantum numbers of the vacuum, can be described by a Reggeized graviton 5 propagating in AdS 5 . The unifying principle for both exclusive power behavior in the fixed-angle limit and Pomeron dominance for near-forward scattering is conformal invariance. We next apply our analysis to single-particle inclusive production in the central region. Here X can be separated into left- and right-moving groups, X_1 and X_2 respectively. The event shape distribution is controlled by a matrix element, V_cc = ⟨V_P|ϕ_c ϕ_c|V_P⟩, involving two Pomeron vertex operators [15]. Just as in the case of exclusive fixed-angle scattering, flat space string scattering amplitudes [56][57][58][59][60] predict an exponential cutoff in the transverse momentum, ∼ e^{−4α′ p_⊥^2}. However, we argue that a generalization of the Polchinski-Strassler regime [13,14] utilizes the warped AdS geometry to render the effect of the confinement deformation unimportant at high p_T. Using this we arrive at our central result for CFT behavior at the LHC: a partonic power-law falloff of the form 1/p_⊥^δ. The exponent δ is fixed by holography and conformal invariance, given by δ = 2τ, with τ = ∆ − J, where ∆ is the conformal dimension, and J the spin of the produced hadron. 6 In the large N c limit of AdS/CFT, the theory is dominated by gluonic interactions; the production of fermion pairs is suppressed by 1/N c.
The simplest bound state is then a glueball state Tr(F^2) [63][64][65], which can be used to describe meson production via AdS/CFT [29,[66][67][68][69] 7 . In QCD, scattering processes dominated by Pomeron exchange are described via the BFKL Pomeron (reviewed in [71]). Since the BFKL Pomeron is the exchange of a Reggeized gluon ladder, the bound states lying on the trajectory are thought to be glueballs 8 . For production via scalar glueballs, we thus have δ = 2∆ = 8. This is analogous to the dimensional counting rule [73][74][75], but from a non-perturbative perspective. Finally, we test this prediction by comparing to recent ATLAS and ALICE data from the LHC. This paper is organized as follows: in Sec. 2, we focus on the treatment of inclusive distributions as discontinuities. Sec. 2.1 reviews the simple but illustrative case of 2-point functions. Although these results can be found in the literature, we re-derive them in a consistent notation. 5 In what follows, this will also be referred to as the BPST pomeron [15]. This stands in contrast with the BFKL Pomeron [53][54][55], which is based on perturbative QCD. 6 Amplitudes displaying similar power-law-like behavior can be described using a complementary holographic approach where one simply considers the string zero-mode contribution. Further details can be found in [61,62] and references therein. We focus here on the BPST approach as we believe it is more analogous to the perturbative weak coupling approach, where in both cases Regge poles can be interpreted as eigenvalues of an effective Hamiltonian. 7 For a brief introduction to mesons in AdS/CFT see [70]. 8 For a recent review of the Pomeron/Glueball connection see [72]. In Sec. 3 we turn to discontinuities of Witten diagrams. In Sec. 3.1 we detail 2-point functions in AdS and derive results analogous to those in Sec. 2. Following this, we are able to posit our prediction for high energy inclusive scattering in Sec. 3.2. We turn next, in Sec. 4, to the inclusive distribution in central production. Finally, in Sec. 5, we test this finding by comparing with the recent LHC data; in Sec. 5.3 we discuss possible explanations for the results of the experimental fits. We conclude with a brief discussion of our essential results in Sec. 6. Throughout the paper, the details of results from earlier literature are omitted from the body of the text, and are instead provided in Appendices A-D. In particular, these appendices cover the treatment of inclusive cross sections as discontinuities in QCD itself, the holographic pomeron, aspects of conformal field theory, and flat space string amplitudes, respectively. However, because much of the work here connects disparate background material, we provide a bare minimum of review and examples in the main text so that the paper is relatively self-contained.

Inclusive Cross Sections and Discontinuities

In field theory, inclusive cross sections involve Minkowski space Wightman functions. In this section, we clarify how these Wightman functions, in a momentum representation, can be identified as "forward discontinuities" of n-to-n amplitudes, e.g., n = 3 for the process a + b → c + X. We begin by reviewing the more familiar case of 2-point functions before generalizing to higher-point correlators. We conclude by demonstrating these ideas in the context of deep inelastic scattering (DIS), where the cross section can be explicitly related to a discontinuity; we also relate the moments of the DIS distribution to a t-channel OPE.
2-Point Functions

The relationship between a conventional time-ordered Green's function, ⟨0|T{ϕ_1 ϕ_2 ···}|0⟩, and a Wightman function, which is not time-ordered, can best be understood in a momentum representation. Let us illustrate this by first comparing the Feynman propagator, ⟨0|T{ϕ(x)ϕ(0)}|0⟩, for a free scalar, with the corresponding Wightman function, ⟨0|ϕ(x)ϕ(0)|0⟩. In a momentum representation 9,10 ,

G_F(p) = i/(p^2 − m^2 + iε), (2.1)
G_W(p) = 2π θ(p^0) δ(p^2 − m^2). (2.2)

The Wightman function is supported on the mass shell and corresponds to the discontinuity of G_F across its pole. Let us turn next to CFT, using conventional CFT normalization and again in a Minkowski setting. Consider a generic scalar conformal primary ϕ of dimension ∆. The Fourier transforms of its Feynman propagator and the corresponding Wightman function are, up to normalization constants d and c,

G_F(p^2) = d (−p^2 + iε)^{∆−2}, (2.3)
G_W(p^2) = c θ(p^0) θ(p^2) (p^2)^{∆−2}. (2.4)

G_F(p^2) is a real-analytic function 11 in p^2, with a branch cut across 0 < p^2 < ∞. The corresponding Wightman propagator, G_W(p^2), is a distribution. Although there is no mass gap, the relation between time-ordered amplitudes and Wightman functions remains: G_W(p^2) is a continuum over 0 < p^2 < ∞, corresponding to the discontinuity of G_F(p^2) across its cut.

Inclusive Distributions in CFT

It is useful to distinguish between two types of inclusive processes. The first type corresponds to events with a single initial local source, e.g. γ* → c_1 + c_2 + ··· + X, which has been discussed before. The second type involves a non-local source, as in scattering, e.g., a + b → c_1 + c_2 + ··· + X, which we expand on here. In the language of CFT, these inclusive cross sections can be interpreted as flow rates for conserved quantities, such as the energy density flowing into a solid angle d^2Ω about a direction n̂ [44][45][46][47]. General inclusive flows for conserved quantities can always be expressed as weighted discontinuities [44][45][46][47]. Let us consider scattering processes first. As stated in Sec. 1, the simplest inclusive process corresponds to Eq. (1.2), with the total cross section given by the imaginary part of the forward amplitude, as in Eq. (1.3). Consider next the inclusive production of a scalar particle, a + b → c + X, as in Eq. (1.4). The invariant differential cross section can be expressed as a sum over final states of squared matrix elements with momentum-conserving delta functions [76],

dσ^{ab→c+X}/(d^3 p_c /E_c) ∝ (1/s) Σ_X |T_{ab→cX}|^2 δ^4(p_a + p_b − p_c − p_X). (2.5)

Making use of the completeness relation Σ_X |X⟩⟨X| = I, the cross section can also be expressed as a matrix element of

∫ d^4x e^{i p_c · x} ϕ_c(x) ϕ_c(0), (2.6)

the Fourier transform of a product of two local operators. Here, p_c is the four-momentum of the produced scalar, with p_c^0 > 0. Since the product ϕ_c(x)ϕ_c(0) is not time-ordered, one is again dealing with a Wightman function. 11 Positivity requires that 1 ≤ ∆. For ∆ approaching a positive integer n, the coefficient d diverges while c remains finite, indicating the emergence of (−p^2)^{n−2} log(−p^2). We shall stay with a generic ∆, away from positive integers. The corresponding 3-to-3 process is a + b + c̄ → a + b + c̄, where the amplitude T_{abc̄→a'b'c̄'} is given by a T-product between asymptotic states 12 , ⟨p_a, p_b|T{ϕ_c(x)ϕ_c(y)}|p_a, p_b⟩, in momentum space. One can move from a T-product to a Wightman function as done earlier for the free propagator. Because it is a matrix element between asymptotic states, one replaces the 4-vector p by p_a + p_b − p_c; the discontinuity is then taken in the missing mass,

dσ^{ab→c+X}/(d^3 p_c /E_c) ≃ (1/2is) Disc_{M^2} ⟨p_a, p_b|T{ϕ_c ϕ_c}|p_a, p_b⟩, M^2 = (p_a + p_b − p_c)^2 > 0. (2.7)

This is the process that is examined holographically in Sec. 4, and more details can be found in Appendix A. Next we turn to inclusive processes involving a single local source, O, for example e+e− → γ*(p) → c_1 + c_2 + ··· + X. The decay process can be interpreted as a CFT process, as motivated by the work of Hofman and Maldacena [44].
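The cut structure relating Eqs. (2.3) and (2.4) can be checked numerically. The sketch below assumes the momentum-space forms quoted above, up to overall normalization: for timelike p^2, the imaginary part of (−p^2 + iε)^{∆−2} is sin(π(∆−2)) (p^2)^{∆−2}, which is precisely the Wightman density profile.

```python
# Check that the cut of G_F(p^2) ~ (-p^2 + i eps)^(Delta - 2) across
# timelike momenta reproduces a spectral density ~ (p^2)^(Delta - 2),
# with coefficient sin(pi (Delta - 2)); normalizations are dropped.
import numpy as np

Delta = 3.3                  # generic, non-integer conformal dimension
nu = Delta - 2.0
eps = 1e-30                  # infinitesimal Feynman prescription

p2 = np.linspace(0.5, 10.0, 5)            # timelike region p^2 > 0
GF = (-p2 + 1j * eps) ** nu               # branch just above the cut
disc = GF.imag
expected = np.sin(np.pi * nu) * p2 ** nu
print(np.allclose(disc, expected))        # True
```

Consistent with footnote 11, the coefficient sin(π(∆−2)) vanishes as ∆ approaches a positive integer, which is compensated by the divergence of d noted there.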
In what follows it will be useful to recast Eq. (1.1) in the form of a normalized distribution in a momentum representation, as

⟨O_w⟩ ≡ ⟨O(p)| O_w |O(p)⟩ / ⟨O(p)|O(p)⟩, (2.8)

where O_w is chosen to ensure infrared safety. In general, O_w is a non-time-ordered product of a set of local operators, as in Eq. (2.6); as discussed above, this necessitates the use of Wightman functions [45][46][47]. We can now apply to this expression the same analysis used to argue for Eq. (2.5). The matrix element ⟨O(p)| O_w |O(p)⟩ admits a form similar to Eq. (2.4), but with the momentum p replaced with p − p_c, where p_c is the momentum associated with the flow, so that p^0 > p_c^0 > 0. Then we can relate ⟨O(p)| O_w |O(p)⟩ to a discontinuity exactly as was done for Eq. (2.5) earlier. Generically, we can write the cross section for such a process as

σ_w(p) ∼ Σ_X w(X) |⟨X|O(p)|0⟩|^2,

where the sum is taken over all possible X and involves an integration over the phase space for each state X, weighted by w(X). For example, the simplest inclusive single-particle production process, e+e− → γ* → c + X, involves the measurement of a charge Q by a "calorimeter" at spatial infinity encompassing a differential solid angle d^2Ω around a direction n̂. This corresponds to having w_Q = Σ_c Q_c δ^(2)(n̂_c − n̂) θ(p_c^0). 12 Energy components for all external 4-vectors are positive. For simplicity, the overall delta function due to translational invariance will be suppressed in what follows. Strictly speaking, we need to work with amputated on-shell amplitudes, where ϕ_c should be replaced by a source function, j_c(x) = (□ − m_c^2) ϕ_c(x). We will skip this step to avoid notational overload. The cross section can also be re-written as σ_Q(p, n̂) ∼ Σ_X ⟨γ*(p)|X⟩ w_Q ⟨X|γ*(p)⟩. If the factor w_Q is replaced by a delta function of four-momentum, Σ_c Q_c δ^4(q_c − p_c), then using completeness, Σ_{X′} |X′⟩⟨X′| = I, where the sum over X′ stands for the previous sum over X with a state ϕ_c removed, this in turn simply leads to the discontinuity of a 4-point function in the invariant M^2 = (p − p_c)^2, in the forward limit p′_{γ*} = p_{γ*} and p′_c = p_c. The discontinuity is taken for M^2 > 0. The same formalism can be used to study the flow of other conserved charges, such as energy and momentum, as well as higher-point correlation functions ⟨O_w(1) O_w(2) ···⟩. For instance, the flow of energy in a direction n̂ is given by the insertion of the energy flow operator

O_E(n̂) = lim_{r→∞} r^2 ∫ dt n̂^i T_{0i}(t, r n̂).

This is related to the momentum space representation for the correlator ⟨O(p)| O_E(p_c) |O(p)⟩; however, this is kinematically related to a position space three-point Wightman function of fields, ⟨ϕ_1(x)ϕ_2(y)ϕ_3(z)⟩. Similarly, the two-point energy correlator is related to a position-space four-point function, and so on. It is therefore desirable to explore directly conformal invariance for Eq. (2.8) in a coordinate representation, as initiated in [44][45][46][47]. A similar analysis holds for higher-order correlators ⟨O_w(1) O_w(2) ···⟩, where we now have a string of weighted insertions between the source states.

String-Gauge Duality

In this section we discuss scattering via the AdS/CFT correspondence, with a particular focus on scattering in the gravity theory. We first review only the essentials of scattering in AdS space needed to understand our phenomenological model and to arrive at Eq. (3.15). Our discussion revolves around the scattering of AdS states and stringy effects beyond the supergravity limit of λ → ∞. A detailed dual description in terms of the N = 4 SYM theory, while interesting and informative, is not needed for our current application. We stress that stringy effects are not only conceptually important but also phenomenologically necessary.
Due to the difficulty of full finite-λ string calculations, scattering amplitudes are most easily formulated by starting with the infinite coupling limit and then calculating 1/√λ corrections in the context of the 1/N c expansion: we will treat stringy effects perturbatively. We pay special attention to two kinematic limits where the consequences of stringy corrections can be seen easily. One limit of interest is that of fixed-angle scattering, which leads to "ultralocal" scattering in the AdS bulk and hence to the Polchinski-Strassler regime [13]. This is briefly reviewed in Appendix B. A second limit of interest is scattering in the near-forward limit, which is discussed below. At high energy, the most important contribution to the AdS amplitude in this limit is due to the exchange of a graviton in the t-channel. However, this leads to too rapid an increase for amplitudes; stringy effects can slow the increase. In [15,17] it was shown that this leads to the introduction of a "reggeized" AdS graviton known as the BPST pomeron; this pomeron serves as the leading contribution to the scattering in a unitarized treatment via an eikonal sum. This framework can also be extended to multi-particle near-forward scattering [52,77,78], which paves the way for the treatment of central inclusive production in Sec. 4.

AdS Scattering

The AdS/CFT correspondence relates N = 4 SYM correlation functions to a dual description in terms of correlation functions of string states in a higher-dimensional spacetime via an equivalence of partition functions. 13 From the gravity perspective, CFT states can be thought of as propagating from a four-dimensional boundary theory into the gravity bulk, scattering, and returning to the boundary CFT. In the limit of large 't Hooft coupling, this process can be described with perturbative sums of "Witten diagrams" in analogy to weak coupling descriptions. (See Appendices B and C for further clarification.) For most of the following calculations, it is sufficient to work with the Poincaré patch of AdS 5 , described by the metric

ds^2 = (R^2/z^2)(dz^2 + η_{μν} dx^μ dx^ν), (3.1)

where R is the AdS radius. This metric corresponds to a boundary theory with purely conformal dynamics, as can be seen by comparing the five-dimensional AdS isometry group to the four-dimensional conformal group. The radius R of the bulk geometry is related to the 't Hooft coupling λ ≡ g_YM^2 N c of the boundary gauge theory by λ = (R/ℓ_string)^4, where ℓ_string = √α′ is the string length. Therefore, the limit λ → ∞ of strong boundary coupling corresponds to a weakly curved bulk geometry, and hence weakly coupled bulk dynamics. In these coordinates, z → 0 and z → ∞ correspond to the UV and IR of the dual gauge theory, respectively. However, we will also be interested in deforming away from a strictly conformal boundary limit, by introducing a confinement scale in the boundary theory. There are a variety of approaches to introducing a confinement deformation in AdS space [79][80][81][82][83][84][85][86][87][88], but we are interested in universal 13 The canonical description of the AdS/CFT correspondence describes string states living in AdS 5 × S^5. Here we are only concerned with excitations in the AdS space. features that are common to all the approaches. Generically, a confining gauge theory has a bulk dual with a metric of the form

ds^2 = e^{2A(z)}(dz^2 + η_{μν} dx^μ dx^ν), (3.2)

where A(z) describes both the AdS warping and the deformation away from pure AdS. Sometimes, as in the so-called "hard wall" models of QCD, the coordinates are restricted to lie in finite intervals.
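To make the hard-wall option concrete, the following numerical sketch uses the standard hard-wall normalizable modes, ϕ_n(z) ∝ z^2 J_{∆−2}(m_n z) (quoted in footnote 15 below), together with a Dirichlet condition at z_max; the boundary-condition choice and the numerical value of Λ are assumptions of the sketch, not results of this paper.

```python
# Hard-wall sketch: normalizable modes phi_n(z) ~ z^2 J_{Delta-2}(m_n z)
# with phi_n(z_max) = 0 (Dirichlet), so m_n = j_{Delta-2, n} / z_max.
import numpy as np
from scipy.special import jv, jn_zeros

Delta = 4                        # scalar glueball, tau = Delta = 4
zmax = 1.0 / 0.35                # GeV^-1; Lambda ~ 0.35 GeV, illustrative

masses = jn_zeros(Delta - 2, 4) / zmax
print("lowest masses (GeV):", masses)

# UV universality: phi(z) ~ z^Delta as z -> 0, independent of the wall.
z = zmax * np.array([1e-2, 1e-3, 1e-4])
phi = z**2 * jv(Delta - 2, masses[0] * z)
slopes = np.diff(np.log(phi)) / np.diff(np.log(z))
print("log-log slopes:", slopes)          # -> Delta = 4
```

The point of the second check is the universality emphasized below: the z → 0 exponent is fixed by the conformal dimension alone, while the wall only sets the overall mass scale.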
The presence of a confinement deformation introduces a new length scale Λ^{-1} ≫ R; we take Λ ∼ Λ_QCD. For concreteness, in most of the rest of the discussion we will assume a hard-wall deformation, where we put in a hard IR cutoff by restricting the AdS radial coordinate z to lie in the interval [0, z_max]. Then the confinement scale Λ is given by Λ ∼ z_max^{-1}. However, we expect our main results to be essentially independent of exactly which confinement deformation is used, since they depend essentially on the conformal UV dynamics. A connected Green's function G̃_F in the boundary theory can now be expressed in terms of an amplitude in the AdS bulk via a convolution,

G̃_F(p_1, ..., p_n) = ∫ ∏_i dμ(z_i) G(p_i, z_i) T_n(p_1, z_1; ...; p_n, z_n), (3.3)

where dμ(z) = dz √−g and g = det g. T_n can be considered an "amputated Green's function", and G(p, z) is the bulk-to-boundary propagator, which, for a scalar of conformal dimension ∆ and spacelike momentum, Q^2 = −p^2 > 0, is given up to a normalization factor in terms of a Bessel function of the second kind,

G(p, z) ∝ z^2 K_{∆−2}(Qz). (3.4)

We will not provide a detailed discussion of the Witten diagram expansion here, except for several remarks which will become relevant shortly. Consider first the bulk-to-bulk Feynman propagator ⟨0|T{ϕ(x, z)ϕ(x′, z′)}|0⟩ of a scalar with conformal dimension ∆. Its momentum representation, which will be designated as G_F(z, z′, p_μ), can again be expressed in terms of Bessel functions as

G_F(z, z′, p^2) ∝ (zz′)^2 I_{∆−2}(Q z_<) K_{∆−2}(Q z_>), Q^2 = −p^2. (3.5)

Since there is no mass gap, G_F(z, z′, p^2) is analytic in p^2, with a branch cut over 0 ≤ p^2 < ∞. Its discontinuity over the branch cut, which corresponds to the momentum-space representation for the Wightman function G_W(x, z; x′, z′) = ⟨0|ϕ(x′, z′)ϕ(x, z)|0⟩, is

G_W(z, z′, p^2) ∝ (zz′)^2 J_{∆−2}(√(p^2) z) J_{∆−2}(√(p^2) z′) θ(p^0) θ(p^2). (3.6)

In the limit of z, z′ → 0, it approaches, up to a normalization constant, the Wightman function in Eq. (2.4). Confinement Deformation in the IR, Universality and Conformal Invariance: Let us return to the issue of on-shell amplitudes. For CFTs, associated with each leg of the Green's function is an off-shell wave function, e^{ip_μ x^μ}, and a bulk-to-boundary propagator, G(x′, z′; z, x)|_{z′→0}. In order to define on-shell amplitudes, it is necessary to introduce a confinement deformation in the IR, leading to a finite mass gap. A new dimensionful scale, Λ^{-1} ≫ R, enters and serves as the basic length scale. Conformality holds for z ≪ Λ^{-1}. Conversely, confinement effects become important if z ∼ Λ^{-1}, with Λ expected to be of the order of Λ_QCD. In such a scenario, on-shell amplitudes are given by amputated Green's functions, which have a normal singularity structure as in standard flat space field theories. After the introduction of a confinement deformation in the IR, the spectrum of the bulk theory becomes discrete, so that the propagator in Eq. (3.5) is replaced by a discrete sum,

G_F(z, z′, p^2) = Σ_n ϕ_n(z) ϕ_n(z′)/(p^2 − m_n^2 + iε), (3.7)

where the ϕ_n(z) are a set of orthonormal wave functions associated with an infinite set of scalar glueballs of increasing mass m_n 14 . More importantly, the bulk-to-boundary propagator in Eq. (3.4) is also given by a discrete sum, with poles at p^2 = m_n^2. This in turn allows us to extract on-shell amplitudes in a standard manner. Although our discussion will turn to theories with an IR confinement deformation, there are features of the Witten diagram expansion that are model independent. As stressed in [15,17], it is possible to identify features which depend only on the conformal structure, such as the large Q^2 behavior of DIS at small x. We stress here the important fact that AdS wave functions have universal behavior 15 in the UV. As z → 0,

ϕ(z) ∼ z^τ, τ = ∆ − J, (3.9)

where τ is the twist and J is the spin.
This behavior is independent of the confinement deformation and depends only on the conformal properties. We shall make use of this fact when implementing the Polchinski-Strassler mechanism for large p_⊥ production. It is now possible to define scattering amplitudes as amputated Green's functions by going onto the pole for each external state, leading to on-shell scattering amplitudes,

T_n(p_1, ..., p_n) = ∫ ∏_i dμ(z_i) ϕ_i(z_i) T_n(p_1, z_1; ...; p_n, z_n). (3.10)

14 These states also interpolate with higher-spin glueball states on the same Regge trajectories, leading to the reggeized J-dependent propagator appearing in Eq. (B.10). 15 In the hard wall model, the glueball wave function has ϕ_n(z) ∝ z^2 J_{∆−2}(m_n z) ∼ z^∆ as z → 0. A similar explicit analytic expression can also be obtained for other deformations, such as the "soft wall" model. For each external on-shell particle, one associates a bulk wave function e^{−ipx} ϕ(z). This can also be extended to multi-particle inclusive production, which we will turn to shortly. High Energy Limit: In this paper, we will primarily be interested in inclusive processes due to scattering at high energies, where the source is in general non-local. One therefore deals with (2n)-point functions for n = 2, 3, ···. It is interesting to note that non-trivial dynamics already occur at the lowest level, for example the γ*p total cross section [27][28][29]. More generally, an inclusive discontinuity can be taken through Witten diagrams in a momentum representation. This can be done most readily for near-forward scattering at high energy in the Regge limit. There exists a rather extensive literature on the applications of AdS/CFT to high energy near-forward scattering [16-18, 24, 25, 28, 51]. The factorization of AdS amplitudes has emerged as a universal feature, present in the scattering of both particles and currents. The amplitude for elastic two-to-two scattering can be represented schematically in a factorized form as

T_4 ≈ Φ_13 ∗ K_P ∗ Φ_24, (3.11)

where Φ_13 and Φ_24 are elastic vertices and the convolution, ∗, involves an integration of the vertex position over the AdS bulk, as in Eqs. (B.2-B.3). This can be seen in Fig. 3.1. The Pomeron kernel K_P, described in more detail in Appendix B, is defined as a j-plane contour integral over the reggeized graviton propagator G_j(t, z, z′), which is defined in Eq. (B.8). Through AdS/CFT, one can identify the Pomeron with a reggeized graviton in the AdS bulk. The Pomeron kernel K_P can be introduced by perturbing about the supergravity limit through a world-sheet OPE. More formally, one can introduce a Pomeron vertex operator in AdS, as done in [15]; the resulting kernel carries a complex "signature factor" Π(α_P), containing the information about its phase that is useful for taking the discontinuity. It is customary to normalize this signature factor as

Π(j) = (1 + e^{−iπj})/sin(πj), (3.13)

so that Im Π(j) = 1.

Inclusive Cross Sections as AdS Discontinuities

In Eq. (3.10), we have expressed on-shell scattering amplitudes T_n in the boundary theory in terms of scattering amplitudes T_n(p_1, z_1, ···) in the AdS bulk. We can now extend this treatment to inclusive cross sections. After applying Eq. (1.3) to Eq. (3.10) for n = 4, we find that the cross section for a + b → X is given in terms of the imaginary part of the forward bulk four-point amplitude (Eq. (3.14)). Similarly, by applying Eq. (2.7) to Eq. (3.10) for n = 6, we find that the differential inclusive cross section for a + b → c + X is given by the M^2 discontinuity of the bulk six-point amplitude (Eq. (3.15)). Both of these discontinuities are taken in appropriate forward limits. This key result can also be extended to multi-particle inclusive production. As an explicit illustration, consider the case of DIS.
Here one first replaces Φ_13 in Eq. (3.11) by the appropriate product of propagators for external currents [26,27]. One next performs the step of taking the discontinuity [27][28][29][30][31][32], leading to a factorized form for the cross section. For the general two-to-two scattering of scalar glueballs, 1 + 2 → 3 + 4, one has

T(s, t) = ∫ dμ(z) dμ(z′) Φ_13(z) K_P(s, t, z, z′) Φ_24(z′),

where the vertex coupling Φ_ab(z) involves the normalized wave function ϕ_a(z) of a scalar glueball of conformal dimension ∆. As indicated earlier, for a hard-wall deformation, we have ϕ_a(z) ∼ z^∆ near the boundary. Similarly, the reggeized Pomeron kernel K_P in Eq. (C.5) can be given a more explicit form in the hard wall model [15,89]. We will not discuss this propagator in detail, except to note that its phase information is given by Eq. (3.13). The propagator has a discontinuity in s, with its leading behavior controlled by the strong-coupling Pomeron intercept, Im K_P ∼ s^{j_0}, with j_0 = 2 − 2/√λ. In the particular case of DIS, this leads to Eq. (A.3), with anomalous dimensions for j near 2 at strong coupling given by the BPST spin-dimension curve. The results of this section rely on identifying that the analytic structure of amplitudes in AdS/CFT is analogous to that of field theory. However, in the next section we turn our attention to a specific high energy process appropriate for collider scattering. Here, collisions with large transverse momentum will be localized in transverse space and in the Polchinski-Strassler regime; we can consider flat-space string vertices with physical momenta redshifted by the geometry. This is more fully explored in [15,52], and analogous situations for specific collider physics are described in [24,25,27,28,30,77,78,90,91].

Inclusive Single-Particle Production in the Central Region

We have shown that the inclusive single-particle production cross section in the boundary theory can be related to the discontinuity of the six-point amplitude in the bulk. In order to evaluate this discontinuity, we must generalize the treatment of two-to-two amplitudes given above to apply to three-to-three amplitudes. We begin by discussing the kinematics of inclusive production. For fixed X, the inclusive process a + b → c + X can be treated kinematically as a two-to-two process where we treat X effectively as a particle with mass

M^2 = (p_a + p_b − p_c)^2. (4.1)

Thus, in addition to M^2, we have the usual three Mandelstam invariants,

s = (p_a + p_b)^2, t = (p_a − p_c)^2, u = (p_b − p_c)^2. (4.2)

These invariants are related by the constraint

s + t + u = m_a^2 + m_b^2 + m_c^2 + M^2. (4.3)

Therefore, the kinematics can be parameterized by three invariants, which can be taken to be (s, t, M^2) 16 . However, there exists an alternate parameterization that can better illuminate the simplicity of the actual process. A universal characteristic of high energy particle production is the fact that the majority of produced particles will have small transverse momentum relative to the (longitudinal) incoming direction. In a typical hadronic collision at the LHC, the detector essentially sits at rest in the center of momentum frame of the two incoming particles, which have equal and opposite large momenta; these momenta define a longitudinal light cone (LC) direction. To be more explicit, we choose the incoming particles a and b to have LC momenta (p^+, p^−) given by

p_a = (m_a e^{Y/2}, m_a e^{−Y/2}), p_b = (m_b e^{−Y/2}, m_b e^{Y/2}), (4.4)

where Y is the rapidity separation. Then, taking m_a = m_b = m for simplicity, the Mandelstam s invariant is given by s ∼ m^2 e^Y, and the produced particle has LC momentum given by

p_c = (m_⊥ e^{y}, m_⊥ e^{−y}, p_⊥), m_⊥^2 = p_⊥^2 + m_c^2.

Equivalently, the produced particle has energy E = m_⊥ cosh y and longitudinal momentum p_L = m_⊥ sinh y. Inclusive central production involves particles with fixed p_⊥ and y in the CM frame in the s → ∞ limit, and therefore the incoming particles have large rapidities, −y_b = y_a = Y/2 → ∞.
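The light-cone bookkeeping above is easy to verify numerically. The sketch below (placeholder numbers, not data) checks s ≈ m^2 e^Y and, anticipating the invariant κ = tu/M^2 introduced in Sec. 4.1, that tu/M^2 → m_⊥^2 in the central limit:

```python
# Central-production kinematics: a, b at rapidities +-Y/2, the produced
# particle c at central rapidity y with transverse momentum pT.
import numpy as np

def momentum(m, y, pT=0.0):
    """(E, px, py, pz) for mass m, rapidity y, transverse momentum pT."""
    mT = np.hypot(m, pT)
    return np.array([mT * np.cosh(y), pT, 0.0, mT * np.sinh(y)])

def sq(p):
    return p[0]**2 - np.dot(p[1:], p[1:])

m, Y = 0.938, 18.0                       # ~LHC-like rapidity span
pa, pb = momentum(m, +Y/2), momentum(m, -Y/2)
pc = momentum(0.14, 0.5, pT=3.0)         # central light hadron

s, t, u = sq(pa + pb), sq(pa - pc), sq(pb - pc)
M2 = sq(pa + pb - pc)
mT2 = 0.14**2 + 3.0**2

print(s / (m**2 * np.exp(Y)))            # -> 1:  s ~ m^2 e^Y
print((t * u / M2) / mT2)                # -> 1:  t u / M^2 ~ pT^2 + m_c^2
```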
In such an event, the produced particles can be grouped in an intuitively helpful way as a + b → X_1 + c + X_2, where c is the centrally produced particle and X_1 and X_2 are left- and right-moving particles, respectively. In this limit, the traditional Mandelstam variables behave as

t ≈ −√s m_⊥ e^{−y}, u ≈ −√s m_⊥ e^{+y}, M^2 ≈ s. (4.5)

We can additionally check that the ratio

κ ≡ tu/M^2 ≈ m_⊥^2 = p_⊥^2 + m_c^2 (4.6)

is fixed. These kinematic conditions can be thought of as the definition of central production. Phenomenologically, we often prefer to use (s, y, p_⊥^2) as the three independent variables describing the kinematics of central production at the LHC. On the other hand, when we take the discontinuity in the 3-to-3 amplitude, we will see that it is more convenient to parameterize the kinematics with (M^2, t, u), and to therefore treat s as a dependent variable. We will return to this issue shortly.

Inclusive Central Production and the 3-to-3 Amplitude

A holographic analysis of the 2-to-3 amplitude in the double Regge limit was performed in [52] by generalizing the AdS treatment of 2-to-2 scattering. Schematically, this 2-to-3 bulk amplitude can be represented by

T_{2→3} ≈ Φ ∗ K_P ∗ V_c ∗ K_P ∗ Φ, (4.7)

where we have introduced a new 3-point central production vertex, V_c, shown in Fig. 4.1. In terms of the Pomeron vertex operator, V_c can be expressed as V_c = ⟨V_P|ϕ_c|V_P⟩. These AdS vertex operators are closed string operators where the invariants are redshifted. In general these can be complicated expressions. However, following the analysis of [24,25,52,77,78], many of the general features are shared with the much simpler flat space string theory vertex operators, which we review in Appendix D. We now move on to the six-point function, which was discussed for flat-space string scattering in [56][57][58]. Following the above discussion and the logic in [52,77,78], we will be interested in the limit where the three-to-three amplitude takes on a factorized form, given by

T_{3→3} ≈ Φ ∗ K_P ∗ V_cc ∗ K_P ∗ Φ. (4.8)

Again we have had to introduce a new central vertex, V_cc, shown in Fig. 4.2, which can formally be expressed as the matrix element involving two Pomeron vertex operators,

V_cc = ⟨V_P| ϕ_c ϕ_c |V_P⟩. (4.9)

Following the flat space calculation in [56][57][58], we can take the M^2 discontinuity in the amplitude [92], finding that the discontinuity resides entirely in the central vertex (Eq. (4.10)). As in two-to-three scattering, this AdS-space central vertex V_cc(κ̃, t̃_1, t̃_2) has the same functional form as the flat space vertex of Eq. (4.11), but with the arguments appropriately redshifted. (We follow here the notation of [56]. See Appendix D for more details.) The invariant κ was defined in Eq. (4.6) as the ratio tu/M^2, where t ≡ s_1 < 0, u ≡ s_2 < 0, and M^2 are defined in Eqs. (4.2) and (4.3). The singularity of T_{abc̄→abc̄} in M^2 now appears only as a singularity of V_cc in κ, with the discontinuity Im V_cc given in Eq. (4.13). At t_1 = t_2 = 0, α_{āac}(0) = α_{b̄bc}(0) = 0 for external tachyons, with Im V_cc(κ, 0, 0) finite. We can now explicitly write out the bulk six-point amplitude. Putting everything together, Eq. (4.8) can be expressed as a bulk convolution, Eq. (4.14), with the dependence on the central vertex collected in Eq. (4.15). In Eq. (4.14), we have also introduced an explicit IR cutoff, z_max, which should be of the order O(Λ_QCD^{-1}); this amounts to implementing a hard wall confinement deformation. It is essential that all Mandelstam invariants in this amplitude are holographic quantities, related to the flat space invariants by the prescription in Eq. (B.4). For instance, s̃_1 < 0 and s̃_2 < 0 are the appropriately redshifted values of s_1 and s_2. In this limit, we have s ≃ M^2 ≫ |s̃_1|, |s̃_2|. Next we will compute the discontinuity in the missing mass M^2, given in Eq. (4.10), in the forward limit.
From Eq. (4.10), we see that, due to the factorization of Eq. (4.8), as schematically represented in Fig. 4.2, the discontinuities Im K_P(−s̃_1, 0, z_1, z_3) and Im K_P(−s̃_2, 0, z_2, z_3) lead to the z_3 integral being entirely independent of z_1 and z_2. (See Eq. (B.9).) Thus, we can perform the z_1 and z_2 integrals to find an inclusive particle density ρ for central production, Eq. (4.17): an integral over the vertex position z_3 of the produced-particle wave function ϕ_c(z_3) against the redshifted vertex Im V_cc(z_3^2 κ/R^2, 0, 0), with β an overall constant partially stemming from the z_1 and z_2 integrals. This is our key result.

Central Production at Large p_⊥ and Conformal Invariance

It should be stressed that Eq. (4.17) depends crucially on factorization in the double-Regge limit. In the factorization limit, the particle density is independent of both y and s 17 . Conversely, the density depends on p_⊥ through the wave function ϕ_c(z) and the vertex Im V_cc(z^2 κ/R^2, 0, 0). Recall that the double Regge kinematics are such that κ ≈ p_⊥^2 + m_c^2, and therefore that taking p_⊥ large is equivalent to working in the limit where κ is large. We can then check that conformal dynamics emerge in this limit, as we saw above in the fixed-angle limit. In flat space string scattering, the six-point central vertex V_cc(κ, 0, 0) is an analytic function of κ, away from a branch cut along the positive real line. In the limit κ → ∞, the discontinuity vanishes and the vertex becomes factorizable with an exponentially small imaginary part: from Eqs. (4.13) and (D.12), Im V_cc(κ, 0, 0) decays exponentially for large κ. This parallels the result for exclusive fixed-angle scattering in Eq. (B.6). As emphasized in [56][57][58], this exponential suppression reflects the "softness" of flat-space string scattering. When the scattering occurs on an AdS background, the large-κ asymptotics are rather different. The redshifted vertex now carries the argument α′ z_3^2 κ/(2R^2), where we have substituted α′ → α′/2 to return to closed string scattering. Thus, the z_3 integrand picks up an exponential suppression for large z_3. This induces an effective cutoff z_s. We determine z_s by demanding that α′ z_s^2 κ/(2R^2) = O(1), so that

z_s ∼ λ^{1/4}/√κ. (4.20)

We can thus approximate Eq. (4.14) by integrating only up to z_3 = z_s ≪ z_max, where the exponential factor is of order one and can be neglected. Additionally, since we are taking κ → ∞, we can, following Eq. (3.9), approximate each wave function by ϕ(z) ≈ z^τ, where τ is the twist. Thus Eq. (4.17) becomes

ρ(p_⊥) ≈ β′ κ^{−τ_c} = β′ (p_⊥^2 + m_c^2)^{−τ_c}, (4.21)

where we have introduced a new normalization constant β′. In the simplest model of bulk physics, the external particles labeled by c are scalar glueballs and thus have τ_c = ∆_c = 4. We therefore have

ρ(p_⊥) ∼ 1/p_⊥^8. (4.22)

This result follows essentially from conformality, since it depends on the behavior of the external wave functions away from the confinement region; our prediction does not depend on the details of the confinement deformation chosen. It serves as a generalized scaling law for the inclusive distribution, as is the case for exclusive fixed-angle scattering [73][74][75].

Evidence for Conformality

We have argued that conformal symmetry is manifested in the presence of power law behavior in inclusive scattering processes. We will now test this prediction by direct comparison to experimental results. We will focus on differential cross section measurements at high √s performed at the LHC. Many recent measurements are in the form of a double differential cross section, in which particle production is binned both in the transverse momentum p_T and the pseudorapidity η; symbolically, these studies measure the cross section (1/2πp_T) d^2σ/(dη dp_T). Here we are interested in the region where p_T > Λ_QCD, where y ≈ η.
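The way the cutoff z_s ∼ 1/√κ of Eq. (4.20) converts the warped z_3 integral into the power law of Eq. (4.21) can be seen in a toy numerical version. The integrand below, a UV factor z^{2τ−1} against a Gaussian vertex suppression e^{−a z^2 κ}, is a schematic stand-in for Eq. (4.17); the combined power of z and the constant a are assumptions chosen only to isolate the mechanism:

```python
# Toy scaling check: integrating z^(2 tau - 1) * exp(-a z^2 kappa) over z
# gives Gamma(tau) / (2 (a kappa)^tau), i.e. a pure power law in kappa.
import numpy as np
from scipy.integrate import quad

tau, a = 4.0, 0.5

def rho(kappa):
    val, _ = quad(lambda z: z**(2*tau - 1) * np.exp(-a * z*z * kappa),
                  0.0, np.inf)
    return val

kappas = np.array([10.0, 100.0, 1000.0])
vals = np.array([rho(k) for k in kappas])
print(np.diff(np.log(vals)) / np.diff(np.log(kappas)))   # -> -tau = -4
```

With κ ≈ p_⊥^2, a log-log slope of −τ in κ is exactly the 1/p_⊥^{2τ} falloff quoted in Eq. (4.22).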
In principle, this is not precisely the quantity we have computed above. However, as discussed in [93], these two cross sections encode essentially the same information, so we expect essentially the same dependence on the kinematic variables. More concretely, we expect that the leading order physics should be independent of η, and that the exponent of the power law should be independent of s. Our goal is to fit conformally motivated behavior to differential cross sections. We will use our central results, Eqs. (4.17)-(4.20), to model p-p [93,94] and p-Pb [95] central production via Eq. (4.22). One of our assumptions from Sec. 3.2 going into Eq. (4.22) is that the incident wave functions behave as ϕ_{a,b}(z) ≈ z^2 J_{∆−2}(m_{a,b} z). This is consistent with hard and soft wall AdS confinement schemes, where the wave function scale has been shown to be m_{a,b} ≈ 1 GeV, or the size of a proton [27,28,30,77,78,90,91]. Although no heavy ion studies have been done, we assume a similar wave function form holds for Pb as well. As described in Sec. 4.2, the simplest model of bulk physics takes the production wave function, ϕ_c, to be that of scalar glueballs, which will hadronize into the detected charged particles. As briefly described in [93], the central production of charged particles in pp and pPb collisions is inherently non-perturbative. With the kinematics of Sec. 4, inclusive central production proceeds via a color-singlet exchange (Pomeron), which dominates in the Regge limit. The only current Monte Carlo (MC) methods used to describe this data involve a combination of multi-parton interactions involving single and double diffractive dissociation (including Pomeron and gluon effects), Gribov-Regge theory, and a "semi-hard" Pomeron model. In this kinematical region, these MC methods agree on a description of the differential cross section, but vary in describing event track multiplicities and mean transverse momentum distributions. For the p-Pb collisions there is no current MC prediction. At large p_T, our result implies that the differential cross section is described by the exchange of Reggeized objects, leading to power law behavior depending on conformal dimensions. However, this behavior is only expected to hold at moderately high p_T above the QCD scale. At low p_T, much more complicated behavior can occur [96]. Some of these low-p_T effects stem ultimately from saturation, which, from a string perspective, corresponds to the emergence of eikonal physics in summing over string-loop diagrams 18 . Other effects will be sensitive to confinement specifics, which are partially avoided at large p_T from the AdS/CFT perspective [30,91]. This is borne out in the data by deviations from power-law behavior at small p_T, as can be seen in the fits below. To avoid these complications, we will attempt to allow for such behavior by including an offset C, expected to be of order Λ_QCD, in our fit function. Thus, for production mediated by factorized Mueller diagrams, we want to fit a curve of the form

f(p_T) = Σ_i A_i/(p_T + C)^{B_i},

where the B_i are given by twice the conformal dimensions of the produced particles. More details about the reasoning leading to this fit function are given in Appendix E. Theoretically, our results are most strongly suited to describe glueballs. Because glueballs are not experimentally identifiable, we will instead focus on the production of other QCD bound states, namely mesons, via glueball decays. We will study meson production at the LHC in both proton-lead and proton-proton collisions.
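A minimal sketch of the fitting procedure just described, using scipy on synthetic data generated from the one-term model itself (the numerical values are placeholders, not the published spectra):

```python
# Fit f(pT) = A / (pT + C)^B with B free, so data can test B = 2*Delta = 8.
# Synthetic spectrum only -- a stand-in for the binned LHC measurements.
import numpy as np
from scipy.optimize import curve_fit

def f(pT, A, C, B):
    return A / (pT + C) ** B

rng = np.random.default_rng(0)
pT = np.geomspace(1.0, 50.0, 25)                    # GeV
truth = f(pT, A=5.0e4, C=0.7, B=8.0)                # C ~ Lambda_QCD
data = truth * rng.normal(1.0, 0.05, pT.size)       # 5% scatter

popt, pcov = curve_fit(f, pT, data, p0=[1e4, 0.5, 7.5],
                       sigma=0.05 * data, absolute_sigma=True)
print(f"B = {popt[2]:.2f} +/- {np.sqrt(pcov[2, 2]):.2f}")
```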
Within AdS/CFT, the dominant contribution should be from the production of scalar glueballs with $\Delta = 4$ (and thus B = 8) via a double-Pomeron Mueller diagram, so for simplicity we will mostly focus on a single-power fitting function. We will consider here three datasets. The first comes from proton-lead collisions studied by the ALICE collaboration at $\sqrt{s_{NN}} = 5.02$ TeV [95], and the last two come from proton-proton collisions analyzed by the ATLAS Collaboration at center-of-mass energies of $\sqrt{s} = 8$ [94] and 13 [93] TeV. These two categories are discussed in Sections 5.1 and 5.2, respectively. Results of these studies are shown in Table 1 and are interpreted in Section 5.3.

Footnote 18: Eikonalization is also responsible for saturation in the context of DIS. More discussion will be provided at the end of this section.

By comparing the analyses run on the various datasets we will be able to gain some insight into the (lack of) energy dependence in this kinematic regime. The ALICE datasets in particular cover several pseudorapidity (η) ranges, which allows us to see that there is also essentially no variation in kinematics under changes in pseudorapidity. The ATLAS data were collected over the pseudorapidity range covered by the end caps (|η| < 2.7) [97], but this is still safely inside the central-production limit.

Proton-Lead Collisions and Pseudorapidity Dependence

The data in [95] are binned in the pseudorapidity η. There are three bins, corresponding to the |η| < 0.3, −0.8 < η < −0.3, and −1.3 < η < −0.8 regimes, respectively. This gives us the opportunity to study the possible presence of a dependence on pseudorapidity at fixed $\sqrt{s}$. These data cover the range 0.15 GeV < $p_T$ < 50 GeV, and hence allow us to extend further into the high-$p_T$ regime than the other datasets. The results of the fits are shown in Figure 5.2. Excellent agreement between the fit model and the data is seen in all three cases. The plot suggests that the kinematics depend only very slightly, if at all, on the pseudorapidity bin; this is confirmed numerically by the results in Table 1. All three fit parameters are compatible across the three bins at the one-sigma level.

Proton-Proton Collisions and Center-of-Mass Energy Dependence

ATLAS has also measured the inclusive double-differential single-hadron production cross section [93,94]. Unlike the data discussed above, these data are presented in a single pseudorapidity bin, so we cannot extract any information about η dependence. Instead, these two datasets allow us to study the validity of our model at the energy frontier; we have worked in the limit of large center-of-mass energy, so this is the regime where we expect our results to be the most directly applicable. The results of the fit are shown in Figure 5.3. As before, the model is seen to correspond closely to the data. Within one sigma, the results match between the two ATLAS datasets, although given the smaller number of data points the uncertainties are of course larger than in the ALICE analysis.

Interpretation

Overall, the preceding results, summarized in Table 1, match up rather well with our predictions. The fits are compatible at the two-σ level with the power-law exponent being independent of both the pseudorapidity and the center-of-mass energy. This agrees with the results of Section 4. There are two important caveats, however.
First, the overall normalization of the distributions varies sharply between the two types of measurements, with the proton-lead collisions having a cross section enhanced by an order of magnitude relative to the proton-proton collisions. That the overall normalizations vary so strongly is not altogether surprising. The holographic argument presented here does not offer an easy way to compute this prefactor, so we have no real prediction for it. Certainly we expect higher-order corrections, unaccounted for in our tree-level calculation, to have an important influence on the normalization. Moreover, from consideration of the mechanisms for proton-lead and proton-proton scattering, it is clear that the difference between the two can have a physical interpretation, rather than being an artifact of our calculation.

Let us turn to our predicted value B = 8 for the power-law exponent. In [98], it was found that this value is consistent with low-energy data. In a perturbative treatment of inclusive production, one generally expects a $p_T$-dependence of the type $E\, d^3\sigma/d^3p \sim p_T^{-n}\, F(x_T, \theta_{cm})$, with n = 4 for naive scaling. It is also interesting to point out that a picture based on "constituent-quark interchange" [99] also leads to an effective value of $n \simeq 8$. However, our expectation of n = 8 follows from the assumption that gluon dynamics dominates in central production and that the particle distribution follows that for production of scalar glueballs. It would be interesting to explore how the "constituent-quark-interchange" approach could be made compatible with our dual picture of the strong-coupling AdS-Pomeron for central production in a gluon-dominated setting.

It is equally important to point out that the fitted values for the exponent, although comparable, are not strictly compatible with the predicted value of B = 8. In the context of our paper, the fact that the experimental data do not appear strictly consistent with the interpretation of production mediated by $\Delta_c = 4$ glueballs could be significant. In general, the best we can hope for from AdS/QCD is an understanding of event kinematics, so the value of the power-law exponent is of central importance to our results. We therefore turn now to a discussion of this small but possibly significant discrepancy.

Deviations from Conformality: The five values for the exponent B are all consistent with $B \simeq 7$, which seems to correspond to a process with $\Delta \simeq 3.5$ instead of our expected $\Delta = 4$. Given the small numerical uncertainties on our fits, it is extremely unlikely that this is a fluctuation, and we must reconcile this result with our expectations. We outline below some possible explanations for this effect. Although we cannot conclusively claim that any or all of these suggestions completely explain the fit results, they are within the realm of possibility, and would provide conceptually appealing physical interpretations.

Note that, strictly speaking, our CFT prediction yields a power $B = 2\tau$, where $\tau$ is the twist, $\tau = \Delta - J$, J being the spin. For a scalar glueball, with J = 0, we thus have $\tau = 4$. A more appealing version has additional power-law terms originating in the production of objects with twist $\tau = 2$. The dominant scaling behavior is due to the production of scalar glueballs, with $\tau = \Delta = 4$. However, if there is significant production via tensor glueballs, with $\tau = 4 - 2 = 2$, this leads to a term with power $2\tau = 4$.
If we allow production to be mediated by both types of glueballs, we would naturally find a cross section that is a sum of two power laws, with exponents $B \sim 8$ and $E \sim 4$. Depending on the relative size of the normalizations A and D, at small $p_T$ the $\tau = 4$ term will dominate (see App. E for other small-$p_T$ information), and at large $p_T$ the $\tau = 2$ term dominates. In the crossover region of intermediate momentum, the two terms can compete, causing the effective power-law exponent to be lowered, as alluded to above. Because of the competing effects of these two terms, it is difficult to fit a function of this form directly to data. However, it is possible to make some simplifying assumptions to demonstrate that it is at least a plausible model. Fixing the exponents and the offset to the values suggested above and in Table 1, we can float the normalizations A and D to compare this model to data. Such a fit is shown in Fig. 5.4 for the ALICE dataset with $-0.8 \le \eta \le -0.3$; to mitigate low-$p_T$ effects, we have discarded data with $p_T < 3$ GeV. We do not claim that this is a legitimate fit to data per se; instead we aim to show that such a two-term fit is not an unreasonable form for the cross section. (A minimal numerical sketch of this two-term comparison is given at the end of this subsection.)

Along similar lines, one could imagine quark-antiquark (q q̄) mixing becoming significant. The calculation in Sec. 4 is done at large $N_c$, where q q̄ mixing is suppressed. However, in real-world QCD we have $N_c = 3$, so to obtain phenomenologically viable results it would be beneficial to consider the effects of glueballs mixing with q q̄ states. One could imagine performing this calculation in a top-down Sakai-Sugimoto picture [100]. From a power-counting argument, we expect scalar q q̄ states to lead to wavefunctions with $\tau_c = 2$, and thus to contribute identically to a tensor glueball. It is unclear how these two scenarios might be distinguished either phenomenologically or experimentally. As a last incarnation of this argument, we could consider the effects of mixed Pomeron-Reggeon exchange; it was argued in [24,25] that these contributions could remain important, which would move the fit closer to the LHC data. In worldsheet terms, these diagrams would involve additional twist-two operators contributing to the t-channel OPEs. This could in general lead to an additional η-dependence of the final result, to which a more refined treatment could become sensitive.

As another possible line of reasoning, we can consider the effects of finite coupling. The earlier discussion mostly focused on the strong-coupling limit of $\lambda \to \infty$. However, other attempts to fit holographic calculations to data have demonstrated that finite-λ effects can be important [27,29,30,91]. In Appendix B, we argue that the Reggeization of the graviton depends crucially on finite-λ effects, i.e., on stringy physics beyond the supergravity limit. Thus, we expect finite-λ physics to affect the glueball wavefunctions that must be convolved with the scattering kernel.

Related to this is the possibility of nontrivial anomalous dimensions. The central argument of this paper involved a holographic prediction for the kinematics of N = 4 SYM. In this theory, superconformality protects the conformal dimensions of scalar glueballs. However, real-world QCD has no such protection, and hence we might expect QCD glueballs to pick up nonvanishing anomalous dimensions. Such an effect could easily account for the observed deviation from Δ = 4 production.
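As referenced above, here is a minimal numerical sketch of the two-term comparison. The fixed exponents B = 8 and E = 4 follow the text; the offset value, the placeholder data, and the fit routine are our assumptions, not the paper's fit.

```python
# Minimal sketch of the two-term (scalar + tensor glueball) model: exponents
# fixed to B = 8 and E = 4, offset C fixed from a prior single-power fit,
# only the normalizations A and D floated.  All numbers are placeholders.
import numpy as np
from scipy.optimize import curve_fit

B_EXP, E_EXP, C_OFF = 8.0, 4.0, 0.8  # assumed fixed parameters

def two_term(p_T, A, D):
    """A/(p_T+C)^8 models tau=4 scalars; D/(p_T+C)^4 models tau=2 tensors."""
    return A / (p_T + C_OFF) ** B_EXP + D / (p_T + C_OFF) ** E_EXP

p_T = np.linspace(3.0, 50.0, 40)          # discard p_T < 3 GeV, as in the text
data = two_term(p_T, 5.0e3, 2.0) * np.random.normal(1.0, 0.05, p_T.size)

(A_fit, D_fit), _ = curve_fit(two_term, p_T, data, p0=[1.0e3, 1.0])
print(f"A = {A_fit:.3g}, D = {D_fit:.3g}")
```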
Eikonalization: Another possibility for lowering the effective exponent is corrections coming from string loops, although it is not immediately clear how such an effect would emerge. See Appendix B for more details. When the eikonal, Eq. (B.11), becomes large, $\chi(s, b, z, z') = O(1)$, multiple-Pomeron exchange becomes important, leading to "saturation". Indeed, such effects should be important for inclusive production with $p_T = O(\Lambda_{QCD})$. Since this region depends crucially on how the confinement deformation is implemented, our single-Pomeron analysis can be modified significantly (footnote 20). However, for production at large $p_T$, our current treatment should be reliable. Further study in this direction will be pursued.

Naive Scaling: In a perturbative treatment of inclusive production, in the absence of dimensionful scales, the function F in (5.3) would be dimensionless, leading to n = 4; this is known as "naive scaling". However, our non-perturbative result in Eq. (4.22) differs significantly from the naive-scaling expectation, and the corresponding function F in (5.3) depends also on the confinement scale $\Lambda_{QCD}$; this dependence enters through the "string cutoff" $z_s$ in Eq. (4.20), as well as through the total cross section $\sigma_{total}$. We note that LHC data have also been examined in [102,105] against the naive expectation of $p_T^{-4}$; clearly, such scaling is not evident at LHC energies. This perturbative scaling law was also mentioned peripherally in [106]. Assuming the parameter B is energy dependent, it was speculated in [102,105] that one would reach $B \simeq 4$ at $\sqrt{s} \sim 10^3$ TeV, far beyond the LHC range. Our study, on the other hand, is based on the belief that there are no unexpected new scales involved other than $\Lambda_{QCD}$, and therefore that our AdS/CFT-based analysis should be applicable at LHC energies.

Summary and Discussion

We have explored the consequences of conformal invariance in inclusive QCD production at high energy by means of the AdS/CFT correspondence. As mentioned in Sec. 1, although QCD is not strictly a CFT, it is nevertheless possible to address conformal dynamics in certain kinematic limits where the effects of the confinement deformation are not expected to be important. In this treatment, we have focused on inclusive central production at large $p_\perp$, where we demonstrated that the particle density obeys a power-law fall-off that depends only on the conformal dimension of the produced particle, Eq. (6.1). The analysis is carried out in a momentum-space setting. With inclusive cross sections as discontinuities, it is important to include stringy effects, e.g., taking the discontinuity for the matrix element of the central vertex, $V_{c\bar c}$, between two Pomeron vertex operators. As in the case of exclusive fixed-angle scattering, this power fall-off occurs due to the geometry of warped AdS space, via a generalized Polchinski-Strassler mechanism [13,14]. The form of the power law is fixed by conformal invariance. This prediction appears to be well supported by recent LHC data.

In the first part of this paper, we concentrated on more formal aspects of inclusive cross sections as discontinuities. We first focused on general CFT and used DIS at small-x as an illustration of how to invoke a t-channel OPE.

Footnote 20: For a perspective possibly different from ours, see [101]. A universal $e^{-c p_T}$ behavior for the region $p_T < \Lambda_{QCD}$ was advocated in [102]. See also [103,104] and App. E. Since the data in this region are sparse, a more conventional behavior such as $e^{-c' p_T^2}$ cannot be ruled out.
We next discussed AdS/CFT via Witten diagrams, and additionally introduced a confinement deformation in the IR. Lastly, we discussed gauge-string duality beyond the strict supergravity limit, which leads to the inclusion of stringy effects and, in turn, the AdS-Pomeron. In the second part of the paper, we turned to the calculation of the inclusive distribution for central production, with a particular focus on the kinematic limit of large-$p_\perp$ production. We discussed the generalized optical theorem for the 3-to-3 amplitude and computed the curved-space string-theory prediction for the inclusive cross section, which led to the conformal behavior in Eq. (6.1). Finally, we tested this finding by examining recent LHC data, coming from both proton-lead and proton-proton collisions analyzed by the ALICE and ATLAS collaborations.

We end by mentioning some possible future directions for the inclusive study of conformal invariance. On the more theoretical side, a better understanding of the x-space and p-space connection would be desirable. For a CFT with a gravity dual, this can be done most easily through a perturbative Witten-diagram approach. Another possible avenue of attack is through the use of the Mellin representation, as discussed by Mack [107]. Equally interesting is to extend the study to multiparticle production [44,47,108,109]. Other phenomenological applications include inclusive production in other kinematical regions where the consequences of conformality can appear (footnote 21), such as the triple-Regge limit, heavy-quark production in the central region (footnote 22), and tetraquark production (footnote 23). Also interesting would be the study of two-point correlations, such as $\gamma^* \to c_1 + c_2 + X$ or $a + b \to X_1 + c_1 + c_2 + X_2$. Studies of some of these issues are currently underway.

Acknowledgments

The work of T.R. and C.-I T. is supported in part by the Department of Energy under contract DE-SC0010010-Task-A. T.R. is also supported by the University of Kansas Foundation Professor grant. R.N. is funded by the Stanford University Physics Department, a Stanford University Enhancing Diversity in Graduate Education (EDGE) grant, and by NSF Fellowship number DGE-1656518.

Footnote 21: Single-particle inclusive cross sections in other kinematic regions have been addressed in [98,99,106] from a perturbative dimensional-counting perspective.
Footnote 22: For heavy-quark production, this can be treated with the perturbative BFKL approach. See [110] and references therein.
Footnote 23: Although quark contributions are 1/N suppressed holographically, the Δ = 4 tetraquark contribution [111-113], which has already been investigated holographically [114], could compete with a scalar glueball.

A Inclusive Cross Sections and Applications

Inclusive cross sections as discontinuities also follow from unitarity. Here we give more detail, first on the single-particle inclusive amplitude, and also provide examples of the power of taking discontinuities to calculate cross sections. The issue of analytic structure is necessarily more involved in the case of CFT, which can be simplified at strong coupling via the use of Witten diagrams in a momentum-space representation.

A.1 Single Particle Inclusive

The discontinuity in Eq. (2.7) is taken in the forward limit, where $p_{a'} = p_a$, $p_{b'} = p_b$, and $p_{c'} = p_c$. This corresponds to a generalized optical theorem [76,115,116], also known as the Mueller formula, in direct analogy with the familiar optical theorem [116].
Each term on the right-hand side of the unitarity equation can be identified as the discontinuity in an appropriate invariant [115]. There are four types of discontinuity diagrams, with $P_i$ and $P_f$ summing over all possible permutations of initial and final states, while the $n_i$ sums run over all allowed states. The missing-mass discontinuity enters in the second group, i.e., the one indicated by the $n_2$ sum in the unitarity equation. This identification, as explained in Sec. 2.1, is in exact correspondence with that for the free propagator.

The discontinuity in $M^2$ enters as a term in the 3-to-3 unitarity relation, as represented schematically in Fig. A.1. In this figure, shaded bands represent allowed intermediate states, and all amplitudes involved, indicated by circles, are connected. We denote amplitudes in the physical region by "+" and complex conjugation by "−". From the perspective of the process $a + b + c \to a + b + c$, $M^2$ is a "cross-channel" invariant, as opposed to "direct-channel" invariants such as $s_{ab} = (p_a + p_b)^2$, $s_{abc} = (p_a + p_b + p_c)^2$, etc. Because of the Steinmann rules, there are no double discontinuities in overlapping invariants in the physical region [115,116]. This discontinuity in $M^2$, Eq. (2.7), yields a sum over all allowed multi-particle states X, multiplied by a delta-function factor, $\delta((p_a + p_b - q_c)^2 - M_X^2)$. Each state X contributes a term which is the product of an on-shell amplitude for $a + b \to c + X$ with its conjugate, $T^*_{a'b' \to c'X}\, T_{ab \to cX}$. The total discontinuity involves a sum over each allowed state X; for each X, the sum involves an integral over the appropriate multi-particle phase space.

A.2 DIS, OPE and Anomalous Dimensions

As an explicit illustration, consider the inclusive scattering $\gamma^* + \mathrm{proton} \to X$ of a virtual photon with momentum q off a proton of momentum p in the limit $Q^2 = -q^2 \to \infty$ with $x = Q^2/s$ fixed. That is, one is dealing with the photon-proton total cross section, $\sigma^{total}_{\gamma^* p}$, as a function of $Q^2$ and x. This cross section can be expressed as a product of photon polarization vectors and the hadronic tensor, $W_{\mu\nu}(p, q)$, defined as the Fourier transform of the current commutator, $\langle p | [J_\mu(x), J_\nu(0)] | p \rangle$. It can be expressed in terms of two scalar structure functions, $F_\alpha(x, Q^2)$.

For virtual Compton scattering, $q + p \to q' + p'$, the amplitude $T_{\mu\nu}(p, q; p', q')$ is given by the Fourier transform of the T-product $\langle p' | T\{J_\mu(x) J_\nu(0)\} | p \rangle$. In the forward-scattering limit, $p' = p$ and $q' = q$, $T_{\mu\nu}$ has a Lorentz-covariant expansion similar to that of $W_{\mu\nu}$, with new form factors $\tilde F_\alpha(x, Q^2)$ replacing $F_\alpha(x, Q^2)$. The hadronic tensor is related to the forward amplitude by the optical theorem, which implies that $W_{\mu\nu}(p, q) = \frac{1}{2i}\, \mathrm{Disc}_{s>0}\, T_{\mu\nu}(p, q; p, q)$. One then treats the $\tilde F_\alpha(x, Q^2)$ as real-analytic functions of x with a branch cut over [0, 1] (footnote 24).

Footnote 24: DIS in QCD is, strictly speaking, not conformal. However, it is possible to explore conformal dynamics if one assumes a fixed coupling and focuses on the kinematic region of small-x. DIS structure functions are strongly peaked phenomenologically at $x \to 0$, which can be used to infer the dominance of gluon dynamics, consistent with the large-$N_c$ expectation [26,27,30].
This singular small-x behavior allows a direct measurement of the anomalous dimensions, $\gamma_n$, of twist-two operators, $O_n$, since these operators dominate in the t-channel OPE of two currents, $J_\mu(x) J_\nu(0) = \sum_n |x|^{\Delta_n - 6}\, c^{\mu\nu}_n\, O_n(0)$.

[Figure A.2: Schematic form of the Δ-j relation for the twist-2 spectral curve at weak (λ ≪ 1) and strong (λ ≫ 1) coupling, reproduced from Ref. [15]. Symmetry about Δ = 2 follows from conformal invariance.]

A standard analysis leads to an expansion for $\tilde F_\alpha$ in $x^{-1}$, valid in the limit of large $Q^2$. Through a dispersion relation, the coefficients $M^{(\alpha)}_n(Q^2)$ of this expansion can be expressed as "moments" over the discontinuity across 0 < |x| < 1, i.e., $M^{(\alpha)}_n(Q^2) = \int_0^1 dx\, x^{n-\alpha}\, F_\alpha(x, Q^2)$. In the large-$Q^2$ limit, these coefficients scale approximately as $(1/Q^2)^{\gamma_n/2}$. Here, the $\gamma_n$ are the anomalous dimensions of twist-2 operators with even integer spin j = n, defined through the operator dimension $\Delta_n = n + 2 + \gamma_n$. For j = 2, we have $\gamma_2 = 0$ due to energy-momentum conservation. For $j \neq 2$, anomalous dimensions do not vanish, which leads us directly to CFT dynamics. We will frequently treat $\Delta = j + 2 + \gamma(j)$ as a function of j, or, equivalently, its inverse, $j(\Delta)$, as a continuous function of Δ, as shown in Fig. A.2. That is, by treating the structure functions as discontinuities, one can explore anomalous dimensions through a t-channel OPE, which can serve as a springboard for introducing stringy effects via AdS/CFT. In particular, at large 't Hooft coupling λ, by exploring Regge behavior, one finds $F \sim x^{1-j_0}$ for $2 > j_0 > 1$, where $\Delta(j_0) = 2$ and $j_0$ is identified with the Pomeron intercept. In the strong-coupling limit, $j_0 \simeq 2 - 2/\sqrt{\lambda}$. Clearly, exploring this holographically requires going beyond the SUGRA limit of $\lambda \to \infty$, to which we turn next.

B AdS/CFT Scattering and the BPST Program

We provide here further details of scattering in AdS/CFT and give a brief summary of the BPST program [15-17], which constitutes the steps leading to Eqs. (3.11) and (4.7). In [15,17], AdS/CFT is implemented by starting first with flat-space string theory. Alternatively, the construction of the BPST Pomeron can be initiated with a CFT OPE and the corresponding Witten-diagram expansion in the supergravity theory, and from there incorporate stringy effects [20-23,51]. The two approaches are equivalent, and provide separate intuitive frameworks. Here and in Appendix C we will integrate both approaches.

t-Channel OPE and Witten Diagrams: For 2-to-2 scattering at high energy, with $s = (p_1 + p_2)^2 \to \infty$ and $t = (p_3 - p_1)^2 < 0$, the simplest Witten diagram that appears in the t-channel OPE is that from a single scalar exchange. In a momentum-space representation, up to a constant, it is given by an integral over the bulk, where $d\mu(z) = dz\, \sqrt{-g}$ is the AdS₅ measure and $G_F(z, z', t)$ is the scalar bulk-to-bulk propagator given in Eq. (3.5). In anticipation of the confinement deformation we will later introduce, we will replace bulk-to-boundary propagators $\Phi_i(z, p^2)$ with normalizable physical wave functions $\varphi_i(z)$ in what follows. In a Minkowski setting, the exchange of a spin-J excitation leads to a contribution whose growth is bounded from above by $s^J$. Therefore, the t-channel scalar exchange is subleading relative to graviton exchange, which couples to the large light-cone momenta through the factor $(zz')^4 (p_1^+)^2 (p_2^-)^2$. Thus, in this limit the amplitude is graviton dominated, where the graviton kernel can be expressed in terms of the scalar propagator $G_F(z, z', t)$ and the red-shifted energy invariant $\tilde s$ as $K_G = G_{++--}\, \tilde s^2 = (zz')^{-2}\, G_F(z, z', t)\, \tilde s^2$.
We have also defined vertex factors $\Phi_{13}(z) = z^2 \varphi_1(z, p_1^2)\, \varphi_3(z, p_3^2)$ and $\Phi_{24}(z') = z'^2 \varphi_2(z', p_2^2)\, \varphi_4(z', p_4^2)$. We therefore see that the amplitude scales as $s^2$, as expected. Schematically, we write this as $\Phi_{13} * K_G * \Phi_{24}$, where * corresponds to integration over the AdS bulk.

Ultralocal Scattering and the Polchinski-Strassler Mechanism

It has been stressed in [13] that scattering amplitudes in gauge theories with a good string-dual description can often be simplified, since the dual ten-dimensional string scattering on AdS₅ × S⁵ is effectively local. This simplification is particularly applicable in the limit of fixed-angle scattering, when all four-dimensional Mandelstam invariants are large and of the same order. In this limit, gauge-theory amplitudes can be expressed as a coherent sum of local scattering events in the AdS bulk, where again we ignore fluctuations in S⁵ throughout [13]. As an effective five-dimensional scattering process, the momenta $p^\mu$ of the external states are seen by local observers in the AdS bulk to be red-shifted, with large local components $\tilde p^\mu = (z/R)\, p^\mu$ along each $p_i^\mu$. We are interested in a strongly coupled boundary theory, so as above we take the AdS radius R large compared to the string scale. In what follows, we shall set R = 1. In this limit, a 4-D scattering amplitude reduces to a coherent sum over local scattering in the AdS bulk, where $T_n$ corresponds to the amputated bosonic-string Green's function in flat space. In terms of invariants, the arguments of $T_n$ are red-shifted, $s_{ij} \to z^2 s_{ij}$.

A flat-space bosonic 4-point amplitude can be expressed in a Koba-Nielsen representation involving an integral over a single modulus. In the limit $-t \simeq (1 - \cos\theta_{cm})\, s/2 \to \infty$, the integral is dominated by a saddle point. This leads to an exponential cutoff; more details are provided in Appendix D. This exponential suppression is a generic feature of flat-space string scattering, and also holds for multi-particle scattering in similar generalized fixed-angle limits. As stressed in [13], the exponential suppression in Eq. (B.6) allows us to restrict the domain of integration in Eq. (B.5) to an effective scattering region $z \in [0, z_s(s)]$, where $z_s(s) = O(1/\sqrt{s})$. We will refer to this simplification as the Polchinski-Strassler mechanism. In the scattering region, $T_4(\tilde s, \tilde t) = O(1)$. This, combined with the wavefunctions in Eq. (3.9), leads to a power-law fall-off for the cross section of the form $d\sigma/dt \sim s^{-\tau_{total}}$, where $\tau_{total}$ is the sum of the twists $\tau_i = \Delta_i - J_i$ of the external particles. This is consistent with the dimensional counting rule of [73-75].

Beyond the SUGRA Limit: The standard Witten expansion involves only propagators and vertices of supergravity fields in AdS₅, such as the dilaton φ, metric fluctuations $h_{\mu\nu}$, and the antisymmetric tensor $B_{\mu\nu}$. This dramatic reduction in the number of degrees of freedom can be understood in terms of the boundary theory by the rapid increase of the anomalous dimensions of all unprotected gauge-invariant local operators in the large 't Hooft coupling limit. Generically, their conformal dimensions grow as $\Delta \sim \lambda^{1/4}$, so that in the $\lambda \to \infty$ limit their string duals become heavy and decouple. In this limit the sum can often be truncated, so that it is given approximately by sums of perturbative t-, s- and u-channel exchange diagrams. Perturbatively, each of these diagrams contributes only to discontinuities in its respective channel.
From the Graviton to the BPST Pomeron: In a t-channel OPE, the contribution from a conformal primary with definite spin does not lead to singularities in the cross-channel invariants s and u. Discontinuities can emerge due to the re-summation of high-spin exchanges. For finite 't Hooft coupling, incorporating the higher string modes associated with the graviton leads to a "reggeized AdS graviton". This in turn leads to the BPST program, where elastic amplitudes at high energy can be represented schematically in a factorizable form like that of the 2-to-2 amplitude in Eq. (3.11). Here the universal Pomeron kernel $K_P$ grows with a characteristic power behavior at large $s \gg |t|$, i.e., $K_P \sim s^{j_0}$. The strong-coupling Pomeron intercept, at leading order in λ, is $j_0 = 2 - 2/\sqrt{\lambda}$, which agrees with the spin J = 2 of the graviton in the limit $\lambda = g^2 N_c \to \infty$. Conversely, at finite λ, the reggeized AdS graviton has its intercept lowered below J = 2. More generally, this approach leads to conformal Regge theory in CFT, which we discuss briefly in Sec. C. Holographic descriptions of scattering data agree with a Pomeron intercept near $j_0 \simeq 1.3$ in the strongly coupled regime [27,77].

At finite λ, one can incorporate higher string modes through a Pomeron vertex operator via a world-sheet OPE. More directly, one can adopt a J-plane formalism, where the Pomeron kernel $K_P$ is given by an inverse Mellin transform, as in Eq. (3.12), with $\mathrm{Re}(j - j_0) = L > 0$. Due to the curvature of AdS, the effective spin of a graviton exchange is lowered from 2 to $j_0 < 2$. The propagator $G_j(z, z'; t)$ can be found via a spectral analysis in either t or j. Let us focus on the conformal limit. Holding $j > j_0$ real and working at leading order in λ, the spectrum in t can be seen to be continuous along its positive real axis, with $\Delta(j) = 2 + \sqrt{2\sqrt{\lambda}\,(j - j_0)}$. At j = 2, this reduces to the graviton kernel. An alternative spectral representation in j has also been provided in [15]. The leading contribution to Eq. (3.12) comes from a branch cut at $j_0$, which corresponds to a coherent sum of contributions from string modes associated with the graviton. At large s and t = 0 the kernel behaves as $K_P \sim s^{j_0}$, up to log corrections. This is the form we adopt for inclusive central production.

Tensor Glueballs and Confinement Deformation: Consider next the addition of a confinement deformation, leading to a theory with a discrete hadron spectrum, e.g., tensor glueballs lying on the Pomeron trajectory. To gain a qualitative understanding, it is instructive to rely on the "hard-wall" model, where the AdS coordinate z is restricted to lie in the range $[0, z_{max}]$; we take $z_{max} \sim 1/\Lambda_{QCD}$. This model captures key features of confining theories with string-theoretic dual descriptions. The propagator is now given by a discrete sum over allowed states, where $\varphi_n(z, j)$ can be expressed in terms of Bessel functions. Eq. (B.10) extends Eq. (3.7) to a sum over Regge trajectories.

Eikonalization: From a string-dual perspective, summing higher-order string diagrams leads to an eikonal summation. More generally, eikonalization assures s-channel unitarity. Near-forward scattering in the high-energy limit is referred to in some literature simply as the eikonal limit; this limit corresponds to $s \to \infty$ with t fixed, leading to a CM-frame scattering angle θ that vanishes as $\theta \sim 1/\sqrt{s}$.
Under plausible assumptions, it can be shown that flat-space scattering in this limit is determined by the integration of an eikonal phase, χ(s, b), over the two-dimensional space of impact parameters b. In this eikonal form, the reduced 5-D momentum transfer squared serves as a 3-d Laplacian, $t \to \nabla^2_{AdS_\perp}$, and there is a diffusion kernel in the 3-dimensional transverse space, between $(x_\perp, z)$ and $(x'_\perp, z')$. In the eikonal limit [16-23], one finds an exponentiated amplitude; expanding to first order, we can thus identify our Pomeron kernel with the eikonal $\chi(s, b, z, z')$, as in Eq. (B.12). When the eikonal becomes large, $\chi(s, b, z, z') = O(1)$, multiple-Pomeron exchange becomes important, leading to effects like saturation. For many purposes, for example DIS at HERA, keeping a single Pomeron contribution is often sufficient. For p-p and p-Pb scattering, eikonalization is also phenomenologically important. This can be seen in effects like "taming" the power increase of total cross sections, from $s^\epsilon$ to $\log^2 s$, etc.

C Conformal Partial-Wave and Regge Theory

The Regge limit for CFT can also be addressed more directly by analytically continuing the Euclidean OPE to Minkowski space. We now briefly discuss this approach, which will lead us to an alternate derivation of Eq. (3.12). We will focus on a four-point correlation function of primary operators $O_i$ of dimensions $\Delta_i$. For a t-channel OPE, it is customary to express the 4-point correlation function of external scalars in terms of a function F(u, v), where we define $x_{ij} = x_i - x_j$ and the invariant cross ratios $u = \frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2}$ and $v = \frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}$. For simplicity we have assumed $\Delta_1 = \Delta_3$ and $\Delta_2 = \Delta_4$. To explore conformal invariance, one normally begins with a conformal partial-wave expansion [20-23], starting first in a Euclidean setting, where the connected component of the amplitude F(u, v) is given by a sum over conformal blocks. For planar N = 4 SYM, we restrict the sum to single-trace conformal primary operators.

C.1 OPE in Minkowski Setting

The conformal Regge limit corresponds to a double light-cone limit in a Minkowski setting. This light-cone limit for the OPE corresponds to $u \to 0$ and $v \to 1$. Equivalently, by introducing $u = z\bar z$ and $v = (1-z)(1-\bar z)$ with $z = \sigma e^{\rho}$ and $\bar z = \sigma e^{-\rho}$, the precise Regge limit can also be specified by $\sigma \to 0$ with ρ fixed. In a frame where $x_{1\perp} = x_{3\perp}$ and $x_{2\perp} = x_{4\perp}$, this limit corresponds to approaching the respective null infinities while keeping the relative impact parameter $b_\perp = x_{1\perp} - x_{2\perp}$ fixed.

To make contact with Regge theory, it is useful to adopt a more general starting point. We introduce a basis G(j, ν; u, v) of functions for the principal unitary conformal representations of the four-dimensional conformal group SO(5, 1), and then expand F(u, v) in terms of this basis. The conformal harmonics G(j, ν; u, v) are eigenfunctions of the quadratic Casimir operator of SO(5, 1). Eq. (C.3) combines a discrete sum over the spin j and a Mellin transform in a complex Δ-plane, with Δ = 2 + iν. To recover the standard conformal-block expansion, one can close the contour in the ν-plane [107], picking up dynamical poles of a(j, ν) at ν(j) = −i(Δ(j) − 2), thus arriving at Eq. (C.1). These dynamical poles correspond to the allowed conformal primaries $O_{\Delta(j)}$ of spin j and dimension Δ(j). In continuing to the Minkowski limit, it is necessary to work with conformal harmonics G(j, ν; u, v) which are eigenfunctions of the SO(4, 2) Casimir with two continuous indices, ν and j.
A distinguishing feature of the Minkowski conformal harmonics is the fact that, in the Regge limit, they grow as $\sigma^{1-j}$ (times a function of $\sinh\rho$), so that the G(j, ν; u, v) are more and more divergent for increasing j > 1 as $\sigma \to 0$. It follows that the conventional discrete sum over spin would no longer converge. As explained in [51], a Sommerfeld-Watson resummation leads to a double-Mellin representation, where the contour in j is to stay to the right of the singularities of $a_\tau(j, \nu)$. The factor $\frac{1 + \tau e^{-i\pi j}}{\sin \pi j}$ is referred to as the "signature factor". In what follows we consider the even-signature case, τ = +. For more discussion, see [51].

C.2 Conformal Regge Theory and Eikonal

Conformal Regge theory assumes that a(j, ν) is meromorphic in the $\nu^2 - j$ plane, with poles specified by the collection of allowed spectral curves $\Delta_\alpha(j)$. An example of such an a is $a(j, \nu) = \sum_\alpha \frac{r_\alpha(j)}{\nu^2 + (\Delta_\alpha(j) - 2)^2}$. In the Regge limit, for even signature, τ = +, the spectral curve associated with the energy-momentum tensor plays the dominant role. Here $\Delta_P(2) = 4$, and this spectral curve is where the Pomeron singularity lies, as in Fig. A.2. Keeping only this contribution leads directly to the Pomeron kernel in Eq. (3.12). For more discussion, see [27,29] and [15,51].

In flat space, by expanding Eq. (B.11) to first order in χ and applying Eq. (1.3), one can see that exchanging the eikonal once contributes to the total cross section, so that $\sigma_{total}(s) \simeq 2\int d^2b\, \chi_I(s, b) + O(\chi^2)$, where $\chi_I > 0$ is the imaginary part of the eikonal. With AdS/CFT, it is possible to associate the eikonal with the leading t-channel exchange, as is done in [15]. The result is the leading (Pomeron) kernel, given by Eq. (B.12), where $\tilde s$ and $\tilde t$ are redshifted holographic invariants, as in Eq. (B.4). In the conformal limit, Eq. (C.5) provides a representation for a general scattering kernel. The eikonal χ encodes all dynamical information and, due to conformal symmetry, depends only on $\tilde s = zz's$ and on $\cosh\xi$, which corresponds to a transverse chordal distance. The Regge limit is now $\tilde s \to \infty$ with fixed ξ. It is important to note that the conformal representation (C.5) is valid for any value of the coupling constant, since it relies only on conformal invariance. We end by providing a Regge dictionary for CFT; for more details, see [51].

D Flat-Space String Amplitudes

Here we describe and evaluate some flat-space string amplitudes. As an illustration, we begin with tree-level amplitudes for tachyons in bosonic string theory. The four-point open-string tachyon amplitude, known as the Veneziano amplitude, can be expressed in Koba-Nielsen form as an integral over a single world-sheet modulus. This is a planar-ordered amplitude, with singularities in s and t only. The full amplitude is given as a sum of three planar amplitudes, with singularities in (s, t), (t, u) and (u, s) respectively: $A_{open}(s, t) = A_0(s, t) + A_0(t, u) + A_0(u, s)$. Since the external particles are tachyons, we have $\alpha'(s + t + u) = -4$. The corresponding 4-point closed-string tachyon amplitude is the Virasoro amplitude, for which $\alpha'(s + t + u) = -16$. Unlike the Veneziano amplitude, the Virasoro amplitude contains singularities in all three channels. There exist closed-form expressions for these integrals in terms of Γ-functions.

Fixed-Angle Limit for 4-Point Amplitudes

For four-point scattering, the limit of fixed-angle scattering is given by large s and t, with s/t held fixed.
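As a preview of the computation that follows, here is a compact reconstruction of the saddle-point evaluation (our sketch: the assignment of α(t) and α(s) to the two Koba-Nielsen factors, and the neglected continuations and prefactors, are assumptions consistent with the conventions above, not the paper's Eq. (D.4)):

```latex
% Our reconstruction of the fixed-angle saddle point (continuations and
% prefactors glossed over).  Assume the Koba-Nielsen integrand behaves as
%   exp[ -\alpha(t)\ln w - \alpha(s)\ln(1-w) ],  \alpha(x) \simeq \alpha' x .
% Stationarity in w gives
\[
  -\frac{\alpha' t}{w} + \frac{\alpha' s}{1-w} = 0
  \quad\Longrightarrow\quad
  w_* = \frac{t}{s+t},
\]
% and evaluating the exponent at w_* yields the exponential suppression
\[
  A_0(s,t) \;\sim\; \exp\!\Big(-\alpha'\big[\,s\ln s + t\ln t
      - (s+t)\ln(s+t)\,\big]\Big),
\]
% which is the generic "softness" of flat-space strings at fixed angle.
```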
In the CM frame, we then have $-t = \frac{s}{2}(1 - \cos\theta_{cm})$. It is possible to read off the behavior from the Veneziano formula directly, but it is more instructive to work with the Koba-Nielsen representation. Consider the open-string amplitude. When s and t are both large, with t/s fixed, the integrand has a saddle point at $w_* = t/(s + t)$. When the integral is appropriately defined by analytic continuation, this saddle point indeed dominates [118]. A similar analysis applies to all three terms of $A_{open}(s, t)$. This property is clearly also shared by closed-string amplitudes. It can be shown that the integral for $A_{closed}(s, t)$ is again dominated by the saddle point at $w_* = t/(s + t)$, thus leading to an expression like that in Eq. (D.4) but with α′ replaced by α′/2. This represents a generic property, which also applies to multiparticle amplitudes: in the fixed-angle limit where all invariants are large with relative ratios fixed, all flat-space string amplitudes are exponentially suppressed.

Regge Limit for 4-Point Amplitudes: In the Regge limit of $s \to \infty$ with t fixed, the saddle point $w_*$ moves to one of the end-points of the domain of integration, w = 0, and the amplitude can no longer be evaluated at $w_*$. Instead, we must sum the contributions from $w = O(1/s)$. In [27], it was shown that this summation corresponds to a world-sheet OPE, and can be represented by a Reggeon vertex operator. More directly, one finds the Regge asymptotics of $A_0(s, t)$. Consider next $A_0(t, u)$ and $A_0(u, s)$. For $A_0(u, s)$, this limit corresponds to a fixed-angle limit and its contribution is exponentially suppressed, leading to a total contribution of Regge form. In the physical region, where s > 0 and t < 0, the discontinuity formula corresponds to $\mathrm{Im}\, A_{open}(s, t) \simeq \pi\, \Gamma(\alpha' t)\, (\alpha' s)^{1+\alpha' t}$. (D.7) The same analysis can also be carried out for the closed-string amplitude. For large s at fixed t, the region $w = O(s^{-1})$ dominates, leading to Regge behavior.

Double-Regge Limit for 5-Point Amplitude: We will be interested in five-point string scattering, shown in Fig. 4.1, in the double-Regge limit, where we take $s = (p_1 + p_2)^2$, $s_1 = (p_3 + p_c)^2$, and $s_2 = (p_5 + p_c)^2$ large, with $t_1 = (p_3 - p_1)^2$, $t_2 = (p_2 - p_5)^2$ and $\kappa \equiv s_1 s_2/s$ fixed. Consider a planar-ordered amplitude $V_5$ with planar ordering (13452). For exploring the double-Regge limit, it is best to use the Koba-Nielsen representation, Eq. (D.9). Now we take the limit $s_1 \to -\infty$, $s_2 \to -\infty$ and $s \to -\infty$, with $\kappa = s_1 s_2/s$ fixed, to find a Regge-factorized form, where we have defined $\alpha(t) = 1 + \alpha' t$ as well as a central vertex coupling $V_c(x, t_1, t_2)$ with $x = \alpha' s_1 s_2/s$. This representation is valid for κ < 0, and the physical region κ > 0 is to be reached via analytic continuation. From Eq. (D.10), one observes that $V_c(x, t_1, t_2)$ is real-analytic, with a branch cut over $0 < x < \infty$. For x > 0, one finds an expression in terms of Ψ, the confluent hypergeometric function, where we have also abbreviated $\alpha(t_i)$ by $\alpha_i$, i = 1, 2. Most importantly, for x > 0 and $x \to \infty$, $\mathrm{Im}\, V_c$ vanishes exponentially, so that in this limit $V_c$ becomes real and factorizable, $V_c(x, t_1, t_2) \to \Gamma(\alpha_1)\Gamma(\alpha_2)$. We have considered so far only a particular planar ordering for the amplitude; to obtain the full amplitude, we need to sum over all other orderings, each of which we expect to have a similar double-Regge limit. A similar expression holds for closed strings [59,60]. In AdS/CFT, the central vertex takes the form $V_c(\tilde t_1, \tilde t_2, \tilde x)$ with all invariants redshifted [52], as appears in Eq. (4.7).
The Six-Point String Amplitude

From [56], the six-point amplitude depicted in Fig. 4.2 is given by a Koba-Nielsen integral over two moduli, u and v. In the double-Regge limit, we have $s_{ac},\, s_{\bar a \bar c} \to -\infty$ and $s_{bc},\, s_{\bar b \bar c} \to -\infty$, with $M^2 = (p_a + p_b - p_c)^2$, $s = (p_a + p_b)^2$, and $\kappa \equiv s_{ac} s_{bc}/M^2$ fixed. In this limit, the dominant contribution comes from $u = O(1/s_{ac})$ and $v = O(1/s_{bc})$, and one finds the expression in Eq. (D.14), with the usual identification of invariants with the linear trajectory function, e.g., $\alpha(t) = \alpha' t + 1$. The discontinuity in $M^2$, which now enters through κ, is then given by the corresponding discontinuity formula.

E Fit Validation and Parameter Stability

Here we provide more details on the fits to data presented in Sec. 5, specifically with respect to implementing a cutoff and the stability of the parameters; the discussion here focuses on technical details, with physical interpretation left to Secs. 5 and 6.

E.1 Power-Law Behavior

As discussed above, the arguments of Sec. 4 suggest that the cross section should behave as a power law $p_\perp^{-2\Delta}$, where Δ is the conformal weight of the particle mediating production in the bulk. This naively indicates that we should fit to data a power-law curve of the form of Eq. (E.2), in which the overall normalization A and exponent B are floated. However, this formula is only expected to be true asymptotically as $p_\perp \to \infty$. In general, there are expected to be small-$p_\perp$ effects that are not visible to our analysis. This can easily be seen by noticing that the curve diverges as $p_\perp \to 0$.

There are several ways one could imagine modifying Eq. (E.1) to include these effects. One particularly obvious way is to fit to data a sum of the power-law curve and some other curve; in this approach, the second curve is intended to model the low-$p_T$ physics directly. Such an approach was recently advocated in [102,105]. Although we are not interested here in this region, we comment on it below in Section E.2. Given that we are not interested in these non-universal effects at small momenta, we have no principled reason to prefer any one form of this low-$p_\perp$ curve over any other. Especially given that introducing such an extra curve would drastically increase the number of floated parameters, and hence potentially lead to overfitting, it is best to be more agnostic as to the form of the low-$p_\perp$ effects. We therefore consider simpler ways to remove low-$p_\perp$ effects.

Perhaps the most obvious solution would be to simply introduce a lower cutoff $p_{min}$ on the allowed $p_\perp$, and therefore only fit to a subset of each data sample. Another approach is to allow a small offset in the momentum that appears in the power-law curve, i.e., to fit the three-parameter curve of Eq. (E.3) instead of the two-parameter form presented in Eq. (E.2). This form has two advantages. First, for C > 0, the numerical singularity at $p_\perp = 0$ is directly removed; additionally, as $p_\perp \to \infty$, it is readily seen to agree with Eq. (E.1). One could imagine adding a lower cutoff to this form of the curve as well. Without a handle on the small-$p_\perp$ physics, we have no theoretical reason to prefer one of these approaches over the other. We therefore fit both forms to data, both with and without a cutoff, and choose the approach that gives the quantitatively best overall results, as quantified by $\chi^2/\mathrm{NDF}$. In the following pages, we present a thorough evaluation of these two methods. For each of the five datasets discussed in the main text, we present the results of twenty-two fits to data, corresponding to eleven different cutoffs $p_{min}$ for each of the two fit functions in Eqs.
(E.2) and (E.3). We also display some characteristic plots to facilitate a visual analysis of the results. From the fit results in Tables 2 through 11, we can immediately exclude the proposal to fit Eq. (E.2) directly to data. For all values of the cutoff tested, the $\chi^2/\mathrm{NDF}$ is unacceptable, being extremely high at small or no cutoff, and then rapidly falling to below one at large cutoff. This leads us to consider instead Eq. (E.3), and leaves only the question of whether or not to institute a cutoff, and if so what value of the cutoff to use. For much the same reasons as above, we dispense with the possibility of a large cutoff. For cutoffs between 0 and 1.5 GeV, the gains in $\chi^2/\mathrm{NDF}$ from removing the low-$p_\perp$ data are minimal. Thus, to be conservative, and to minimize the overall statistical uncertainties, we fit Eq. (E.3) to data directly, without a cutoff. These are the results given in Section 5.
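A minimal sketch of the cutoff-scan validation just described, comparing Eqs. (E.2) and (E.3) by χ²/NDF over a range of lower cutoffs; all data arrays and numerical values are placeholders, not the fit results of Tables 2 through 11.

```python
# Sketch of the Appendix E validation: fit Eq. (E.2) (two parameters) and
# Eq. (E.3) (three parameters) for several cutoffs p_min and compare
# chi^2/NDF.  Mock data only.
import numpy as np
from scipy.optimize import curve_fit

def eq_E2(p, A, B):          # Eq. (E.2): pure power law
    return A / p ** B

def eq_E3(p, A, B, C):       # Eq. (E.3): power law with offset
    return A / (p + C) ** B

def chi2_ndf(f, p, y, dy, p0):
    popt, _ = curve_fit(f, p, y, sigma=dy, p0=p0, absolute_sigma=True)
    resid = (y - f(p, *popt)) / dy
    return np.sum(resid ** 2) / (len(p) - len(popt))

p_T = np.linspace(0.2, 50.0, 60)
y = eq_E3(p_T, 1e3, 8.0, 0.8) * np.random.normal(1.0, 0.05, p_T.size)
dy = 0.05 * y

for p_min in [0.0, 0.5, 1.0, 1.5]:
    sel = p_T >= p_min
    print(p_min,
          chi2_ndf(eq_E2, p_T[sel], y[sel], dy[sel], [1e3, 8.0]),
          chi2_ndf(eq_E3, p_T[sel], y[sel], dy[sel], [1e3, 8.0, 0.5]))
```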
2017-11-15T18:31:50.000Z
2017-02-17T00:00:00.000
{ "year": 2017, "sha1": "084b8cd241481c84d54e46b589487066ccb7f241", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2017)075.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "ca72bacdccf524a5b2aaa539a58950dc2dbf9804", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
234034559
pes2o/s2orc
v3-fos-license
Localization of Optic Disk and Exudates Detection in Retinal Fundus Images

Background: Diabetic retinopathy (DR) is the most prevalent human retinal disease. It is caused by diabetes and requires early treatment to avert damage that could render the affected organs dysfunctional. Exudates are among the earliest and most familiar signs of diabetic retinopathy. In this context, new methods have been developed for the localization and isolation of the optic disk and the detection of exudates in the retinal image. A new algorithm was introduced to localize and segment the optic disk prior to exudate detection, because the disk appears with color, intensity, and contrast similar to other features of the retinal image. The deployed algorithm uses three steps to distinguish exudates from physiological features in the fundus image: the first step depends on intensity thresholding, the second is based on morphological processing, and the third strategically combines the outputs of steps 1 and 2 to disclose all exudates while removing all false positives. The proposed method was applied to several images from the database, yielding accurate and promising results.

Introduction

Fundus images can provide data about pathological changes caused by certain eye diseases, as well as early signs of certain systemic illnesses such as diabetes and hypertension. Automating ophthalmologic diagnosis by analyzing fundus images is becoming an increasingly important area of research [1]. Medical imaging currently offers excellent assistance in various fields of medical diagnosis, as it enables the acquisition and assessment of medical images and offers a wide range of diagnostic support. Medical imaging is still developing on a mounting scale, adding novel types of imaging and opportunities for unceasing refinement of the hardware [2][3].

Diabetes has become the most prevalent cause of blindness in the developed world, especially among working-age groups. Diabetes is known to cause cataracts, glaucoma and, most importantly, damage to the blood vessels inside the eye, which can affect a patient's sight. Sight loss caused by diabetes is commonly known in medical jargon as "diabetic retinopathy." Diabetic retinopathy is a critical eye disease triggered by the manifestation of diabetes on the retina. Screening diabetic patients for the development of diabetic retinopathy can eliminate up to 50% of the risk of blindness in these patients [4]. Accordingly, this study focuses primarily on exudates as a source of diagnostic leads on early diabetic retinopathy. The primary cause of exudates is the leakage of proteins and lipids from the bloodstream into the retina through damaged blood vessels [5]. In retinal imaging, exudates are exhibited with distinct dimensions, shapes and positions, normally appearing as hard white or yellowish localized areas. They generally form within the retina close to the leaking capillaries [6].

The optic disk (OD) is one of the primary characteristics of a retinal fundus image and has an appearance comparable to that of exudates. OD detection is one of the main parts of preprocessing in algorithms intended to obtain anatomical retinal structures automatically. The OD appears as a roughly circular region in the fundus image, with a size of about one-sixth of the image diameter.
In the image, the OD shows up as an area brighter than the surrounding region and as the convergence region of the blood vessel network. By comparing with a healthy retina, the features of color, shape, size, and vessel convergence can be used for detection of the OD [7]. Computerized detection of exudates enables a faster and more precise diagnosis and further allows the doctor to make a timely choice of the correct therapy.

Related Work (Literature Review)

Studies in the field were pioneered in 2012 by Esmaeili et al., who introduced a curvelet-based algorithm to detect the OD and exudates in low-contrast images. This method consisted of three major phases and required no user configuration to identify changes in the appearance of the retinal image. Bright candidate lesions are initially obtained using the discrete curvelet transform (DCUT) and by adjusting the curvelet coefficients of the enhanced retinal image. After this phase, the authors presented an OD boundary extraction and level adjustment technique based on the DCUT. Finally, a bright lesion map (BLM) was created to differentiate between exudates and the OD: to avoid spurious identification of the final exudates, extracted BLM candidate pixels not in OD areas (identified in the earlier step) were regarded as real bright lesions [8].

In 2013, Abbadi and Al-Saadi presented an automated method for detecting bright lesions (exudates) in retinal images. This enhanced approach localizes the optic disk, segments it, and detects exudates. Accordingly, a new algorithm was introduced for optic disk localization, capable of resolving the confusion between exudates and the OD. The technique utilizes certain color channels and characteristics to distinguish exudates in a digital fundus image from physiological features [9].

In 2015, Zeljković et al. proposed cost-efficient exudate and optic disk extraction algorithms with the aim of classifying retinal images to further assist diagnostic procedures. Their collective effort provided an algorithm for optic disk identification enabling an easy exudate extraction method, and developed improved classification of retinal images so as to better assist ophthalmologists during medical diagnoses. The suggested mathematical modeling algorithms allowed a better focus on light-intensity concentrations, easier optic disk and exudate detection, and efficient and accurate classification of retinal images [10].

In 2016, Partovi et al. proposed a technique for automatic detection of retinal exudates in fundus images. In this technique, morphological operations were applied to the intensity component of the hue-saturation-intensity (HSI) color space. To detect the areas of the exudates, all images were thresholded and the exudate regions were segmented. Binary morphological operations were applied to enhance detection effectiveness. Finally, for statistical purposes, the exudate areas were quantified and assessed [11].

In 2017, Zhu and Rangayyan suggested a technique for automatically locating the optic disk (OD) in retinal fundus images. The suggested technique, based on OD characteristics, includes a Sobel or Canny edge detection process followed by circle detection using the Hough transform [7].

In 2018, Nur and Tjandrasa offered exudate segmentation using a region-based saliency technique combined with intensity thresholding.
In this study, there are three primary stages, namely optic disk removal, exudate-location detection, and segmentation of exudates. The optic disk was removed using the midpoint circle algorithm. In the exudate-area detection stage, the image was split into smaller sub-images called patches, which were then classified as exudate patches or exudate-free patches based on an intensity threshold acquired from each image. Each sub-image categorized as an exudate patch was then segmented using the saliency technique [12].

Methodology

The algorithm was developed based on fundus images. The dataset used is from the DRIVE database, supplemented with images captured by a ZEISS CLARUS 500 fundus camera. These images were used for automatic detection of the optic disk and exudates in the retinal image; the exudates present in the images were of the DR type. The intensity of each pixel in the image ranges from 0 to 255. Areas with high and low image intensities can carry very significant characteristics, because they correspond to image objects. Pre-processing begins by representing the image in RGB color space, which means the image is represented in three channels (red, green and blue), each with an intensity of 0 to 255. Exudate detection is considered a major problem affecting the performance of diagnostic methods because of the high similarity between the optic disk and exudates. Consequently, this work aims at finding and segmenting the optic disk before exudate detection, in order to provide data about early diabetic retinopathy symptoms so that the disease can be adequately managed to reduce the chance of visual impairment.

Pre-processing

The intent of this work is to extract the exudates from the color fundus image. The optic disk is a light yellow zone with an appearance analogous to that of the exudates (see Fig. 1). The initial step of this work is to eliminate the OD prior to searching for the exudates, since the search is based on their yellow color criteria. The proposed method automatically detects the OD in order to eliminate this physiologically valid structure, which otherwise has an appearance comparable to the exudates. A fundus image is an RGB color image, broadly composed of three channels (blue, green, and red). The blue channel is distinguished by low contrast and does not contain much data. In the red channel the vessels are noticeable, but the channel commonly has a lot of noise or is simply saturated. Meanwhile, the green component of the retinal image gives the highest contrast between the blood vessels and the background (darker blood vessels on a bright background), and on this basis the green channel is used in automatic fundus image evaluation. The La*b* color space is valuable for representing the color quality of an image due to its robustness across devices, which facilitates implementation. The La*b* color model is a three-axis color system with dimension L for lightness, while a* and b* carry the chromatic information. In the suggested technique, we first convert the RGB image to La*b*, then calculate CMY:

C = 1 − R (1)
M = 1 − G (2)
Y = 1 − B (3)

where C, M and Y from Eqs. (1), (2) and (3) are the cyan, magenta and yellow components calculated from the R, G and B channels (normalized to [0, 1]) extracted from the RGB (red, green and blue) image.

Enhancement: Retinal images after acquisition are mostly noisy and of low contrast, with non-uniform lighting. Hence, we apply algorithms for image enhancement and noise reduction by increasing contrast and filtering.
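A minimal sketch of these pre-processing conversions, assuming the standard normalized complement relations of Eqs. (1)-(3); the file name and the use of scikit-image are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the pre-processing color conversions: RGB -> La*b* and the
# normalized CMY relations C = 1 - R, M = 1 - G, Y = 1 - B (Eqs. (1)-(3)).
import numpy as np
from skimage import io
from skimage.color import rgb2lab

rgb = io.imread("fundus.png")[:, :, :3] / 255.0   # normalize channels to [0,1]

lab = rgb2lab(rgb)          # L = lab[...,0] (lightness); a*, b* chromaticity
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
C, M, Y = 1.0 - R, 1.0 - G, 1.0 - B               # Eqs. (1), (2), (3)
```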
To improve contrast, Contrast-Limited Adaptive Histogram Equalization (CLAHE) is used. CLAHE runs on small regions of the image, and the contrast of every small region is improved by histogram equalization. CLAHE is renowned for local contrast improvement of images, facilitating the physician's task prior to image examination. Enhancement is the first step toward automatic analysis of retinal images; the CLAHE technique is used to make the image well contrasted. Intensity adjustment is then applied after the equalization to further increase the contrast of the output image (i.e., adjusting the image intensity values or color planes). As a result, the produced image can become noisy, and to fix this we use morphological filtering for image improvement and noise reduction. Morphological filtering is realized through the composite operations of opening and closing, used as filters. These operations can filter from an image any pattern smaller than the structuring element: portions of the image pass the filter if they fit the structuring element, while smaller structures are blocked and eliminated from the output image. The size of the structuring element is paramount and can be chosen to reduce noisy details without damaging any objects of interest. Finally, further morphological techniques called "opening by reconstruction" and "opening-closing by reconstruction" are utilized to clean up the image.

ROI (Region of Interest): Automatic Detection Based on Maximum Intensity

We want to investigate more closely a specific area within the image. To do this we need steps that modify the spatial intensity values. The first step in localizing the OD is to determine the optic cup, the lightest and brightest region(s) in the retinal image (see Fig. 2). The image is considered in its three channels (red R, green G and blue B); the red channel is the brightest one, in which the optic disk appears in a well-defined form. We therefore introduce another step to compute the luminance component (ye) from Eq. (4), so that we are able to weight the red, green and/or blue intensities and obtain a revised channel that combines the features of all three channels. This proposed method is determined by the formula below:

ye(i, j) = 0.73925 R(i, j) + 0.14675 G(i, j) + 0.114 B(i, j); (4) for 1 < i < 720, 1 < j < 480

To remove the background lighting variations, a filtering step is applied using the mean filter, obtaining (fye) from (ye) as in Eq. (5). Thereafter, in order to acquire the light and bright regions of the image (the optic cup, Fig. 3), we select the maximum-intensity region of the optic disk (very few pixels have this property) by:

fye = (1/n) Σ ye(x); (5) for 1 < i < n, n = size of image, x a pixel of the ye image

MS(x, y) = fye − 0.299 M(x, y); (6) for 1 < x < 720, 1 < y < 480

After that we crop the image to bound the area of the optic disk (see Fig. 4). To reduce the white region in the optic disk, we apply a black bottom-hat transform, subtracted from the white top-hat of the preceding FCM result on the cropped image, and convert the white pixels in the FCM-cropped image to black (see Fig. 6).

Retrieval of the ROI Image

To localize the OD in the cropped image, the OD in the green-channel image is covered with black pixels at the same OD location (see Fig. 7).
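The enhancement and localization steps above can be sketched as follows. The weights follow Eq. (4); the structuring-element size, the 0.95 threshold factor, and the scikit-image calls are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of enhancement and OD localization: CLAHE, opening by
# reconstruction, the luminance combination of Eq. (4), and a bright-region
# mask near the intensity maximum.
import numpy as np
from skimage import exposure, morphology

def enhance_and_localize(rgb):
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # CLAHE on the green channel (highest vessel/background contrast).
    g_eq = exposure.equalize_adapthist(G)

    # Opening by reconstruction: erosion followed by reconstruction,
    # removing bright specks smaller than the structuring element.
    selem = morphology.disk(5)
    eroded = morphology.erosion(g_eq, selem)
    opened = morphology.reconstruction(eroded, g_eq)

    # Luminance combination, Eq. (4).
    ye = 0.73925 * R + 0.14675 * G + 0.114 * B

    # Keep only pixels near the global maximum (optic cup candidates).
    cup_mask = ye >= 0.95 * ye.max()
    return opened, cup_mask
```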
Exudates Detection The full exudate detection in the retinal image is done using three successive procedures, to correct the exudate detection and remove false positives. P1: Exudate detection based on intensity thresholding The initial step is based on intensity thresholding. In this approach, the luminance component of the LAB color space (L) was utilized to facilitate the separation of the color components from intensity. Because of the greater intensity levels of the exudates in comparison to the fundus image's background, the L channel allowed us to avoid undesirable noise throughout the process of exudate detection. Subsequently, by applying morphology-based contrast enhancement to the input fundus image (the L channel minus the C channel), we were able to intensify the image contrast, which eventually simplifies the process of differentiating the exudates from the fundus image. This contrast-enhancing technique applies the white and the black top-hat transforms in turn, as follows: (i) in order to increase the concentration of image exudates, intensity adjustments are applied before applying intensity thresholds for exudate identification; (ii) exudates appear as white pixels and all other pixels as black in the output image after intensity adjustment; (iii) contrast-limited adaptive histogram equalization (CLAHE) is used to remove reflections in certain cases on the image; (iv) exudates are detected using an equalized-histogram threshold. P2: Morphological processing Morphological processing is used to detect all the exudates in the retinal image. This is achieved by using the green channel (G) of the retina image and then eliminating the OD from the input image as shown in Fig. 4. The steps of exudate detection based on morphological processing are as follows: to extract exudates in the fundus image, we generated a ring filter by dilating the green channel G twice (I1 and I2), using disk-shaped structuring elements of sizes 16 and 7: I1 = G ⊕ S1 (7), I2 = G ⊕ S2 (8). Subtracting eq. (8) from eq. (7) produces the exudates' edges. The resulting image is then converted to binary using 0.04 as the threshold, and a closing operation is implemented in order to complete the edges of the detected exudates. P3: Combination of P1 and P2 As both exudate-detection procedures exhibit different sources of error, these differences are treated as false positives. The outputs of the two procedures above are combined so that even trivial exudates in the image are detected while the false positives are removed. The false positives in the morphological technique's output are due to blood vessels and reflections near blood vessels. The steps of the combination procedure are as follows (see the sketch after this paragraph): let I represent the output of the morphology-based exudate detection technique. A morphological closing is applied to I with S1 and S2, disk-shaped structuring elements of sizes 6 and 25, respectively; closing with two different structuring-element sizes is applied so that exudates of every size can be detected: I1 = I • S1 (9), I2 = I • S2 (10), where • denotes the morphological closing operation, and I1 and I2 are output images containing the smaller-size and larger-size items, respectively, although noise due to blood vessels is also present.
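The ring filter of eqs. (7)-(8), the thresholding and closing steps, and the combination logic can be sketched as follows; this is illustrative only, `green` is assumed to be the uint8 green channel with the OD masked out, and `p1` is a hypothetical binary output of procedure P1.

```python
import cv2
import numpy as np
from skimage.morphology import disk

g = green.astype(np.float32) / 255.0

# Ring filter, eqs. (7)-(8): dilate with two disk sizes and subtract;
# the difference highlights the edges of bright lesions.
s1, s2 = disk(16).astype(np.uint8), disk(7).astype(np.uint8)
i1 = cv2.dilate(g, s1)
i2 = cv2.dilate(g, s2)
edges = i1 - i2

# Threshold at 0.04 and close with two element sizes, eqs. (9)-(10),
# so that both small and large lesions become complete blobs.
binary = (edges > 0.04).astype(np.uint8)
closed_small = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, disk(6).astype(np.uint8))
closed_large = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, disk(25).astype(np.uint8))
p2 = np.logical_or(closed_small, closed_large)

# Combination (procedure P3): AND keeps detections confirmed by both
# procedures; XOR exposes pixels flagged by only one (candidate false
# positives).
confirmed = np.logical_and(p1, p2)
disputed = np.logical_xor(p1, p2)
```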
Logical "AND" and "EX-OR" operations are used between output images I3 and I4 to reduce these reflections and incorporate exudates detection, and results are combined. (15) where "(+)" represents logical EX-OR operation, "." Signifies logical "AND" operation. Because the error of the two approaches are distinctive, taking into account only the combination of the two independent methods produces only original exudates and would overturn the false positives to promote the accuracy of detection. The result of exudates identification is shown in Fig. (8) below: Figure 9. Algorithm for exudates detection A performance measures comparison between the proposed method with related works mentioned in [Patrovi et. al., 2016] is summarized in table (1), and the graphical representation is shown in Fig (10). Conclusion Thus far, the paper introduced new approach to eliminate the optic disk and automatic detection of exudates in the retina images for preemptive diagnosing of diabetic retinopathy in its early stages in order to reduce the risk of complications that can lead to total sight-loss. Several images from a standard database were applied to examine these approaches. Automatic techniques aiming at screening the exudates were developed based on the methods of image processing that use strategic mixture of morphological based techniques and intensity thresholding that aim at the removal of false positives to obtain accurate exudates detection. False positives exist in output of intensity thresholding are largely due to the reflection near to optic disc, however, in the morphological processing false positives were present due to blood vessels and reflection near to blood vessels. The result of this algorithm detects all the exudates in the image precisely and with a promising scale of exactness.
2021-05-10T00:03:41.763Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "269fdcd829b9fe62f6e4ab7151480bcd254677d9", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1804/1/012128/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "129e076502ea0c46c1657b0891bcc91478807d28", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
245987717
pes2o/s2orc
v3-fos-license
Novel Mechanisms of Anthracycline-Induced Cardiovascular Toxicity: A Focus on Thrombosis, Cardiac Atrophy, and Programmed Cell Death Anthracycline antineoplastic agents such as doxorubicin are a widely used and highly effective component of adjuvant chemotherapy for breast cancer and curative regimens for lymphomas, leukemias, and sarcomas. The primary dose-limiting adverse effect of anthracyclines is cardiotoxicity that typically manifests as cardiomyopathy and can progress to the potentially fatal clinical syndrome of heart failure. Decades of pre-clinical research have explicated the complex and multifaceted mechanisms of anthracycline-induced cardiotoxicity. It is well-established that oxidative stress contributes to the pathobiology and recent work has elucidated important central roles for direct mitochondrial injury and iron overload. Here we focus instead on emerging aspects of anthracycline-induced cardiotoxicity that may have received less attention in other recent reviews: thrombosis, myocardial atrophy, and non-apoptotic programmed cell death. INTRODUCTION Considerable research effort has been invested in understanding the complex and multifactorial mechanisms underlying anthracycline-induced cardiotoxicity. Longstanding evidence has established causative roles for oxidative stress in contributing to cardiomyocyte dysfunction and death (1). Mitochondrial dysfunction generates much of this oxidative stress and the central role of multifaceted mitochondrial injury in anthracycline-induced cardiotoxicity has been comprehensively reviewed recently (2). Here, we will focus on emerging, though less-studied, mechanisms underlying the adverse effects of anthracyclines on both the heart and the vasculature. ANTHRACYCLINES AND THROMBOSIS Observational data suggest that some anti-cancer therapies are associated with increased risk for thrombotic events in the venous and arterial vasculature including deep vein thrombosis (DVT), pulmonary embolism (PE), and arterial thrombosis (AT) as recently summarized by Grover et al. (3). Indeed, Weiss et al. reported that 5% of stage II breast cancer patients (22/443) with 2 years of post-mastectomy chemotherapy developed venous thrombosis without signs of metastasis (4). Interestingly, no thrombosis was observed after completion of the chemotherapy (4). In another study of Stage IV breast cancer patients, thrombosis incidence rose to 17.6% in those who received anthracyclines (5). Interestingly, analysis of common risk factors for thrombosis (ambulatory status, obesity, family history, smoking, diabetes mellitus, hypertension, liver dysfunction, thrombocytosis, and previous endocrine therapy) showed no association with the observed thrombotic events (5). With specific regard to anthracyclines, multiple myeloma patients were at an increased risk of DVT (16%) when doxorubicin (DOX) was added to thalidomide, and that risk increased with age (6). Importantly, the thrombotic risk for all three of these trials is reported relative to a control group that did not receive an anthracycline. Increased thrombosis incidence (7.5%) was also observed in breast cancer patients undergoing an anthracycline-containing chemotherapy regimen, with an age-dependent risk increase (27%) in patients over 60 years, though this study did not include a control group that was not exposed to anthracyclines (7). Patient-specific factors that enhance risk of anthracycline-induced thrombosis are poorly defined, though one intriguing possibility is the metabolic syndrome.
Individuals with the metabolic syndrome are at higher risk of both thrombotic events (8) and anthracycline-induced cardiotoxicity (9), possibly as a result of the chronically proinflammatory systemic milieu. Obesity (10) and insulin resistance (11,12), components of the metabolic syndrome, also independently enhance risk for anthracycline-induced cardiotoxicity, though a direct link to thrombosis has not been established. PRO-THROMBOTIC EFFECTS ON VASCULAR CELLS How do anthracyclines, such as DOX, contribute to a prothrombotic phenotype? Multiple studies have shown that anthracyclines increase phosphatidylserine (PS) exposure on the outer cell surface on vascular cells (13)(14)(15)(16). Negatively charged PS-rich membranes enhance the coagulation cascade reaction by increasing the activity of gamma carboxyglutamic acid (GLA)-dependent coagulation factors like factor VIIa (FVIIa), FXa, FIXa, and thrombin (17). Liaw's group showed that DOX induces a procoagulant phenotype in human endothelial cells (ECs) by increasing the PS flip to the cell surface, which enhances activity of preexisting tissue factor (TF) without increasing its expression level (16). Interestingly, this effect was not seen for methotrexate- or 5-fluorouracil-treated ECs (16). Further, the increase in surface PS on the ECs was associated with DOX-induced EC apoptosis (16). Later, Boles et al. (15) confirmed that the anthracycline daunorubicin also increased cellular TF activity without affecting TF protein levels, but rather by enhancing PS surface exposure on the human monocytic cell line THP-1 (Figure 1). DOX had a similar effect on platelets, causing increased PS surface exposure due to apoptotic pathway activation in DOX-exposed human platelets and subsequently resulting in enhanced procoagulant activity (14). The authors linked the increased PS exposure to DOX-induced platelet mitochondrial dysfunction at doses of 2.5-7.5 mg/kg in rats (13). Interestingly, at a cardiotoxic DOX dose of 25 mg/kg, apoptosis-dependent thrombocytopenia was observed as early as 4 h after DOX injection in rats (13). Moreover, daunorubicin was shown to increase the release of TF+ extracellular vesicles (EV) from THP-1 cells in vitro (Figure 1) (15). Increased anthracycline-induced EV release was confirmed by others (18)(19)(20). DOX-induced EVs are enriched for 4-hydroxy-2-nonenal (4-HNE), a marker for oxidative stress (19). 4-HNE can directly induce the release of TF+EVs from perivascular cells, which can contribute to a prothrombotic state (21,22). In line with this observation, TF+EVs were shown to enhance thrombus formation in multiple murine models of cancer-associated thrombosis (23,24). Aside from its procoagulant effects, DOX is known to negatively affect the anticoagulant properties of ECs by downregulating the expression of the endothelial protein C receptor, leading to decreased protein C pathway activation (25). EFFECTS ON BLOOD FLOW AND THROMBUS FORMATION IN VIVO Injection of DOX (8 mg/kg) leads to occlusive vasoconstriction of smaller vessels (<15 µm) and vascular leakage in the murine femoral microvasculature within 4 min (26). Moreover, the same dose of DOX also reduces the blood flow in testicular arteries in mice within 15 min of injection (27). The authors linked these phenomena to DOX-induced vascular toxicity leading to EC-platelet interactions and the formation of EC-bound platelet microthrombi (27).
Blood flow was restored by pre-treatment with low molecular weight heparin or the anti-platelet drug eptifibatide, suggesting that anti-platelet/anti-coagulant agents might be effective in reducing the detrimental vascular effects of DOX (27). DOX doses up to 7.5 mg/kg significantly enhanced thrombus sizes in a modified rat FeCl3 vena cava thrombosis model, without causing thrombocytopenia (14). In addition, in a vena cava stasis model DOX (7.5 mg/kg) caused increased thrombus formation that was reduced by administration of clopidogrel, aspirin or an inhibitor of platelet-activating factor (28). These findings strongly suggest that DOX-induced venous thrombosis is dependent upon platelet activation (28). COAGULATION-DEPENDENT SIGNALING IN ANTHRACYCLINE-INDUCED CARDIOTOXICITY While coagulation activation leads to fibrin deposition, the coagulation proteases that are generated in the process also lead to cleavage of protease-activated receptors (PARs) (29). PAR1 and PAR4 are activated by thrombin and are expressed on human platelets; their cleavage is the strongest platelet-activating stimulus. PAR3 also is activated by thrombin, but PAR3 mostly acts as co-factor for PAR4 and has only limited signaling function in humans (30). PAR2 is rather thrombin-insensitive and is primarily activated by the TF:FVIIa complex or FXa (31). Though PARs frequently are considered for their roles in platelets, they also are expressed on cardiomyocytes, where they contribute to the cardiac response to multiple injury models (29,31,32). The absence of PAR1 and PAR2 reduced infarct size and adverse cardiac remodeling in experimental heart failure (29,31,32). PAR4 activation can be cardioprotective or detrimental dependent on the chosen injury model and time point analyzed (31,(33)(34)(35)(36). With regard to chemotherapy-induced toxicity, PAR1 deficiency and PAR1 inhibition with the FDA-approved drug vorapaxar protected against DOX cardiotoxicity in mice (37). PAR1 activation exacerbated mitochondrial dysfunction and apoptosis in cardiac cells exposed to DOX in vitro (37). PAR1 deficiency was associated with reduced oxidative stress and apoptosis as well as decreased circulating cardiac troponin I and improved cardiac contractile function in the hearts of mice treated with 20 mg/kg DOX (37). PAR1 deficiency was also protective in a chronic DOX cardiotoxicity model (5 mg/kg/week for 5 weeks) (37). In line with these observations, PAR1 inhibition with the PAR1 inhibitor Q94 reduced toxic renal effects of DOX (15 mg/kg) in mice (38). Whether PAR2 or PAR4 contribute to DOX cardiotoxicity is the objective of ongoing investigations. Interestingly, PAR2 inhibition with FSLLRY-NH2 reduced nephropathy in a chronic rat DOX kidney injury model (1 mg/kg/day for 6 weeks), suggesting that PAR2 deficiency/inhibition might also be cardioprotective during DOX chemotherapy (39). ANTHRACYCLINES INDUCE MYOCARDIAL ATROPHY Anthracycline-based chemotherapies are known to cause abnormalities in heart morphology in cancer patients. Childhood cancer survivors who received anthracycline treatment have reduced ventricular wall thickness and myocardial mass later in life (40,41). Recent evidence suggests that anthracyclines also cause a reduction in left ventricular mass in adult cancer patients (42)(43)(44). Importantly, an early decline in heart mass is associated with worse heart failure outcomes, emphasizing the importance of this phenomenon (42).
A decrease in heart mass can be caused by reduced cardiomyocyte size (atrophy) and/or number (i.e., loss of cardiomyocytes due to cell death). Here, we summarize recently identified mechanisms underlying anthracycline-induced atrophy and cell death (Figure 2). Similar to the clinical findings, exposure to the anthracycline DOX also reduces heart weight in mice (44)(45)(46). At the molecular level, DOX induces p53 expression, which is necessary for inactivation of mammalian target of rapamycin (mTOR), a serine-threonine kinase essential for protein synthesis (46). Interestingly, DOX-induced reductions in heart weight and myocyte size are abolished by cardiac-specific expression of dominant-interfering p53 or constitutively active mTOR, suggesting that DOX induces cardiac atrophy through p53-dependent inhibition of mTOR (46). Activation of mTOR by vascular endothelial growth factor-B (VEGF-B) gene therapy also prevents DOX-induced cardiac atrophy (47). Conversely, inducible ablation of mTOR in adult heart is sufficient to reduce cardiomyocyte size within 1-2 weeks (48). Taken together, these data indicate that mTOR inhibition is an important mechanism underlying DOX-induced atrophy. Cardiac atrophy can occur as a result of oxidative stress. DOX exposure induces reactive oxygen species (ROS) generation through mitochondrial iron accumulation, owing to repression of ATP-binding cassette protein-B8 (ABCB8)-mediated mitochondrial iron export (49). Cardiac-specific ABCB8 transgenic mice are protected from DOX-induced ROS generation and atrophy (49). In addition, DOX exposure induces transient receptor potential canonical 3 (TRPC3)-dependent upregulation of NADPH oxidase 2 (Nox2) (50). Formation of the TRPC3-Nox2 complex amplifies ROS production and results in cardiac atrophy. Knockdown of TRPC3 or pharmacologic inhibition of TRPC3-Nox2 interaction attenuates DOX-induced atrophy in neonatal rat cardiomyocytes (NRCMs) (50). Moreover, mice lacking Nox2 are also resistant to DOX-induced cardiac atrophy (51). These findings suggest that enhanced ROS production resulting from mitochondrial iron accumulation or TRPC3-Nox2 complex formation also contributes to DOX-induced atrophy. CONTRIBUTIONS OF PROGRAMMED CELL DEATH TO ANTHRACYCLINE CARDIOTOXICITY Exposure to anthracyclines triggers a variety of cell death modalities in the heart, resulting in cardiac cell loss. Anthracycline-induced cell death pathways have been reviewed in detail quite recently (52). A brief summary of the novel mechanisms of anthracycline-induced cardiomyocyte death is provided below. Apoptosis Apoptosis is undoubtedly the most intensively studied form of cell death in anthracycline cardiotoxicity. DOX targets topoisomerase-IIβ to cause DNA double-strand breaks and initiate the intrinsic apoptosis pathway (53). DNA damage induces p53-dependent oligomerization of the Bcl2 family members Bak and Bax, which forms a pore in the outer mitochondrial membrane, resulting in cytochrome c release, caspase activation, and apoptosis. Accordingly, pharmacological inhibition of p53 or Bax blocks apoptosis and prevents DOX-induced cardiomyopathy (54,55). It is noteworthy that p53 plays complicated roles in DOX-induced cardiotoxicity by modulating apoptosis-independent processes including mitochondrial biogenesis (56) and clonal hematopoiesis (57), as well as atrophy (46).
In addition to the pore-forming effectors Bak and Bax, the pro-apoptotic Bcl2 family proteins also include activators (Bim, Bid, and Puma) that directly interact with the effectors to trigger apoptosis (58). DOX induces expression of Bim through CDK2-dependent FOXO1 activation (45,59). Inhibition of either CDK2 or FOXO1 attenuates DOX-induced apoptosis and cardiac dysfunction (45,59). Young age, a major risk factor for anthracycline cardiotoxicity in humans, is associated with higher sensitivity to apoptosis, further supporting an important role of apoptosis in anthracycline-related cardiotoxicity (60). Mitochondrial Permeability Transition Pore (mPTP)-Driven Necrosis Necrosis driven by opening of the mPTP is characterized by rapid loss of the inner mitochondrial membrane potential and is dependent on cyclophilin D (CypD) (61). Recent evidence suggests that DOX treatment provokes mPTP-driven necrosis in cardiomyocytes (62). Mechanistically, DOX induces expression of Bnip3, which binds CypD to trigger mPTP opening and resultant necrosis (62). Bnip3 null mice are protected from DOX-induced mitochondrial damage, necrosis, and cardiac dysfunction (63). In addition, Bax and Bak are necessary for mPTP-driven necrosis (64,65). Indeed, a small-molecule Bax inhibitor protects against DOX-induced necrosis in vivo (55). Necroptosis Necroptosis is programmed cell necrosis that is initiated by binding of a death ligand (typically from the TNF superfamily) to a death receptor (such as Fas, TNFR1, or TRAIL) and culminates in plasma membrane permeabilization mediated by mixed lineage kinase domain-like pseudokinase (MLKL) (61). MLKL activation and plasma membrane translocation requires phosphorylation by receptor-interacting protein kinase 3 (RIPK3) (66). DOX exposure upregulates cardiac RIPK3 and MLKL in vivo and in vitro to induce necroptosis (67). RIPK3 knockout mice are resistant to DOX-induced myocardial necrosis, cardiomyopathy and death (68). In this context, RIPK3 induces activation of Ca2+-calmodulin-dependent protein kinase (CaMKII) to trigger necroptosis (68). Moreover, DOX-induced cardiomyocyte death is blocked by the necroptosis inhibitor necrostatin-1, suggesting that necroptosis contributes to DOX-induced cardiomyocyte injury (67). CONCLUSIONS Here, we have reviewed our emerging understanding of the contributions of thrombosis, myocardial atrophy, and programmed cell death to the complex and multifaceted pathobiology of anthracycline-induced cardiovascular toxicity. Future work in our labs and others will further explicate the importance of these processes to anthracycline-induced cardiovascular toxicity and define whether they could represent novel therapeutic targets for prevention or treatment of these dose-limiting and potentially life-threatening adverse effects.
2022-01-17T14:09:27.401Z
2022-01-17T00:00:00.000
{ "year": 2021, "sha1": "f8d34fe59410d2495eabef7d4851aebef0dc8202", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "f8d34fe59410d2495eabef7d4851aebef0dc8202", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253235974
pes2o/s2orc
v3-fos-license
Brain magnetic resonance imaging predictors in anti‐N‐methyl‐D‐aspartate receptor encephalitis Abstract Objective Brain magnetic resonance imaging (MRI) findings in anti‐N‐methyl‐D‐aspartate receptor (NMDAR) encephalitis are nonspecific and rarely have obvious associations with clinical characteristics and outcomes. This study aimed to comprehensively describe the MRI features of patients with NMDAR encephalitis, examine their associations with clinical characteristics, and evaluate their predictive power for disease recurrence and prognosis. Methods We retrospectively extracted the clinical data and brain MRI findings of 144 patients with NMDAR encephalitis. Patients underwent a 2‐year follow‐up to assess disease outcomes. We evaluated the associations of brain MRI findings at the onset with clinical characteristics, recurrence, and prognosis. Results Initial MRI showed typical abnormalities in 65 patients (45.1%); of these, 34 (29.3%) developed recurrence and 10 (9.4%) had poor prognosis (mRS ≥3). Binary logistic regression analyses revealed that insula abnormalities were associated with acute seizure (odds ratio [OR] = 3.048, 95% confidence interval [CI]: 1.026–9.060) and white matter lesions were associated with cognitive impairment (OR = 2.730, 95% CI: 1.096–6.799). Risk factors for a poor 2‐year prognosis included a higher number of brain MRI abnormalities (OR = 1.573, 95% CI: 1.129–2.192) and intensive care unit (ICU) admissions (OR = 15.312, 95% CI: 1.684–139.198). The risk factors for 2‐year recurrence included abnormalities of the thalamus (HR = 3.780, 95% CI: 1.642–8.699). Interpretations Brain MRI features of patients with NMDAR encephalitis were associated with clinical manifestations, prognosis, and recurrence. Higher numbers of MRI abnormalities and ICU admissions were predictive of poor prognosis. Abnormalities of the thalamus constituted a recurrence‐related risk factor. Introduction Anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis, the most common type of autoimmune encephalitis mediated by the NMDAR GluN1 subunit, [1][2][3][4][5] was first described by Dalmau et al. in 2007. 6 Most patients in that study were women (women:men, 8:2), and patient age varied extensively (median age, 21 years; range, 1-85 years). NMDAR encephalitis is particularly prevalent in women of childbearing age and children. 2,[4][5][6][7][8][9] NMDAR encephalitis presents diverse clinical symptoms, including fever and pre-onset flu-like prodromal symptoms; further, it has an acute onset with rapid progression and various neuropsychiatric symptoms. [4][5][6]8 Approximately 70%-75% of patients with NMDAR encephalitis are admitted to the intensive care unit (ICU) for monitoring and to receive respiratory and circulatory support. 3,4,6,[10][11][12][13] Diagnoses are confirmed after the detection of specific NMDAR antibodies in the cerebrospinal fluid or serum. NMDAR encephalitis is accompanied by the presence of underlying tumors in 5%-58% of patients, [5][6][7][8] and surgical resection of the tumors and immunotherapy are the main treatment options. 14,15 Early diagnosis and treatment can improve prognosis. 5,7,13 Prognosis is good in 75%-81% of patients, with a reported mortality rate of approximately 4% and a recurrence rate in the range of 12%-25%. [2][3][4]16,17 Brain magnetic resonance imaging (MRI) is a common auxiliary modality for examining the central nervous system and is relatively easy to perform. 
The rate of brain MRI abnormalities at the onset of NMDAR encephalitis ranges from 11% to 83%. 3,6,13,[17][18][19][20] Typical brain MRI abnormalities manifest as hyperintensity on T2-weighted images (T2WI) and fluid-attenuated inversion recovery (FLAIR) sequences. The distribution and degree of brain MRI abnormalities vary widely. [20][21][22] Abnormalities may occur in various brain regions and may involve a single site or multiple sites simultaneously. 6,11,13,18,20,[22][23][24] The abnormalities may be unilateral or bilaterally symmetric. 6,11,13,17,18,20,[22][23][24] Dalmau et al. reported that 55% of MRI abnormalities were located in the temporal lobe, hippocampus, corpus callosum, cerebral and cerebellar cortex, base of the frontal lobe, basal ganglia, and brainstem. 3,8 A 2015 review conducted by Heine et al. reported that 23%-50% of MRI abnormalities were located in the frontal, parietal, and temporal lobes, and that abnormalities were rarely located in the basal ganglia. 24 A systematic review published in 2018 reported that brain MRI abnormalities were most frequently located in the temporal lobe, cortical gray matter, and subcortical white matter. 18 Atypical or unrelated MRI findings include white matter lesions (WMLs), cerebral atrophy, ventriculomegaly, pituitary disease, and leptomeningeal and brain parenchymal enhancement. 18,22,24,25 Some patients develop cerebral atrophy during follow-up. 26,27 These abnormal brain MRI findings (T2/FLAIR hyperintense lesions) are nonspecific and have little association with clinical manifestations. 16,28,29 Recently, we demonstrated that abnormal MRI is a risk factor for NMDAR encephalitis relapse. 30 Bartels et al. found that children with abnormal MRI exhibited a more severe disease course and worse outcomes. 31 According to Balu et al., MRI abnormalities constitute the NMDAR Encephalitis One-Year Functional Status (NEOS) score. 32 However, the NEOS score and other existing studies do not describe any specific MRI imaging predictors for clinical outcomes. 9,16,28,29 Thus, we sought to systematically investigate the brain MRI features of NMDAR encephalitis, analyze the associations of each brain MRI feature with various clinical manifestations, and evaluate the imaging predictors associated with long-term prognosis and recurrence. We conducted this study to provide MRI support for effective clinical decisionmaking, recurrence prediction, and prognosis guidance. Experimental design and patient enrollment This study retrospectively and continuously enrolled patients with complete MRI data diagnosed with NMDAR encephalitis at the First Affiliated Hospital of Zhengzhou University (January 2013 to October 2019). The inclusion criterion was meeting the diagnostic criteria for NMDAR encephalitis (published by Lancet Neurology in 2016). 5 Patients with other central nervous system diseases, such as intracranial infections, metabolic encephalopathy, and neurodegenerative diseases, as well as those who failed to complete the brain MRI examination were excluded from the study. Standard protocol approvals and patient consent This retrospective observational study was approved by the Scientific Research and Clinical Trial Ethics Committee of the First Affiliated Hospital of Zhengzhou University (2021-KY-0193). The requirement for informed consent was waived by the committee due to the study's retrospective design. 
Antibody testing and clinical examinations NMDAR antibodies in the cerebrospinal fluid and serum of patients were detected at our institution's Department of Neurology laboratory using two assays: (1) a cell-based assay (CBA) that detects the antibodies through antibody-antigen reactions using a human embryonic kidney cell line (HEK293) transfected with NR1 and NR2B (i.e., NR1-NR2B heterodimers forming NMDA receptors) as the substrate (Euroimmun, Lübeck, Germany), and (2) a tissue-based assay (TBA) that detects NMDAR antibodies in frozen sections of rat cerebellum and hippocampal tissue through immunohistochemistry. NMDAR antibodies were considered present only when the patient tested positive on both assays. MRI scans All patients completed sequential MRI examinations within 1 week of admission, including T1-weighted images (T1WI), T2WI, FLAIR imaging, and diffusion-weighted imaging (DWI) sequences. Some patients were also examined using gadopentetate dimeglumine (Gd-DTPA) contrast-enhanced MRI, and some patients underwent MRI re-examinations 1-3 months after admission. All MRI examinations were performed using two 3.0-T MRI scanners: a Discovery 750 (GE Healthcare, Chicago, IL, USA) with an eight-channel coil and a Prisma (Siemens, Munich, Germany) with a 64-channel coil. The MRI findings were evaluated independently and reviewed by two neuroradiologists with 5 years of experience blinded to previous diagnoses; a third neuroradiologist with 10 years of experience was asked to evaluate the images in cases of disagreement. Abnormal brain MRI findings were defined as hyperintense regions on T2WI/FLAIR, which may be accompanied by hypointensities on T1WI or hyperintensities on DWI. 20,22 The sites showing abnormal MRI findings were the frontal, parietal, temporal, and occipital lobes; insula; basal ganglia; thalamus; lateral ventricle; brainstem; cerebellum; hippocampus; corpus callosum; and pituitary gland. An abnormal MRI region (such as the frontal lobe, temporal lobe, or basal ganglia) was defined as a region with MRI abnormalities. As a dichotomous variable, it was directly included in the regression analysis. The number of abnormal brain regions was recorded. The number of abnormal brain regions defined on this basis was a numerical variable that was included in regression analysis after the removal of extreme values. Abnormal MRI findings were classified as left-right symmetric (i.e., bilateral abnormal signals with fully symmetric sites and distribution ranges) and left-right asymmetric (i.e., unilateral abnormal signals, or bilateral abnormal signals with a lack of full symmetry of the involved sites and their distribution ranges); the symmetry of the distribution of MRI abnormalities was categorized as a dichotomous variable, and the assessment only included the symmetry of the distribution of abnormal MRI findings (T2WI/FLAIR hyperintense lesions) rather than any other radiological findings (e.g., white matter lesions). Compared with MRI examination findings at admission, the MRI re-examination findings 1-3 months after admission were classified as unchanged (i.e., no visible changes in lesion site and range), improved, or aggravated. If multiple MRI re-examinations were performed and the results were inconsistent, only the MRI findings of the most recent re-examination were compared with the findings at hospital admission. The MRI re-examination findings thus constituted a categorical variable, with "no significant change" serving as the reference.
"Alleviation" and "Worsening" were subsequently defined by conducting comparisons with respect to this reference value during regression analysis. MRI findings other than the above-mentioned T2/FLAIR hyperintense lesions (e.g., WMLs, cerebral atrophy, ventriculomegaly, ischemic foci, and leptomeningeal/brain parenchymal enhancement) were analyzed separately. WMLs were defined as hyperintensities on T2WI/FLAIR images involving the subcortical, periventricular, or deep white matter. We classified patients into those with and without WMLs according to their MRI findings. In addition, the presence of WMLs was also a dichotomous variable and was therefore directly included in the regression analysis. Data collection, follow-up, and outcome evaluation Assessment, documentation, and follow-ups were performed by two trained physicians, each with more than 5 years of clinical experience. Collected data comprised demographic characteristics, clinical symptoms within 1 month of onset, laboratory test results, MRI features, tumor status, and treatments. Cognitive function was assessed using the Mini-Mental State Examination (MMSE) within 1 month of onset, and cognitive impairment was defined as a score <27. The Montreal Cognitive Assessment (MoCA) was used to detect subtle cognitive impairment in patients who could complete the assessment, and cognitive impairment was defined as a score <26. Patients were followed up by telephone or in the outpatient clinic for 2 years starting from disease onset. Modified Rankin Scale (mRS) scores and recurrences were recorded. The follow-up cutoff date was October 31, 2020. Patients' mRS scores were used to assess neurological prognosis 33 ; poor prognosis was defined as an mRS score ≥3, and an mRS score = 6 indicated death. Good prognosis was defined as mRS scores ≤2. A recurrence event was defined as the appearance of new symptoms or the worsening of existing symptoms after 2 months of remission, or stabilization accompanied by cerebrospinal fluid positivity or serum antibody positivity. 34 Statistical analysis Statistical analyses were performed using the SPSS software program (version 25.0; IBM Corp., Armonk, NY, USA). Normally distributed continuous variables (confirmed by the Shapiro-Wilk test) were expressed as means AE standard deviations (SD), while non-normally distributed variables were expressed as medians and interquartile ranges. Categorical variables were expressed as counts and proportions. The associations between different imaging features and clinical characteristics were assessed via binary logistic regression analyses evaluating clinical characteristics as dependent variables and brain MRI findings as predictors. The factors influencing recurrence (e.g., sex, age, symptoms, MRI findings, laboratory results, ICU admission, and treatment) were used as covariates in Cox regression analyses. Each variable was subjected to univariate analyses. Linear regression was used to test the multicollinearity between statistically significant variables (p < 0.05) in univariate analyses, and tolerance and variance inflation factor (VIF) were calculated. The variables excluded from multicollinearity were subsequently selected by the forward likelihood ratio method and included in multivariate regression models. Lastly, the proportional hazards assumption of the Cox regression hazard model was verified via log-negative-log survival curves. 
Prognostic factors were identified using binary logistic regression; variable selection was performed using the same steps as described above. General and clinical characteristics This study enrolled 160 patients diagnosed with NMDAR encephalitis. Sixteen patients who did not undergo MRI were excluded. Thus, 144 patients who completed brain MRI examinations were included in our analysis. The baseline data and follow-up clinical outcomes of the patients are shown in Table 1. Brain MRI findings All 144 patients completed routine 3.0 T brain MRI scans. Among them, 52 (36.1%) had normal brain MRI findings, and 65 (45.1%) had brain abnormalities (T2WI/FLAIR hyperintense lesions) involving the cortex or white matter in various brain regions; 27 (18.8%) patients presented with WMLs, ischemic foci, lateral ventriculomegaly, and/or cerebral atrophy. Table 2 presents the details of abnormal MRI findings and the outcomes of re-examination MRIs. MRI abnormalities are often located in multiple regions, some of which are often associated; however, the pattern is diverse. Figure 1 shows the abnormal areas in 65 patients with typical brain MRI abnormalities and intuitively displays the common combination patterns of different regions. Forty-six patients (31.9%) underwent contrast-enhanced MRI. Among them, nine showed enhanced signals (six patients with leptomeningeal enhancement and three patients with brain parenchymal enhancement). One patient with contrast enhancement also showed abnormalities on noncontrast MRI. The remaining eight patients with enhanced signals showed new abnormalities on MRI when a contrast agent was used: four patients with normal MRI and two patients with abnormal signals on routine MRI showed leptomeningeal enhancement in contrast-enhanced MRI. In one patient with normal MRI and one with abnormal signal on routine MRI, contrast-enhanced MRI revealed new abnormalities in the brain parenchyma. Eighty-four patients completed MRI re-examinations 1-3 months after admission, including 37 patients with no abnormalities and 57 patients with brain MRI abnormalities on the first brain MRI conducted at the onset. Among the patients who underwent multiple MRI examinations, two presented with worsening abnormalities and subsequent alleviation; these were considered remission patients. One patient presented with substantial cerebral atrophy on the re-examination MRI. Three patients presented with WMLs but no MRI abnormalities at onset; these patients showed a meaningful improvement in WMLs during re-examination. As only 58% of the cohort had follow-up MRI, the subsequent regression analysis did not include the variable of "alleviation or worsening of MRI abnormalities" in the model. We compared the age differences between the WMLs and non-WMLs groups using the independent sample t-test and found that the mean age of patients in the WMLs and non-WMLs groups was 39.42 years and 25.08 years, respectively. A significant difference was observed between groups (mean difference = 14.34, p = 0.005). Thus, we adjusted for the multicollinearity of age and WMLs (age: VIF = 1.160, WMLs: VIF = 1.160) and included them in the binary logistic regression analyses. After adjusting for the influence of age (OR = 0.986, 95% CI: 0.961-1.012, p = 0.283), WMLs were still associated with cognitive impairment (OR = 2.909, 95% CI: 1.139-7.428, p = 0.026). Associations between MRI findings and 2-year prognosis The influential factors associated with prognosis were assessed via binary logistic regression (Table 3).
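As an illustration of the regression workflow described in the Methods, the following is a minimal Python sketch of the VIF screen, the logistic model for prognosis, and the Cox model for recurrence; the study itself used SPSS, and the DataFrame `df` and all column names here are hypothetical stand-ins for the per-patient dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from lifelines import CoxPHFitter

covariates = ["n_mri_abnormalities", "icu_admission", "age"]  # hypothetical

# Multicollinearity screen with VIF, as in the Methods.
X = sm.add_constant(df[covariates].astype(float))
vif = {c: variance_inflation_factor(X.values, i)
       for i, c in enumerate(X.columns) if c != "const"}

# Binary logistic regression for poor 2-year prognosis (mRS >= 3).
logit = sm.Logit(df["poor_prognosis"], X).fit()
odds_ratios = np.exp(logit.params)   # 95% CIs via np.exp(logit.conf_int())

# Multivariate Cox proportional hazards model for 2-year recurrence.
cph = CoxPHFitter()
cph.fit(df[["months_to_event", "recurred"] + covariates],
        duration_col="months_to_event", event_col="recurred")
cph.print_summary()                  # hazard ratios with 95% CIs
```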
Linear regression was used to analyze statistically significant variables in the univariate analysis (p < 0.05) before multivariate regression analysis: number of brain MRI abnormalities (VIF = 3.412), temporal lobe abnormalities (VIF = 3.673), insula abnormalities (VIF = 1.613), ICU admissions (VIF = 1.076), and recurrence (VIF = 1.134). There was no indication of severe multicollinearity. Figure 1. Involvement of each brain region in 65 patients with abnormal brain MRI, where 1 (yellow) indicates abnormal MRI findings, and 0 (blue) indicates normal MRI findings. Some common combination patterns, such as lateral periventricular region + parietal lobe + frontal lobe, frontal lobe + parietal lobe + temporal lobe + insula, parietal lobe + temporal lobe + occipital lobe, and thalamus + basal ganglia + brain stem (among others), can be observed. Both outcome (Poor/Good) and recurrence (Yes/No) columns are included in the figure, with * representing lost to follow-up. The multivariate analysis of binary logistic regression suggested that a higher number of brain MRI abnormalities (OR = 1.573, 95% CI: 1.129-2.192, p = 0.007) and ICU admissions (OR = 15.312, 95% CI: 1.684-139.198, p = 0.015) were associated with poor 2-year prognosis (Table 3). Given that the number of patients with eight sites was equal to zero, and the patient with nine sites was lost to follow-up, the number of abnormalities included in the regression analysis was in the range of 0-7. Figure 2 shows the relationship between "number of MRI abnormalities" and "2-year prognosis." MRI findings and disease recurrence The influential factors associated with recurrence were assessed via Cox regression hazard models (Table 4); a total of 107 patients completed routine follow-up, and nine patients were re-admitted because of recurrence, leading to a total of 116 patients. Univariate analyses suggested that the following risk factors were associated with 2-year recurrence, which were included in the linear regression for multicollinearity tests: age (VIF = 1.100), parietal lobe abnormalities (VIF = 2.084), temporal lobe abnormalities (VIF = 2.693), thalamus abnormalities (VIF = 1.912), lateral periventricular abnormalities (VIF = 1.303), and number of brain MRI abnormalities (VIF = 5.764); there was no significant multicollinearity between these variables. Discussion There are numerous studies on brain MRI findings in patients with NMDAR encephalitis; however, the results are often inconclusive. 16,28,29 The present study showed that various brain MRI findings were associated with clinical manifestations and outcomes: (1) insula abnormalities were associated with acute seizures, and WMLs were associated with cognitive impairment; (2) a higher number of brain MRI abnormalities and ICU admissions were risk factors for poor 2-year prognosis; and (3) thalamus abnormalities constituted a risk factor for recurrence. Herein, we enrolled 144 patients with NMDAR encephalitis, 45.1% of whom presented with brain MRI abnormalities. Chinese studies typically have a larger proportion of male patients 16,20,22,28 and more relapses than studies from Western countries. 8,13,32 This was also the case in our study, as 42.4% of the patients were men, and 29.3% had relapses. Since this could be related to race or culture, it may be worth investigating. In our study, 20.8% of patients had cognitive impairment within 1 month of onset, and their cognitive function was not reassessed during follow-up. Heine et al.
evaluated the long-term cognitive functions of 43 patients with NMDAR encephalitis and found that all patients had persistent cognitive deficits 2.3 years after onset. 35 These results suggest that persistent cognitive impairment may develop, and future studies should continuously evaluate cognitive impairment through long-term follow-up. Other demographic characteristics, common clinical manifestations (Table 1), and incidence of abnormal MRI were similar to those in prior large-sample studies. 3,6,13,[16][17][18][19][20]32 MRI abnormalities (T2WI/FLAIR hyperintense lesions) were observed in various brain regions and were mostly found in the temporal, frontal, and parietal lobes (Table 2). According to Zhang et al.,20 hippocampal lesions were the most common MRI abnormalities. Other authors have noted that these were likely driven by herpes simplex encephalitis (HSE) with subsequent NMDAR encephalitis in some patients. 36 Hippocampal involvement was rare (6.3%) in the current study. Thus, it is possible that the sample size in the study by Zhang et al. was relatively small, and that the proportion of hippocampus lesions was affected by the patients with post-HSE NMDAR encephalitis. Compared with previous studies, the brain MRI abnormalities observed in this study covered a broader range of brain regions, indicating that brain MRI abnormalities can involve various brain regions and do not occur at typical preferential sites. Multiple brain MRI findings were associated with clinical manifestations. Namely, insula abnormalities were associated with acute seizures, and WMLs were associated with cognitive impairment. A 2018 cohort study enrolling 106 cases reported that the evaluated associations between MRI abnormalities and clinical manifestations (seizures, hypoventilation, loss of consciousness, and tumors) were not statistically significant. 22 Conversely, this study comprised a wider range of clinical characteristics and MRI findings. Due to the anatomical locations of lesions within the nervous system, insula lesions often clinically manifest as acute seizures. Previous research has been performed on insular epilepsy. 37 Isnard et al. showed that in refractory temporal lobe epilepsy patients, temporal lobe cortical resection completely controlled seizures of temporal lobe origin but did not affect insular-origin seizures. 38 Previous studies have suggested that WMLs increase the risk of cognitive impairment or dementia in the general older adult population (60-90 years). 39 We performed a subgroup analysis and observed differences in the mean age (39.42 years in the WMLs group and 25.08 years in the non-WMLs group), suggesting that age also affects the presence of WMLs in patients with NMDAR encephalitis. However, our results also showed that after adjusting for age, WML was still associated with cognitive impairment. Therefore, the previously reported inconsistency between brain MRI findings and clinical manifestations in NMDAR encephalitis 22 may not be entirely correct. Nonetheless, additional research is needed in this regard. Multivariate analyses suggested that a higher number of brain MRI abnormalities and ICU admissions were risk factors for poor 2-year prognosis. Balu et al. 32 and Titulaer et al. 13 suggested that ICU admission is a risk factor for poor prognosis in NMDAR encephalitis; this is consistent with our findings. As mentioned earlier, previous studies by Bartels et al. 31 and Balu et al. 32 suggested that abnormal MRI was a risk factor for poor prognosis. Wang et al.
observed that the mean mRS scores during a 4-month follow-up period were higher in patients with abnormal initial MRI findings; however, neither the abnormal initial MRI findings nor the mRS scores were significantly associated with the prognosis. 22 To date, there have been few studies on specific MRI features. Zhang et al. reported that lesions of the hippocampus were the most common abnormal MRI findings and that hippocampal lesions were the main MRI predictors of poor prognosis in NMDAR encephalitis. 20 However, a small sample size of only 53 patients was included in this study, and the researchers defined poor prognosis based on an mRS score 33 of 2-5. Iizuka et al. showed that diffuse cerebral atrophy, proposed as reversible, is not associated with poor prognosis. 26 Cerebellar atrophy is irreversible, and it is uncertain whether MRI findings are predictive of prognosis in this disease. 26,27 Our results demonstrated that patients with more brain MRI abnormalities had poor 2-year prognosis, suggesting that MRI findings are predictive of prognosis and that this predictive power depends on the number of MRI abnormalities. In this study, multivariate Cox regression analyses showed that abnormalities in the thalamus identified on brain MRI were risk factors for recurrence. Studies on factors associated with the recurrence of NMDAR encephalitis are lacking, and studies on imaging predictors are even more infrequent. A 25-patient study by Gabilondo et al. suggested that immunotherapy at initial onset may reduce the risk of recurrence. 34 However, we previously demonstrated that abnormal MRI is a risk factor for relapse. 30 In a further study, abnormalities in the thalamus identified on brain MRI were a predictor of recurrence. Notably, we observed that some brain regions (e.g., temporal lobe abnormalities) were not independent predictors in the multivariate analyses, even though they were relatively common, thus suggesting that the more frequently affected regions may not be more important in outcome predictions. This finding is novel, and further research is needed to explore the underlying mechanisms. Herein, we enrolled 144 patients, 37 (25.7%) of whom developed WMLs. As mentioned earlier, patients with WMLs tend to be older and more often experience cognitive impairment. The observation that WMLs were related to cognitive deficits is in line with Phillips et al.'s observations. 40 Furthermore, a link between oligodendrocyte pathology and WMLs has been discussed in NMDAR encephalitis. 41 In this study, 46 of 144 patients (31.9%) underwent contrast-enhanced MRI examination, among whom nine presented with an enhancement of brain MRI abnormalities. One patient with contrast enhancement also demonstrated abnormalities on noncontrast MRI, and the remaining eight patients showed new abnormalities when a contrast agent was used. Dalmau et al. reported that some patients present with MRI abnormalities accompanied by meningeal enhancement in the involved regions. 8 A systematic review reported that the most common enhancement is a leptomeningeal enhancement, followed by cortical enhancement, 18 consistent with our study findings. In conclusion, contrast-enhanced MRI examination currently presents no substantial advantages for improving the positivity rate for lesion detection; however, it can indicate new lesions that are not detected in routine MRI, particularly when there is meningeal involvement. The outcomes of brain MRI findings were explored by re-examining the patients 1-3 months after admission.
Re-examination was not necessarily synchronized with disease progression. Multiple rounds of re-examination revealed that the MRI abnormalities of two patients in our study first worsened and subsequently improved. Preliminary statistical analyses revealed that the outcomes of brain MRI findings may be somewhat associated with disease recurrence. However, multivariate analyses revealed no significant associations. Admittedly, the follow-up MRI study will be biased by clinical needs; that is, patients without lesions or with rapid clinical improvement will not undergo follow-up studies, in contrast to patients who do not improve. Thus, we recommend that future studies perform MRI re-examinations multiple times, compare the MRI findings over the entire disease course, and conduct long-term follow-up. This single-center retrospective study was conducted at a provincial tertiary hospital and may be subject to certain biases peculiar to observational epidemiologic research. Notably, complete rounds of brain MRI examination were lacking in the 2-year follow-up, and the follow-up MRI study will be biased by clinical needs. We described the brain MRI findings, and the observed associations identified herein may have implications for clinical practice; nevertheless, further research to understand the mechanism underlying these associations is needed. In summary, we observed associations between brain MRI abnormalities and clinical characteristics in patients with NMDAR encephalitis and demonstrated that brain MRI is a valuable predictive tool in NMDAR encephalitis that should be validated in larger prospective multicenter studies.
2022-11-01T06:16:10.842Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "e77eba1a5a46c767d70b8adb1052611221ff65ec", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Wiley", "pdf_hash": "b347d48f13b0fc69e7fa9ced92336e6f59a5c76d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264808841
pes2o/s2orc
v3-fos-license
Residual Risk of Ipsilateral Tumor Recurrence in Patients Who Achieved Clear Lumpectomy Margins After Repeated Resection Purpose Patients with breast cancer with positive lumpectomy margins have a two-fold increased risk of ipsilateral breast tumor recurrence (IBTR). This can be the result of either technically incomplete resection or the biological characteristics of the tumor that lead to a positive margin. We hypothesized that if achieving negative margins by re-excision nullifies the IBTR risk, then the increased risk is mainly attributed to the technical incompleteness of the initial surgeries. Thus, we investigated IBTR rates in patients with breast cancer who achieved clear margins after re-excision. Methods We retrospectively reviewed patients who underwent breast lumpectomy for invasive breast cancer between 2004 and 2018 at a single institution, and investigated IBTR events. Results Among 5,598 patients, 793 achieved clear margins after re-excision of their initial positive margins. During the median follow-up period of 76.4 months, 121 (2.2%) patients experienced IBTR. Patients who underwent re-excision to achieve negative margin experienced significantly higher IBTR rates compared to those achieving clear margin at first lumpectomy (10-year IBTR rate: 5.3% vs. 2.6% [25 vs. 84 events]; unadjusted p = 0.031, hazard ratio, 1.61, 95% confidence interval [CI], 1.04–2.48; adjusted p = 0.030, hazard ratio, 1.69, 95% CI, 1.05–2.72). This difference was more evident in patients aged < 50 years and those with delayed IBTR. Additionally, no statistically significant differences were observed in the spatial distribution of IBTR locations. Conclusion Patients who underwent re-excision for initial positive margins had an increased risk of IBTR, even after achieving a final negative margin, compared to patients with negative margins initially. This increased risk of IBTR is mostly observed in young patients and delayed cases. INTRODUCTION Currently, the widely accepted principle guiding lumpectomy margins in early breast cancer patients is to avoid the presence of tumor cells at the margin ('no ink on tumor'). The 'no ink on tumor' principle was based on the results of a meta-analysis involving 28,162 patients in 33 studies, which demonstrated a lack of benefit in achieving a wide resection margin once the margin was free of tumor cells [1]. The introduction of the 'no ink on tumor' principle and the subsequent endorsement by academic societies has substantially reduced the rate of re-excision after initial lumpectomy [2,3]. However, more than 15% of patients who were initially treated with lumpectomy still underwent additional surgery [3,4], and 8.5% of patients with positive resection margins chose mastectomy as their next procedure [5]. Although the potential disadvantages of reoperations, such as increased emotional stress, deterioration of cosmesis [6], and increased healthcare costs [7], are well known, most patients with positive margins undergo re-excision to minimize the risk of ipsilateral breast tumor recurrence (IBTR). Moreover, studies have repeatedly indicated a two-fold increased risk of IBTR with positive resection margins [1,8].
The presence of tumor cells at the lumpectomy margin represents two different aspects of local control in breast cancer. One aspect is related to potential technical failure, which may lead to incomplete tumor resection [9,10]. Another aspect of a positive margin is the biological nature of the tumor. For example, extensive intraductal tumor components often result in positive lumpectomy margins [11] and may lead to an increased risk of local recurrence after breast conservation surgery [12]. Additionally, lobular histology is associated with margin positivity [13,14] and an increased risk of IBTR [15]. Therefore, the two-fold increase in IBTR risk with positive margins reflects the combined effect of both technical failure and biological characteristics. In this study, we aimed to address this issue by investigating the IBTR rates in patients who achieved negative margins through initial lumpectomy or re-excision. We assumed that if the IBTR rates of the two groups are comparable, then the two-fold increase in IBTR risk associated with a positive margin is mostly caused by technically incomplete resections, as achieving negative margins by repeated excision nullifies the IBTR risk. METHODS This study was approved by the Institutional Review Board (IRB) of Seoul National University Hospital (IRB No. H2109-125-1257) and was performed in accordance with the Declaration of Helsinki or comparable ethical standards. The requirement for informed consent was waived; however, the reuse of their electronically recorded data was approved. Study design We retrospectively reviewed the data of patients with invasive breast cancer who underwent upfront breast conservation surgery followed by whole-breast irradiation between January 2004 and December 2018 at Seoul National University Hospital. Patients who received neoadjuvant chemotherapy and those with male breast cancer, bilateral breast cancer, recurrent breast cancer, or stage IV breast cancer were excluded. A clear resection margin was defined as "no ink on tumor." Focusing on the IBTR, patients who underwent a mastectomy to achieve a clear resection margin after breast-conserving surgery were also excluded from the analysis. Breast cancer was pathologically staged according to the 8th American Joint Committee on Cancer staging criteria. Hormone receptor (HR) status, including estrogen and progesterone receptors, was reported to be positive when dyed at > 1% on immunohistochemistry. Human epidermal growth factor receptor-2 (HER2) status was assessed using anti-HER2 antibodies and/or fluorescence in situ hybridization. Regarding radiation treatment, patients received radiation treatment with a regimen of either conventional fractionated 1.8-2.0 gray in 28-33 fractions over 6 weeks or hypofractionated 2.4-2.7 gray in 15-20 fractions over 3 weeks, once daily according to the fractionation schedule of our institution. An additional boost to the tumor bed was applied, depending on the clinical experience of the radiation oncologists. Patients who discontinued radiation treatment for various reasons were excluded from the study.
Evaluation of resection margin status
The extent of surgical resection was determined based on the extent of the disease on magnetic resonance imaging or sonography. Cavity shaving or intraoperative frozen-section biopsy for the resection margin was not routinely conducted in all patients, and the decision to do so was made by the surgeon. Furthermore, separate cavity shaving was performed in the direction of the residual parenchyma after resection of the main tumor, with or without intraoperative frozen biopsy. The outer surface of the resected specimen was sutured. When a frozen-section biopsy revealed the involvement of atypical or tumor cells at the resection margin, additional tissue excision was performed in the direction of the positive margin. As the further resected tissue is usually sutured and attached to the main specimen, pathologists examine the specimen as a whole and not as separate specimens. In contrast, when tumor cells were identified in lumpectomy margins in the final pathology reports 1 or 2 weeks after surgery, we conducted a reoperation, and a separate pathology review for the additionally resected tissues was obtained.

Definition of IBTR and recurrence-free survival
The recurrence of breast cancer in the ipsilateral breast was defined as IBTR; however, recurrence in the breast skin was not included in the IBTR. Other types of recurrences, including regional recurrence or distant metastasis before IBTR, were treated as censored events related to competing risks. The IBTR-free survival was calculated as the interval from the date of the last surgery to the time of pathological diagnosis of IBTR or a censored event.

Statistical analysis
Continuous variables were compared with one-way analysis of variance, and categorical variables were compared with Pearson's χ2 test. The log-rank test was used to analyze the differences between the survival curves derived using the Kaplan-Meier method. A Cox proportional hazards regression model was used to estimate the adjusted hazard ratio and to adjust for other variables affecting the recurrence rate. Statistical significance was set at p < 0.05. All analyses were performed using SPSS (version 26.0; IBM, Armonk, USA), and figures were plotted using GraphPad Prism (version 9.0; GraphPad Software, San Diego, USA). Propensity score matching was conducted using the "MatchIt" R package (R version 3.6.3; R Foundation, Vienna, Austria).

Patient characteristics
Between January 2004 and December 2018, 5,632 patients underwent breast conservation surgery and breast radiation therapy at our institution. During the study period, 34 patients (0.6%) had positive resection margins and did not undergo additional surgery for various reasons, including comorbidities or patient refusal. These 34 patients had a significantly higher risk of IBTR compared to the remaining patients with clear final resection margins (10-year IBTR rate: 17.8% vs. 3.0%; p < 0.001, hazard ratio, 7.56, 95% confidence interval [CI], 3.09-18.51; Supplementary Figure 1). As we aimed to compare the IBTR rates among patients with clear final resection margins based on re-excision, the remaining 5,598 patients were the main participants of this study.
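As an aside, the survival comparisons described in the Statistical analysis section above can be sketched in Python. This is a minimal illustration using the open-source lifelines library (not the SPSS workflow actually used in the study), and all column names and the input file are hypothetical:

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    # Hypothetical dataset: one row per patient.
    # 'months' = follow-up time, 'ibtr' = 1 if IBTR occurred (competing events censored),
    # 're_excision' = 1 if clear margins were achieved only after re-excision.
    df = pd.read_csv("cohort.csv")  # hypothetical file

    # Kaplan-Meier curves by margin group
    groups = [df[df.re_excision == g] for g in (0, 1)]
    for g, label in zip(groups, ("clear at first lumpectomy", "clear after re-excision")):
        KaplanMeierFitter().fit(g.months, g.ibtr, label=label)

    # Log-rank comparison of the two curves
    res = logrank_test(groups[0].months, groups[1].months,
                       event_observed_A=groups[0].ibtr,
                       event_observed_B=groups[1].ibtr)
    print(res.p_value)

    # Cox model adjusting for clinicopathologic covariates (hypothetical names)
    cph = CoxPHFitter()
    cph.fit(df[["months", "ibtr", "re_excision", "age", "lvi", "her2"]],
            duration_col="months", event_col="ibtr")
    cph.print_summary()  # adjusted hazard ratio for re_excision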
Among the 5,598 patients, 793 (14.2%) achieved clear margins after re-excision because of their positive resection margins. The median age of the patients was 49 years, and nearly two-thirds had T1 tumors (64.3%). A total of 4,293 patients (76.7%) were node-negative, and 4,332 (77.4%) had HR-positive tumors. Intraoperative margin assessment was performed in 4,098 patients (73.2%). Detailed information on the study population is presented in Table 1.

Impact of re-excision on the risk of IBTR
During the median follow-up period of 76.6 (± 44.6) months, a total of 121 patients (2.2%) experienced IBTR. Of these, 95 patients had achieved a clear resection margin at the first lumpectomy and 26 had undergone re-excision. Patients who underwent re-excision to achieve negative margins experienced a significantly higher rate of IBTR compared to patients in whom the margins were clear at the first lumpectomy (10-year IBTR rate: 5.3% vs. 2.6% [25 vs. 84 events], p = 0.031; hazard ratio, 1.61; 95% CI, 1.04-2.48) (Figure 1A). The survival curves began to separate around 4-5 years after surgery (98.5% vs. 98.0% [64 vs. 14 events] at 5 years, and 97.4% vs. 94.7% at 10 years of follow-up). Moreover, the annual recurrence pattern demonstrated that the re-excision group had a higher incidence of IBTR 5 years after surgery than the other groups, while patients with involved resection margins displayed a higher incidence within the first 5 years after surgery in comparison to the other groups (Figure 2).

Age at the time of operation was also significantly associated with IBTR (hazard ratio, 0.97; 95% CI, 0.95-0.99; p = 0.001), and younger patients were over-represented in the re-excision group compared to older patients (p < 0.001). Thus, we conducted a subgroup analysis according to age at surgery to adjust for confounding effects between age and IBTR. The survival difference was observed in young patients (p = 0.033; hazard ratio, 1.72; 95% CI, 1.04-2.85 for age less than 50) (Figure 1B and C). Additionally, when the patients were divided according to their HR and HER2 amplification status, we observed significant differences in the HR+/HER2− and HR−/HER2− subtypes, whereas HER2-amplified tumors displayed no significant differences (Figure 3).

Using Cox regression analysis, we adjusted for other clinicopathological variables, such as age, histologic grade, lymphovascular invasion, HR status, HER2 amplification status, Ki-67 levels, and administration of adjuvant treatments (Table 2). The results of the Cox regression analysis demonstrated that re-excision to achieve negative margins was significantly associated with the risk of IBTR after adjusting for the aforementioned variables (p = 0.030; hazard ratio, 1.69; 95% CI, 1.05-2.72). Furthermore, to further adjust for clinicopathologic features associated with IBTR, we conducted 1:1 propensity score matching, yielding 1,304 patients (Supplementary Table 1). Patients who underwent re-excision still exhibited less favorable IBTR-free survival than those who did not (p = 0.045; hazard ratio, 2.12; 95% CI, 1.00-4.47) (Supplementary Figure 2).

Values are means ± standard deviation (range) or number (%). HR = hormone receptor; HER2 = human epidermal growth factor receptor-2; CTx = chemotherapy; HTx = hormonal treatment. *Stratified according to the American Joint Committee on Cancer (AJCC) 8th tumor, node, and metastasis (TNM) stage. †Among 4,332 patients who were indicated for hormonal treatment. ‡Among 599 patients who were indicated for HER2-targeted treatment.
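The 1:1 propensity score matching reported above was performed with the MatchIt package in R; a rough Python equivalent, greedy nearest-neighbor matching on the logit of the propensity score, might look like the sketch below. The covariate names are hypothetical and the matching rule is a simplification of what MatchIt does:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def match_one_to_one(df: pd.DataFrame, treat_col: str, covariates: list) -> list:
        """Greedy 1:1 nearest-neighbor matching on the logit propensity score."""
        df = df.reset_index(drop=True)
        model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
        p = model.predict_proba(df[covariates])[:, 1]
        logit = np.log(p / (1.0 - p))
        treated = list(df.index[df[treat_col] == 1])
        controls = set(df.index[df[treat_col] == 0])
        pairs = []
        for i in treated:
            if not controls:
                break
            # nearest remaining control on the logit scale
            j = min(controls, key=lambda c: abs(logit[c] - logit[i]))
            pairs.append((i, j))
            controls.remove(j)  # match without replacement
        return pairs

    # Hypothetical usage, mirroring the Table 2 adjustment variables:
    # pairs = match_one_to_one(df, "re_excision", ["age", "grade", "lvi", "hr", "her2"])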
IBTR patterns in patients with re-excision
The events of IBTR can be classified as true recurrence (TR) or new primary (NP) events based on their location and histological features [16,17]. TR was defined as tumor recurrence in the same quadrant with the same HR/HER2 status and histological type as the original tumor. We reasoned that IBTR caused by a technical failure to remove cancer cells would be more likely to manifest as TR than as NP. The recurrence patterns of IBTR in both groups are displayed in Figure 4. As demonstrated in Figure 4A, the number of TR events did not differ significantly between the two groups. Additionally, the spatial patterns of IBTR, defined by the distance of the recurrence from the initial tumor, did not exhibit a significant difference between patients who achieved clear margins at the initial lumpectomy and those who underwent re-excision (Figure 4B and C). Notably, regarding subtype changes between primary and recurrent tumors, 26.1% of patients displayed different tumor subtypes.

DISCUSSION
A widely accepted fact is that the microscopic presence of tumor cells at the surgical margins leads to a two-fold increased risk of IBTR in patients undergoing breast conservation [18]. This association between resection margin status and the risk of recurrence has resulted in a consensus that stresses the importance of achieving a negative resection margin to minimize the risk of IBTR [1]. Accordingly, many patients with breast cancer undergo repeat surgeries to minimize the risk of recurrence [2-4]. However, the degree of benefit associated with repeated surgical excision to achieve a negative resection margin has not yet been quantitatively addressed.

The true benefit of re-excision for clear margins can only be determined by a randomized trial comparing performing versus omitting re-excision in patients with positive resection margins; such clinical trials require substantial scientific evidence to be justified. In the present study, we used a different approach that may provide insight into this issue by comparing the outcomes between patients who achieved clear margins at the initial lumpectomy and those who achieved clear margins after repeated excisions. The two-fold increased risk of IBTR with a positive resection margin could be the result of the biologically aggressive nature of the disease that led to the involvement of margins, of technical failure leaving residual disease, or of a combination of both. Regarding the two hypotheses, we assumed that if achieving negative margins by initial lumpectomy and by repeated excision resulted in similar outcomes, then the increased IBTR risk associated with positive margins would be mainly due to technical issues, as repeated excision nullified the risk.
In our data of 5,598 patients who underwent breast conservation, a 61% increased risk of IBTR was observed in patients with positive resection margins at the initial lumpectomy, despite all of these patients achieving negative margins by repeated excisions. The data indicate that a certain proportion of the two-fold increased risk of IBTR, as reported in a previous meta-analysis [1], remains in patients with positive resection margins even after additional surgeries to clear the margins. Our findings raise the possibility that the benefit associated with additional surgery to achieve negative margins may be minimal, as the inherent biology of the tumor may play a significant role in determining the risk of IBTR. For example, extensive intraductal components and lobular histology are well-known biological factors that affect resection margin status and the IBTR rate [11-15]. The presence of multifocality and lymphovascular invasion of tumors can also induce a positive resection margin, resulting in local control failure [19-21]. Additionally, changes in the microenvironment of normal breast tissue far from the main tumor induce alterations in transforming growth factor-beta signaling and affect local recurrence [22]. Furthermore, we observed similar IBTR patterns for TR and NP regardless of re-excision, suggesting that technical failure to remove cancer cells is not likely to be the cause of the increased IBTR risk observed in patients who underwent re-excision.

While studies have demonstrated that re-excision for breast conservation does not influence the overall survival of patients [23,24], the oncologic benefit of re-excision for positive resection margins remains unclear. Based on a propensity score matching analysis of 2,110 patients, Sorrentino et al. [25] reported no significant benefit in local control with re-excision. However, this finding differs from that of Vos et al. [24], who reported a significantly high risk of IBTR with the omission of re-excision in patients with positive margins. In their study, however, > 50% of the patients with positive resection margins underwent mastectomy for re-excision, which made direct comparison difficult. The conflicting results regarding the benefit of re-excision for patients undergoing lumpectomy with positive resection margins, along with our present observation of a persistently increased risk of IBTR after re-excision, warrant further prospective clinical trials that can properly address this issue.

To investigate the effect of delayed adjuvant treatment due to reoperation, we sub-grouped the re-excision group according to the timing of further resection. Among the 793 patients, 663 (83.6%) underwent further resection immediately after the excision of the main specimen, while 130 (16.4%) underwent delayed reoperation. Kaplan-Meier analysis revealed that the immediate re-excision group had an IBTR rate comparable to that of the delayed reoperation group (hazard ratio, 1.34; 95% CI, 0.45-4.47; p = 0.632) (Supplementary Figure 3A). In contrast, the re-excision group as a whole still displayed a worse IBTR-free survival rate than those who achieved a clear resection margin at the first lumpectomy (hazard ratio, 1.67; 95% CI, 1.06-2.64; p = 0.025) (Supplementary Figure 3B). Previous studies have investigated the effects of delayed adjuvant treatment on survival outcomes. Jobsen et al. [26] reported that the timing of radiotherapy in breast-conserving patients was not associated with the local recurrence rate, while Buchholz et al.
[27] reported a higher local failure rate for those who started radiotherapy after 6 months or more. Another study reported that a delay of 91 or more days in adjuvant chemotherapy was associated with a poor overall survival rate [28]. In the current study, the median interval between the first lumpectomy and delayed re-excision was 21 days (range, 9-51 days). Thus, it can be assumed that the delay in adjuvant treatment did not affect the IBTR-free survival rate in the re-excision group.

Cox regression analysis revealed that young age at surgery, the presence of lymphovascular invasion, and positive HER2 status were also significant predictors of poor IBTR-free survival. Consistent with our results, previous studies have reported higher IBTR rates in young patients than in older patients owing to aggressive features, small resection volume, and the possibility of genetic predisposition [29-31]. The elevated IBTR rate for HER2-positive tumors can be the result of increased resistance to radiotherapy via the focal adhesion kinase-mediated pathway or resistance to endocrine therapy via the interaction effect of crosstalk between estrogen receptors and HER2 [32,33]. Finally, tumors with lymphovascular invasion have a two-fold higher risk of local recurrence [34]. Tumors presenting with lymphovascular invasion are usually accompanied by more aggressive tumor features and are associated with poor survival rates [35,36]. In the current study, patients with lymphovascular invasion were present in the re-excision group, but the difference between the two groups was not statistically significant. This suggests that other biological features, such as multifocality or intraductal components, may have influenced the elevated IBTR rate in the re-excision group.

Our study has several limitations. First, the retrospective and single-institution nature of the study requires further validation. Moreover, the lack of a pathological review by a central pathologist may have resulted in a hidden bias. However, our institution has a standardized pathology report form, and all pathologists who participated in the pathological examinations specialized in breast cancer. Second, we compared the IBTR rates with those of patients with positive resection margins who did not undergo additional surgery; however, only 34 patients (0.6%) did not undergo further surgery for their positive margins during the study period. Therefore, obtaining sufficient statistical evidence to determine an increased risk of IBTR in patients with positive margins was not possible. Third, inaccurate re-excision may have left residual tumor cells in the cavity, which should be considered a technical issue. To address this issue, we investigated the presence of tumor cells in the excised specimens. Although most patients had no pathology report for the further resected specimens, as we sutured them to the main specimen, the presence of tumor cells could be discerned in 180 patients in the re-excision group, and 44 (24.4%) of these patients had no residual tumor cells, which is comparable to rates reported in other studies [37,38]. Only 1 of the 44 patients demonstrated IBTR, while 5 patients recurred among those with residual tumor cells in the cavity (χ2 p-value = 0.652). Additionally, the IBTR-free survival rate was not significantly different between the two groups (p = 0.839, hazard ratio, 0.80, 95% CI, 0.09-6.89; data not displayed). Interestingly, among patients with no residual tumor, nine did not undergo cavity shaving, and none of them exhibited IBTR
events. Moreover, to further investigate the possible effects of technical failure, we reviewed 115 patients who underwent total mastectomy to achieve a clear resection margin during the same period as the current study. Compared to patients who underwent breast-conserving surgery for re-excision, no significant difference was observed in the number of patients without residual tumor cells or in the IBTR-free survival rate (Supplementary Figure 4). These findings further suggest that the impact of technical failure on IBTR was limited. Finally, we could not elucidate differences in the pathological characteristics of the tumors between the groups. The presence of extensive intraductal components, the presence of microscopic multifocal tumors, or the nature of the spatial tumor growth can all be pathologic phenotypes identified in patients with initial positive margins. However, we could not perform a detailed review of the tumor pathology specimens.

In conclusion, our data demonstrated that patients with positive resection margins after breast conservation have an increased risk of IBTR, which cannot be completely nullified by achieving negative resection margins through re-excision. Patients who underwent re-excision for initial positive margins had an increased risk of IBTR, even after achieving a final negative margin, compared to patients with initially negative margins. This increased risk of IBTR is mostly observed in young patients and delayed cases.

Figure 1. Kaplan-Meier curves according to initial resection margin status and age at surgery. Kaplan-Meier curves of IBTR for patients who achieved clear resection margins after re-excision and for those with clear resection margins at the first lumpectomy (A). After stratification according to age at operation, the survival curves of IBTR for patients younger than 50 years (B) and older than or equal to 50 years (C) are displayed. The p-values are calculated using the log-rank test and hazard ratios are calculated using the Cox regression test. IBTR = ipsilateral breast tumor recurrence; CI = confidence interval.

Figure 2. Patterns of annual recurrence incidence according to resection margin status. The annual recurrence pattern displays that the re-excision group had a higher incidence of IBTR 5 years after surgery than the other groups, while patients with involved resection margins display a high incidence within the first 5 years after surgery. IBTR = ipsilateral breast tumor recurrence.

Figure 4. Recurrence patterns among patients with ipsilateral breast tumor recurrence. Recurrent tumors in 121 patients with IBTR are classified based on their location and histological features (A). Regarding the spatial pattern of IBTR, the location of recurrence relative to the original tumor (B) and the distance of recurrence from the initial tumor (C) according to the two groups are demonstrated. IBTR = ipsilateral breast tumor recurrence; ns = not significant.

Table 1. Demographics and clinicopathologic characteristics of patients who achieved clear resection margin.

Table 2. Univariate and multivariate analyses for ipsilateral breast tumor recurrence-free survival. Hazard ratios were calculated with univariate Cox regression analysis. *†Stratified according to the American Joint Committee on Cancer (AJCC) 8th tumor, node, and metastasis (TNM) stage.
2023-11-01T15:19:16.413Z
2023-10-17T00:00:00.000
{ "year": 2023, "sha1": "ab64f08122c29dc2001da97293ae97c690d58060", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e2175bc70561d5addc35c0a02d7dd45912e5e9b1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51920193
pes2o/s2orc
v3-fos-license
The impact of PISA in teaching practices in Portugal: The case of Portuguese L1

Portugal has participated in PISA since 2000 and the national results in reading literacy have been very poor. To improve these results, Portuguese authorities implemented an impressive exam model that has strongly affected teaching practices in Portuguese L1 classrooms. To understand some positive and negative effects of this model we will compare two actual classroom tests, used in 1992 and 2011, and will summarize the results of a report about the relation between exam results and student performance at university. The effects of the Portuguese exam model fostered by PISA seem to call for a study about the incentives PISA has been directly or indirectly encouraging in participant countries.

Introduction

The primary reason for the implementation of PISA "is to provide empirically grounded information which will inform policy decisions" of participant countries in order to prepare "their students to become lifelong learners and to play constructive roles as citizens in society". National results are supposed "to provide direction for schools' instructional efforts and for students' learning as well as insights into curriculum strengths and weaknesses. Coupled with appropriate incentives, they can motivate students to learn better, teachers to teach better and schools to be more effective" [3: 7].

Since 2000, PISA has been assessing cross-curricular general skills, disciplinary skills and curriculum content in reading literacy, mathematical literacy and scientific literacy. Reading literacy, the one that most closely matters for the case of Portuguese L1, is "the capacity to develop interpretations of written material and to reflect on the content and qualities of texts" [3: 9]. Unlike science and mathematics, reading literacy "does not have any obvious "content" of its own" [3: 12], but its disciplinary skills are mostly cross-curricular general skills.

Portugal has participated in PISA surveys since its first edition in 2000. The national results in reading literacy have been deeply disappointing. In spite of a small improvement between the 2000 (470) and 2009 (489) ratings, the results are still below the PISA average and they receded one position in the OECD international ranking, from 26th to 27th. These overall results have been having a paramount impact on assessment policies and therefore on teaching practices. During this decade, Portuguese authorities have implemented a huge examination apparatus that has caused slight improvements in results but seems not to promote lifelong learning.

To better understand those changes, this brief study unfolds in three steps: an outline of the assessment framework one decade before PISA and one decade after it (1-2); a comparison of two actual classroom Portuguese L1 tests answered by 7-graders in 1992 and 2011 (3); and the results of research led by the University of Porto about the relation between national matriculation exam results and students' performance at university (4).
1. Assessment before and after PISA

Ten years before the publication of the national results of PISA's first edition, in 2002, there were no national or local assessment standards whatsoever. These depended on each teacher's interpretation of the curriculum and its relation to the marks range available. Between 1989 and 1993, there was a general matriculation exam assessing L1 proficiency and cultural knowledge. It was the only national exam in the whole school system and it was compulsory for all candidates to university. After three years without exams, they were resumed in 1996. Since then all matriculates have three disciplinary exams at the end of secondary school: one in Portuguese L1, compulsory for all, and two exams in the specific subjects of their choice. Also in 1996 were published the results of the first National Literacy Survey, inspired by the OECD's International Adult Literacy Survey (IALS), which Portugal would integrate in 1998 [2: 3].

Ten years after PISA, assessment of Portuguese L1 has strikingly changed. Exams have been gradually introduced in grades 4, 6 and 9, only for Portuguese L1 and Mathematics. At first the exam results were not considered in the pupils' final mark and only a national sample was tested. Currently, the exams season goes from mid-May to mid-July, including two exams for each subject and grade (1st call and 2nd call), besides the matriculation exams in 21 subjects. In the whole school system, with a population of 1.360.000 pupils, around 340.000 of them answer 33 different exams, summing up to circa 750.000 centrally controlled tests every year. The results of the exams are published in the media, where schools are ranked according to their pupils' results, and these results are compared with continuous assessment results, emphasizing that good schools should have the same result in continuous assessment and exams. Besides, school funding is conditioned by exam results. This overwhelming examination apparatus, with wide public support and just some opponents, has obviously had an impact on teaching practices.

Changes also occurred in continuous assessment. Now, every local group of schools (they comprise all grades, from kindergarten to grade 12) has to set its assessment standards for every syllabus subject. In Portuguese L1 they are usually divided into five topics: attitudes and behaviour; oracy; reading; writing; and grammar or language awareness. Due to national rankings pressure, local school standards have gradually become similar to exam standards for writing and similar to both PISA and exam standards for reading.

2. Current reading and writing assessment standards

Reading and writing literacy relates to the understanding and the production of written material. According to PISA [3] the features of this material, hence the strategies to read and write it, depend on three factors: (i) the situation or context of use of the text (private, public, occupational or educational), (ii) the type of text (continuous or non-continuous) and (iii) the instruction given by questions.

PISA reading assessment measures five aspects associated with the full understanding of a text: broad understanding, retrieve information, develop an interpretation, reflect on content and reflect on form. Figure 1 organizes these aspects and shows the relationships between them. Since PISA does not assess writing literacy, the standards used in national exams have become the general reference for Portuguese L1 testing.
The Ministry of Education (MOE) set seven criteria to assess texts written by the students during exams: subject and genre; coherence and relevance of information; structure and cohesion; morphology and syntax; vocabulary; spelling; and size [1].
3. Portuguese L1 classroom testing

The impact of the exam apparatus, gradually implemented since PISA first came into place, is apparent in classroom tests. Appendix 1 is a test created in 1992 by a group of teachers to assess reading, grammar and writing by 7-graders at the beginning of the school year. It deals with two texts, one to read and one to write. Both texts are private and continuous. The first is an excerpt of a renowned children's narrative (157 words), the second a description of a fictional situation (8-10 lines). Four questions assess four aspects of reading literacy: retrieve information, develop an interpretation, reflect on content and reflect on form. These reading aspects, though, are assessed through written answers of the students. The writing task is assessed according to four criteria: subject and genre; coherence and relevance of information; morphology and syntax; and size. There are four items to assess language awareness. The marks for each question or criterion are not shown.

Two decades later, classroom tests seem to have improved in several different aspects. Appendix 2 is a test created in 2011 by a single teacher to assess reading, grammar and writing by 7-graders at the end of the first term. It deals with three different types of texts: one public and non-continuous to read, one public and continuous to read and one occupational and continuous to write. The first one is a newspaper cover page, the second is a newspaper news item (141 words) and the third is a report about a specific event the students attended in the school library (140-180 words). Reading literacy is assessed both through written answers and through transcription alone. Three transcription questions assess retrieving information. There are another three questions to assess this aspect through written answers. The same kind of written task is asked to measure broad understanding, interpretation and reflection on form. To assess writing skills, all seven criteria set by the MOE for exams are used. There are six items about grammar. The marks for each question or criterion are clearly shown.

Between the 1990s and the 2010s, Portuguese L1 classroom tests tended to become richer and more accurate. They now deal with a wider variety and quantity of texts, reading assessment is separate from writing, there are more questions about reading and grammar, and writing literacy is measured in longer texts by seven explicit criteria. All marks are now shown.

4. Exams' results and students' performance

Since school rankings based on exam results were first published, in the early 2000s, the media and the public have emphasized the actual difference between public and private schools. Public school students' results have always been lower than the results of their peers from private schools. These facts have promoted a lively discussion about education quality in Portugal. In this context, the University of Porto (UP) made a report comparing its students' performance in 2011 with the matriculation results they had in 2008. The study aggregates all 4280 UP matriculates according to the secondary school of origin where each student answered three exams in 2008, including Portuguese L1. Around one fifth of these students come from private schools. Figure 2 shows the school ranking by exam results. Among the 12 best schools only two are public, and the first of these two is the 7th in the ranking. It confirms the national pattern of exam excellence of private schools.
However, after three years of higher education, considering those students who obtained more than 75% of the ECTS expected for that period, the ranking by school of origin is completely different. Figure 3 shows the ranking by university performance between 2008 and 2011. The best performing UP students mostly come from public schools. Only three private schools are among the 12 best. Furthermore, within each group, schools are ranked in the opposite order when compared to Fig. 2: Garcia Orta is better than the absent Aurelia Sousa; Rosario is worse than Paulo VI; and Lamas, the best private school in UP performance, is not among the 12 best in exam success. The second ranking actually looks like an inverted picture of the first one, showing a very weak relation between matriculation exam success and good university performance. In other words, students that perform well in exams are not the best students at university. These results reveal a very negative impact of the exam apparatus on teaching practices, namely in Portuguese L1. Because of the strong official and public demand for good exam results, teachers tend to centre classroom activity both on exam-like tasks and on the part of the curriculum assessed in exams. Oracy, research or collaborative work are hence excluded and a lot of time is spent in answering exams of previous years. This tendency is more acute in private schools since only these can be chosen by parents (public schools are, supposedly, attended by youngsters of their circumscription). For both public and private schools, the MOE prepares every year optional intermediate exams, either in intermediate grades without exams or to be answered some months before the actual exams. This policy has made some schools become "exam training camps", as opponents put it. The ultimate example is a school in the Algarve, considered the best school of the region in 2010 because of its excellent results in exams. The only extra-curricular activity they have is "exam training". Every now and then, the whole school stops classes for pupils to simulate an exam: formal call, one student per table, official answer sheets, two supervising teachers of a different discipline and, of course, an exam-like test to answer.

Final remarks

Since the beginning of the 21st century, Portuguese authorities, under conservative and socialist governments, have implemented a huge exam apparatus. This apparatus seems to be directly connected with the urgent need to improve national results in PISA reading literacy ratings. For the public, exams are an effort to improve literacy quality that should reflect on PISA. However, national results have only slightly improved so far and exams are not able to accurately promote lifelong learners.

During the decade, Portuguese L1 teaching practices have been affected by this impressive assessment model. They gradually became PISA-oriented. Some positive new practices have been introduced. Classroom tests now have a wider variety of texts, reading literacy is assessed without writing and there are local assessment standards based on a national framework. But negative new practices seem to put education quality at risk: a much narrower curriculum, both of knowledge and skills, caused by a focus on exam-like tasks and by continuous assessment based on exam-like tests.
These effects question whether Portugal and/or PISA are actually promoting the appropriate incentives to provide the best direction for students, teachers and schools. In PISA's words, is this assessment model motivating "students to learn better, teachers to teach better and schools to be more effective"? After five editions or cycles of PISA assessment, there now seems to emerge a need for some kind of OECD assessment of the processes it has been directly or indirectly causing in each different participant country. The primary reason for the implementation of PISA is to provide information that should affect policy decisions. So far, data provision has been delivered. We now lack information about the policies influenced by that information.

Figure 3. Best schools of origin of students: university success [4].
2018-08-03T19:57:14.080Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "d963442e472175e12fdb29975d6bc916ba69d6df", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2015/03/shsconf_iaimte2013_01003.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d963442e472175e12fdb29975d6bc916ba69d6df", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Engineering" ] }
186404081
pes2o/s2orc
v3-fos-license
EFFICIENT TRACKING AREA MANAGEMENT FRAMEWORK FOR 5G NETWORK

To accommodate the growing number of user equipments (UEs) on 5G networks, coping with the huge signaling overhead expected from UEs is a main problem to tackle. In this thesis, we develop an efficient tracking area list management (ETAM) framework for 5G cloud-based mobile networks. The proposed framework consists of two parts. The first part is executed offline and is responsible for assigning tracking areas (TAs) to TA lists (TALs). The second part is executed online and is responsible for the distribution of TALs to user equipments (UEs) during their movements across TAs. For the first part, we propose three solutions: (a) F-PAGING, favoring paging overhead over tracking area update (TAU); (b) F-TAU, favoring TAU over paging; and (c) FOTA (i.e., Fair and Optimal Assignment of TALs to TAs), a solution that uses a bargaining game to ensure a fair tradeoff between TAU and paging overhead. For the second part, two solutions are proposed to assign, in real time, TALs to different UEs. In both solutions the computation load is kept lightweight so as not to reduce the network performance. Also, neither solution needs any additional new messages when assigning TALs to UEs. The first solution takes into account only the priority between TALs. The second one, in addition to the priority between TALs, takes into account the UEs' activities (i.e., in terms of incoming communication frequency and mobility patterns) to further improve the network performance. The performance of ETAM is evaluated through analysis and simulations, and the obtained results validate its feasibility and ability to achieve its goals, improving the network performance by minimizing the cost associated with paging and TAU.

I. INTRODUCTION

A network is a collection of computers, servers, mainframes, network devices, peripherals, or other devices connected to one another to allow the sharing of data [1][3]. A network makes it possible to centralize data. All files shared by users are stored in a central location, which ensures reliability and simplifies the update process [2]. Multiple levels of defense can be implemented on a network, making it more difficult to obtain unauthorized access to data [4]. A network can be operated with a backup system that runs at specific intervals, ensuring that critical data is available from a secondary source if needed [9]. Information technologies have become an integral part of our society, having a profound socio-economic impact and elevating our daily lives with a plethora of services, from media entertainment (e.g. video) to more sensitive and safety-critical applications (e.g. e-commerce, e-Health, first responder services, etc.) [6]. If analysts' predictions are correct, just about every physical object we see (e.g. clothes, cars, trains, etc.) will also be connected to networks by the end of the decade (Internet of Things) [5]. Also, according to a Cisco forecast of the use of IP (Internet Protocol) networks by 2017, Internet traffic is evolving into a more dynamic traffic pattern [10]. Global IP traffic will correspond to the equivalent of 41 million DVDs per hour in 2017, and video communication will continue to be in the range of 80 to 90% of total IP traffic [8]. This market estimate will surely drive the growth in mobile traffic, with current predictions suggesting a 1000x increase over the next decade [7].
II. EXISTING SYSTEM

The 4G system was originally envisioned by the Defense Advanced Research Projects Agency (DARPA). DARPA selected the distributed architecture and end-to-end Internet protocol (IP), and believed at an early stage in peer-to-peer networking, in which every mobile device would be both a transceiver and a router for other devices in the network, eliminating the spoke-and-hub weakness of 2G and 3G cellular systems. Since the 2.5G GPRS system, cellular systems have provided dual infrastructures: packet-switched nodes for data services, and circuit-switched nodes for voice calls. In 4G systems, the circuit-switched infrastructure is abandoned and only a packet-switched network is provided, while 2.5G and 3G systems require both packet-switched and circuit-switched network nodes, i.e. two infrastructures in parallel. This means that in 4G, traditional voice calls are replaced by IP telephony.

A) Demerits
• Equipment has not been fully developed for the network
• The network has more complex security issues
• Network protocols and standardization have not been defined
• Not many areas have 4G coverage

III. PROPOSED SYSTEM

In this thesis, we propose an efficient tracking area list management (ETAM) framework for 5G cloud-based mobile networks. The proposed framework consists of two parts. The first part is executed offline and is responsible for assigning tracking areas (TAs) to TA lists (TALs). The second part is executed online and is responsible for the distribution of TALs to user equipments (UEs) during their movements across TAs. For the first part, we propose three solutions: (a) F-PAGING, favoring paging overhead over tracking area update (TAU); (b) F-TAU, favoring TAU over paging; and (c) FOTA (i.e., Fair and Optimal Assignment of TALs to TAs), a solution that uses a bargaining game to ensure a fair tradeoff between TAU and paging overhead. For the second part, two solutions are proposed to assign, in real time, TALs to different UEs. In both solutions the computation load is kept lightweight so as not to reduce the network performance. Also, neither solution needs any additional new messages when assigning TALs to UEs. The first solution takes into account only the priority between TALs. The second one, in addition to the priority between TALs, takes into account the UEs' activities (i.e., in terms of incoming communication frequency and mobility patterns) to further improve the network performance.

A) Merits
• High efficiency
• Feasibility
• Improved network security
• Network protocols and standardization have been clearly defined
• Many areas have 4G service
• Low cost

IV. RESULTS

A) High performance: in these experiments, OTCM gives a result with a throughput of 25%, TCC 40%, MTC 50%, and ETAM 70%.

B) Cost: in these experiments, OTCM gives a cost of 50%, TCC 60%, MTC 40%, and ETAM 30%.

C) Maximum throughput: in these experiments, OTCM gives a maximum throughput of 40%, TCC 60%, MTC 55%, and ETAM 80%.

V. CONCLUSION

We proposed an efficient tracking area list management (ETAM) framework for 5G cloud-based mobile networks. The proposed framework consists of two parts. The first part is executed offline.
The offline part is responsible for assigning tracking areas (TAs) to TA lists (TALs). The second part is executed online and is responsible for the distribution of TALs to user equipments (UEs) during their movements across TAs. For the first part, we proposed three solutions: (a) F-PAGING, favoring paging overhead over tracking area update (TAU); (b) F-TAU, favoring TAU over paging; and (c) FOTA (i.e., Fair and Optimal Assignment of TALs to TAs), a solution that uses a bargaining game to ensure a fair tradeoff between TAU and paging overhead. For the second part, two solutions were proposed to assign, in real time, TALs to different UEs. In both solutions the computation load is kept lightweight so as not to reduce the network performance. Also, neither solution needs any additional new messages when assigning TALs to UEs. The first solution takes into account only the priority between TALs. The second one, in addition to the priority between TALs, takes into account the UEs' activities (i.e., in terms of incoming communication frequency and mobility patterns) to further improve the network performance. The performance of ETAM was evaluated through analysis and simulations, and the obtained results validate its feasibility and ability to achieve its goals, improving the network performance by minimizing the cost associated with paging and TAU.

VI. FUTURE WORK

As future work, we plan to further improve the network performance, feasibility and security.
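As a rough illustration of the online part described above, the sketch below assigns, on each tracking area update, the highest-priority TAL that covers the UE's new TA. The TAL table, the priority values, and the activity-based scoring rule are all hypothetical, since the thesis does not give concrete data structures:

    from dataclasses import dataclass

    @dataclass
    class TAL:
        tas: frozenset        # tracking areas covered by this list
        priority: float       # offline-computed priority (e.g., from FOTA)

    @dataclass
    class UE:
        incoming_rate: float = 1.0   # paging-frequency weight (solution 2)
        mobility: float = 1.0        # TAU-frequency weight (solution 2)

    def assign_tal(new_ta: int, tals: list, ue: UE, use_activity: bool = False) -> TAL:
        """Online TAL assignment on a TAU event triggered when the UE enters new_ta."""
        candidates = [t for t in tals if new_ta in t.tas]
        if use_activity:
            # Solution 2 (toy rule): bias highly mobile UEs toward large TALs
            # (fewer TAUs) and frequently paged UEs toward small TALs (cheaper paging).
            score = lambda t: t.priority + (ue.mobility - ue.incoming_rate) * len(t.tas)
            return max(candidates, key=score)
        # Solution 1: priority only
        return max(candidates, key=lambda t: t.priority)

    # Hypothetical usage
    tals = [TAL(frozenset({1, 2}), 0.8), TAL(frozenset({2, 3, 4}), 0.5)]
    print(assign_tal(2, tals, UE(incoming_rate=2.0, mobility=0.5), use_activity=True))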
2019-06-13T13:10:53.290Z
2017-08-30T00:00:00.000
{ "year": 2017, "sha1": "401d8ec808b2daf177bddb09614777d5661a7f32", "oa_license": "CCBY", "oa_url": "http://ijarcs.info/index.php/Ijarcs/article/download/4868/4233", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "33ba658851cf7cd814a5344246ad8a818bc3e7a2", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
119285070
pes2o/s2orc
v3-fos-license
The galactic habitable zone of the Milky Way and M31 from chemical evolution models with gas radial flows

The galactic habitable zone is defined as the region with sufficient abundance of heavy elements to form planetary systems in which Earth-like planets could be born and might be capable of sustaining life, after surviving close supernova explosion events. Galactic chemical evolution models can be useful for studying the galactic habitable zones in different systems. We apply detailed chemical evolution models including radial gas flows to study the galactic habitable zones in our Galaxy and M31. We compare the results to the relative galactic habitable zones found with "classical" (independent ring) models, where no gas inflows were included. For both the Milky Way and Andromeda, the main effect of the gas radial inflows is to enhance the number of stars hosting a habitable planet in the region of maximum probability for this occurrence, relative to the "classical" model results. These results are obtained by taking into account the supernova destruction processes. In particular, we find that in the Milky Way the maximum number of stars hosting habitable planets is at 8 kpc from the Galactic center, and the model with radial flows predicts a number which is 38% larger than that predicted by the classical model. For Andromeda we find that the maximum number of stars with habitable planets is at 16 kpc from the center and that in the case of radial flows this number is larger by 10% relative to that predicted by the classical model.

INTRODUCTION

The Circumstellar Habitable Zone (CHZ) has generally been defined to be that region around a star where liquid water can exist on the surface of a terrestrial (i.e., Earth-like) planet for an extended period of time (Huang 1959; Shklovsky & Sagan 1966; Hart 1979). Kasting et al. (1993) presented the first one-dimensional climate model for the calculation of the width of the CHZ around the Sun and other main sequence stars. Later on, several authors improved that model: Underwood et al. (2003) computed the evolution of the CHZ during the evolution of the host star, Selsis et al. (2007) considered the case of low mass stars, and Tarter et al. (2007) re-appraised the habitability of planets around M dwarf stars.

However, what is most relevant to our paper is that a well-established correlation exists between the metallicity of stars and the presence of giant planets: host stars are on average more metal-rich than a comparison sample (Gonzalez 1997; Gonzalez et al. 2001; Santos et al. 2001, 2004; Fischer & Valenti 2005; Udry et al. 2006; Udry & Santos 2007). In particular, Fischer & Valenti (2005) and Sousa et al. (2011) presented the probabilities of the formation of giant planets as a function of the [Fe/H] values of the host star. In Sozzetti et al. (2009) and Mortier et al. (2013a) these probabilities are reported for different samples of stars.

The galactic chemical evolution can substantially influence the creation of habitable planets. In fact, the model of Johnson & Li (2012) showed that the first Earth-like planets likely formed from circumstellar disks with metallicities Z ≥ 0.1 Z⊙. Moreover, Buchhave et al. (2012), analyzing data from the Kepler mission, found that the frequencies of planets with Earth-like sizes are almost independent of the metallicity, at least up to [Fe/H] values ∼ 0.6 dex. This was confirmed by Sousa et al. (2011) from the radial velocity data.
Data from Kepler and from ground-based radial velocity surveys show that the frequencies of planets with masses and radii not so different from the Earth, and with habitable conditions, are high: ∼ 20% for stars like the Sun (Petigura et al. 2013), and between 15% (Dressing & Charbonneau 2013) and 50% (Bonfils et al. 2013) for M dwarf stars.

Habitability on a larger scale was considered for the first time by Gonzalez et al. (2001), who introduced the concept of the galactic habitable zone (GHZ). The GHZ is defined as the region with sufficient abundance of heavy elements to form planetary systems in which Earth-like planets could be found and might be capable of sustaining life. Therefore, a minimum metallicity is needed for planetary formation, which would include the formation of a planet with Earth-like characteristics (Gonzalez et al. 2001; Lineweaver 2001; Prantzos 2008). Gonzalez et al. (2001) estimated, very approximately, that a metallicity at least half that of the Sun is required to build a habitable terrestrial planet, and the mass of a terrestrial planet has important consequences for interior heat loss, volatile inventory, and loss of atmosphere. On the other hand, various physical processes may favor the destruction of life on planets. For instance, a supernova (SN) explosion sufficiently close represents a serious risk for life (Lineweaver et al. 2004; Prantzos 2008; Carigi et al. 2013). Lineweaver et al. (2004), following the prescription of Lineweaver (2001) for the probability of Earth-like planet formation, discussed the GHZ of our Galaxy. They modeled the evolution of the Milky Way in order to trace the distribution in space and time of four prerequisites for complex life: the presence of a host star, enough heavy elements to form terrestrial planets, sufficient time for biological evolution, and an environment free of life-extinguishing supernovae. They identified the GHZ as an annular region between 7 and 9 kpc from the Galactic center that widens with time. Prantzos (2008) discussed the GHZ for the Milky Way. The role of the metallicity of the protostellar nebula in the formation and presence of Earth-like planets around solar-mass stars was treated with a new formulation, and a new probability of having Earths as a function of [Fe/H] was introduced. In particular, Prantzos (2008) criticized the modeling of GHZ based on the idea of destroying life permanently by SN explosions. Recently, Carigi et al. (2013) presented a model for the GHZ of M31. They found that the most probable GHZ is located between 3 and 7 kpc from the center of M31 for planets with ages between 6 and 7 Gyr. However, the highest number of stars with habitable planets was found to be located in a ring between 12 and 14 kpc with a mean age of 7 Gyr. 11% and 6.5% of all the formed stars in M31 may have planets capable of hosting basic and complex life, respectively. However, the Carigi et al. (2013) results are obtained using a simple chemical evolution model built with the instantaneous recycling approximation, which does not allow one to follow the evolution of Fe, and where no inflows of gas are taken into account. In this work we investigate for the first time the effects of radial flows of gas on the galactic habitable zone for both our Galaxy and M31, using detailed chemical evolution models in which the instantaneous recycling approximation is relaxed, and the core collapse and Type Ia SN rates are computed in detail.
For the GHZ calculations we use the Prantzos (2008) probability of having life around a star and the Carigi et al. (2013) prescriptions for the SN destruction effect. In this work we do not take into account the possibility of stellar migration (Minchev et al. 2013; Kubryk et al. 2013); this will be considered in a forthcoming paper. The paper is organized as follows: in Sect. 2 we present our galactic habitable zone model; in Sect. 3 we describe our "classical" chemical evolution models for our Galaxy and M31; in Sect. 4 the reference chemical evolution models in the presence of radial flows are shown. The results of the galactic habitable zones for the "classical" models are presented in Section 5, whereas those in the presence of radial gas flows are given in Section 6. Finally, our conclusions are summarized in Section 7.

THE GALACTIC HABITABLE ZONE MODEL

Following the assumptions of Prantzos (2008), the probability PFE of forming Earth-like planets is taken equal to 0.4 for [Fe/H] ≥ -1 dex and zero at lower metallicities. Fischer & Valenti (2005) studied the probability of formation of a gaseous giant planet as a function of metallicity. In particular, they found the following relation for FGK-type stars in the metallicity range -0.5 < [Fe/H] < 0.5:

PGGP([Fe/H]) = 0.03 × 10^(2.0 [Fe/H]),  (1)

where GGP stands for gas giant planets. The PFE and PGGP probabilities versus [Fe/H] used in this work are reported in the upper panel of Fig. 1. Prantzos (2008) identified PGGP (eq. 1) from Fischer & Valenti (2005) with the probability of the formation of hot Jupiters, although this assumption could be questionable; in fact, this probability should be related to the formation of giant gas planets in general. However, Carigi et al. (2013) computed the GHZ for M31 testing different probabilities of terrestrial planet formation, taken from Lineweaver et al. (2004), Prantzos (2008) and The Extrasolar Planets Encyclopaedia (as of March 2013). As can be seen from their Fig. 8, the choice among these different probabilities does not modify the GHZ in a substantial way. In the lower panel of Fig. 1 the probability PE of having stars with Earth-like planets but no gas giant planets (which would destroy them) is reported. This quantity is simply given by:

PE = PFE × (1 - PGGP).  (2)

We define PGHZ(R, t) as the fraction of all stars having Earths (but no gas giant planets) which survived supernova explosions, as a function of the galactocentric radius and time:

PGHZ(R, t) = ∫0^t SFR(R, t′) PE(R, t′) PSN(R, t′) dt′ / ∫0^t SFR(R, t′) dt′.  (3)

This quantity must be interpreted as the relative probability of having complex life around one star at a given position, as suggested by Prantzos (2008). In eq. (3), SFR(R, t′) is the star formation rate (SFR) at the time t′ and galactocentric distance R, and PSN(R, t′) is the probability of surviving a supernova explosion. For this quantity we refer to the work of Carigi et al. (2013). Those authors explored different cases for the annihilation of life on formed planets by SN explosions. Among these, they assumed that the SN destruction is effective if the SN rate at any time and at any radius has been higher than the average SN rate in the solar neighborhood during the last 4.5 Gyr of the Milky Way's life (we call it <RSNSV>). Throughout our paper we refer to this condition as "case 1)". Because of the uncertainties about the real effects of SNe on the destruction of life, we also tested a case in which the annihilation is effective only if the SN rate is higher than 2 × <RSNSV>; we call it "case 2)". This condition is almost the same as that used by Carigi et al. (2013) to describe their best models.
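As an illustration of how eqs. (1)-(2) and the binary SN-survival criterion combine, the following minimal Python sketch encodes the planet-formation probabilities and the case 1)/case 2) thresholds. It is a sketch under the assumptions stated above (PFE as a step function of [Fe/H], the Fischer & Valenti relation clamped to its fitted range, and a 0/1 PSN); the function names and the clamping choice outside -0.5 < [Fe/H] < 0.5 are ours, not taken from the original works.

```python
# Sketch of the planet-formation probabilities (eqs. 1-2) and the
# binary SN-survival criterion (cases 1 and 2).

R_SNSV = 0.01356  # average solar-neighborhood SN rate, Gyr^-1 pc^-2 (Sect. 2)

def p_fe(fe_h):
    """Probability of forming an Earth-like planet (Prantzos 2008)."""
    return 0.4 if fe_h >= -1.0 else 0.0

def p_ggp(fe_h):
    """Probability of hosting a gas giant (Fischer & Valenti 2005, eq. 1),
    evaluated inside -0.5 < [Fe/H] < 0.5 and clamped outside (our choice)."""
    fe_h = max(-0.5, min(0.5, fe_h))
    return 0.03 * 10.0 ** (2.0 * fe_h)

def p_earth(fe_h):
    """Eq. (2): Earth-like planet present, no destructive gas giant."""
    return p_fe(fe_h) * (1.0 - p_ggp(fe_h))

def p_sn(sn_rate, case=2):
    """Binary survival probability: 1 if the local SN rate is below the
    chosen multiple of <R_SNSV>, 0 otherwise."""
    threshold = R_SNSV if case == 1 else 2.0 * R_SNSV
    return 1.0 if sn_rate <= threshold else 0.0
```

With these ingredients, eq. (3) reduces to a ratio of two SFR-weighted time integrals, as sketched further below.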
They imposed that, since life on Earth has proven to be highly resistant, there is no life on a planet only if the SN rate during the last 4.5 Gyr of the planet's life has been higher than twice the actual SN rate averaged over the last 4.5 Gyr (2 × <RSNSV>). For <RSNSV> we adopt the value of 0.01356 Gyr⁻¹ pc⁻², using the results of the S2IT model of Spitoni & Matteucci (2011). Some details of this model will be provided in Section 3.1. Here, we just recall that in this model the Galaxy is assumed to have formed by means of two main infall episodes: the first formed the halo and the thick disk, and the second the thin disk. In the Carigi et al. (2013) work a value of 0.2 Gyr⁻¹ pc⁻² was quoted for <RSNSV>; we believe this is just a typo, and that all their results were obtained with the correct value of <RSNSV>. We also consider the case in which the effects of SN explosions are not taken into account; with this assumption eq. (3) simply becomes:

PGHZ(R, t) = ∫0^t SFR(R, t′) PE(R, t′) dt′ / ∫0^t SFR(R, t′) dt′.  (4)

Table 1. Chemical evolution models for the Milky Way: for each model we list the infall type, the timescales τd and τH, the star formation efficiency ν, the star formation threshold, and the halo surface mass density (see text).

Detailed chemical evolution models can be a useful tool to estimate the GHZ for different galactic systems. In the next two Sections we present the models for our Galaxy and for M31. We will call "classical" the models in which no radial inflow of gas is considered. Finally, we define the total number of stars, formed at a certain time t and galactocentric distance R, hosting an Earth-like planet with life, N⋆life(R, t), as:

N⋆life(R, t) = PGHZ(R, t) × N⋆tot(R, t),  (5)

where N⋆tot(R, t) is the total number of stars created up to time t at the galactocentric distance R.

THE "CLASSICAL" CHEMICAL EVOLUTION MODELS

In this section we present the best "classical" chemical evolution models that we use in this work. For the Milky Way our reference classical model is the S2IT model of Spitoni & Matteucci (2011), whereas for M31 we refer to the model M31B proposed in Spitoni et al. (2013).

The Milky Way (S2IT model)

To follow the chemical evolution of the Milky Way without radial flows of gas, we adopt the model S2IT of Spitoni & Matteucci (2011), which is an updated version of the two-infall model of Chiappini et al. (1997). This model assumes that the halo-thick disk forms out of an infall episode independent of that which formed the thin disk. In particular, the assumed infall law is

dΣ(r, t)/dt = a(r) e^(-t/τH) + b(r) e^(-(t - tmax)/τd(r)),  (6)

where τH = 0.8 Gyr is the typical timescale for the formation of the halo and thick disk, while tmax = 1 Gyr is the time of maximum infall onto the thin disk. The coefficients a(r) and b(r) are obtained by imposing a fit to the observed current total surface mass density in the thin disk as a function of galactocentric distance, given by:

Σ(r) = Σ0 e^(-r/RD),  (7)

where Σ0 = 531 M⊙ pc⁻² is the central total surface mass density and RD = 3.5 kpc is the scale length. Moreover, the formation timescale of the thin disk, τd, is assumed to be a function of the galactocentric distance, leading to an inside-out scenario for the build-up of the Galactic disk. The Galactic thin disk is approximated by several independent rings, 2 kpc wide, without exchange of matter between them. A threshold gas density of 7 M⊙ pc⁻² for the star formation process (Kennicutt 1989, 1998; Martin & Kennicutt 2001; Schaye 2004) is also adopted for the disk. The halo has a constant present-day surface mass density of 17 M⊙ pc⁻² as a function of galactocentric distance, and a threshold for star formation in the halo phase of 4 M⊙ pc⁻², as assumed for model B of Chiappini et al. (2001). The assumed IMF is that of Scalo (1986), taken to be constant in time and space.
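To make the two-infall prescription of eqs. (6)-(7) concrete, here is a minimal numerical sketch. In the model the normalizations a(r) and b(r) are fixed by fitting the halo and present-day disk surface densities; in this sketch we simply expose them as free parameters, and the switch-on of the second infall at tmax is our own simplification. The function tau_d anticipates the inside-out relation given below in eq. (9); all names are ours.

```python
import numpy as np

T_MAX = 1.0   # Gyr, time of maximum infall onto the thin disk
TAU_H = 0.8   # Gyr, halo/thick-disk formation timescale

def tau_d(r_kpc):
    """Inside-out thin-disk formation timescale (eq. 9 below).
    Valid for disk radii where tau_d > 0."""
    return 1.033 * r_kpc - 1.267  # Gyr

def infall_rate(r_kpc, t_gyr, a, b):
    """Two-infall gas accretion rate (eq. 6), Msun pc^-2 Gyr^-1.
    a, b are the normalizations fixed in the model by the fits to the
    halo and present-day disk densities (free parameters here)."""
    halo = a * np.exp(-t_gyr / TAU_H)
    # Simplification: second infall switched on only for t >= T_MAX
    disk = b * np.exp(-(t_gyr - T_MAX) / tau_d(r_kpc)) if t_gyr >= T_MAX else 0.0
    return halo + disk

def disk_density_today(r_kpc, sigma_0=531.0, r_d=3.5):
    """Present-day total disk surface mass density (eq. 7), Msun/pc^2."""
    return sigma_0 * np.exp(-r_kpc / r_d)
```

Note that tau_d(8) ≃ 7 Gyr, the canonical thin-disk formation timescale in the solar vicinity.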
The adopted law for the SFR is a Schmidt (1959)-like one:

ψ(r, t) = ν Σgas^k(r, t),  (8)

where Σgas(r, t) is the surface gas density and the exponent k is equal to 1.5 (see Kennicutt 1998; Chiappini et al. 1997). The quantity ν is the efficiency of the star formation process; it is constant and fixed equal to 1 Gyr⁻¹. In Table 1 the principal characteristics of the S2IT model are reported: the second column gives the infall type, while the third and fourth give the timescale τd of the thin-disk formation and the timescale τH of the halo formation. The dependence of τd on the galactocentric distance R, as required by the inside-out formation scenario, is expressed by the following relation:

τd = 1.033 R(kpc) - 1.267 Gyr,  (9)

and the star formation efficiency ν is reported in column 5. The adopted threshold in the surface gas density for star formation and the total surface mass density of the halo are reported in columns 6 and 7, respectively. In the left panel of Fig. 2 the stellar surface density profile predicted by the model S2IT is reported; the observational data are those used by Chiappini et al. (2001).

M31 (M31B model)

To reproduce the chemical evolution of M31, we adopt the best model M31B of Spitoni et al. (2013). The surface mass density distribution is assumed to be exponential, with scale-length radius RD = 5.4 kpc and central surface density Σ0 = 460 M⊙ pc⁻², as suggested by Geehan et al. (2006). It is a one-infall model with inside-out formation; in other words, we consider only the formation of the disk. The timescale for the infalling gas is a function of the galactocentric radius: τ(R) = 0.62R + 1.62. The disk is divided into several shells 2 kpc wide, as for the Milky Way. In order to reproduce the gas distribution, model M31B adopts the SFR of eq. (8) with the following star formation efficiency: ν(R) = 24/R - 1.5, until ν reaches a minimum value of 0.5 Gyr⁻¹, beyond which it is assumed to be constant (a minimal numerical sketch of these star formation prescriptions is given at the end of this Section). Finally, a threshold in gas density for star formation of 5 M⊙ pc⁻² is considered, as suggested by Braun et al. (2009). The model parameters for the timescale of the infalling gas, the star formation efficiency, and the threshold are summarized in Table 2. The assumed IMF is that of Kroupa et al. (1993). Our reference model of M31 overestimates the present-day SFR data; in fact, our "classical" model is similar to that of Marcon-Uchida et al. (2010). The same model, however, reproduces well the oxygen abundance along the M31 disk. We think that more uncertainties are present in the derivation of the star formation rate than in the abundances.
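As anticipated above, here is a minimal sketch of the star formation law of eq. (8), including the gas-density thresholds used in the classical models (7 M⊙ pc⁻² for the Milky Way thin disk, 5 M⊙ pc⁻² for M31) and the two efficiency choices just described. The treatment of ν below its floor value for M31 follows the prose description; the function names are ours.

```python
def sfr(sigma_gas, nu=1.0, k=1.5, threshold=7.0):
    """Schmidt-like SFR (eq. 8) with a gas-density threshold.
    sigma_gas in Msun/pc^2, nu in Gyr^-1; returns Msun pc^-2 Gyr^-1."""
    if sigma_gas < threshold:
        return 0.0
    return nu * sigma_gas ** k

def nu_m31(r_kpc):
    """M31B star formation efficiency: 24/R - 1.5, floored at 0.5 Gyr^-1."""
    return max(24.0 / r_kpc - 1.5, 0.5)

# Example: SFR at 8 kpc in M31 for a gas surface density of 10 Msun/pc^2
print(sfr(10.0, nu=nu_m31(8.0), threshold=5.0))  # ~47.4 Msun pc^-2 Gyr^-1
```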
2 labeled as "pattern III". The range of velocities span between 0 and 3.6 km s −1 . We recall here that in the implementation of the radial inflow of gas in the Milky Way, presented by Mott et al. (2013), only the gas that resides inside the Galactic disk within the radius of 20 kpc can move inward by radial inflow, and as a boundary condition we impose that there is no flow of gas from regions outside the ring centered at 20 kpc. In the left panel of Fig. 2 the surface density profile for the stars predicted by the model RD is reported. Observational data are the ones used by Chiappini et al. (2001). M31 (M31R model) M31R is the best model for M31 in presence of radial flows presented by Spitoni et al. (2013). It assumes a constant star formation efficiency, fixed at the value of 2 Gyr −1 and, it does not include a star formation threshold. At variance with the RD model, where the radial flows was applied to a two-infall model, for the M31R model it was possible to find a linear velocity pattern as a function of the galactocentric distance. The radial inflow velocity pattern requested to reproduce the data follows this linear relation: and spans the range of velocities between 1.55 and 0.65 km s −1 as shown in Fig. 2. Therefore in the external regions the velocity inflow for the Milky Way model is higher than the M31 velocity flows as shown in Fig. 2. At 20 kpc the ratio between the inflow velocities is vRD/vM31R ≃ 2.5. The model M31R fits the O abundance gradient in the disk of M31 very well. The other model parameters are reported in Table 2. We recall here that in the implementation of the radial inflow of gas in M31 presented by Spitoni et al. (2013), only the gas that resides inside the Galactic disk within the radius of 22 kpc can move inward by radial inflow, and as boundary condition we impose that there is no flow of gas from regions outside the ring centered at 22 kpc, as already discussed for the Milky Way. THE CLASSICAL MODEL GHZ RESULTS In this section we report our results concerning the GHZ using "classical" chemical evolution models. The Milky Way model results We start to present the Milky Way results for the models without any radial flow of gas. In Fig. 3 the probability PGHZ(R, t) for the model S2IT of our Galaxy without the effects of SN are reported, at 1, 2, 4, 8, 13 Gyr. We recall that S2IT is a two infall model. Comparing Table 1 with Table 2 we notice another important difference between the S2IT and M31B models. For the Milky Way the star formation efficiency ν is constant and taken equal to 1 Gyr −1 whereas for M31 is ν= 24/(R[kpc])-1.5. This is the reason why our results are different from the Prantzos (2008) ones. In fact the reference chemical evolution model for the Milky Way in Prantzos (2008) is the one described in Boissier & Prantzos (1999), where the SFR is ∝ R −1 . Hence at variance with the Prantzos (2008) GHZ results, the probability that a star has a planet with life is high also in the external regions at the early times. At 1 Gyr we notice that the PGHZ values are constant along the disk. The reason for that resides in the fact that in the first Gyr the SFR is the same at all radii, since it reflects the SF in the halo (see Fig.1 of Spitoni et al. 2009). For our Galaxy we just show the results of the SN case 2) model reported in Fig. 4. This is our best model for the Milky Way considering our SN rate history. 
With this model, the region with the highest probability that a formed star can host an Earth-like planet with life is between 8 and 12 kpc, with the maximum located at 10 kpc. The outer parts of the Milky Way are affected by SN destruction: in fact, as time goes by, the SN rate predicted by the Galactic evolution can overcome the value fixed by case 2), 2 × <RSNSV>, and consequently PGHZ drops. This is shown in Fig. 5, where the SN rates at 8 and 20 kpc are reported for our Galaxy. Fig. 4 shows that the probability PGHZ increases in the outer regions as time goes by, in agreement with the previous works of Lineweaver et al. (2004) and Prantzos (2008). In both works it was also found that the peak of maximum probability moves outwards with time. With our chemical evolution model such an effect is not present, and the peak is always located at 10 kpc from the Galactic center; this is probably due to the balance between the destruction by SNe and the SFR occurring at this distance. In Fig. 6 we present our results concerning the quantity N⋆life, i.e., the total number of host stars as a function of the Galactic time and the galactocentric distance, for the model S2IT in the presence of the SN destruction effect (case 2). As found by Prantzos (2008), the GHZ expressed in terms of the total number of host stars peaks at galactocentric distances smaller than in the case in which the fraction of stars (eq. 3) is considered. This is due to the fact that in the external regions the number of stars formed at any time is smaller than in the inner regions. In fact, the maximum number of host stars peaks at 8 kpc, whereas the maximum fraction of stars peaks at 10 kpc (see Fig. 4). Our results are in perfect agreement with Lineweaver et al. (2004), who identified for the Milky Way the GHZ as an annular region between 7 and 9 kpc from the Galactic center that widens with time.

M31 model results

In Fig. 7 we show the probability PGHZ(R, t) of having Earths as a function of the galactocentric distance, without the effects of SN explosions, for the classical model M31B at 1, 2, 4, 8, and 13 Gyr. The shape and time evolution of PGHZ(R, t) for M31 are similar to those found by Prantzos (2008) for the Milky Way. The similar behavior is due to the choice of similar prescriptions for the SFR: for M31, as can be inferred from Table 2, the SF efficiency is a function of the galactocentric distance, following the law ν = 24/(R[kpc]) - 1.5. In Fig. 7 we can see that, as time goes by, non-negligible PGHZ values extend to the external parts of the galaxy. Early on, at 1 Gyr, non-zero values of PGHZ are found only in the inner regions, at distances smaller than 10 kpc from the galactic center. This plot visualizes the gradual extension of the fraction of stars with habitable planets to the outer regions during the galactic time evolution. The GHZ for the classical model of M31, taking into account case 1) for the SN destruction effects, is reported in the left panel of Fig. 8. Comparing our results with those of Carigi et al. (2013) obtained with the same prescriptions for the SN destruction effect (their Fig. 6, first upper panels), we find substantial differences in the inner regions (R ≲ 14 kpc): at variance with that paper, we have a SN rate high enough to annihilate life on the formed planets.
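The map of Fig. 6 (and that of Fig. 9 below) is, in essence, eq. (5) evaluated on a radius-time grid. A minimal sketch, assuming the model outputs the cumulative number of stars formed, N⋆tot(R, t), and the PGHZ(R, t) values computed as above; the radii and numbers below are purely illustrative, not model output:

```python
import numpy as np

def n_star_life(p_ghz_grid, n_star_tot_grid):
    """Eq. (5): element-wise product of PGHZ(R, t) and the cumulative
    number of stars N*tot(R, t) on a (radius x time) grid."""
    return p_ghz_grid * n_star_tot_grid

# Toy illustration of the peak shift: PGHZ rises outwards while the star
# counts fall outwards, so N*life peaks at an intermediate radius.
radii = np.array([4.0, 8.0, 12.0, 16.0, 20.0])         # kpc (hypothetical)
p_ghz_now = np.array([0.05, 0.25, 0.30, 0.32, 0.33])    # illustrative values
n_tot_now = np.array([8.0, 5.0, 2.0, 0.8, 0.2]) * 1e9   # illustrative counts
print(radii[np.argmax(n_star_life(p_ghz_now, n_tot_now))])  # -> 8.0 kpc
```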
In Fig. 9, where the total number of stars having Earths (N⋆life) is reported, it is clearly shown that the region with no host stars spans all galactocentric distances smaller than or equal to 14 kpc during the entire galactic evolution. On the other hand, the two models are in very good agreement for the external regions. We must recall that, at variance with our models, the Carigi et al. (2013) model is not able to follow the evolution of [Fe/H], because they did not consider Type Ia SN explosions; the Type Ia SNe are also missing from their total budget of the SN rate. In Fig. 10 we show the contribution of the Type Ia SN rate, expressed in Gyr⁻¹ pc⁻², for the M31B model at 8 kpc. We note that it is not negligible and must be taken into account for a correct description of the chemical evolution of the galaxy. It is fair to recall here that our model M31B overestimates the present-day SFR. Therefore, we are aware that in recent times there could have been favorable conditions for the growth of life, as found in the Carigi et al. (2013) work, also in regions at galactocentric distances R ≲ 14 kpc. Anyway, as stated by Renda (2005) and Yin et al. (2009), the Andromeda galaxy had an overall higher star formation efficiency than the Milky Way; hence, during the galactic history the higher SFR probably led to unfavorable conditions for life. The case 2) model is reported in the right panel of Fig. 8. As expected, the habitable zone increases in the inner regions, reaching non-zero values for radii > 10 kpc. At variance with case 2) for the S2IT model, we see that the external regions are not affected by the SN destruction. In fact, the SF efficiency is lower in the external regions when compared with that of the Milky Way; moreover, the SN rates from the galaxy evolution of M31 always remain below the value fixed by case 2), 2 × <RSNSV>, and consequently PGHZ does not change.

RADIAL FLOWS GHZ RESULTS

We now pass to analyze our results concerning both M31 and our Galaxy in the presence of radial flows of gas.

The Milky Way model results in presence of radial flows

In the left panel of Fig. 11 the RD model results without SN destruction are presented. Although the RD model does not include any threshold in the SF, at 20 kpc we note a deep drop in the probability PGHZ(R, t), at variance with what we have shown for the "classical" model S2IT without SN effects (Fig. 3). The explanation can be found in Fig. 2: in the outer parts of the Galaxy the inflow velocities are roughly 2.5 times larger than the M31 ones, creating a severe drop in the SF. In Fig. 12 we report the SFR and [Fe/H] histories at 20 kpc for the RD and S2IT models, respectively. For the [Fe/H] plot we fixed the lower limit at -1 dex, because this is the threshold for the creation of habitable planets. Concerning the RD model, the high inflow velocity has the effect of removing a non-negligible amount of gas from the shell centered at 20 kpc (the outermost shell for the Milky Way model). In the lower left panel of Fig. 12 we see that the maximum value of the SFR for the RD model is 0.01 M⊙ pc⁻² Gyr⁻¹. It is also important to recall here that in the RD model, for galactocentric distances ≳ 10 kpc, a constant halo surface mass density of σH ≃ 0.01 M⊙ pc⁻² is considered, at variance with the model S2IT, where it has been fixed at 17 M⊙ pc⁻². The PGHZ value depends on the product of the [Fe/H] and SFR quantities.
Because the RD model at 20 kpc shows [Fe/H] > -1 only in small ranges of time (0.6 < t < 1 Gyr and t > 10.3 Gyr), we find very small values of PGHZ during the whole Galactic history. On the right side of Fig. 12 the different behavior of the model S2IT at 20 kpc is reported: in this case, even if there is a threshold in the star formation, we have [Fe/H] > -1 dex at all times, and the SFR, when it is not zero, is roughly 800 times higher than in the RD model. This is why in Fig. 3 the model S2IT shows higher values of PGHZ at 20 kpc. We now discuss the probability PGHZ(R, t) at 2 Gyr in the model RD, which drops almost to zero for galactocentric distances ≳ 10 kpc. In Fig. 13 we report the SFR and [Fe/H] histories for the model RD at 10 kpc. We recall that in the model RD the halo surface density is very small for R ≳ 10 kpc (≃ 0.01 M⊙ pc⁻²). We can estimate the PGHZ(10 kpc, 2 Gyr) value obtained with eq. (4) using simple approximate analytical calculations. For the numerator, we note that in the interval of time between 0 and 2 Gyr, [Fe/H] > -1 dex only in approximately the range 0.5-1 Gyr, where PE = 0.4. The lower panel of Fig. 13 shows that over this interval the SFR is roughly constant and ≃ 4 × 10⁻⁴ M⊙ pc⁻² Gyr⁻¹. Hence, we can approximate the numerator integral as:

N ≃ PE × SFR × Δt = 0.4 × (4 × 10⁻⁴ M⊙ pc⁻² Gyr⁻¹) × 0.5 Gyr ≃ 8 × 10⁻⁵ M⊙ pc⁻².

The denominator of eq. (4) is

D = ∫0^2 SFR(10 kpc, t′) dt′.

The integral of the SFR in the first Gyr is negligible compared with the one computed in the interval between 1 and 2 Gyr. A lower-limit estimate of D is then given by:

D ≳ (1/2) × 1.35 M⊙ pc⁻² Gyr⁻¹ × 1 Gyr ≃ 0.68 M⊙ pc⁻²,

where a linear growth of the SFR from 0 to 1.35 M⊙ pc⁻² Gyr⁻¹ in the interval 1-2 Gyr was assumed, with reference to Fig. 13. The resulting probability, PGHZ(10 kpc, 2 Gyr) ≲ N/D ≃ 1.2 × 10⁻⁴, is indeed negligible (a numerical check of this estimate is sketched at the end of this subsection). Here again, as done for the Milky Way "classical" model, we report the results for the best model in the presence of SN destruction which is capable of predicting the existence of stars hosting Earth-like planets in the solar neighborhood. The SN destruction case 2) again gives the best results (see Fig. 11). We notice that the external regions are not affected by the SN destruction, at variance with what we have seen above for the "classical" model S2IT. Comparing the "classical" model of Fig. 4 with the right panel of Fig. 11, we see that for the Milky Way the main effect of a radial inflow of gas is to enhance the probability PGHZ in the outer regions when the supernova destruction is also taken into account. In Fig. 14 the quantity N⋆life is drawn for the RD model. We see that the region with the maximum number of host stars is centered at 8 kpc, and that this number decreases toward the external regions of the Galaxy. The reason for this is shown in the right panel of Fig. 2: in the RD model the radial inflow of gas is strong enough in the external parts of the Milky Way to lower the number of stars formed. Hence, although the probability PGHZ is still high at large galactocentric distances, the total number of stars formed there during the entire Galactic history is smaller compared to the internal Galactic regions. At the present time, at 8 kpc the total number of host stars is increased by 38% compared to the S2IT model results.
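As promised above, here is a short numerical check of the PGHZ(10 kpc, 2 Gyr) estimate, using the discretized form of eq. (4) (no SN term). The piecewise SFR and [Fe/H] histories below are idealizations of Fig. 13, not model output:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)  # Gyr

# Idealized Fig. 13 histories at R = 10 kpc (our simplification):
# SFR ~ 4e-4 Msun/pc^2/Gyr for 0.5 < t < 1 Gyr, then linear 0 -> 1.35
sfr = (np.where((t > 0.5) & (t < 1.0), 4e-4, 0.0)
       + np.where(t >= 1.0, 1.35 * (t - 1.0), 0.0))
# PE = 0.4 only where [Fe/H] > -1, i.e. approximately 0.5 < t < 1 Gyr
pe = np.where((t > 0.5) & (t < 1.0), 0.4, 0.0)

p = np.trapz(sfr * pe, t) / np.trapz(sfr, t)
print(f"{p:.1e}")  # ~1.2e-04, matching the analytic upper limit
```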
M31 model results in presence of radial flows

The last results concern the GHZ of M31 in the presence of radial flows of gas. In Fig. 15 we report the probability PGHZ(R, t) of having Earths, without the effects of SN explosions, for the model M31R at 1, 2, 4, 8, and 13 Gyr. The main effect of the gas radial inflow with the velocity pattern of eq. (10) is to enhance, at any time, the probability of finding a planet with life around a star in the outer regions of the galaxy, compared to the classical result reported in Fig. 7, even when the destruction effect of SNe is not taken into account. This behavior is due to two main reasons: 1) the M31R model has a constant SF efficiency as a function of the galactocentric distance, which means a higher SFR in the external regions; 2) the radial inflow velocities are small in the outer part of M31 compared to those used for the Milky Way model RD (see Fig. 2), therefore the gas removed from the outer shell in the M31R model (the one centered at 22 kpc) is very small. In fact, in Fig. 15 the drop in the PGHZ(R, t) quantity at 22 kpc is almost negligible compared to the one reported in Fig. 11 for the RD model of the Milky Way, described in Section 6.1. For the M31R model with radial flows we show only the results for case 1) of the SN destruction. In Fig. 16 the effects of the case 1) SN destruction on the M31R model are shown, i.e., the probability PGHZ(R, t) of having Earths, including the effects of SN explosions with the case 1) prescription, at 1, 2, 4, 8, and 13 Gyr. The external regions, at galactocentric distances ≳ 16 kpc, are not affected by SNe; on the other hand, for radii < 14 kpc, owing to the higher SN rate relative to <RSNSV>, the conditions to create Earth-like planets never occur during the whole galactic time. In Fig. 17 the total number of host stars, N⋆life, is shown. Also in this case, in the external parts the total number of stars formed, and consequently of those hosting habitable planets, is small compared to the inner regions. The galactocentric distance with the maximum number of host stars is 16 kpc; presently, at this distance the total number of host stars N⋆life is increased by 10% compared to the M31B model results. Comparing Fig. 17 with Fig. 9, we note that, at variance with the Milky Way results, the N⋆life value for the M31R model is always higher than the M31B one. This is due to the slower inflow of gas and the constant star formation efficiency, which favor the formation of more stars. For case 2), we just mention that, as expected, the GHZ is wider than in case 1) described above, and that for radii < 12 kpc the conditions to create Earth-like planets never occur during the whole galactic time.

CONCLUSIONS

In this paper we computed the galactic habitable zones (GHZs) of our Galaxy and M31, taking into account "classical" models and models with radial gas inflows. We summarize here our main results, presented in the previous Sections. Concerning the "classical" models we obtained:

• The Milky Way model which is in agreement with the work of Lineweaver et al. (2004) assumes case 2) for the SN destruction (the SN destruction is effective if the SN rate at any time and at any radius is higher than two times the average SN rate in the solar neighborhood during the last 4.5 Gyr of the Milky Way's life). With this assumption, we find that the Galactic region with the highest number of stars hosting an Earth-like planet is between 7 and 9 kpc, with the maximum localized at 8 kpc.

• For Andromeda, comparing our results with those of Carigi et al. (2013), obtained with the same prescriptions for the SN destruction effects, we find substantial differences in the inner regions (R ≲ 14 kpc).
In particular, in this region there is a SN rate high enough to annihilate life on the formed planets, at variance with Carigi et al. (2013); nevertheless, we agree with them for the external regions. It is important to stress the most important limitation of the Carigi et al. (2013) model: Type Ia SN explosions were not considered. We have shown, instead, that this contribution is important both for Andromeda and for the Galaxy. In this work, the effects of radial gas flows on the GHZ evolution were tested for the first time in the framework of chemical evolution models.

• Concerning the models with radial gas flows, for both the Milky Way and M31 the effect of the gas radial inflows is to enhance the number of stars hosting a habitable planet, with respect to the "classical" model results, in the region of maximum probability for this occurrence.

In more detail, we found that:

• At the present time, for the Milky Way, if we treat the SN destruction effect following the case 2) criterion, the total number of host stars as a function of the Galactic time and galactocentric distance tells us that the maximum number of stars is centered at 8 kpc, and the total number of host stars is increased by 38% compared to the "classical" model results.

• In M31, the main effect of the gas radial inflow is to enhance, at any time, the fraction of stars with habitable planets, described by the probability PGHZ, in the outer regions compared to the classical model results, even for the models without SN destruction. This is due to the fact that: i) the M31R model has a fixed SF efficiency at all galactocentric distances, which means that in the external regions there is a higher SFR compared to the "classical" M31B model; ii) the radial inflow velocities are smaller in the outer part of the galaxy compared to the ones used for the Milky Way model RD, therefore not much gas is removed from the outer shell. The galactocentric distance with the maximum number of host stars is 16 kpc; presently, at this distance the total number of host stars is increased by 10% compared to the M31B model results. These values for the M31R model are always higher than the M31B ones.

Despite the fact that in the future it will be very unlikely to observe habitable planets in M31 that could confirm these model results about the GHZ, our aim here was to test how GHZ models change for different types of spiral galaxies and different chemical evolution prescriptions.
2014-03-10T15:27:12.000Z
2014-03-10T00:00:00.000
{ "year": 2014, "sha1": "80ddf27ca186180e6ba552309962b0b48ebb55b6", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/440/3/2588/23992065/stu484.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "80ddf27ca186180e6ba552309962b0b48ebb55b6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237262762
pes2o/s2orc
v3-fos-license
Mixed effects and mechanisms of cannabinoids for triple-negative breast cancer treatment

Triple-negative breast cancer (TNBC) is a subtype of breast cancer characterized by the lack of estrogen receptors (ER), progesterone receptors, and HER-2 receptors. Thus, TNBC tumors do not benefit from the current therapies targeting ER or HER-2, and there is an urgent need to develop novel treatments for this subtype of breast cancer. Marijuana is a common name given to Cannabis plants, a group of plants in the Cannabis genus of the Cannabaceae family. Cannabis plants are among the oldest cultivated crops, traced back at least 12,000 years, and are well known for their multi-purpose usage, including medicinal purposes. The main active compounds extracted from Cannabis plants are 21-carbon-containing terpenophenolics, which are referred to as phytocannabinoids. Of these, the tetrahydrocannabinol (THC) group contains highly potent cannabinoids, including delta-9-tetrahydrocannabinol (∆9-THC) and delta-8-tetrahydrocannabinol (∆8-THC), which are the most abundant THCs and are largely responsible for the psychological and physiological effects of marijuana. The use of Cannabis plants for medicinal purposes was first recorded in 2337 BC in China, where Cannabis plants were used to treat pain, rheumatism, and gout. Recently, several cannabinoids have been approved for a number of treatments, one of which is the treatment of nausea and vomiting caused by chemotherapy in cancer patients. Furthermore, increasing evidence shows that cannabinoids not only attenuate side effects of cancer treatment, but might also possess direct antitumor effects in several cancer types, including breast cancer. However, the antitumor activity of marijuana has been variable across studies and has even promoted tumor growth in some cases. In addition, the mechanisms of cannabinoid action in cancer remain unclear. This review summarizes evidence about the mixed actions of cannabinoids in cancer in general and triple-negative breast cancer in particular.

Introduction

Breast cancer is the most commonly diagnosed cancer in women worldwide. In 2019, the total number of female patients diagnosed with breast cancer for the first time worldwide was approximately 2 million, which accounted for 18.9% of all female cancer incidence [1]. The incidence of breast cancer in developed countries is 66.4 per 100,000 people, twice as high as that in developing countries [2]. In New Zealand, the rate of breast cancer in 2019 was approximately 80 per 100,000 [1]. Geographical regions with high incidences of breast cancer include West and North Europe, Australia/New Zealand and North America [1]. It has been estimated that the probability of a woman developing breast cancer in her lifetime is approximately one in ten [3]. Although mammography screening has decreased breast cancer death rates [4,5], breast cancer remains the leading cause of cancer mortality in women. In 2019, there were approximately 689,000 breast cancer deaths among females globally, accounting for 15.9% of total female cancer mortality [1]. Breast cancer is a burden for patients and society, and, as with any other malignant disease, being diagnosed with breast cancer is always an acute emotional shock, which may permanently and substantially affect the physical and intellectual capacity of the patients [6,7]. In Sweden,
the total annual cost required for a patient younger than 50 years of age with metastatic breast cancer was estimated at $43,565 USD [8]. In France, trastuzumab accounted for 44% of total treatment cost (Poncet, 2009), and in Canada, the annual cost of trastuzumab required for a metastatic breast cancer patient was $28,350 USD [9]. Therefore, it is imperative that breast cancer be effectively managed, and for that purpose, searching for novel treatments is crucial.

Classification of breast cancer

It has been established that the expression status of the estrogen receptor (ER), progesterone receptor (PR) and HER-2 significantly influences the development, prognosis and treatment outcome of breast cancer [10]. Therefore, breast cancer has been classified according to differential expression of ER, PR and HER-2. Based on the presence of ER in cancer cells, breast cancer is classified into ER-positive (ER+), which expresses the estrogen receptor α gene, and ER-negative (ER-), which lacks it. The survival and growth of ER+ breast cancer cells depend on estrogen binding, whereas ER- cells rely on other growth factors, including epidermal growth factor (EGF) and vascular endothelial growth factor [11]. ER+ breast cancer accounts for approximately 75% of all breast cancers. Compared to ER+, ER- breast cancer is often associated with a poorer prognosis and more aggressive progression, and, in contrast to ER+, ER- is unlikely to respond to hormonal treatments [12]. The presence of PR in the cancer cells, along with ER, is more predictive of a hormone-responsive tumor than the presence of ER alone [13,14]. ER+/PR+ tumors were reported to exhibit a response rate of 75% to hormonal therapies, compared to a response rate of 33% in ER+/PR- tumors [15]. Furthermore, approximately one in five breast cancer patients exhibits amplification of the HER-2 gene and overexpression of the receptor. HER-2-positive breast cancer is associated with more aggressive tumor behavior [16]. In addition, HER-2-positive breast cancer is also unlikely to respond to hormonal therapies; however, it responds to anti-HER-2 monoclonal antibodies such as trastuzumab, a specific targeted treatment for HER-2-positive breast cancer [12].

Triple-negative breast cancer

Triple-negative breast cancer (TNBC) is a subtype of breast cancer characterized by the lack of ER, PR and HER-2. TNBC accounts for 10-20% of all breast cancer [11]. Compared to other subtypes of breast cancer, clinical features of TNBC include poor outcome, aggressive progression of the primary tumors and metastasis, shorter survival and a high mortality rate. In a cohort study of 1601 breast cancer patients, Dent (2007) found that 11.2% of the patients were of the TNBC subtype [17]. In addition, the results showed that TNBC carries an increased risk of distant recurrence and death within 5 years of diagnosis, but not thereafter; similarly, it has been noted that there is a sharp decrease in survival during the first 5 years after diagnosis of TNBC. A study of 496 patients with invasive breast cancer from the Carolina Breast Cancer Study showed that TNBC accounted for 26% of the total breast cancer patients in the study [18]. TNBC tumors were mainly grade 3 with a high mitotic index. In addition, TNBC was more frequent in premenopausal patients and often associated with overexpression of EGFR and p53 [19]. TNBC was also reported to be closely related to BRCA1 mutation: it was reported that 75% of breast cancer tumors with BRCA1 mutation have a TNBC phenotype [10].
TNBC lacks ER and HER-2; therefore, TNBC does not benefit from endocrine therapy or trastuzumab [12]. The major treatment options for TNBC are surgery, chemotherapy and radiation. In addition to these conventional treatment options, several targeted therapies, including EGFR inhibitors, PARP inhibitors and angiogenesis inhibitors, are in clinical trials [20,21]. As there is currently no targeted treatment for TNBC, there is an urgent need to develop novel treatments for this subtype of cancer.

Marijuana-derived compounds

Cannabinoids can be classified into three main groups: natural cannabinoids derived from the plant Cannabis sativa (phytocannabinoids), endogenous cannabinoids originating within the body (endocannabinoids), and artificially synthesized cannabinoid analogues (synthetic cannabinoids) [22,23]. Phytocannabinoids are 21-carbon-containing terpenophenolic compounds derived from the plants Cannabis sativa and Cannabis indica. At least 85 phytocannabinoids have been isolated and chemically characterized. Based on their chemical structures, phytocannabinoids are further categorized into different groups. Of these, the tetrahydrocannabinol (THC) group contains highly potent cannabinoids, including delta-9-tetrahydrocannabinol (∆9-THC) and delta-8-tetrahydrocannabinol (∆8-THC), which are the most abundant and largely responsible for the psychological and physiological effects of marijuana [23]. After the presence of CB1 and CB2 cannabinoid receptors in the brain and the immune system was recognized, another group of cannabinoids was found. Endocannabinoids are a group of ligands for cannabinoid receptors that are endogenously biosynthesized within the body. Endocannabinoids are all arachidonic acid derivatives, including arachidonoylethanolamine (anandamide or AEA), 2-arachidonoyl glycerol (2-AG), 2-arachidonoyl glyceryl ether (noladin ether), N-arachidonoyl-dopamine (NADA), and O-arachidonoyl-ethanolamine (virodhamine or OAE) [24,25]. Endocannabinoids and the cannabinoid receptors present within the body are the two major components of the endocannabinoid system, which has now been revealed to have important roles in the modulation of neurotransmitter release; the control of cell survival, transformation, proliferation and metabolism; and the regulation of pain perception and of cardiovascular, gastrointestinal and respiratory functions [26]. Synthetic cannabinoids are a large group of structurally diverse compounds that are able to bind to the cannabinoid receptors and have cannabimimetic activities. Classical synthetic cannabinoids refer to compounds that retain parts of the dibenzopyran ring of THC [27]. The first generation of classical synthetic cannabinoids includes HU-210 and nabilone (Cesamet®, Lilly), the latter of which was approved by the U.S. Food and Drug Administration (FDA) for treatment of chemotherapy-induced nausea and vomiting [28]. The second generation of classical synthetic cannabinoids, synthesized by JW Huffman's group, has a variety of core ring structures; the compounds of this group are named JWH-compounds after their creator, and include JWH-018, JWH-133, JWH-075 and JWH-019 [29]. In contrast to classical synthetic cannabinoids, non-classical compounds, including cyclohexylphenol (CP) compounds and aminoalkylindole (AAI) compounds, lack the dibenzopyran ring in their molecular structures. CP cannabinoids were developed by Pfizer and include CP-47,497 and CP-55,940 [30].
Aminoalkylindole (AAI) cannabinoids were developed at Sterling Winthrop, with WIN55,212-2 being the most potent compound in this series [31].

Cannabinoid receptors

To date, two specific receptors for cannabinoid ligands have been cloned from mammalian cells. The first cannabinoid receptor, the CB1 receptor, was cloned in 1990 from a rat cerebral cortex cDNA library [32]. The second cannabinoid receptor, the CB2 receptor, was cloned in 1993 by Munro et al. from human promyelocytic leukemic HL-60 cells [33]. Although the two receptors share a large number of common cannabinoid ligands, they differ substantially from each other in many aspects, including amino acid sequence, distribution within the body and downstream signaling cascades. Human and rat CB1 receptor proteins consist of 427 and 473 amino acids, respectively, with 97-99% amino acid sequence identity across species. The CB2 receptor was cloned from human leukemic HL-60 cells as a cDNA fragment that encodes a protein of 360 amino acids, with 82% and 81% amino acid sequence identity to the mouse and rat CB2 receptors, respectively. CB1 and CB2 receptor proteins show 44% identity overall and 68% for the transmembrane residues, which are considered to determine ligand specificity for the receptors [23]. The distribution of the CB1 and CB2 cannabinoid receptors is also different. CB1 receptors are found primarily in the brain and are the most abundant G protein-coupled receptor there. In particular, the CB1 receptor is highly expressed in the basal ganglia and cerebellum, cortex and hippocampus, amygdala, thalamus, hypothalamus, pons, and medulla [23]. In addition, the CB1 receptor is also expressed in peripheral nerve terminals and extraneural tissues such as testes, eyes, vascular endothelium and spleen [34]. CB2 receptors are found primarily in immune cells (B and T cells and macrophages) and tissues (spleen, tonsils and lymph nodes) [35]. Cannabinoid receptors belong to the superfamily of G protein-coupled receptors (GPCRs), also known as seven-transmembrane domain (7TM) receptors [36]. It has been established that CB1 receptors are coupled to Gi/o and Gs proteins, while CB2 receptors are coupled to Gi/o proteins [37,38]. Activation of CB1 and CB2 receptors leads to inhibition of adenylyl cyclase. In addition, activated cannabinoid receptors can also modulate signaling cascades involved in the regulation of cell survival, growth, and proliferation [39]. Major downstream signaling cascades of cannabinoid receptors include extracellular signal-regulated kinase, c-Jun N-terminal kinase and p38 mitogen-activated protein kinase, phosphatidylinositol 3-kinase/Akt and focal adhesion kinase [40]. In addition, the CB1 receptor can modulate certain types of ion channels, which plays a crucial role in the neuromodulatory actions of the endocannabinoids: the CB1 receptor can inhibit N-, L- and P/Q-type voltage-sensitive Ca2+ channels [41,42] and activate G protein-activated inwardly rectifying K+ channels [43].

Anticancer effects of cannabinoids in triple-negative breast cancer

TNBC lacks ER and HER2, the targets for selective estrogen receptor modulators (SERMs) and anti-HER2 monoclonal antibodies. Therefore, it is imperative that alternative treatments for TNBC be developed in order to effectively manage this poor-prognosis, highly aggressive subtype of breast cancer.
Due to the lack of specific treatment for TNBC, and because cannabinoids have produced antitumoral effects in some other cancers, recent attention has also been drawn to the possibility of using cannabinoids as a treatment for TNBC [44]. It has been reported that cannabinoid receptors are overexpressed in primary human breast tumors compared to normal breast tissue [44]. In breast cancer tissue, Caffarel (2006) showed that CB2 expression was higher than CB1 expression in the same tumor [45]. In addition, CB2 expression appeared to correlate positively with the grade of the tumors. CB2 mRNA levels were found to be higher in ER(-), PR(-), and HER2(-) breast cancer tumors than in ER(+), PR(+), and HER2(+) tumors, respectively [45]. Studies have also confirmed the presence of CB1 and CB2 receptors in TNBC cells using reverse transcriptase and real-time PCR coupled with confocal microscopy [44,46]. In vitro, treatment with cannabinoids has been carried out in various TNBC cells, of which the most commonly used cell lines are MDA-MB-231 and MDA-MB-468. Studies demonstrated that cannabinoids inhibit the survival and proliferation of TNBC cells in a dose-dependent and time-dependent manner [45]. In addition, the use of CB1 and CB2 receptor antagonists was found to prevent cannabinoid-induced cell death, suggesting that the inhibitory effects of cannabinoids against TNBC cells were mediated via cannabinoid receptors. The antiproliferative effects of cannabinoids in vitro can be attributed to induction of apoptosis and cell cycle arrest [44,47]. In vivo, cannabinoids have been reported to suppress the growth of TNBC xenografts. In one study, MDA-MB-231 cells were subcutaneously inoculated into the dorsal right side of male athymic mice [47]. The mice were treated intratumorally with ∆9-THC or cannabidiol (5 mg/kg) twice a week for 16 days, and it was found that ∆9-THC and cannabidiol significantly reduced the volume of the xenografts. Using SCID mice, Qamri (2009) reported that both JWH-133 and WIN55,212-2 suppressed the growth of MDA-MB-231 xenografts, and that the effects were mediated via CB1 and CB2 receptors. Cannabinoids have also been reported to inhibit the metastasis of TNBC tumors [44]. Treatment with cannabidiol (i.p. injection of 5 mg/kg every 72 h for 21 days) significantly reduced metastatic lung infiltration from the primary tumors induced by injection of MDA-MB-231 cells into the left paw of the mice [47]. Similarly, treatment with JWH-133 and WIN55,212-2 was shown to reduce lung metastasis by 65% and 80%, respectively [44]. The involvement of CB1 and CB2 receptors has been reported both in vitro and in vivo: CB1 and CB2 receptor antagonists have been demonstrated to reverse the inhibitory effects of cannabinoids on cultured TNBC cells and on TNBC xenografts, suggesting that cannabinoid-induced cell death in vitro and tumor growth suppression in vivo were mediated via CB1 and CB2 receptors. Furthermore, the mediating role of the CB2 receptor has been confirmed using a CB2-targeting siRNA transfected into MDA-MB-231 cells; the results showed that the CB2 siRNA was able to block the effects of JWH-133 and WIN55,212-2 by decreasing the level of CB2 expression in the transfected cells [44]. Taken together, these studies indicate that the anticancer effects of cannabinoids in TNBC are mediated via the CB1 and CB2 receptors.
Cellular mechanisms of anticancer effects of cannabinoids

So far, several mechanisms for the anticancer effects of cannabinoids have been identified, including apoptosis induction, cell cycle arrest, anti-angiogenesis and inhibition of migration and invasion. Cannabinoids were found to induce apoptosis in glioma cells and other cancer cells in culture [48-50]. Cannabinoid treatment also increased apoptotic activity in tumors, which was associated with suppression of tumor growth [51,52]. Existing evidence indicated that cannabinoid-induced apoptosis is mediated via the activation of cannabinoid receptors, which in turn triggers the proapoptotic mitochondrial intrinsic pathway. In glioma cells and pancreatic cancer cells, activation of cannabinoid receptors results in two peaks of ceramide generation, by the mechanisms of sphingomyelin hydrolysis and de novo synthesis, respectively [53,54]. Studies showed that the second peak of ceramide accumulation accounts for the apoptotic action induced by cannabinoids [55]. The mechanism by which accumulation of the sphingolipid ceramide leads to apoptosis has been reported to be mediated by the stress-regulated protein p8, which is upregulated by ceramide accumulation; p8 upregulation leads to the upregulation of the activating transcription factor 4 (ATF-4) and the C/EBP-homologous protein (CHOP), through which apoptosis is induced [56]. Cannabinoids also cause cell cycle arrest in cancer cells of prostate carcinoma [56], thyroid epithelioma [57], breast carcinoma [44], lung carcinoma [51], and gastric carcinoma [58]. It has been suggested that activation of cannabinoid receptors leads to the inhibition of adenylyl cyclase and the cAMP/protein kinase A (PKA) pathway. As PKA inhibits Raf-1, cannabinoids relieve this inhibition of Raf-1 and consequently cause prolonged activation of the Raf-1/MEK/ERK signaling cascade. The activation of Raf-1/MEK/ERK has been found to be associated with cell cycle arrest in various cancer cells. Moreover, prolonged activation of Raf-1/MEK/ERK may induce cell cycle arrest by modulating the expression of molecules involved in cell cycle regulation, including p16Ink4a, p15Ink4b and p21Cip1, which can lead to cell cycle arrest at the G1 phase [59,60]. In addition, cannabinoid-induced cell cycle arrest in thyroid epithelioma was also reported to be mediated via the induction of the cyclin-dependent kinase inhibitor p27Kip1 [61]. For a tumor to grow beyond a minimal size, it must recruit new blood vessels by producing proangiogenic factors that promote the formation of new vessels for nutrition, gas exchange, and waste disposal. Targeting the formation of new blood vessels in the tumor, therefore, is an important approach in anticancer drug development. Studies indicated that cannabinoid treatment can change the blood vessel pattern in xenograft tumors from a hyperplastic network of dilated vessels to a pattern of blood vessels characterized by narrow, differentiated and impermeable capillaries [62]. The anti-angiogenic effects of cannabinoids are associated with inhibition of the expression of VEGF and other proangiogenic factors such as placental growth factor and angiopoietin 2 [61]. In addition, cannabinoids can also inhibit the formation of new blood vessels by downregulating the expression of VEGF receptors [61]. It has been suggested that de novo ceramide synthesis is involved in the mechanism of the anti-angiogenic action of cannabinoids.
Inhibition of de novo ceramide synthesis prevented the cannabinoid-induced inhibition of VEGF production in vitro and in vivo [39]. Recently, increasing evidence has shown that cannabinoids can also inhibit the invasiveness of cancer tumors. WIN55,212-2 and JWH-015 significantly decreased the in vitro chemotaxis and chemoinvasion of lung cancer cells and inhibited in vivo metastasis from the xenografts to the lungs [51]. Similarly, 2-methyl-arachidonyl-2'-fluoro-ethylamide (Met-F-AEA) significantly reduced the number of metastatic nodules following injection of Lewis lung cancer cells into the paw [61]. The mechanisms of the anti-migration and anti-invasion effects of cannabinoids may be related to the inhibition of Akt, which is involved in the regulation of migration. In addition, cannabinoids were also found to downregulate the expression and activity of matrix metalloproteinase-2 (MMP2), which plays an important role in tissue remodeling and is closely associated with angiogenesis, tissue repair and metastasis [44,62].

Modulation of Receptor Expression by Synthetic Cannabinoids

In normal, noncancerous human breast tissue, cannabinoid receptors are expressed at lower levels than in breast cancer tissue [44]. In breast cancer tissue, it was reported that CB2 expression was higher than CB1 expression in the same tumors, and CB2 expression seemed to correlate with the grade of the tumors [45]. Compared to ER(+) and PR(+) breast cancer tumors, CB2 mRNA levels were found to be 3.6- and 2.3-fold higher in ER(-) and PR(-) tumors, respectively [45]. Alteration in cannabinoid receptor expression was also associated with cannabinoid exposure: chronic treatment with cannabinoids led to downregulation of cannabinoid receptors [63-65]. Daily treatment with ∆9-THC for 14 days caused a 30% reduction in cannabinoid receptor binding [66]. The downregulation of cannabinoid receptors following long-term treatment with cannabinoids is attributed to the internalization and degradation of the receptors. Cannabinoid receptors belong to the G protein-coupled receptors (GPCRs); like many other GPCRs, cannabinoid receptors undergo agonist-induced or constitutive internalization from the cell membrane to low-pH endosomes, where a certain amount of the receptors is recycled back to the cell surface, another part stays in the cytoplasm and contributes to the intracellular pool of the receptors, and the rest of the endocytosed receptors are sent to lysosomes for degradation [67,68]. The intracellular reservoir of the CB1 receptor is reported to account for 85% of the total cellular CB1 cannabinoid receptor; however, the functions of the intracellular pool remain to be elucidated. As reviewed by Rozenfeld (2011), the intracellular pool of cannabinoid receptors serves as a source from which surface CB1 receptors are replenished after endocytosis [69]. However, Grimsey (2010) reported that the intracellular pool of the cannabinoid receptors does not contribute to the recycling of the cell surface receptor population [70]. The mechanisms of cannabinoid receptor internalization and recycling are not fully established. HU308-induced endocytosis of CB2 cannabinoid receptors was found to be mediated via Rab5, a small GTPase localised to early endosomes, while CB2 recycling was mediated via a recycling endosome-Rab11-dependent pathway [71]. WIN55,212-2 induced CB1 and CB2 downregulation, and CB1 receptors were sorted into lysosomal compartments for degradation by a G-protein-associated sorting protein (GASP-1) and adaptor protein 3 (AP-3) [65,72].
In contrast to ER(+) breast cancer, where the estrogen-ER-ERE pathway plays the central role in cellular processes, ER(-) breast cancer is regulated by other signalling pathways that are independent of ER. Previous findings have demonstrated that overexpression of EGFR is common in ER(-) breast cancer [73], suggesting that EGFR signalling may play important roles in regulating cancer cell fate. Tumors expressing dominant-negative EGFR present a strong reduction of vascular endothelial growth factor (VEGF) expression and an increase in the apoptotic rate, and EGFR-dependent Ha-Ras activation has a crucial role in VEGF expression, tumor angiogenesis and growth [74]. The vital importance of EGFR in various cancer types has made it a key target for cancer treatment therapies, with the general goal being EGFR downregulation. Downregulation of EGFR expression after cannabinoid treatment has previously been reported in various cancer types. Anandamide induced cell death and a decrease of EGFR levels in LNCaP, DU145 and PC3 prostate cancer cells and inhibited the EGF-stimulated growth of the cells (Mimeault, 2003). Another cannabinoid, WIN55,212-2, was reported to induce growth inhibition of PDV.C57 epidermal tumor xenografts and to decrease EGFR and phosphorylated EGFR [62].

Modulation of Downstream Signaling of EGFR and Cannabinoid Receptors

An increasing number of studies have demonstrated that p38-MAPK plays a crucial role in cannabinoid receptor downstream signalling, in which activation of p38-MAPK is associated with apoptosis induction. Cannabinoid-induced activation of p38 MAPK has been reported in cell lines in vitro and in a number of cancer types, as has p38-MAPK-related apoptosis following cannabinoid treatment in various cancer cell types. In Jurkat human leukemia cells, ∆9-THC and JWH-133 treatment induced CB2-mediated cell death and activation of p38-MAPK (Herrera, 2005). In mantle cell lymphoma (MCL) cells, R(+)-methanandamide and WIN55,212-2 were reported to induce apoptosis via a sequence of events including accumulation of de novo-synthesized ceramide, activation of p38 MAPK and depolarization of the mitochondrial membrane [75]. Cannabinoid-induced activation of p38 MAPK occurs not only in cancer cells but also in central nervous system cells, where the effect is mediated via the CB1 cannabinoid receptor. In rat and mouse hippocampal slices, anandamide, 2-arachidonoylglycerol, WIN55,212-2 and ∆9-THC activated p38 MAPK via the CB1 receptor, but the cannabinoids did not activate c-Jun N-terminal kinase (JNK), another mitogen-activated protein kinase [76]. The mechanism by which activation of cannabinoid receptors leads to activation of p38 MAPK is still not fully understood. In some cell types, G protein-coupled receptors can stimulate p38 MAPK and JNK activities via protein kinase C and the tyrosine kinase Src [76]. In other cells from the hippocampus, a specific inhibitor of the Src-family kinases, PP2, did not prevent the cannabinoid-induced activation of p38 MAPK, suggesting that activation of p38 MAPK by cannabinoids is independent of Src-family kinases in these cells [76]. Another mechanism for p38 MAPK activation is that CB1 activation stimulates PI3K [77], which in turn can act upstream of p38 MAPK in certain cell types [78]. Once activated, p38 MAPK targets Bcl-2 family proteins.
Specifically, Cai (2006) reported that apoptosis of PC12 pheochromocytoma cells upon sodium arsenite treatment may be due to direct phosphorylation of the Bcl-2 family protein Bim at Ser-65 by p38 MAPK [79]. Alternatively, p38 MAPK can also act upstream of caspase activation [80]. It is commonly assumed that ERK activation leads to cell proliferation. However, increasing evidence indicates that the biological effects of ERK activation depend on several factors, among which the duration of ERK activation is key: prolonged activation of ERK induces cell death via apoptosis rather than proliferation [40,81]. Mechanisms for cannabinoid-stimulated activation of ERK include prolonged accumulation of ceramide and/or inhibition of the adenylyl cyclase (AC)-protein kinase A (PKA) pathway. Activation of cannabinoid receptors leads to two peaks of ceramide generation; the short-term peak is associated with sphingomyelin hydrolysis via sphingomyelinase, while the long-term peak is associated with palmitoyltransferase induction and enhanced ceramide synthesis de novo [77,82]. The second peak of ceramide accumulation is related to ERK activation and apoptosis through a mechanism in which ceramide directly binds to the ceramide-binding motif of Raf-1 and results in Raf-1 activation. In the other mechanism, cannabinoids inhibit the AC/PKA pathway; since PKA normally exerts an inhibitory effect on Raf-1, removing this inhibition allows cannabinoids to indirectly activate MEK/ERK and consequently induce apoptosis [83]. Taken together, increasing evidence suggests that activation of the MAPK pathways is likely to play a crucial role in the tumor suppression effect of cannabinoids.

NF-κB is a key regulator of the genes involved in immune responses and anti-apoptotic activities [84]. Importantly, NF-κB belongs to the downstream signaling pathways of both the cannabinoid receptors and EGFR. Activated NF-κB is found in many cancer types, and studies have shown that activation of NF-κB can initiate the expression of genes encoding anti-apoptotic proteins, angiogenic factors, cell cycle regulators and growth factors, which promote the formation and development of malignant tumors. Conversely, inhibition of NF-κB results in increased apoptotic activity and cell cycle arrest, consequently inducing tumor regression. Therefore, NF-κB is an important target for cancer treatment therapies. NF-κB inhibition by cannabinoids and other compounds has been reported in a number of in vitro studies and in other cancer types. The inhibition of NF-κB in the tumors may be partly responsible for the tumor regression observed in mice treated with WIN55,212-2. Similarly, AS602868, a specific inhibitor of IKK2, blocked NF-κB activation and led to apoptosis in human primary acute myeloid leukemia (AML) cells [85]. Non-specific inhibitors of NF-κB, such as anti-inflammatory agents and non-steroidal anti-inflammatory drugs (NSAIDs), mediate the regression of adenomatous polyps of the colon, prevent the development of colon cancer, and increase cancer cell apoptosis in existing malignant tumors [86]. In A549 lung adenocarcinoma epithelial cells, anandamide was found to inhibit TNFα-induced NF-κB activation by direct inhibition of the IKKβ and IKKα subunits of the κB inhibitor (IκB) kinase (IKK) complex [87]. To determine the involvement of cannabinoid receptors, Sancho (2003) utilized the A549 and 5.1 cell lines, which express only CB1 and CB2 cannabinoid receptors, respectively.
The results showed that anandamide-induced NF-κB inhibition was independent of both CB1 and CB2 cannabinoid receptors. WIN55,212-2-induced downregulation of NF-κB expression suggests a potential for the compound to be used as an adjuvant treatment, as NF-κB activation may induce the expression of the multidrug-resistance P-glycoprotein, and inhibition of NF-κB has been shown to increase the apoptotic response to chemotherapy and radiation therapy [88]. Also, mere inhibition of NF-κB may be insufficient for a pronounced apoptotic response; therefore, combinations of NF-κB inhibitors and conventional chemo/radiotherapies may enhance the effectiveness of the treatment and reduce the risk of drug resistance. WIN55,212-2 inhibited the expression of NF-κB in vivo; it would therefore be interesting to further evaluate WIN55,212-2 as a potential adjuvant treatment for TNBC by combining WIN55,212-2 treatment with cytotoxic agents such as anthracyclines or platinum compounds, and/or with radiation therapy, to determine whether co-treatment with WIN55,212-2 can increase the sensitivity of TNBC cancer cells to chemo/radiotherapies, thereby enabling a decreased dose.

Besides the MAPK pathways, the survival PI3K/Akt/mTOR pathway is also involved in the downstream signaling of both the cannabinoid receptors and EGFR. It has been well established that the PI3K/Akt/mTOR pathway plays a pivotal role in cellular survival processes, including cell growth, proliferation, invasion and migration. Common results from previous studies indicate that the inhibitory effects of cannabinoids in cancer cells are frequently associated with Akt inhibition, and conversely, sustained activation of Akt is related to the protective effects of cannabinoids in neuronal cells. Therefore, the PI3K/Akt/mTOR pathway is considered an important target for novel treatments. Cannabinoid-induced inhibition of Akt was found to result from de novo synthesis of ceramide. In glioma cells, cannabinoids induce intracellular ceramide accumulation through sphingomyelin hydrolysis and ceramide synthesis de novo [54,89]. In turn, de novo synthesized ceramide leads to activation of ERK and inhibition of Akt. Blockade of the de novo synthesis of ceramide by L-cycloserine prevented THC-induced ERK activation and THC-induced Akt inhibition [77]. In addition to Akt inhibition, Akt activation by cannabinoid treatment has also been reported. Lung carcinoma and glioblastoma cells treated with THC, WIN55,212-2 or HU-210 showed activation of both ERK and Akt. The activating effect was abolished by blockade of EGFR signal transactivation with the selective EGFR inhibitor AG1478 or with the metalloprotease inhibitor BB94, suggesting that the cannabinoid-induced activation of ERK and Akt was dependent on EGFR function [90]. Co-activation of ERK and Akt following cannabinoid treatment was reported to protect astrocytes from ceramide-induced apoptosis [77].

Mixed actions of cannabinoids in other cancer types

A number of studies have also reported that cannabinoids increased the proliferation of a number of cancer cell types. One possible reason could be that cannabinoids exert different biological effects depending on the cell type and its expression levels of cannabinoid receptors. Cannabinoids were found to have protective effects towards cultured neurons against excitotoxicity. In contrast, studies have also demonstrated cytotoxic effects of cannabinoids on various cancer cells.
Furthermore, even within the same cell type, cannabinoids have been reported to exert different effects. ∆9-THC induced hippocampal neuron death through a neuronal apoptosis mechanism [91]; the compound was also reported to protect spinal neurons from excitotoxicity produced by kainate [92]. Beyond the dependence on cell type, there is evidence that cannabinoids may act in different directions at different ranges of concentrations. Hart et al. reported that THC only induced apoptosis in cancer cells at relatively high concentrations; in contrast, nanomolar concentrations of THC accelerated the proliferation of the cancer cells through an EGFR- and metalloprotease-dependent mechanism [93]. The difference is important with regard to clinical relevance, as after oral or rectal administration of THC or its derivatives, the maximum serum concentrations of THC were only 35-350 nM [94,95].

Evidence suggests that some cannabinoids can trigger the proliferation of cancer cells at suitably low concentrations. In a study by Sanchez et al., THC and R-(+)-methanandamide (MET) at nanomolar concentrations induced accelerated proliferation of PC-3 prostate cancer cells. The stimulation was associated with cannabinoid-induced activation of the phosphoinositide 3-kinase (PI3K) cascade and of nerve growth factor (NGF) synthesis, a neurotrophic factor previously reported to be involved in the proliferation of prostate cells [96]. In other cell types, glioblastoma and lung carcinoma, THC at nanomolar concentrations also accelerated the proliferation of cancer cells, and this effect was mediated by EGFR activity through a mechanism in which cannabinoid-induced EGFR transactivation was mediated via metalloprotease and tumor necrosis factor α-converting enzyme (TACE/ADAM17) [90,93]. JWH-133, with the chemical name 3-(1′,1′-dimethylbutyl)-1-deoxy-Δ8-THC, possesses a molecular structure that is remarkably similar to that of Δ9-THC [97,98]. It may therefore potentially possess a similar ability to induce accelerated proliferation in cancer cells, as observed with Δ9-THC.

Another possible mechanism for a cannabinoid stimulation effect on tumor growth is that cannabinoids promote the formation of new blood vessels, which better nourish tumor cells and make the tumor grow faster. It was previously reported that THC induced activation of NGF synthesis [96]; and in a study by Romon, recombinant NGF and NGF produced by breast cancer cells promoted breast cancer angiogenesis and endothelial cell invasion [99]. NGF increased the secretion of VEGF in both endothelial and breast cancer cells. NGF also led to activation of the PI3K/Akt pathway, which was previously reported to be activated by THC as well [96,100].
In another respect, it has been established that EGFR activation may stimulate the invasion, angiogenesis and metastasis of cancer tumors [as reviewed by 101]. This may explain why cetuximab, an EGFR monoclonal antibody, is found to be more effective in vivo, where the drug can exert its effects on invasion, angiogenesis and metastasis, than in vitro [102]. THC was found to lead to EGFR transactivation and activation of the ERK and Akt/PKB survival pathways in an EGFR-dependent manner [93]. Therefore, further models for assessing the possibility that cannabinoids promote cancer growth are needed.
Submillimeter H2O and H2O+ emission in lensed ultra- and hyper-luminous infrared galaxies at z ~ 2-4 (abridged)

We report rest-frame submillimeter H2O emission line observations of 11 HyLIRGs/ULIRGs at z ~ 2-4, selected among the brightest lensed galaxies discovered in the Herschel-ATLAS. Using IRAM NOEMA, we have detected 14 new H2O emission lines. The apparent luminosities of the H2O emission lines are $\mu L_{\rm H_2O} \sim 6-21 \times 10^8 L_\odot$, with velocity-integrated line fluxes ranging from 4 to 15 Jy km s$^{-1}$. We have also observed CO emission lines using EMIR on the IRAM 30m telescope in seven sources. The velocity widths of the CO and H2O lines are found to be similar. With integrated flux densities almost comparable to those of the high-J CO lines, H2O is among the strongest molecular emitters in high-z Hy/ULIRGs. With our new detections, we also confirm our previously found correlation between the luminosity of H2O ($L_{\rm H_2O}$) and infrared luminosity ($L_{\rm IR}$), $L_{\rm H_2O} \sim L_{\rm IR}^{1.1-1.2}$. This correlation could be explained by a dominant role of far-infrared (FIR) pumping in the H2O excitation. Modelling reveals that the FIR radiation fields have warm dust temperatures $T_{\rm warm} \sim$ 45-75 K, H2O column densities per unit velocity interval $N_{\rm H_2O}/\Delta V \gtrsim 0.3 \times 10^{15}$ cm$^{-2}$ km$^{-1}$ s and 100 $\mu$m continuum opacities $\tau_{100} > 1$ (optically thick), indicating that H2O is likely to trace highly obscured warm dense gas. However, further observations of $J \geq 4$ H2O lines are needed to better constrain the continuum optical depth and other physical conditions of the molecular gas and dust. We have also detected H2O+ emission in three sources. A tight correlation between $L_{\rm H_2O}$ and $L_{\rm H_2O^+}$ has been found in galaxies from low to high redshift. The velocity-integrated flux density ratio between H2O+ and H2O suggests that cosmic rays generated by strong star formation are possibly driving the H2O+ formation.

Introduction

After molecular hydrogen (H2) and carbon monoxide (CO), the water molecule (H2O) can be one of the most abundant molecules in the interstellar medium (ISM) of galaxies. It provides important diagnostics of various physical and chemical processes in the ISM (e.g. van Dishoeck et al. 2013, and references therein). Prior to the Herschel Space Observatory (Pilbratt et al. 2010; Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA), non-maser H2O rotational transitions in extragalactic sources had only been detected by the Infrared Space Observatory (ISO, Kessler et al. 1996), in the form of far-infrared absorption lines (González-Alfonso et al. 2004, 2008). Observations of local infrared-bright galaxies by Herschel have revealed a rich spectrum of submillimeter (submm) H2O emission lines (submm H2O refers to rest-frame submillimeter H2O emission throughout this paper unless otherwise specified). Many of these lines are emitted from high-excitation rotational levels with upper-level energies up to E_up/k = 642 K (e.g. van der Werf et al. 2010; González-Alfonso et al. 2010, 2012; Rangwala et al. 2011; Kamenetzky et al. 2012; Spinoglio et al. 2012; Meijerink et al. 2013; Pellegrini et al. 2013; Pereira-Santaella et al. 2013). Excitation analysis of these lines has revealed that
they are probably excited through absorption of far-infrared photons from thermal dust emission in warm dense regions of the ISM (e.g. González-Alfonso et al. 2010). Therefore, unlike the canonical CO lines that trace collisional excitation of the molecular gas, these H2O lines represent a powerful diagnostic of the far-infrared radiation field.

Using the Herschel archive data, Yang et al. (2013, hereafter Y13) undertook a first systematic study of submm H2O emission in local infrared galaxies. H2O was found to be the strongest molecular emitter after CO within the submm band in those infrared-bright galaxies, in some local ULIRGs even with a higher flux density than CO (the velocity-integrated flux density of H2O(3_21-3_12) is larger than that of CO(5-4) in four galaxies out of 45 in the Y13 sample). The luminosities of the submm H2O lines (L_H2O) are near-linearly correlated with the total infrared luminosity (L_IR, integrated over 8-1000 µm) over three orders of magnitude. The correlation is revealed to be a straightforward result of far-infrared pumping: H2O molecules are excited to higher energy levels through absorbing far-infrared photons, and the upper-level molecules then cascade toward the lines we observed in an almost constant fraction (Fig. 1). Although galaxies dominated by active galactic nuclei (AGN) have somewhat lower ratios of L_H2O/L_IR, there does not appear to be a link between the presence of an AGN and the submm H2O emission (Y13). The H2O emission is likely to trace the far-infrared radiation field generated in star-forming nuclear regions in galaxies, explaining its tight correlation with the far-infrared luminosity.

Besides detections of the H2O lines in local galaxies from space telescopes, redshifted submm H2O lines in high-redshift lensed ultra- and hyper-luminous infrared galaxies (ULIRGs, 10^13 L_sun > L_IR >= 10^12 L_sun; HyLIRGs, L_IR >= 10^13 L_sun) can also be detected by ground-based telescopes in atmospheric windows with high transmission. Strong gravitational lensing boosts the flux and allows one to detect the H2O emission lines easily. Since our first detection of submm H2O in a lensed Herschel source at z = 2.3 (Omont et al. 2011) using the IRAM NOrthern Extended Millimeter Array (NOEMA), several individual detections at high redshift have been reported (Lis et al. 2011; van der Werf et al. 2011; Bradford et al. 2011; Combes et al. 2012; Lupu et al. 2012; Bothwell et al. 2013; Omont et al. 2013; Vieira et al. 2013; Weiß et al. 2013; Rawle et al. 2014). These numerous and easy detections of H2O in high-redshift lensed ULIRGs show that its lines are the strongest submm molecular lines after CO and may be an important tool for studying these galaxies.

We have carried out a series of studies focussing on submm H2O emission in high-redshift lensed galaxies since our first detection. Through the detection of J = 2 H2O lines in seven high-redshift lensed Hy/ULIRGs reported by Omont et al. (2013, hereafter O13), a slightly super-linear correlation between L_H2O and L_IR (L_H2O ∝ L_IR^1.2) was found from local ULIRGs to high-redshift lensed Hy/ULIRGs. This result may again imply that far-infrared pumping is important for the H2O excitation in high-redshift extreme starbursts.
The average ratios of L_H2O to L_IR for the J = 2 H2O lines in the high-redshift sources tend to be 1.8 ± 0.9 times higher than those seen locally (Y13). This shows that the same physics of infrared pumping should dominate the H2O excitation in ULIRGs at low and high redshift, with some specificity at high redshift probably linked to the higher luminosities.

Modelling provides additional information about the H2O excitation. For example, through LVG modelling, Riechers et al. (2013) argue that the excitation of the submm H2O emission in the z ∼ 6.3 submm galaxy HFLS3 is dominated by far-infrared pumping. Modelling of the local Herschel galaxies of Y13 was carried out by González-Alfonso et al. (2014, hereafter G14). They confirm that far-infrared pumping is the dominant mechanism responsible for the submm H2O emission in extragalactic sources (except for the ground-state emission transitions, such as the para-H2O transition 1_11-0_00). Moreover, collisional excitation of the low-lying (J ≤ 2) H2O lines can also enhance the radiative pumping of the high-lying (J ≥ 3) lines. The ratio between low-lying and high-lying H2O lines is sensitive to the dust temperature (T_d) and the H2O column density (N_H2O). From modelling the average of local star-forming- and mild-AGN-dominated galaxies, G14 show that the submm H2O emission comes from regions with N_H2O ∼ (0.5-2) × 10^17 cm^-2 and a 100 µm continuum opacity of τ_100 ∼ 0.05-0.2, where H2O is mainly excited by warm dust with temperatures in the range 45-75 K. H2O lines thus provide key information about the properties of the dense cores of ULIRGs, that is, their H2O content and the infrared radiation field with the corresponding temperature of the dust, which is warmer than the outer layers of the cores and dominates the far-infrared emission. Observations of the submm H2O emission, together with appropriate modelling and analysis, therefore allow us to study the properties of the far-infrared radiation sources in great detail.

So far, an excitation analysis combining both low- and high-lying H2O emission has only been done in a few case studies. Using H2O excitation modelling that considers both collisions and far-infrared pumping, González-Alfonso et al. (2010) and van der Werf et al. (2011) estimated the sizes of the far-infrared radiation fields in Mrk 231 and APM 08279+5255 (APM 08279 hereafter), which are not directly resolved by the observations, and suggested their AGN dominance based on the total enclosed energies. This again demonstrates that submm H2O emission is a powerful diagnostic tool that can even transcend the angular resolution of the telescopes.

The detection of submm H2O emission in Herschel-ATLAS (Eales et al. 2010, H-ATLAS hereafter) sources through gravitational lensing allows us to characterise the far-infrared radiation field generated by intense star-forming activity, and possibly AGN, and to learn the physical conditions of the warm dense gas phase in extreme starbursts in the early Universe. Unlike standard dense gas tracers such as HCN, which is weaker at high redshift compared to local ULIRGs (Gao et al. 2007), submm H2O lines are strong and even comparable to high-J CO lines in some galaxies (Y13; O13). Therefore, H2O is an efficient tracer of the warm dense gas phase that makes up a major fraction of the total molecular gas mass in high-redshift Hy/ULIRGs (Casey et al. 2014).
The successful detections of submm H2O lines in both the local (Y13) and the high-redshift Universe (O13) show the great potential of a systematic study of H2O emission in a large sample of infrared galaxies over a wide range in redshift (from local up to z ∼ 4) and luminosity (L_IR ∼ 10^10-10^13 L_sun). However, our previous high-redshift sample was limited to seven sources and to one J = 2 para-H2O line (E_up/k = 100-127 K) per source (O13). In order to further constrain the conditions of the H2O excitation, to confirm the dominant role of far-infrared pumping and to learn the physical conditions of the warm dense gas phase in high-redshift starbursts, it is essential to extend the studies to higher excitation lines. We thus present and discuss here the results of such new observations of a strong J = 3 ortho-H2O line with E_up/k = 304 K in six strongly lensed H-ATLAS galaxies at z ∼ 2.8-3.6, in which a second, lower-excitation J = 2 para-H2O line was also observed (see Fig. 1 for the transitions and the corresponding E_up). We describe our sample, observations and data reduction in Section 2. The observed properties of the high-redshift submm H2O emission are presented in Section 3. The lensing properties, the L_H2O-L_IR correlation, the H2O excitation, the comparison between H2O and CO, and AGN contamination are discussed in Section 4. Section 5 describes the detection of H2O+ lines. We summarise our results in Section 6. A flat ΛCDM cosmology with H_0 = 71 km s^-1 Mpc^-1, Ω_M = 0.27 and Ω_Λ = 0.73 (Spergel et al. 2003) is adopted throughout this paper.

Sample and observation

Our sample consists of eleven extremely bright high-redshift sources with F_500µm > 200 mJy discovered by the H-ATLAS survey (Eales et al. 2010). Together with the seven similar sources reported in our previous H2O study (O13), they include all but two of the brightest high-redshift H-ATLAS sources (F_500µm > 170 mJy) imaged at 880 µm with the SMA by Bussmann et al. (2013, hereafter B13). In agreement with the selection following the methods of Negrello et al. (2010), the detailed lensing modelling performed by B13 has shown that all of them are strongly lensed but one, G09v1.124 (Ivison et al. 2013, see below). The sample of our present study is thus well representative of the brightest high-redshift submillimeter sources with F_500µm > 200 mJy (with apparent total infrared luminosities ∼ 5-15 × 10^13 L_sun and z ∼ 1.5-4.2) found by H-ATLAS in its equatorial ('GAMA') and north-galactic-pole ('NGP') fields, covering ∼ 300 deg^2 with a surface density of ∼ 0.05 deg^-2.

In our previous project (O13), we observed H2O in seven strongly lensed high-redshift H-ATLAS galaxies from the B13 sample. In this work, in order to observe the high-excitation ortho-H2O(3_21-3_12) line with a rest frequency of 1162.912 GHz with IRAM/NOEMA, we selected the brightest sources at 500 µm with z ≳ 2.8, so that the redshifted lines could be observed in a reasonably good atmospheric window at ν_obs ≲ 300 GHz. Eight sources with such redshifts were selected from the B13 H-ATLAS sample. B13 provide lensing models, magnification factors (µ) and inferred intrinsic properties of these galaxies, and list their CO redshifts, which come from Harris et al. (2012); Harris et al. (in prep.); Lupu et al. (in prep.); Krips et al. (in prep.) and Riechers et al. (in prep.).
In our final selection of the sample to be studied in the H2O(3_21-3_12) line, we removed two sources, SDP 81 and G12v2.30, that had previously been observed in H2O (O13; see also ALMA Partnership, Vlahakis et al. 2015 for SDP 81), because their J = 2 H2O emission is too weak and/or the interferometry could resolve out some flux given the lensing image. The observed high-redshift sample thus consists of two GAMA-field sources, G09v1.97 and G12v2.43, and four sources in the H-ATLAS NGP field: NCv1.143, NAv1.195, NAv1.177 and NBv1.78 (Tables 1 and 2). Among the six remaining sources at redshifts between 2.8 and 3.6, only one, NBv1.78, had been observed previously in a low-excitation line, para-H2O(2_02-1_11) (O13). Therefore, we observed both a para-H2O line (2_02-1_11 or 2_11-2_02) and ortho-H2O(3_21-3_12) in the other five sources, in order to compare their velocity-integrated flux densities. In addition, we also observed five sources, mostly at lower redshifts, in the para-H2O lines 2_02-1_11 or 2_11-2_02 (Tables 1 and 2) to complete the sample of our H2O low-excitation study. They are three strongly lensed sources, G09v1.40, NAv1.56 and SDP 11, a hyper-luminous cluster source, G09v1.124 (Ivison et al. 2013), and a z ∼ 3.7 source, NCv1.268, for which we did not propose a J = 3 H2O observation, considering its large linewidth, which could make the line detection difficult.

As our primary goal is to obtain a detection of the submm H2O lines, we carried out the observations in the compact, D configuration of NOEMA. The baselines extended from 24 to 176 m, resulting in synthesised beams of modest/low resolution, from ∼ 1.0" × 0.9" to ∼ 5.6" × 3.3", as shown in Table 1. The H2O observations were conducted from January 2012 to December 2013 in good atmospheric conditions (seeing of 0.3"-1.5"), with good stability and reasonable transparency (PWV ≤ 1 mm). The total on-source time was ∼ 1.5-8 hours per source. The 2 mm, 1.3 mm and 0.8 mm bands, covering 129-174, 201-267 and 277-371 GHz, respectively, were used. All the central observing frequencies were chosen based on previous redshifts given by B13 according to the previous CO detections (Table 2). In all cases but one, the frequencies of our detected H2O lines are consistent with these CO redshifts. The only exception is G09v1.40, where our H2O redshift disagrees with the redshift of z = 2.0894 ± 0.0009 given by Lupu et al. (in prep.), which is quoted by B13. We find z = 2.0925 ± 0.0001, in agreement with previous CO(3-2) observations (Riechers et al., in prep.).

We used the WideX correlator, which provided a contiguous frequency coverage of 3.6 GHz in dual polarisation with a fixed channel spacing of 1.95 MHz. The phase and bandpass were calibrated by measuring standard calibrators that are regularly monitored at IRAM/NOEMA, including 3C279, 3C273, MWC349 and 0923+392. The accuracy of the flux calibration is estimated to range from ∼10% in the 2 mm band to ∼20% in the 0.8 mm band. Calibration, imaging, cleaning and spectra extraction were performed within the GILDAS packages CLIC and MAPPING.

Notes (Table 3). I_CO is the velocity-integrated flux density of CO; ΔV_CO is the linewidth (FWHM) derived from fitting a single Gaussian to the line profile.

To compare the H2O emission with the typical molecular gas tracer, CO, we also observed the sources in CO lines using the EMIR receiver at the IRAM 30m telescope.
The CO data will be part of a systematic study of molecular gas excitation in H-ATLAS lensed Hy/ULIRGs; a full description of the data and the scientific results will be given in a following paper (Yang et al., in prep.). The global CO emission properties of the sources are listed in Table 3, where we give the CO fluxes and linewidths. A brief comparison of the H2O and CO emission is given in Section 4.3.

Results

A detailed discussion of the observational results for each source is given in Appendix A, including the strength of the H2O emission, the spatial extent of the H2O lines and the continuum (Fig. A.1), the H2O spectra and linewidths (Fig. 2) and their comparison with CO (Table 3). We give a synthesis of these results in this section.

General properties of the H2O emission

To measure the linewidth, velocity-integrated flux density and continuum level of the spectra at the source peak and over the entire source, we extract each spectrum from the CLEANed image, both at the position of the source peak in a single synthesised beam and integrated over the entire source. We then fit them with Gaussian profiles using MPFIT (Markwardt 2009).

We detect the high-excitation ortho-H2O(3_21-3_12) line in five out of the six observed sources, with high signal-to-noise ratios (S/N > 9) and velocity-integrated flux densities comparable to those of the low-excitation J = 2 para-H2O lines (Table 4 and Figs. 2 & A.1). We also detect nine out of eleven J = 2 para-H2O lines, either 2_02-1_11 or 2_11-2_02, with S/N ≥ 6 in terms of their velocity-integrated flux density, plus one tentative detection of H2O(2_02-1_11) in SDP 11. We present the values of the velocity-integrated H2O flux density detected at the source peak in a single synthesised beam, I_H2O^pk, and the velocity-integrated H2O flux density over the entire source, I_H2O (Tables 3 & 4 and Section 4.3). The majority of the images (7/11 for the J = 2 lines and 3/4 for J = 3) are marginally resolved, with I_H2O^pk/I_H2O ∼ 0.4-0.7, and show somewhat lensed structures. The others are unresolved, with I_H2O^pk/I_H2O > 0.8. All continuum flux densities (S_ν(ct)^pk for the emission peak and S_ν(ct) for the entire source) are very well detected (S/N ≥ 30), with total flux densities S_ν(ct) in the range 9-64 mJy. The ratios S_ν(ct)^pk/S_ν(ct) and S_ν(H2O)^pk/S_ν(H2O) are in good agreement within the errors, except for NCv1.143, where S_ν(ct)^pk/S_ν(ct) = 0.55 ± 0.01 and S_ν(H2O)^pk/S_ν(H2O) = 0.74 ± 0.16, so that the J = 3 ortho-H2O emission appears more compact than the dust continuum. Generally, it seems unlikely that a significant fraction of the flux is missing for our sources. Nevertheless, the low angular resolution (∼ 1" at best) limits the study of the spatial distribution of the gas and dust in our sources. A detailed analysis of the images for each source is given in Appendix A.

The majority of the sources have H2O (and CO) linewidths between 210 and 330 km s^-1, while the four others range between 500 and 700 km s^-1 (Table 4). Except for NCv1.268, which shows a double-peaked line profile, all H2O lines are well fit by a single Gaussian profile (Fig. 2). The line profiles of the J = 2 and J = 3 H2O lines do not seem to be significantly different, as shown by the linewidth ratios, which range from 1.26 ± 0.14 to 0.84 ± 0.16.
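To make the profile-fitting step above concrete, the following is a minimal sketch of a single-Gaussian line fit. The paper uses MPFIT (Markwardt 2009); here scipy.optimize.curve_fit is swapped in as a stand-in, and the spectrum is synthetic, so all numerical values are placeholders rather than measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(v, peak, v0, fwhm, cont):
    """Gaussian emission line on top of a flat continuum."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return cont + peak * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Synthetic spectrum: a 300 km/s FWHM line (25 mJy peak) on a 10 mJy continuum.
rng = np.random.default_rng(0)
velocity = np.linspace(-1500.0, 1500.0, 120)            # km/s
flux = (gaussian_line(velocity, 25.0, 0.0, 300.0, 10.0)
        + rng.normal(0.0, 1.0, velocity.size))          # mJy

popt, pcov = curve_fit(gaussian_line, velocity, flux, p0=[20.0, 0.0, 250.0, 8.0])
peak, v0, fwhm, cont = popt
# Velocity-integrated flux density of the Gaussian component, in Jy km/s:
I_line = peak * fwhm * np.sqrt(np.pi / (4.0 * np.log(2.0))) / 1000.0
print(f"FWHM = {fwhm:.0f} km/s, I_line = {I_line:.1f} Jy km/s")
```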
The magnification from strong lensing is very sensitive to the spatial configuration; in other words, differential lensing could lead to different line profiles if the different velocity components of the line are emitted at different spatial positions. Since there is no visible differential effect between their profiles, it is possible that the J = 2 and J = 3 H2O lines arise from similar spatial regions.

Lensing properties

All our sources are strongly gravitationally lensed (except G09v1.124, see Appendix A.11), which increases the line flux densities and allows us to study the H2O emission in an affordable amount of observing time. However, the complexity of the lensed images complicates the analysis. As mentioned above, most of our lensed images are either unresolved or marginally resolved. Thus, we will not discuss here the spatial distribution of the H2O and dust emission through gravitational lensing modelling. However, we should keep in mind that the correction for the magnification is a crucial part of our study. In addition, differential lensing could have a significant influence when comparing H2O emission with dust emission, and even when comparing different transitions of the same molecular species (Serjeant 2012), especially for emission arising close to the caustics. In order to infer the intrinsic properties of our sample, especially L_H2O as in our first paper O13, we adopted the lensing magnification factors µ (Table 2) computed from the modelling of the 880 µm SMA images (B13). As shown in the Appendix, the ratios S_ν(ct)^pk/S_ν(ct) and S_ν(H2O)^pk/S_ν(H2O) are in good agreement within the uncertainties. Therefore, it is unlikely that the magnifications of the 880 µm continuum image and of H2O are significantly different. However, B13 were unable to provide a lensing model for two of our sources, G12v2.43 and NAv1.177, because their lens deflector is unidentified. This does not affect the modelling of the H2O excitation nor the comparison of H2O and infrared luminosities, since the differential lensing effect seems to be insignificant, as discussed in Section 4 and Appendix A.

Discussion

Using the formula given by Solomon et al. (1992), L_H2O = 1.04 × 10^-3 I_H2O ν_rest (1+z)^-1 D_L^2, where L_H2O is the line luminosity in L_sun, I_H2O the velocity-integrated flux density in Jy km s^-1, ν_rest the rest frequency in GHz and D_L the luminosity distance in Mpc, and applying the lensing magnification correction (taking the values of µ from B13), we have derived the intrinsic H2O luminosities (Table 5). The error on each luminosity includes the uncertainties from both the observation and the gravitational lensing modelling. After correcting for lensing, the H2O luminosities of our high-redshift galaxies appear to be one order of magnitude higher than those of local ULIRGs, as are their infrared luminosities (Table 5), so that many of them should rather be considered HyLIRGs than ULIRGs. The ratio L_H2O/L_IR in our high-redshift sample is nevertheless close to that of local ULIRGs (Y13), with a slight statistical increase at the extreme high-L_IR end (Fig. 3). As displayed in Fig. 3 for the three observed H2O lines, because we have extended the number of detections to 21 H2O lines, distributed over 16 sources and 3 transitions, we may independently study the correlations of L_H2O(2_02-1_11) and L_H2O(2_11-2_02) with L_IR, whereas we had approximately combined the two lines in O13. As found in O13, the correlation is slightly steeper than linear (L_H2O ∼ L_IR^1.2).
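As an illustration of this step, the sketch below evaluates the Solomon et al. (1992) relation and a log-log slope fit. The cosmology matches the one adopted in this paper; numpy.polyfit is a deliberately simplified stand-in for the Bayesian linmix_err sampler (Kelly 2007) used below, ignoring measurement errors, and all sample values are hypothetical.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71.0, Om0=0.27)  # cosmology adopted in this paper

def line_luminosity(I_jykms, nu_rest_ghz, z, mu=1.0):
    """L_line [Lsun] = 1.04e-3 I nu_rest (1+z)^-1 D_L^2, corrected by mu."""
    d_l = cosmo.luminosity_distance(z).value  # Mpc
    return 1.04e-3 * I_jykms * nu_rest_ghz * d_l**2 / ((1.0 + z) * mu)

# Hypothetical example: I = 9 Jy km/s of H2O(3_21-3_12) at z = 3, mu = 10.
print(f"L_H2O ~ {line_luminosity(9.0, 1162.912, 3.0, mu=10.0):.2e} Lsun")

# Slope alpha in L_H2O ∝ L_IR^alpha from hypothetical (L_IR, L_H2O) pairs:
L_IR = np.array([2e12, 5e12, 1e13, 3e13])    # Lsun
L_H2O = np.array([2e8, 6e8, 1.3e9, 4.5e9])   # Lsun
alpha, intercept = np.polyfit(np.log10(L_IR), np.log10(L_H2O), 1)
print(f"alpha ~ {alpha:.2f}")
```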
To broaden the dynamical range of this comparison, we also included the local ULIRGs from Y13, together with a few other H2O detections in high-redshift Hy/ULIRGs, namely HLSJ 0918 (HLSJ 091828.6+514223) (Combes et al. 2012; Rawle et al. 2014), APM 08279 (van der Werf et al. 2011), SPT 0538 (SPT-S J053816-5030.8) (Bothwell et al. 2013) and HFLS3 (Riechers et al. 2013, with the magnification factor from Cooray et al. 2014) (Fig. 3). In the fitting, however, we excluded the sources with heavy AGN contamination (Mrk 231 and APM 08279) or with missing flux resolved out by the interferometry (SDP 81). We also excluded the H2O(3_21-3_12) line of HFLS3, since its unusually high L_H2O(3_21-3_12)/L_IR ratio, discussed below, could bias our fitting. We performed a linear regression in log-log space using the Metropolis-Hastings Markov chain Monte Carlo (MCMC) sampler implemented in linmix_err (Kelly 2007) to derive the slope α in L_H2O ∝ L_IR^α. The fitted parameters are α = 1.06 ± 0.19, 1.16 ± 0.13 and 1.06 ± 0.22 for the H2O lines 2_02-1_11, 2_11-2_02 and 3_21-3_12, respectively. Compared with the local ULIRGs, the high-redshift lensed ones have higher L_H2O/L_IR ratios (Table 6). These slopes confirm our first result derived from seven H2O detections in O13. The slightly super-linear correlations seem to indicate that far-infrared pumping plays an important role in the excitation of the submm H2O emission. This is unlike the high-J CO lines, which are determined by collisional excitation and follow a linear correlation between the CO line luminosity and L_IR from the local to the high-redshift Universe (Liu et al. 2015).

As demonstrated in G14, using the far-infrared pumping model, the steeper-than-linear growth of L_H2O with L_IR can be the result of an increasing optical depth at 100 µm (τ_100) with increasing L_IR. In local ULIRGs, the ratio L_H2O/L_IR is relatively low, and most of them are likely to be optically thin (τ_100 ∼ 0.1, G14). On the other hand, for the high-redshift lensed Hy/ULIRGs with high values of L_IR, the continuum optical depth at far-infrared wavelengths is expected to be high (see Section 4.2), indicating that the H2O emission comes from very dense regions of molecular gas that are heavily obscured.

Similar to what we found in the local ULIRGs (Y13), we again find an anti-correlation between T_d and L_H2O(3_21-3_12)/L_IR. The Spearman's rank correlation coefficient for the five H2O(3_21-3_12)-detected H-ATLAS sources is ρ = −0.9, with a two-sided significance of its deviation from zero of p = 0.04. However, after including the non-detection of H2O(3_21-3_12) in NAv1.195, the correlation is much weaker, with ρ ∼ −0.5 and p ∼ 0.32. No significant correlation is found between T_d and L_H2O(2_02-1_11)/L_IR (ρ = −0.1 and p = 0.87) nor L_H2O(2_11-2_02)/L_IR (ρ = −0.3 and p = 0.45). As explained in G14, in optically thick and very warm galaxies, the ratio L_H2O(3_21-3_12)/L_IR is expected to decrease with increasing T_d, and this anti-correlation cannot be explained by optically thin conditions. However, a larger sample is needed to increase the statistical significance of this anti-correlation.
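The rank statistics quoted above are straightforward to reproduce with scipy; a minimal sketch follows, with placeholder arrays rather than the actual Table 6 values.

```python
import numpy as np
from scipy.stats import spearmanr

T_d = np.array([36.0, 40.0, 42.0, 44.0, 48.0])      # K, hypothetical values
ratio = np.array([8e-6, 7e-6, 6.5e-6, 5e-6, 4e-6])  # L_H2O/L_IR, hypothetical

rho, p = spearmanr(T_d, ratio)  # two-sided p-value of deviation from zero
print(f"rho = {rho:.2f}, p = {p:.3f}")  # a strict anti-correlation gives rho = -1
```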
Although it is important to stress that the luminosity of H2O is a complex result of various physical parameters, such as the dust temperature, gas density, H2O abundance and the distribution of the H2O gas relative to the infrared radiation field, it is striking that the correlation between L_H2O and L_IR stays linear from local young stellar objects (YSOs), in which the H2O molecules are mainly excited by shocks and collisions, to local ULIRGs (far-infrared pumping dominated), extending over ∼ 12 orders of magnitude (San José-García et al. 2016). This implies that H2O indeed traces the SFR proportionally, similarly to the dense gas (Gao & Solomon 2004) in local infrared-bright galaxies. For the high-redshift sources, however, the L_H2O values lie somewhat above the linear correlations, which could be explained by their high τ_100 (or large velocity dispersion). As shown in Table 6, HFLS3, with τ_100 > 1, has extremely high L_H2O/L_IR ratios, stronger than the average of our H-ATLAS sources by factors of ∼ 2 for the J = 2 lines and ∼ 4 for J = 3 (see Fig. 3). The velocity dispersions of its H2O lines are ∼ 900 km s^-1 (with uncertainties from 18% to 36%), larger than in all our sources. For optically thick systems, a larger velocity dispersion will increase the number of absorbed pumping photons and boost the ratio L_H2O/L_IR (G14). For the AGN-dominated sources (i.e. APM 08279, G09v1.124-W and Mrk 231), as shown in Fig. 3, most of the measurements (except the H2O(3_21-3_12) line of Mrk 231) lie well below the fitted correlation (see Section 4.4). This is consistent with the average value of local strong-AGN-dominated sources. The J ≤ 3 H2O lines are far-infrared pumped by the 75 and 101 µm photons; thus the very warm dust in strong-AGN-dominated sources is likely to contribute more to L_IR than to the J ≤ 3 H2O excitation (see also Y13).

H2O excitation

We have detected both J = 2 and J = 3 H2O lines in five of the six sources observed in the J = 3 ortho-H2O line. By comparing the line ratios and their strength relative to L_IR, we are able to constrain the physical conditions of the molecular content and also the properties of the far-infrared radiation field. To compare the H2O excitation with local galaxies, we plot the velocity-integrated flux density of ortho-H2O(3_21-3_12), normalised by that of para-H2O(2_02-1_11), for our sources on top of the local and high-redshift H2O SLEDs (spectral line energy distributions) in Fig. 4. All six high-redshift sources are located within the range of the local galaxies, with a 1 σ dispersion of ∼ 0.2. Yet for the z = 6.34 extreme starburst HFLS3, the value of this ratio is at least 1.7 times higher than the average value of local sources (Y13) and those of our lensed high-redshift Hy/ULIRGs, at the 3 σ confidence level (Fig. 4). This probably traces different excitation conditions, namely the properties of the dust emission, as G14 suggest that the flux ratio of H2O(3_21-3_12) over H2O(2_02-1_11) is the most direct tracer of the hardness of the far-infrared radiation field that powers the submm H2O excitation. However, the line ratios are still consistent with the strong saturation limit of the far-infrared pumping model with T_warm ≳ 65 K. The large scatter of the H2O line ratio between 3_21-3_12 and 2_02-1_11 indicates different local H2O excitation conditions.
As far-infrared pumping dominates the H2O excitation, the line ratio therefore reflects differences in the far-infrared radiation field, for example the temperature of the warmer dust that excites the H2O gas, and the submm continuum opacity. It is now clear that far-infrared pumping, rather than collisional excitation, is the prevailing excitation mechanism for these submm H2O lines in infrared-bright galaxies in both the local and the high-redshift Universe (G14). The main far-infrared pumping paths related to the lines observed here are at 75 and 101 µm, as displayed in Fig. 1. Therefore, the different line ratios are highly sensitive to the difference between the monochromatic fluxes at 75 and 101 µm.

Notes (Table 6). The table gives the luminosity ratios between each H2O line and the total infrared luminosity, and the velocity-integrated flux density ratios of the different H2O transitions. T_d is the cold-dust temperature taken from B13, except for the values in brackets, which are not listed in B13 and which we infer from single modified-blackbody dust SED fitting using the submm/mm photometry listed in Table 2. All the errors quoted for T_d are significantly underestimated, especially because they do not include possible effects of differential lensing and assume a single temperature. Line ratios in brackets are derived from the average velocity-integrated flux density ratios between the 2_11-2_02 and 2_02-1_11 lines in local infrared galaxies. The local strong-AGN sources are the optically classified AGN-dominated galaxies, and the local H II+mild-AGN sources are star-forming-dominated galaxies with a possible mild AGN contribution (Y13). The first group of sources is from this work; the second group contains the previously published sources of O13; the third group contains previously published high-redshift detections from other works: HFLS3 (Riechers et al. 2013), APM 08279 (van der Werf et al. 2011), HLSJ 0918 (Combes et al. 2012; Rawle et al. 2014) and SPT 0538 (Bothwell et al. 2013); the last group shows the local average values from Y13.

We may compare with the global T_d measured from the far-infrared and submm bands (B13). It includes the contributions of both cold and warm dust to the rest-frame dust SED, which is, however, dominated by the cold dust observed in the SPIRE bands. It is thus not surprising that we find no strong correlation between T_d and I_H2O(3_21-3_12)/I_H2O(2_02-1_11) (r ∼ −0.3). The Rayleigh-Jeans tail of the dust SED is dominated by cooler dust, which is associated with extended molecular gas and less connected to the submm H2O excitation. As suggested in G14, it is indeed the warmer dust (T_warm, as shown by the colour legend in Fig. 5), dominating the Wien side of the dust SED, that corresponds to the excitation of the submm H2O lines.

To further explore the physical properties of the H2O gas content and the far-infrared dust radiation related to the submm H2O excitation, we need to model how key parameters, such as the H2O abundance and those determining the radiation properties, can be inferred from the observed H2O lines. For this purpose, we use the far-infrared pumping H2O excitation model described in G14 to fit the observed L_H2O together with the corresponding L_IR, and derive the range of continuum optical depth at 100 µm (τ_100), warm dust temperature (T_warm) and H2O column density per unit velocity interval (N_H2O/ΔV) in the five sources with both J = 2 and J = 3 H2O emission detections.
Due to the insufficient number of inputs to the model, which are the L_H2O of the two H2O lines and L_IR, we are only able to perform the modelling in the pure far-infrared pumping regime. Nevertheless, our observed line ratios between the J = 3 and J = 2 H2O lines suggest that far-infrared pumping is the dominant excitation mechanism and that the contribution from collisional excitation is minor (G14). The ±1 σ contours from the χ² fitting are shown in Fig. 5 for each warm dust temperature component (T_warm = 35-115 K) per source. It is clear that with two H2O lines (one J = 2 para-H2O line and ortho-H2O(3_21-3_12)), we are not able to constrain τ_100 and N_H2O/ΔV well. As shown in the figure, for T_warm ≲ 75 K, both very low and very high τ_100 can fit the observational data together with high N_H2O/ΔV, while dust with T_warm ≳ 95 K likely favours high τ_100. In the low continuum optical depth part of Fig. 5, as τ_100 decreases, the model needs to increase the value of N_H2O/ΔV to generate sufficient L_H2O to fit the observed L_H2O/L_IR. This has been observed in some local sources with low τ_100, such as NGC 1068 and NGC 6240, where no far-infrared absorption features are seen although submm H2O emission has been detected (G14). The important feature of such sources is the lack of J ≥ 4 H2O emission lines. Thus, observations of higher-excitation H2O lines will discriminate between the low and high τ_100 regimes.

Fig. 5. Parameter space distribution of the H2O far-infrared pumping excitation modelling with the observed para-H2O 2_02-1_11 or 2_11-2_02 and ortho-H2O(3_21-3_12) lines in each panel. ±1 σ contours are shown in each plot. Different colours and line styles represent the different warm dust temperature components, explored over the range 35 K to 115 K; temperature components unable to fit the data are not shown. From the figure, we are able to constrain τ_100, T_warm and N_H2O/ΔV for the five sources, although with strong degeneracies; additional information, such as the velocity-integrated flux densities of J ≥ 4 H2O lines, is needed to better constrain the physical parameters.

Among these five sources, the favoured key parameters are somewhat different, showing the range of properties we can expect for such sources. Compared with the other four Hy/ULIRGs, G09v1.97 is likely to have the lowest T_warm, as only dust with T_warm ∼ 45-55 K fits the data well. NCv1.143 and NAv1.177 have a slightly different diagnostic, which yields a higher dust temperature of T_warm ∼ 45-75 K, while NBv1.78 and G12v2.43 tend to have the highest temperature range, T_warm ∼ 45-95 K. The values of T_warm are consistent with the fact that H2O traces warm gas. We do not find any significant differences between the ranges of N_H2O/ΔV derived from the modelling for these five sources, although G09v1.97 tends to have a lower N_H2O/ΔV (Table 7). As shown in Section 4.4, there is no evidence of AGN dominance in any of our sources; the submm H2O lines likely trace the warm dust component connected to the heavily obscured active star formation. However, due to the lack of photometric data on the Wien side of the dust SEDs, we are not able to compare the observed values of T_warm directly with the ones derived from the modelling.
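Schematically, the χ² analysis behind Fig. 5 amounts to a grid search over (T_warm, τ_100, N_H2O/ΔV). The sketch below shows the mechanics only: `model_ratios` is a purely hypothetical stand-in, since the actual G14 far-infrared pumping model is not reproduced here, and the observed ratios and errors are placeholders.

```python
import numpy as np

def model_ratios(t_warm, tau100, n_h2o_dv):
    """Hypothetical stand-in for the G14 model: predicted
    (L_H2O(2_02-1_11)/L_IR, L_H2O(3_21-3_12)/L_IR). NOT the physical model."""
    r2 = 1e-6 * np.sqrt(tau100) * (n_h2o_dv / 1e15) * (t_warm / 55.0)
    return r2, r2 * (t_warm / 55.0)

obs = np.array([2.0e-6, 2.2e-6])   # observed ratios (placeholders)
err = np.array([0.3e-6, 0.4e-6])   # 1-sigma errors (placeholders)

best_chi2, best_params = np.inf, None
for T in np.arange(35.0, 116.0, 10.0):        # T_warm grid, K
    for tau in np.logspace(-2, 1, 40):        # tau_100 grid
        for N in np.logspace(14, 17, 40):     # N_H2O/dV grid, cm^-2 km^-1 s
            pred = np.array(model_ratios(T, tau, N))
            chi2 = np.sum(((pred - obs) / err) ** 2)
            if chi2 < best_chi2:
                best_chi2, best_params = chi2, (T, tau, N)
print("min chi2 =", round(best_chi2, 2),
      "at (T_warm, tau_100, N_H2O/dV) =", best_params)
```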
By adopting the 100 µm dust mass absorption coefficient of κ_100 = 27.1 cm² g^-1 from Draine (2003), we can derive the dust opacity as τ_100 = κ_100 σ_dust = κ_100 M_dust/A, where σ_dust is the dust mass column density, M_dust is the dust mass, A is the projected surface area of the dust continuum source and r_half is the half-light radius of the source at submm wavelengths. As shown in Table 2, among the five sources in Fig. 5, the values of M_dust and r_half in G09v1.97, NCv1.143 and NBv1.78 have been derived via gravitational lensing modelling (B13). The resulting approximate dust optical depths at 100 µm in these three sources are τ_100 ≈ 1.8, 7.2 and 2.5, respectively. One should note that the large uncertainties in both κ_100 and r_half for these high-redshift galaxies can introduce an error budget of a factor of a few. Nevertheless, by adopting a gas-to-dust mass ratio of X = 100 (e.g. Magdis et al. 2011), we can derive the gas depletion time as t_dep = M_gas/SFR = X σ_dust/Σ_SFR, where M_gas is the total molecular gas mass and Σ_SFR is the SFR surface density derived from L_IR using the Kennicutt (1998) calibration assuming a Salpeter IMF (B13, and Table 2). The implied depletion time scale is t_dep ≈ 35-60 Myr, with errors within a factor of two, the dominant uncertainties coming from the assumed gas-to-dust mass ratio and the half-light radius. This t_dep is consistent with the values derived from dense gas tracers such as HCN in local (U)LIRGs (e.g. Gao & Solomon 2004; García-Burillo et al. 2012). As suggested in G14, H2O and HCN are likely to arise in the same regions, indicating that H2O traces the dense gas as well. Thus, the τ_100 derived above likely also traces the far-infrared radiation source that powers the submm H2O emission. B13 also found that these H-ATLAS high-redshift Hy/ULIRGs are expected to be optically thick in the far-infrared.

By adding the constraint on τ_100 derived above, we can better determine the physical conditions in the sources, as shown in Table 7. From their modelling of local infrared galaxies, G14 find a range of T_warm = 45-75 K, τ_100 = 0.05-0.2 and N_H2O/ΔV = (0.5-2) × 10^15 cm^-2 km^-1 s. The modelling results for our high-redshift sources are consistent with those of local galaxies in terms of T_warm and N_H2O/ΔV. However, the values of τ_100 we find at high redshift are higher than those of the local infrared galaxies. This is consistent with the higher ratio of L_H2O to L_IR at high redshift (Y13), which can be explained by a higher τ_100 (G14). However, as demonstrated by an extreme sample, a very large velocity dispersion will also increase the value of L_H2O/L_IR in sources with τ_100 > 1. Thus, the higher ratio can also be explained by a larger velocity dispersion (not including systemic rotation) in the high-redshift Hy/ULIRGs. Compared with local ULIRGs, our H-ATLAS sources are much more powerful in terms of their L_IR. The dense warm gas regions that H2O traces are highly obscured, with much more powerful far-infrared radiation fields, possibly close to the limit of maximum starbursts. Given the values of the dust temperature and dust opacity, the radiation pressure P_rad ∼ τ_100 σ T_d^4/c (σ is the Stefan-Boltzmann constant and c the speed of light) of our sources is about 0.8 × 10^-7 erg cm^-3. If we assume an H2 density n_H2 of ∼ 10^6 cm^-3 and take T_k ∼ 150 K as suggested in G14, the thermal pressure is P_th ∼ n_H2 k_B T_k ∼ 2 × 10^-8 erg cm^-3 (k_B is the Boltzmann constant and T_k the gas temperature). Assuming a turbulent velocity dispersion of σ_v ∼ 20-50 km s^-1 (Bournaud et al. 2015) and taking a molecular gas mass density ρ ∼ 2µ n_H2 (2µ is the average molecular mass) yields a turbulent pressure P_turb ∼ ρ σ_v²/3 ∼ 4 × 10^-6 erg cm^-3. This might be about an order of magnitude larger than P_rad and two orders of magnitude larger than P_th, but we should note that all these values are very uncertain, especially P_turb, which could be uncertain by up to a factor of a few tens. Therefore, keeping their large uncertainties in mind, turbulence and/or radiation are likely to play an important role in limiting the star formation.
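The arithmetic of the last few paragraphs can be collected in a short numeric sketch. The projected area is taken as A = π r_half² and the mean molecular mass per H2 as 2.8 m_H; both are assumptions, and the input masses, radius, SFR and dispersion are placeholders of the right order of magnitude rather than the Table 2 values.

```python
import numpy as np

MSUN_G, PC_CM = 1.989e33, 3.086e18        # g, cm
KAPPA_100 = 27.1                          # cm^2/g (Draine 2003)
SIGMA_SB, C_CMS = 5.670e-5, 2.998e10      # cgs
K_B, M_H = 1.381e-16, 1.673e-24           # erg/K, g

def tau_100(m_dust_msun, r_half_kpc):
    """tau_100 = kappa_100 * M_dust / A, with A = pi r_half^2 (assumption)."""
    area = np.pi * (r_half_kpc * 1e3 * PC_CM) ** 2
    return KAPPA_100 * m_dust_msun * MSUN_G / area

def t_dep_myr(m_dust_msun, sfr_msun_yr, gas_to_dust=100.0):
    """t_dep = M_gas / SFR with M_gas = X * M_dust (X = 100)."""
    return gas_to_dust * m_dust_msun / sfr_msun_yr / 1e6

tau = tau_100(2e9, 1.0)                   # hypothetical Hy/ULIRG-like values
print(f"tau_100 ~ {tau:.1f}")
print(f"t_dep ~ {t_dep_myr(2e9, 4000.0):.0f} Myr")

# Order-of-magnitude pressure comparison, cgs:
T_d, n_H2, T_k, sigma_v = 55.0, 1e6, 150.0, 20e5   # K, cm^-3, K, cm/s
P_rad = tau * SIGMA_SB * T_d**4 / C_CMS
P_th = n_H2 * K_B * T_k
P_turb = (2.8 * M_H * n_H2) * sigma_v**2 / 3.0
print(f"P_rad ~ {P_rad:.1e}, P_th ~ {P_th:.1e}, P_turb ~ {P_turb:.1e} erg cm^-3")
```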
Comparison between H2O and CO

The velocity-integrated flux density ratio between submm H2O and submm CO lines of comparable frequencies is 0.02-0.03 in local PDRs such as Orion and M 82 (Weiß et al. 2010). In local ULIRGs (Y13) and in H-ATLAS high-redshift Hy/ULIRGs, this ratio is much higher, from 0.4 to 1.1 (Tables 3 and 4). The former case is dominated by typical PDRs, where CO lines are much stronger than H2O lines, while the latter sources clearly probe a different excitation regime, in which H2O traces the central core of warm, dense and dusty molecular gas, a few hundred parsec in diameter in local ULIRGs (González-Alfonso et al. 2010) and highly obscured even in the far-infrared. Generally, the submm H2O lines are dominated by far-infrared pumping and trace the strong far-infrared dust continuum emission, a regime different from that of the molecular gas traced by collisionally excited CO lines. In the active star-forming nuclei of infrared-bright galaxies, the far-infrared-pumped H2O is expected to trace directly the far-infrared radiation generated by the intense star formation, which can be well correlated with the high-J CO lines (Liu et al. 2015). Thus, a correlation between the submm H2O and CO emission is expected. From our previous observations, most of the H2O and CO line profiles are quite similar for the same source in our high-redshift lensed Hy/ULIRG sample (Fig. 2 of O13). In the present work, we again find similar profiles of H2O and CO in terms of their FWHM in an extended sample (Tables 3 and 4). In both cases, the FWHMs of H2O and CO are generally equal within the typical 1.5 σ errors (see the detailed discussion of each source in Appendix A). As the gravitational lensing magnification factor is sensitive to spatial alignment, the similar line profiles suggest similar spatial distributions of the two gas tracers. However, there are a few exceptions, such as SDP 81 (ALMA Partnership, Vlahakis et al. 2015) and HLSJ 0918 (Rawle et al. 2014). In both cases, the H2O lines lack the blue velocity component found in the CO line profiles. Quite different from the other sources, the CO line profiles of SDP 81 and HLSJ 0918 are complicated, with multiple velocity components. Moreover, the velocity-integrated flux density ratios between these CO components may vary with the excitation level (different J_up). Thus, it is important to analyse the relation between the different CO excitation components (from low-J to high-J) and H2O.
Also, high-resolution observations are needed to resolve the multiple spatial gas components and to compare the CO emission with the H2O and dust continuum emission within each component.

AGN content

It is still not clear how a strong AGN affects the excitation of submm H2O in local ULIRGs and high-redshift Hy/ULIRGs. Nevertheless, some individual studies address this question. For example, in APM 08279, van der Werf et al. (2011) found that the AGN is the main power source exciting the high-J H2O lines and also enriching the gas-phase H2O abundance. A similar conclusion was drawn by González-Alfonso et al. (2010) for Mrk 231, where the AGN accounts for at least 50% of the far-infrared radiation that excites H2O. In the systematic study of local sources (Y13), slightly lower values of L_H2O/L_IR were found in strong-AGN-dominated sources. In the present work, the decreasing ratio of L_H2O/L_IR with AGN is clearly shown in Fig. 3, where Mrk 231, G09v1.124-W and APM 08279 lie below the correlation by factors between 2 and 5, with less than 30% uncertainties (except the H2O(3_21-3_12) line of Mrk 231). In the far-infrared pumping regime, a buried AGN provides a strong far-infrared radiation source that will pump the H2O lines. However, the very warm dust powered by the AGN increases the value of L_IR faster than the number of ≥ 75 µm photons that dominate the excitation of the J ≤ 3 H2O lines (e.g. Kirkpatrick et al. 2015). If we assume that the strength of the H2O emission is proportional to the number of pumping photons, then in strong-AGN-dominated sources the ratio L_H2O/L_IR will decrease, since much warmer dust is present. Moreover, strong radiation from the AGN could dissociate the H2O molecules. To evaluate the AGN contribution to the H-ATLAS sources, we extracted the 1.4 GHz radio fluxes from the FIRST radio survey (Becker et al. 1995), listed in Table 2. Comparing the far-infrared and radio emission using the q parameter (Condon 1992), q ≡ log(L_FIR/3.75 × 10^12 W) − log(L_1.4GHz/1 W Hz^-1), we derive values of q from 1.9 to 2.5 for our sources. These values follow the value of 2.3 ± 0.1 found by Yun et al. (2001) for galaxies without strong radio AGN, suggesting no significant radio contribution from an AGN. This is also supported by the Wide-field Infrared Survey Explorer (WISE, Wright et al. 2010), which does not detect our sources at 12 µm and 22 µm. However, rest-frame optical spectroscopy shows that G09v1.124-W is a powerful AGN (Oteo et al., in prep.), the only identified AGN-dominated source in our sample.
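For reference, the q parameter defined above is simple to evaluate; the snippet below shows the arithmetic with hypothetical luminosities (a value near 2.3 indicates no strong radio AGN).

```python
import numpy as np

L_SUN_W = 3.828e26  # W

def q_param(l_fir_lsun, l_radio_w_per_hz):
    """q = log10(L_FIR / 3.75e12 W) - log10(L_1.4GHz / 1 W Hz^-1) (Condon 1992)."""
    return np.log10(l_fir_lsun * L_SUN_W / 3.75e12) - np.log10(l_radio_w_per_hz)

# Hypothetical example: L_FIR = 1e13 Lsun, L_1.4GHz = 1e25 W/Hz.
print(f"q = {q_param(1e13, 1e25):.2f}")  # ~2.0, within the 1.9-2.5 range quoted
```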
After subtracting the Gaussian profiles of all the H2O+ lines in the spectrum, we find a 3σ residual in terms of the velocity-integrated flux density around 745.3 GHz (I = 0.6 ± 0.2 Jy km s^-1, see Fig. 6). This could be a tentative detection of the H2^18O(2_11-2_02) line at 745.320 GHz. The velocity-integrated flux density ratio of H2^18O(2_11-2_02) to H2O(2_11-2_02) in NCv1.143 would hence be ~0.1. If this tentative detection were confirmed, it would show that ALMA could easily study such lines, although sophisticated models will be needed to infer isotope ratios.

The spectrum of the H2O(2_11-2_02) line in G09v1.97 covers both of the two main H2O+ fine structure lines (Fig. 6). However, due to the limited sensitivity, we have only tentatively detected the H2O+(2_02-1_11)(5/2-3/2) line, just above 3σ (neglecting the minor contribution from H2O+(2_11-2_02)(5/2-3/2)); the velocity-integrated flux density is 1.4 ± 0.4 Jy km s^-1 from a single Gaussian fit. We did not perform any line de-blending for this source, considering the data quality. The H2O+ line profile is in good agreement with that of the H2O line (blue dashed histogram in Fig. 7). The velocity-integrated flux density of the undetected H2O+(2_11-2_02)(5/2-5/2) line could also be close to this value, as discussed in the case of NCv1.143, yet it is somewhat lower and not detected in this source. More sensitive observations are needed to derive robust line parameters.

We have also tentatively detected the H2O+(2_11-2_02)(5/2-5/2) line in G15v2.779 (S/N ~ 4, neglecting the minor contribution from the H2O+(2_02-1_11)(3/2-3/2) line). The line profile is in good agreement with that of H2O(2_11-2_02) (blue dashed histogram in Fig. 6). The velocity-integrated flux density derived from a double-peak Gaussian fit is 1.2 ± 0.3 Jy km s^-1 (we did not perform any line de-blending for the H2O+ doublet, considering the spectral noise level). There could be a minor contribution from the H2O+(2_02-1_11)(3/2-3/2) line to the velocity-integrated flux density; however, such a contribution is likely to be negligible, as in the case of NCv1.143, and is within the uncertainty of the velocity-integrated flux density. Nevertheless, the position of H2O+ is slightly blueshifted compared with H2O, though note that the blue part of the line is cut off by the limited observed bandwidth (yellow histogram).

As discussed above, the AGN contribution to the excitation of the submm lines appears to be minor in most of our sources. Thus, the formation of H2O+ is likely dominated by cosmic-ray ionization rather than X-ray ionization. Given the average luminosity ratio H2O+/H2O ~ 0.3 ± 0.1 shown in Fig. 7, the models of Meijerink et al. (2011) suggest a cosmic-ray ionization rate of 10^-14-10^-13 s^-1. Such high cosmic-ray ionization rates drive the ambient ionization degree of the ISM to 10^-3-10^-2, rather than the canonical 10^-4. Therefore, in the gas phase, an ion-neutral route likely dominates the formation of H2O. However, H2O can also be enriched through water-ice sublimation, which releases H2O into the gas-phase ISM, and the upper part of the possible range for T_warm, ~90 K, is close to the sublimation temperature of water ice. Hence, the high H2O abundance observed (N_H2O ≳ 0.3 × 10^17 cm^-2, see Section 4.2) is likely to be the result of ion chemistry dominated by high cosmic-ray ionization and/or perhaps water-ice desorption.
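The single- and double-Gaussian fits quoted above reduce to standard least-squares profile fitting. A minimal sketch on synthetic data (the amplitudes, centers, and widths are invented, not the NCv1.143 or G09v1.97 measurements); the velocity-integrated flux density of a Gaussian component is amp × σ × √(2π):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def double(v, a1, v1, s1, a2, v2, s2):
    return gauss(v, a1, v1, s1) + gauss(v, a2, v2, s2)

rng = np.random.default_rng(0)
v = np.linspace(-1500, 1500, 120)                    # velocity grid [km/s]
spec = double(v, 3.0, -150, 180, 2.0, 250, 180)      # synthetic doublet [mJy]
spec += rng.normal(0.0, 0.3, v.size)                 # add noise

popt, pcov = curve_fit(double, v, spec, p0=[2.0, -100, 200, 2.0, 200, 200])

# velocity-integrated flux density of each fitted component
for amp, v0, sig in (popt[:3], popt[3:]):
    I = amp * abs(sig) * np.sqrt(2 * np.pi) / 1e3    # Jy km/s if amp in mJy
    print(f"component at {v0:+.0f} km/s: I = {I:.2f} Jy km/s")
```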
However, further observations of H2O+ lines of different transitions, and a larger sample, are needed to constrain the contribution to H2O formation from neutral-neutral reactions dominated by shocks.

Conclusions

In this paper, we report a survey of submm H2O emission at redshift z ~ 2-4, observing the higher-excitation ortho-H2O(3_21-3_12) line in 6 sources and several complementary J = 2 para-H2O emission lines in the warm dense cores of 11 high-redshift lensed extreme starburst galaxies (Hy/ULIRGs) discovered by H-ATLAS. So far, we have detected an H2O line in most of our observations of a total sample of 17 high-redshift lensed galaxies: we have detected both J = 2 para-H2O and J = 3 ortho-H2O lines in five sources, and only one J = 2 para-H2O line in ten others. In these high-redshift Hy/ULIRGs, H2O is the second strongest molecular emitter after CO within the submm band, as in local ULIRGs. The spatially integrated H2O emission lines have velocity-integrated flux densities ranging from 4 to 15 Jy km s^-1, which yield apparent H2O emission luminosities µL_H2O ranging from ~6-22 × 10^8 L_⊙. After correction for the gravitational lensing magnification, we obtained the intrinsic L_H2O for the para-H2O lines 2_02-1_11 and 2_11-2_02 and for ortho-H2O(3_21-3_12). The luminosities of the three H2O lines increase with L_IR as L_H2O ∝ L_IR^(1.1-1.2). This correlation indicates the importance of far-infrared pumping as a dominant mechanism of H2O excitation. Compared with the J = 3 to J = 6 CO lines, the linewidths of H2O and CO are similar, and their velocity-integrated flux densities are comparable. The similarity in line profiles suggests that these two molecular species probably trace similar intense star-forming regions.

Using the far-infrared pumping model, we have analysed the ratios between the J = 2 and J = 3 H2O lines and L_H2O/L_IR in the five sources with both J levels of H2O detected. We have derived the ranges of the warm dust temperature (T_warm), the H2O column density per unit velocity interval (N_H2O/∆V) and the optical depth at 100 µm (τ_100). Although there are strong degeneracies, these modelling efforts confirm that, as in local ULIRGs, the submm H2O emission in high-redshift Hy/ULIRGs traces the warm dense gas that is tightly correlated with the massive star-forming activity. While the values of T_warm and N_H2O (assuming similar velocity dispersions ∆V) are similar to the local ones, τ_100 in the high-redshift Hy/ULIRGs is likely to be greater than 1 (optically thick), larger than the τ_100 = 0.05-0.2 found in local infrared galaxies. We note, however, that the parameter space is still not well constrained in our sources by the H2O excitation modelling. Owing to the limited excitation levels of the detected H2O lines, we are only able to perform the modelling with pure far-infrared pumping.

The detection of relatively strong H2O+ lines opens the possibility of understanding the formation of such large amounts of H2O. In these high-redshift Hy/ULIRGs, the H2O formation is likely to be dominated by ion-neutral reactions powered by cosmic-ray-dominated regions. The velocity-integrated flux density ratio between H2O+ and H2O (I_H2O+/I_H2O ~ 0.3) is remarkably constant from low to high redshift, reflecting similar conditions in Hy/ULIRGs.
However, more observations of H2O+ emission/absorption, and also of OH+ lines, are needed to further constrain the physical parameters of the cosmic-ray-dominated regions and the ionization rate in those regions.

We have demonstrated that the submm H2O emission lines are strong and easily detectable with NOEMA. Being a unique diagnostic, the H2O emission offers a new approach to constraining the physical conditions in the intense and heavily obscured star-forming regions dominated by far-infrared radiation at high redshift. Follow-up observations of other gas tracers, for instance CO, HCN, H2O+ and OH+, using NOEMA, the IRAM 30m and the JVLA, will complement the H2O diagnostic of the structure of the different components, the dominant physical processes, star formation and chemistry in high-redshift Hy/ULIRGs. With unprecedented spatial resolution and sensitivity, the image from the ALMA long-baseline campaign observation of SDP 81 (also known as H-ATLAS J090311.6+003906; ALMA Partnership, Vlahakis et al. 2015; Dye et al. 2015; Rybak et al. 2015) shows the resolved structure of the dust, CO and H2O emission in this z = 3 ULIRG. With careful reconstruction of the source-plane images, ALMA will help to resolve the submm H2O emission in high-redshift galaxies down to the scale of giant molecular clouds, and provide a fresh view of the detailed physics and chemistry in the early Universe.

Appendix A.4: NAv1.195

The peak-to-total integrated flux density ratio is S_ν(ct)^pk/S_ν(ct) = 0.54 ± 0.02, and for H2O(2_02-1_11) the ratio S^pk_H2O/S_H2O equals 0.6 ± 0.3. Therefore, the spatial distributions of the dust and the H2O emission are likely to be similar in this source. In the observation at 293 GHz, S_ν(ct)^pk/S_ν(ct) = 0.42 ± 0.01, due to a smaller synthesized beam (Table 1). Fig. 2 shows the spectra corresponding to the two observations of NAv1.195. The H2O(2_02-1_11) line can be fitted by a single Gaussian profile, with a linewidth equal to 328 ± 51 km s^-1. We have not detected the higher-excitation H2O(3_21-3_12) line, as mentioned above. By assuming the same linewidth as the lower-J H2O line, we can infer a 2σ detection limit of 2.56 Jy km s^-1. This yields a ratio H2O(3_21-3_12)/H2O(2_02-1_11) ≲ 0.6. This value is significantly lower than in the five other sources, where it ranges from 0.75 to 1.60 (errors are within 25%), but it remains close to the lowest values measured in local galaxies (Y13), as shown in Table 6 and Fig. 4. This low ratio of H2O lines probably originates from different excitation conditions, especially in the far-infrared radiation field, since the H2O(3_21-3_12) line is mainly populated through far-infrared pumping by the absorption of 75 µm photons (see Section 5). The CO(5-4) line of the source has a linewidth of 281 ± 16 km s^-1, which is comparable to the H2O line profile. The observed ratio I_H2O/I_CO (CO(5-4)) is ≤ 0.4.

Appendix A.5: NAv1.177 at z = 2.778

NOEMA observation of the CO line in this source gives a redshift of z = 2.778 (Krips et al., in prep.). The SMA 880 µm image shows a compact structure with two peaks ~1″ apart along the east-west direction, the western component being the dominant one (Figure 2 of B13). However, due to the absence of deflector
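As an aside on the 2σ limit quoted for H2O(3_21-3_12) in NAv1.195 above: such limits follow from the channel noise and an assumed linewidth. A minimal sketch, placed here at the end of the extracted text; the channel width and rms below are illustrative assumptions, not the actual observing parameters:

```python
import math

def flux_upper_limit(sigma_chan_mJy, dv_chan_kms, fwhm_kms, n_sigma=2):
    """N-sigma limit on I = integral(S dv) for a line of the given FWHM."""
    n_chan = fwhm_kms / dv_chan_kms                 # channels covered by the line
    sigma_I = sigma_chan_mJy * dv_chan_kms * math.sqrt(n_chan) / 1e3  # Jy km/s
    return n_sigma * sigma_I

# e.g. 2 mJy rms per 50 km/s channel, FWHM fixed to 328 km/s from H2O(2_02-1_11)
print(f"2-sigma limit ~ {flux_upper_limit(2.0, 50.0, 328.0):.2f} Jy km/s")
```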
2016-09-19T19:19:37.000Z
2016-07-21T00:00:00.000
{ "year": 2016, "sha1": "4fbba9d46280b568c4c3119d15f6fef1e6b471ac", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2016/11/aa28160-16.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "4fbba9d46280b568c4c3119d15f6fef1e6b471ac", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
26492034
pes2o/s2orc
v3-fos-license
Myocardial Na+ during ischemia and accumulation of Ca2+ after reperfusion: a study with monensin and dichlorobenzamil.

The intracellular cation contents were determined in isolated perfused rat heart using cobaltic EDTA as a marker of the extracellular space. In hearts in which Na+ accumulation was induced with monensin, a Na+ ionophore, during 20-min ischemia, which otherwise did not result in accumulation of Na+, the levels of Na+ and Ca2+ were significantly higher after reperfusion, with a significant decrease in K+. While the recovery of cardiac mechanical function (CMF) was complete after reperfusion in control hearts, the recovery was incomplete in monensin-treated hearts. Dichlorobenzamil (DCB), the most specific inhibitor of the Na+-Ca2+ exchanger, infused for 10 min before induction of ischemia in a dose of 10^-5 M, which produced a definite suppression of CMF (over 80%), inhibited the accumulation of Ca2+ and Na+ and the loss of K+ and ATP after 40-min ischemia and reperfusion. The same dose of DCB given for 3 min before induction of ischemia and after reperfusion, which produced a less than 20% inhibition of CMF, failed to prevent the Ca2+ accumulation after 40-min ischemia and reperfusion. These findings are at variance with the idea that the accumulation of Na+ during ischemia and the consequent augmented operation of Na+-Ca2+ exchange are responsible for the accumulation of Ca2+ after reperfusion.

The intracellular accumulation of Ca2+ is a crucial factor in the irreversible myocardial injury that occurs after reperfusion following a prolonged period of ischemia (1,2). According to Bourdillon and Poole-Wilson (3), the accumulation of Ca2+ was not due to a decrease in efflux, but due to an increase in influx. However, attempts to demonstrate the involvement of voltage-operated Ca2+ channels in the accumulation of Ca2+ have failed; Ca2+ antagonists given during the period of reperfusion (4,5) did not prevent the accumulation of Ca2+. Ca2+ antagonists can prevent the accumulation when given prior to induction of ischemia. However, the inhibition was closely associated with the suppression of myocardial mechanical work before induction of ischemia. Thus, nonspecific protective effects may be the cause of the observed prevention. Observing the increase in myocardial Na+ content during ischemia, several researchers (6-8) implicated the Na+-Ca2+ exchanger as another possible pathway for Ca2+ entry. Weiss et al. (9) reported the prevention of reoxygenation-induced Ca2+ accumulation in hypoxic heart by an inhibitor of the Na+-Ca2+ exchanger, amiloride. However, the specificity of amiloride as an inhibitor of the Na+-Ca2+ exchanger does not seem to be sufficiently high to permit a reliable conclusion. In view of these circumstances, we performed the present study to reexamine the importance of Na+ accumulation during ischemia, and of the consequent augmented operation of the Na+-Ca2+ exchanger, for reperfusion-induced accumulation of Ca2+. The importance of Na+ accumulation during ischemia was assessed by producing an accumulation of Na+ using monensin during a short period of ischemia. For assessment of the role played by the Na+-Ca2+ exchange system, the most specific inhibitor of that system, dichlorobenzamil, was used. Particular attention was directed to changes in myocardial function produced by these agents.

MATERIALS AND METHODS

Experiments were performed in an isolated perfused rat heart preparation (Langendorff's method).
Male rats weighing around 250-350 g were lightly anesthetized with ether. Immediately after opening the thorax, the hearts were excised and transferred to ice-chilled modified Krebs-Ringer bicarbonate solution to induce rapid cessation of the heart beat. The adherent connective tissue was removed and the ascending aorta cannulated. Retrograde perfusion with a modified Krebs-Ringer bicarbonate solution from a reservoir 75 cm above the heart was begun immediately. The perfusion fluid contained 127.2 mM NaCl, 4.7 mM KCl, 2.5 mM CaCl2, 1.2 mM KH2PO4 and 24.9 mM NaHCO3. It was oxygenated with 95% O2 + 5% CO2 gas by means of an oxygenating device as described by Neely et al. (10) to ensure PO2 values higher than 600 mmHg, and it was kept at a temperature of 38°C. Sodium pyruvate (2 mM) and glucose (5.5 mM) were added to the perfusion fluid as substrates.

The coronary inflow (CF) was measured by means of an electromagnetic flowmeter probe (Statham, 1-mm i.d.) placed in the perfusate inflow line and an electromagnetic flowmeter (Statham 2201). The left ventricular pressure (LVP) was measured by a balloon filled with saline and connected to a pressure transducer (Statham P50) with saline-filled polyethylene tubing, which was introduced into the left ventricular cavity from the left atrium through the mitral valve. Heart rate (HR) was counted with a cardiotachometer (Sanei N2507) triggered by the pressure pulses of LVP. As a measure of the total mechanical energy required for contraction (11), the product LVP × HR was calculated.

Protocol

After a 20-min equilibration perfusion, the perfusion fluid was changed to one containing 1 mM potassium cobaltic EDTA (Co-EDTA). After a further perfusion for 20 min with this solution, global ischemia was induced by cross-clamping the aortic inflow line for 40 min. The hearts were then reperfused for 40 min (reperfusion group). The period of reperfusion was set at 40 min to obtain a steady level of myocardial mechanical function.

When the effects of monensin were examined, the animals (both the control and the test animals) were administered 1 mg/kg of reserpine intraperitoneally 18-24 hours before the experiments, as it is known that monensin produces significant positive inotropic and chronotropic responses by releasing catecholamines (12,13). Prior to induction of global ischemia for 20 min, monensin (6.13 × 10^-5 mg/ml) was infused for 10 min into the perfusate inflow line near the aortic cannula at a speed of 0.113 ml/min by means of an infusion pump (Harvard 940) to achieve a concentration of around 10^-6 M. Ethanol, the solvent of monensin, was infused for the control hearts. Dichlorobenzamil (DCB) (10^-5 M) was infused for 10 min before ischemia, or for 3 min before ischemia and then 3 min after reperfusion. The solvent of DCB, polyethylene glycol (PEG), was infused into the control hearts.

At the end of perfusion, hearts were rapidly excised and immediately frozen with a pair of Wollenberger tongs precooled in liquid N2. The frozen tissue fragments were crushed into a fine powder in a stainless steel percussion mortar cooled in liquid N2 (14).

Intracellular cation contents

For the determination of the intracellular myocardial cation contents, 300-400 mg of fine powder of myocardium was dried overnight at 80°C and weighed.
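As a check on the Protocol above, the target monensin concentration of ~10^-6 M can be reproduced with a short dilution calculation. The sketch below assumes the stock concentration is 6.13 × 10^-5 g/ml (reading the stated value literally as mg/ml would give ~10^-9 M) and a monensin molar mass of ~671 g/mol; both are assumptions for illustration:

```python
stock_g_per_ml = 6.13e-5          # assumed stock concentration (see note above)
pump_ml_per_min = 0.113           # infusion rate into the inflow line
coronary_flow_ml_per_min = 10.3   # mean coronary flow reported in the Results
MW_monensin = 670.9               # g/mol (assumed)

mass_rate = stock_g_per_ml * pump_ml_per_min           # g/min delivered
conc_g_per_ml = mass_rate / coronary_flow_ml_per_min   # diluted into perfusate
conc_molar = conc_g_per_ml * 1e3 / MW_monensin         # g/L -> mol/L

print(f"final monensin concentration ~ {conc_molar:.1e} M")   # ~1e-6 M
```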
Na+, K+, Ca2+ and Co2+ were extracted from the dried tissue powder with a method developed by Sparrow and Johnstone (15), and the total tissue contents of the cations were determined with an atomic absorption spectrometer (Hitachi 180-30) and a flame photometer. The intracellular cation contents were expressed as micromoles per gram dry weight.

Adenine nucleotide contents

The extraction of adenine nucleotides from the myocardium was conducted by a method developed by Khym (17). The fine powder of myocardium was homogenized at 0°C with a Polytron homogenizer (Kinematica PT 10/35) at setting 9 in ice-cold 0.6 N perchloric acid. After centrifugation at 3000 rpm for 15 min, the supernatant was neutralized with 1,1,2-trichlorofluoroethane containing 0.5 M tri-n-octylamine. After a second centrifugation at 1,000 rpm for 2 min, the supernatant was used for the determination of adenine nucleotide contents.

Determination of ATP, ADP and AMP was performed with high performance liquid chromatography (HPLC) (Waters model 6000A Solvent Delivery System with a 330 UV Absorbance Detector). A Radial-Pak µBondapak C18 column (Waters) was used as the stationary phase, with 0.025% trihydroxyfuran, 0.12 M KH2PO4, 0.001 M tetrabutylammonium hydrogen sulfate and 4.5% acetonitrile (pH 6.25) as the mobile phase (18). The absorbance was monitored at 254 nm. Identification of the compounds was carried out on the basis of retention time and enzymatic transformation, and the concentration of each compound was calculated from peak height measurements using the corresponding authentic substances as standards.

As sensitive measures of the energy status of the myocardium, the ATP/ADP ratio and energy charge (EC) were used. The latter was calculated as follows: EC = (ATP + ADP/2)/(ATP + ADP + AMP).

Throughout the experiments, all animals were dealt with in a humane manner in accordance with recognized guidelines on animal experiments.

Chemicals and drugs

The following chemicals and drugs were used: ethanol (HPLC grade, Wako Chemicals) and tetrabutylammonium hydroxy sulfate (Aldrich Chemicals). DCB, a generous gift from Nippon Soda Co., Ltd., was dissolved in PEG and diluted with distilled water (the final concentration of PEG was about 0.1%). All other chemicals were obtained from Wako Chemicals. Potassium Co-EDTA (16,19) was prepared in pure crystalline form by the method of Dwyer et al. (20).

Statistical analyses

Data are presented as means ± S.E. Statistical assessment of significant differences among groups was made by one-way analysis of variance followed by Bonferroni's method, or by Student's t-test. A difference was considered significant at a probability value of less than 0.05.

RESULTS

Perfusion with the Krebs-Ringer bicarbonate solution containing 1 mM Co-EDTA did not affect any parameter of cardiac function. The tissue cobalt level reached a plateau within 10 min after the start of perfusion.

Changes in cardiac function after ischemia and reperfusion

Figure 1 depicts the recovery of cardiac function after reperfusion for 40 min following 20, 40 and 60 min of global ischemia. The recovery was expressed as a percentage of the values before induction of ischemia (determined after 40-min equilibration perfusion), which were 10.3 ± 1.4 ml/min for CF, 0 mmHg for end-diastolic pressure (EDP), 138.6 ± 7.1 mmHg for LVP and 311 ± 8 beats/min for HR. The recovery of cardiac function following 20-min ischemia was almost complete in terms of all parameters of cardiac function examined.
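As an aside before the remaining results: the two energy-status indices defined in the Methods above are simple ratios. A minimal sketch with hypothetical nucleotide contents (µmol/g dry weight), not data from this study:

```python
def atp_adp_ratio(atp, adp):
    return atp / adp

def energy_charge(atp, adp, amp):
    """EC = (ATP + ADP/2) / (ATP + ADP + AMP), as defined in the Methods."""
    return (atp + adp / 2.0) / (atp + adp + amp)

atp, adp, amp = 20.0, 5.0, 1.5    # hypothetical example values
print(f"ATP/ADP = {atp_adp_ratio(atp, adp):.2f}")
print(f"EC      = {energy_charge(atp, adp, amp):.3f}")
```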
In contrast, the recovery of cardiac function was incomplete after ischemia of 40 min. The recovery of the total mechanical energy required for contraction, LVP × HR, was likewise significantly decreased with ischemia of 40 min, and there was a significant elevation of EDP. Ischemia of 60 min caused further changes in these parameters, with the exception of HR.

Table 1. Intracellular Na+, K+ and Ca2+ contents, tissue ATP content, ATP/ADP, and energy charge (EC) of hearts after 20, 40 and 60 min of global ischemia, and of hearts after reperfusion following ischemia.

The ATP/ADP ratio and EC significantly decreased with ischemia and remained low even after reperfusion. Figure 2 depicts the relation between the recovery of the total mechanical energy required for contraction, LVP × HR, and the myocardial intracellular contents of cations and ATP after reperfusion. A linear relationship was observed, with r values of 0.908 for Na+, 0.999 for K+, 0.988 for Ca2+ and 1.000 for ATP, respectively.

Effects of monensin

Infusion of 10^-6 M monensin increased LVP by 24.4 ± 4.1 mmHg (n = 18). However, as HR decreased by 23.2 ± 7.2 beats/min (n = 18), LVP × HR remained at 119.2 ± 6.5% of the value just before administration of the drug. The infusion of ethanol (final concentration about 0.01%), the solvent for monensin, had no effect on LVP and HR.

Changes in myocardial cation and adenine nucleotide contents produced by monensin are shown in Figs. 3 and 4. As compared with the ethanol group, Na+ content was higher, but the difference was not significant. The only significant change was a decrease in EC. Ischemia of 20 min produced a decrease in Na+ content in the ethanol group, while Na+ content increased in the monensin group. Judging from the levels of ATP and EC, the metabolic insult incurred by 20-min ischemia upon the monensin-treated heart equaled that produced by ischemia of 40 min in hearts not treated with monensin. After reperfusion, the myocardial intracellular cation contents recovered in the ethanol group, with the exception of Ca2+, which became slightly higher. In contrast, in the monensin group, Na+ and Ca2+ increased and K+ decreased further after reperfusion. The recovery after reperfusion of ATP, the ATP/ADP ratio and EC was not complete even in the ethanol group, and the recovery of these parameters was worse in the monensin group.

Figure 5 depicts the recovery of cardiac mechanical function after reperfusion. While the recovery was good in the ethanol group, it was poor in monensin-treated hearts. Thus, the CF, LVP, HR and LVP × HR of the latter were significantly lower than those of the former, and EDP was significantly higher.

Effects of DCB

The 10-min infusion of 10^-6 or 3 × 10^-6 M DCB before induction of 40-min ischemia altered neither the recovery of cardiac function nor the accumulation of Na+ and Ca2+ and the loss of K+ and ATP after reperfusion. The 10-min infusion of 10^-5 M DCB before ischemia markedly decreased LVP by 105.4 ± 8.6 mmHg (n = 9) and HR by 59 ± 20 beats/min, so that LVP × HR was 16.0 ± 3.1% of the value just before administration of the drugs (Fig. 6). Under this condition, the accumulation of Na+ and Ca2+ and the decreases in K+, ATP, the ATP/ADP ratio and EC after reperfusion were significantly suppressed as compared with the group treated with PEG, the solvent of DCB (Figs. 6 and 7).
Infusion of 10^-5 M DCB for 3 min before ischemia and for 3 min during reperfusion resulted in a moderate decrease in LVP × HR, to 78.8 ± 7.1% of the value before induction of ischemia (Fig. 6).

Fig. 6. LVP × HR before ischemia (values are expressed as percent of the value just before administration of the drugs) and intracellular cation contents of the isolated perfused heart treated with dichlorobenzamil (DCB) (10^-5 M) for 10 min before 40-min ischemia, or for 3 min before ischemia and then 3 min after reperfusion. Open column: polyethylene glycol (PEG) treatment (n = 10); cross-hatched column: treatment with DCB for 3 min before ischemia and 3 min after reperfusion (n = 7); filled column: treatment with DCB for 10 min before ischemia (n = 9). Values represent means ± S.E. *P < 0.05, **P < 0.01 vs. PEG treatment.

Fig. 7. ATP contents (µmol/g dry weight), ATP/ADP ratio and energy charge (EC) of the isolated perfused heart treated with dichlorobenzamil (DCB) (10^-5 M) for 10 min before 40-min ischemia, or for 3 min before ischemia and then 3 min after reperfusion. Open column: polyethylene glycol (PEG) treatment (n = 10); cross-hatched column: treatment with DCB for 3 min before ischemia and 3 min after reperfusion (n = 7); filled column: treatment with DCB for 10 min before ischemia (n = 9). Values represent means ± S.E. *P < 0.05, **P < 0.01 vs. PEG treatment.

Under this condition, the accumulation of Na+ and Ca2+ and the reduction of K+ content after reperfusion were not changed (Fig. 6), although the declines of ATP and the ATP/ADP ratio were alleviated (Fig. 7). The recovery of cardiac function after 40-min reperfusion was incomplete in both the DCB and PEG groups, and there was no significant difference between the two groups.

DISCUSSION

In the present study, the intracellular contents of Na+, K+ and Ca2+ were determined in isolated perfused rat hearts with Co-EDTA as a marker of the extracellular space. This method has the advantage that the tissue adenine nucleotide contents can be measured in the same preparation. Furthermore, the changes in extracellular space, intracellular space and water content of hearts during ischemia and reperfusion can also be determined with this method.

There was a tendency for myocardial Na+ content to increase with ischemia of 20 min. However, no further increase in Na+ was observed with ischemia of 40 min and longer, while there was a decrease in K+ content; the decrease was significant with ischemia of 60 min. Reperfusion after 20-min ischemia resulted in recovery of these parameters, with the exception of Ca2+, which showed a tendency to increase further. Ischemia of 40 min and longer resulted in a significant increase in Ca2+ and a significant decrease in K+. With 60-min ischemia, a significant increase in Na+ was observed after reperfusion. These results agree with those obtained by Pridjian et al. (21) and Humphrey et al. (22).

On the basis of the finding that the extracellular marker 51Cr-EDTA did not enter the intracellular space on reoxygenation following 30-min hypoxia, Poole-Wilson et al. (23) concluded that Ca2+ overload is not due to disruption of the plasma membrane. Nayler et al. (24) demonstrated in the isolated rat heart that the sarcolemma was ultrastructurally intact after ischemia of 60 min, even though this preparation exhibited uncontrolled Ca2+ gain upon reperfusion. Therefore, a possible route of Ca2+ influx may be a physiological pathway.
Several researchers have proposed that the increase in Ca2+ influx is due to an augmented operation of the Na+-Ca2+ exchange mechanism during reperfusion caused by the accumulation of Na+ during ischemia (7,25,26). However, in the present study, in which the determination of intracellular cations was conducted with Co-EDTA as a marker of the extracellular space, Na+ showed only a tendency to increase during ischemia. Furthermore, the increase did not become greater with ischemia of 40 min and longer, while the Ca2+ accumulation after reperfusion increased depending on the duration of ischemia, with significant increases observed for ischemia of 40 min and longer. Thus, Ca2+ accumulation after reperfusion was observed without significant accumulation of Na+ during ischemia, and the larger accumulation of Ca2+ after reperfusion was not associated with a larger accumulation of Na+ during ischemia.

In the present study, when the heart was treated with monensin, a Na+ ionophore, prior to induction of ischemia, a significantly larger accumulation of Na+ occurred during ischemia of 20 min, and an accumulation of Ca2+ was observed after reperfusion. However, compared with the degree of ischemic damage as assessed by the levels of ATP and EC, which were equivalent to those produced by 40-min ischemia without monensin, the increase in Ca2+ was small, and disproportionately so if one additionally takes into consideration the fact that the Na+ accumulation during ischemia was greater under this condition. Thus, the accumulation of Na+ during ischemia cannot be the sole cause of Ca2+ accumulation after reperfusion. This conclusion is in harmony with that of Crake and Poole-Wilson (27). According to them, Na+-Ca2+ exchange can only be a minor mechanism, since the uptake of Ca2+ on reoxygenation could not be inhibited by lithium substitution for sodium introduced after the onset of hypoxia.

In hearts treated with monensin, K+ and ATP decreased and the mechanical function deteriorated after ischemia-reperfusion, despite the fact that LVP × HR before ischemia was not different from that of the control group. The significant inhibitory effects on ADP-stimulated (State 3) respiratory rates, respiratory control ratio and ADP/O ratio reported by Schlafer and Kane (28) in isolated mitochondria from the rabbit heart with monensin (>10^-7 M) may be the cause of these deleterious effects.

It is true that 10^-5 M DCB, a selective inhibitor of the Na+-Ca2+ exchanger with a potency 100-fold that of amiloride (29), when given before induction of ischemia, prevented the accumulation of Na+ and Ca2+ and the loss of K+ and ATP; but a marked decrease of LVP × HR to 16% of the control value was observed before induction of ischemia with this dose of DCB, and doses of DCB lower than 10^-5 M, which did not produce any inhibition of LVP × HR, failed to prevent the myocardial accumulation of Ca2+. Kim and Smith (30) reported that the EC50 of DCB for inhibition of Na+-Ca2+ exchange in sarcolemmal vesicles of guinea pig heart was 6 × 10^-7 M, and a Ki of 4 × 10^-6 M was reported for the inhibition of the Na+-Ca2+ current by Bielefeld et al. (31) in frog atrium. As was demonstrated in our previous paper with Ca2+ antagonists (Shiga et al. (5)), there exists a close relation between the level of LVP × HR before induction of ischemia and the amount of Ca2+ accumulated after reperfusion.
Thus, the prevention by DCB may be ascribed not to specific effects on the Na+-Ca2+ exchanger but to the inhibitory effects of this compound on myocardial mechanical function before induction of ischemia. The fact that a 3-min infusion of 10^-5 M DCB before induction of ischemia, which produced only a slight inhibition of LVP × HR, combined with another 3-min infusion of 10^-5 M DCB after reperfusion, did not inhibit the accumulation of Na+ and Ca2+ and the loss of K+ provides further support for this idea. The reason why ATP, the ATP/ADP ratio and EC recovered to much the same level as attained in the experiment with 10^-5 M DCB given for 10 min before induction of ischemia is not clear at present. As a cause of the inhibition of myocardial mechanical function, inhibition of Ca2+ influx via slow Ca2+ channels, as reported by Kim and Smith (30), is conceivable. The poor recovery of the mechanical function of hearts treated with DCB may also be explained on the same basis.

Then what is the mechanism of the accumulation of Ca2+ after reperfusion? We have at present no clear-cut explanation. However, the disruption of the surface membrane or sarcoplasmic reticulum by free radicals, as reported by some investigators (32,33), may be the cause of this accumulation of Ca2+.

In hearts treated with polyethylene glycol, the Na+ concentration was higher, and ATP lower, after 40-min ischemia and reperfusion than in control hearts. This is in agreement with results obtained in the isolated perfused rat kidney, which showed exacerbation of hypoxic damage by polyethylene glycol (34).
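The linear relationships reported in the Results (Fig. 2; r = 0.908-1.000) amount to Pearson correlations and least-squares lines. A minimal sketch with hypothetical values, not the study's data:

```python
import numpy as np

recovery = np.array([95.0, 60.0, 25.0])          # % recovery of LVP x HR
atp      = np.array([22.0, 14.0, 6.0])           # tissue ATP, umol/g dry weight

r = np.corrcoef(recovery, atp)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(atp, recovery, 1)  # least-squares regression line
print(f"r = {r:.3f}; recovery ~ {slope:.1f} * ATP + {intercept:.1f}")
```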
2018-04-03T00:42:47.117Z
1992-01-01T00:00:00.000
{ "year": 1992, "sha1": "d89c933844ec496ded131b7c472d69c14b1aaa5f", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/jphs1951/59/2/59_2_191/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e4adc955a6f801090b627b99734936373cf933bb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
55942383
pes2o/s2orc
v3-fos-license
Galactic disk winds driven by cosmic ray pressure

Cosmic ray pressure gradients transfer energy and momentum to extraplanar gas in disk galaxies, potentially driving significant mass loss as galactic winds. This may be particularly important for launching high-velocity outflows of "cool" (T < 10^4 K) gas. We study cosmic-ray-driven disk winds using a simplified semi-analytic model, assuming streamlines follow the large-scale gravitational potential gradient. We consider scaled Milky Way-like potentials including a disk, bulge, and halo, with a range of halo velocities V_H = 50-300 km/s, and streamline footpoints with radii in the disk R_0 = 1-16 kpc at height 1 kpc. Our solutions cover a wide range of footpoint gas velocity u_0, magnetic-to-cosmic-ray pressure ratio, gas-to-cosmic-ray pressure ratio, and angular momentum. Cosmic ray streaming at the Alfvén speed enables the effective sound speed C_eff to increase from the footpoint to a critical point where C_eff,c = u_c ~ V_H; this differs from thermal winds, in which C_eff decreases outward. The critical point is typically at a height of 1-6 kpc from the disk, increasing with V_H, and the asymptotic wind velocity exceeds the escape speed of the halo. Mass loss rates are insensitive to the footpoint values of the magnetic field and angular momentum. In addition to a numerical exploration of parameter space, we develop and compare to analytic scaling relations. We show that winds have mass loss rates per unit area up to ~ Π_0 V_H^(-5/3) u_0^(2/3), where Π_0 is the footpoint cosmic ray pressure and u_0 is set by the upwelling of galactic fountains. The predicted wind mass-loss rate exceeds the star formation rate for V_H < 200 km/s and u_0 = 50 km/s, a typical fountain velocity.

INTRODUCTION

The study of galactic winds seeks to understand the loss of mass from galaxies. Mass loss through winds is believed to be responsible for substantially reducing the observed baryon mass fraction in galaxies below cosmic values, and for helping to quench ongoing star formation, especially in low-mass galaxies (e.g. Somerville & Davé 2015; Naab & Ostriker 2017). Many studies have concluded that only 10-20% of the cosmic baryons can be found in stars and gas within galaxies (e.g. Bell et al. 2003; Moster et al. 2013; Behroozi et al. 2013; Rodríguez-Puebla et al. 2017), and this fraction steeply drops off for halos either above or below ~10^12 M_⊙. Except for the highest-mass halos, the hot halo gas (T ≳ 10^6 K) does not appear to make up for the baryon deficit, but substantial warm (T ~ 10^4 K) and warm-hot (T ~ 10^5-10^6 K) gas is present in circumgalactic regions for a range of halo masses and redshifts, based on absorption-line surveys and other probes (e.g. Cen & Ostriker 1999; Anderson & Bregman 2010; Chen 2012; Putman et al. 2012; Werk et al. 2014; Prochaska et al. 2017). As accretion timescales are shorter than the Hubble time, circumgalactic gas that is accreted must subsequently be removed by galactic winds, and these winds are also presumably responsible for enriching the circumgalactic and intergalactic medium with metals (e.g. Tumlinson et al. 2017).

Direct evidence of winds from galaxies is given by high-velocity emission and absorption lines that probe gas at a wide range of temperatures (see Veilleux et al. 2005; Heckman & Thompson 2017, for reviews). Most observations of galactic outflows have focused on starburst systems, and they indicate empirical scaling relations that have yet to be fully explained.
Martin (2005) used Na I and K I absorption lines in ultra-luminous infrared galaxies to study cool gas outflows, finding that the outflow speed scales as v ∝ Ṁ_*^0.35, where Ṁ_* is the star formation rate. With Cosmic Origins Spectrograph Hubble Space Telescope data from 48 nearby star-forming galaxies, Chisholm et al. (2015) found that outflow velocities scale as v ∝ Ṁ_*^(0.08-0.22), v ∝ M_*^(0.12-0.20), and v ∝ v_circ^(0.44-0.87), where M_* is the total stellar content and v_circ is the galaxy's circular velocity. Chisholm et al. (2017) extended this analysis to explore correlations between outflow rates and galaxy properties for seven galaxies, finding for the ratio of mass outflow rate to star formation rate, Ṁ_wind/Ṁ_* ≡ β (the "mass loading factor"),

β = (1.12 ± 0.27) (v_circ/100 km s^-1)^(−1.56±0.25). (1)

Work by Heckman et al. (2015) used ultraviolet absorption lines in 39 galaxies to study warm ionized starburst-driven winds. Heckman et al. (2015) found a slightly shallower power law for the mass loading than Chisholm et al. (2017), with best fit β ∝ v_circ^(−0.98) for strong outflows, but their data are also roughly consistent with power-law slopes between −1 and −2. Heckman & Borthakur (2016) found that outflow velocities scale roughly as v ∝ Ṁ_*^0.3 and v ∝ v_circ^(1.16±0.36). The variation among recently reported observations suggests that the empirical wind scaling relations are not yet definitive, and it is uncertain how they may extend from starbursts to more normal star-forming galaxies.

Proposed theoretical mechanisms for driving galactic winds have been reviewed by Veilleux et al. (2005) and Heckman & Thompson (2017). An important early galactic wind model, motivated by the iconic starburst M82, considers a hot, adiabatic radial flow that originates with specified mass and energy input rates within the central region of a starburst nucleus (Chevalier & Clegg 1985). The hot gas in models of this kind is assumed to be created by extremely high velocity shocks arising from stellar winds and supernovae. The asymptotic velocity of the gas in this model depends on the central gas temperature, which in turn depends on the (adopted) ratio of energy to mass input rates. If an initially hot wind of this kind has high enough energy loading to reach high velocity, but also mass loading in the regime that allows it to cool subsequent to acceleration, then radiative cooling by metal lines could in principle produce a high-velocity warm or cold outflow (Wang 1995; Bustard et al. 2016; Thompson et al. 2016). However, there is only a limited range of mass loading, β_hot ~ 1-2, that allows a wind to cool strongly after accelerating to high velocity (Thompson et al. 2016), and it is not clear whether this range of β_hot is compatible with the detailed interaction between blast waves from multiple correlated supernovae and the surrounding interstellar medium (ISM). Kim et al. (2017) show that, except in extreme events, superbubbles are expected to cool before breaking out of the surrounding ISM, and that the residual hot gas at the time of breakout has β_hot ~ 0.1-1. Kim & Ostriker (2017, submitted) found in self-consistent simulations (for Solar-neighborhood conditions) with star formation and supernova feedback that β_hot ~ 0.1 above z ~ 1 kpc, with the hot, high-velocity outflow remaining adiabatic.
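Equation (1) above is easy to evaluate across galaxy masses; a minimal sketch using only the central values (uncertainties ignored):

```python
def mass_loading(v_circ_kms, norm=1.12, slope=-1.56):
    """Empirical beta(v_circ) of Chisholm et al. (2017), central values only."""
    return norm * (v_circ_kms / 100.0) ** slope

for v in (50, 100, 150, 200, 300):
    print(f"v_circ = {v:3d} km/s -> beta ~ {mass_loading(v):.2f}")
```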
Another mechanism that has been proposed for driving a high-velocity warm outflow is that a hot, high-velocity flow transfers momentum to embedded warm (or even cold), dense clouds. A longstanding difficulty with this cloud entrainment model, however, is that significant acceleration of clouds is generally accompanied by cloud shredding and destruction on short timescales (e.g. Scannapieco & Brüggen 2015; Zhang et al. 2017b, and references therein). Acceleration of individual dense clouds by radiation pressure forces similarly tends to destroy them (e.g. Proga et al. 2014; Zhang et al. 2017a).

Cosmic rays are believed to be accelerated in the shocks created by supernovae, with ~10% of the injected energy going into cosmic rays, and the local energy density of cosmic rays is comparable to the other energy densities in the Milky Way's interstellar medium (e.g. Bell 2004; Grenier et al. 2015). GeV particles, which represent the largest contributor to the cosmic ray energy density, are confined within the galaxy for only ~10 Myr, and in flowing out of the galaxy they interact via the magnetic field with the ISM gas (e.g. Zweibel 2017). Cosmic ray pressure gradients transfer momentum (and energy) from the cosmic rays to the gas, and may help to drive galactic winds. In this paper, we focus on analyzing the capability of cosmic ray-gas interactions to accelerate cool (T ~ 10^4 K) gas to high velocities such that it is able to escape far into galactic halos.

The first study of a cosmic-ray-driven galactic wind was by Ipavich (1975), who found that cosmic rays can drive galactic winds with mass loss rates of 1-10 M_⊙/yr, and that even zero-temperature gas can be accelerated. A limitation of this exploratory study was that the framework adopted was a spherical, Keplerian potential, in analogy to the solar wind. As we shall show, the form of the gravitational potential significantly affects the character of winds; in particular, the potential associated with an extended mass distribution in galaxies leads to constraints and types of wind solutions that are quite different from those for a Keplerian potential. Further studies by Breitschwerdt et al. (1991) incorporated a more realistic galactic potential (Miyamoto-Nagai bulge-disk and dark matter halo), and adopted the (arbitrary) assumption of vertical streamlines in which the cross-sectional area varies as A(z)/A_0 = 1 + (z/Z_0)². They focused on non-radiative gas, and allowed for nonzero wave pressure. From their sampling of parameter space, they found that cosmic rays were necessary to drive a wind in many cases (except for very high initial temperature), and in particular for typical conditions in the Milky Way galaxy. Cases with large initial (combined) energy density led to the highest mass-loss rates, and higher initial density tended to reduce the mass-loss rate. Recchia et al. (2016) solved similar equations to Breitschwerdt et al. (1991), except that they assumed waves are fully damped, while allowing for a nonzero diffusivity that is self-consistently calculated based on the wind solution.

Everett et al. (2008), motivated by diffuse X-ray observations towards the inner Galaxy, studied winds driven by a combination of cosmic ray and thermal pressure. They found that cases with cosmic ray pressure comparable to the thermal gas pressure produced the best fit to the observed Galactic diffuse soft X-ray emission. Their models indicate that thermal pressure imparts momentum and energy to the flow early on, and is more effective than cosmic ray pressure in mass-loading a wind. The terminal velocity and the evolution of the wind further from the base are more sensitive to the cosmic ray pressure.
For fixed total (cosmic ray plus thermal) footpoint pressure, Everett et al. (2008) find that predominantly thermally driven winds have higher mass-loss rates than predominantly cosmic-ray-driven winds. However, high thermal pressure is not guaranteed, and other work finds that the pressure of hot gas in the wind-launching region at z ~ 1 kpc is insufficient to drive strong disk winds in typical star-forming galaxy environments (Kim & Ostriker 2017, submitted).

In addition to idealized analytic models, three-dimensional hydrodynamic and magnetohydrodynamic (MHD) simulations have recently been performed to explore the role of cosmic ray pressure forces in driving galactic outflows. These have adopted varying assumptions concerning the treatment of cosmic rays. For example, Uhlig et al. (2012) do not include diffusion or MHD, and assume that the cosmic ray fluid streams at the sound speed along the direction of the cosmic ray pressure gradient; Hanasz et al. (2013) and Simpson et al. (2016) neglect streaming of the cosmic ray fluid relative to the gas but include advection at the gas velocity and adopt fixed diffusion coefficients parallel and perpendicular to the magnetic field; Booth et al. (2013) and others neglect cosmic ray streaming and MHD, adopting an isotropic diffusivity; Ruszkowski et al. (2017) compare models in which cosmic rays stream along the cosmic ray pressure gradient at a speed proportional to the Alfvén speed or diffuse parallel to the magnetic field. All of these simulation studies have found that cosmic ray pressure gradients can drive significant winds, with mass-loss rates that can be comparable to star formation rates but are dependent on the detailed prescription and parameters adopted. A notable feature of simulations with a cosmic ray fluid is that the galactic winds include cool (T ≲ 10^4 K) gas.

In this paper, we extend steady-state one-dimensional studies of cosmic-ray-driven winds to consider the case in which thermal pressure is negligible. We are motivated by observations that suggest high-velocity cool winds are ubiquitous (including even molecular gas), while at the same time simulations suggest that Type II supernovae interacting with the ISM produce hot gas at a rate Ṁ_hot/Ṁ_* = β_hot ≲ 1; taken together, this argues that heavily mass-loaded winds (β > 1) must rely on acceleration of warm and cold (rather than hot) ISM phases to speeds exceeding ~v_circ that allow escape. Thermal pressure is included in our models by an isothermal equation of state with c_s = 10 km s^-1, and it plays no role in wind acceleration. Although we do not explicitly follow the ionization level in the gas, we implicitly assume that it is high enough for the gas to be well coupled to the magnetic field; for the low-density extraplanar warm medium under consideration, photoionization is believed to dominate (e.g. Ferrière 2001; Haffner et al. 2009). We do not include cosmic ray diffusion or explicit wave pressure, assuming that the cosmic ray fluid streams at the Alfvén speed relative to the gas. As in previous one-dimensional models, the streamline shape and cross-sectional area are prescribed, but our choices for these follow from the galactic potential rather than being arbitrary. We integrate the wind equation along streamlines to obtain the gas velocity, density, magnetic field, and cosmic ray pressure, seeking solutions that make smooth transitions through a sonic point. To complement our numerical solutions, we obtain analytic scaling relations for the properties of winds.
In § 2 we describe our assumptions and mathematical formulation (§ 2.1, § 2.2), derive a one-dimensional steady wind equation (§ 2.3), discuss the critical point transition and our integration method (§ 2.4), and connect to a form of the Bernoulli equation (§ 2.5). Section 3 contains our results. We specify the details of our galactic models and input parameterization (§ 3.1), give examples of wind solutions for dwarf and Milky Way galaxies (§ 3.2), and present results from our full parameter exploration of solutions to the wind equation (§ 3.3). In § 3.4 we derive analytic scaling relations for wind properties, and compare them to our numerical integrations. Section 3.5 explores the effects of varying angular momentum and magnetic field strength on wind solutions. The key output of our study is a theoretical prediction for the mass-loss rates and mass-loading factors of cosmic-ray-driven disk winds, which we discuss in § 3.6. Finally, § 4 summarizes and discusses our main conclusions. In Appendix A we provide estimates for the effect of ion-neutral collision-induced wave damping on the cosmic ray streaming speed, and in Appendix B we provide additional details related to the behavior of the effective sound speed.

Hydrodynamic Equations

We begin with the equations governing the combined gas and cosmic ray fluid flow (e.g. Breitschwerdt et al. 1991). The fluid variables are the gas density ρ, gas velocity v, gas pressure P, gas internal energy density E, magnetic field B, cosmic ray pressure Π, and cosmic ray energy density E_cr. The Alfvén velocity is given by v_A = B/(4πρ)^(1/2), and the total gravitational potential, including both stars and dark matter, is Φ. The collective flow of cosmic rays along the magnetic field is limited by the streaming instability, in which a mean cosmic ray velocity (relative to the gas) exceeding the Alfvén speed leads to resonant excitation of Alfvén waves that then pitch-angle scatter the cosmic rays (Kulsrud & Pearce 1969). We assume that wave damping keeps the amplitude of excited waves low, and also mediates the transfer of momentum from the cosmic ray fluid to the gas (e.g. Kulsrud 2005; Zweibel 2017). Although very efficient wave damping can lead to faster streaming (Everett & Zweibel 2011; Wiener et al. 2013, 2017), we shall assume that the mean velocity of the cosmic ray distribution in the rest frame of the gas is equal to v_A.

We adopt a cylindrical coordinate system with unit vectors R̂, ẑ, and φ̂, with z = 0 in the midplane of the galactic disk. We take Ω as the local mean rotational velocity of ISM gas in the disk where the wind originates. The inertial-frame velocity v is related to the velocity u in a frame rotating with angular velocity Ωẑ by v = u + ΩR φ̂. Mass conservation is expressed by

∂_t ρ + ∇·(ρv) = 0. (3)

The momentum equation for the gas in the inertial frame is

ρ [∂_t v + (v·∇)v] = −∇(P + Π) − ρ∇Φ, (4)

which becomes

ρ [∂_t u + (u·∇)u] = −∇(P + Π) − ρ∇Φ − 2ρ Ωẑ × u − ρ Ωẑ × (Ωẑ × r) (5)

in the rotating frame. Assuming that the cosmic ray fluid streams along the magnetic field at velocity v + v_A, and that cosmic ray diffusion and radiative and collisional energy losses may be neglected, the energy equation for the cosmic ray fluid is

∂_t E_cr + ∇·[(v + v_A)(E_cr + Π)] = (v + v_A)·∇Π. (6)

Note that v·∇Π represents the work done by the cosmic ray fluid in accelerating the gas, and v_A·∇Π represents energy losses due to the generation of Alfvén waves. The general form of the internal energy equation for the gas is given by

∂_t E + ∇·[(E + P)v] = v·∇P − v_A·∇Π − ρL, (7)

where ρL is the net radiative loss per volume per time.
The term v·∇P represents work done in accelerating the flow, while the term −v_A·∇Π represents heat energy gained by wave damping. For E_cr = Π/(γ_cr − 1), E = P/(γ − 1), and an axisymmetric flow, the cosmic ray and gas thermal energy equations become

∂_t Π + (v + v_A)·∇Π + γ_cr Π ∇·(v + v_A) = 0 (8)

and

∂_t P + v·∇P + γP ∇·v = −(γ − 1)(v_A·∇Π + ρL). (9)

Previous steady-state wind solutions adopt Equations 3, 5, 8, and 9 with ∂_t = 0, usually also taking L = 0. We are interested in winds consisting of warm gas that is maintained at T ~ 10^4 K by radiative and shock heating together with radiative cooling. Rather than implementing gas heating and cooling terms, for simplicity we instead adopt an isothermal equation of state, P = ρc_s² along streamlines, for c_s the constant sound speed. This is equivalent to γ = 1 in Equation 9, because ∇·u = −u·∇ ln ρ from the continuity equation.

We note that when γ ≠ 1, for L = 0 the cosmic ray energy equation, gas thermal energy equation, and momentum equation (dotted with ρv) can be combined to obtain an equation expressing total energy conservation in the flow, ∇·[(1/2)v²ρv + (γ/(γ−1))Pv + Φρv + (γ_cr/(γ_cr−1))Π(v + v_A)] = 0, which is related to the Bernoulli equation. While this expression does not apply when γ = 1, a different Bernoulli-like equation can be obtained in that case (see § 2.5).

We assume that Lorentz forces (∇×B)×B are negligible, so that in axisymmetry the φ̂ component of Equation (5) implies that angular momentum is conserved along each streamline,

v_φ R = (u_φ + ΩR)R = J = const. (10)

With u_p = u_R R̂ + u_z ẑ the poloidal velocity, the poloidal components of Equation (5) become

ρ (u_p·∇)u_p = −∇_p(P + Π) − ρ∇_p Ψ, (11)

where the effective potential Ψ = Φ + J²/(2R²) incorporates centrifugal-force effects and ∇_p denotes the poloidal gradient.

Flow Streamlines and Conserved Quantities

A major assumption in this work is that the poloidal components of the fluid and Alfvén velocities and the gradients of the pressures are all aligned with the gradient of the effective gravitational potential, ∇Ψ. We assume that all of these vectors lie along ŝ, the streamline direction. For streamlines in the poloidal (R-z) plane, the tangent direction is

ŝ = (∂_R Ψ R̂ + ∂_z Ψ ẑ)/|∇Ψ|. (13)

The normal to the streamline in the poloidal plane is given by

t̂ = (−∂_z Ψ R̂ + ∂_R Ψ ẑ)/|∇Ψ|. (14)

Since ŝ lies along the gradient of Ψ, t̂·∇Ψ = 0, and the streamline can be found from the potential by solving

dR/dz = ∂_R Ψ / ∂_z Ψ. (16)

The distance s along the streamline is obtained from ds/dz = [1 + (dR/dz)²]^(1/2). The area A of a given fluid element (or the axisymmetric area A between two poloidal streamlines) varies with s as

d_s ln A = ∇·ŝ, (17)

where the right-hand side is obtained from applying the divergence to Equation (13) with Equation (16). As an example, radial streamlines have ŝ = r̂ and d_r ln A = ∇·r̂ = 2/r, so that A ∝ r². If z is taken as the independent variable, we instead have

d_z ln A = (ds/dz) ∇·ŝ (19)

and use Equation (17).

Figure 1 shows examples of streamlines emerging from the disk for a Milky Way potential Φ, for a range of values of J in Ψ. For each footpoint, the five values of J correspond to 0, 0.8, 0.9, 0.95, and 1.0 times the respective maximum value for that footpoint. These maximum values correspond to the angular momentum of a circular orbit at radii of 0.59, 1.39, 2.99, 6.74, and 15.14 kpc, respectively, for a halo with virial radius 250 kpc, and they scale with the virial radius. See § 3.1 for details regarding the potential, and § 3.5 for a discussion of J and the definition of its maximum value.

Henceforth, we use u to denote the magnitude of the poloidal gas velocity, with u_p = uŝ, and similarly v_A,p = v_Aŝ. From mass conservation (Equation 3), ∇·(ρuŝ) = 0, which implies d_s ln ρ = −(d_s ln u + d_s ln A). Thus,

ρuA = ρ_0 u_0 A_0 = ρ_c u_c A_c, (21)

where the "0" subscript denotes values at the streamline footpoint and the "c" subscript denotes values at the streamline critical point (see § 2.4 for a discussion of critical points).
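The streamline construction of Equations (13)-(17) can be prototyped directly: pick an effective potential, integrate Equation (16) in z, and accumulate arc length. The sketch below uses a toy spherical logarithmic potential, not the paper's Milky Way model; V_H and the core radius are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

V_H, R_c = 200.0, 1.0   # halo velocity [km/s] and core radius [kpc] (assumed)

def dPsi(R, z):
    """Gradient of a toy logarithmic potential Psi = (V_H^2/2) ln(R_c^2+R^2+z^2)."""
    denom = R_c**2 + R**2 + z**2
    return V_H**2 * R / denom, V_H**2 * z / denom    # (dPsi/dR, dPsi/dz)

def rhs(z, y):
    R, s = y
    dPdR, dPdz = dPsi(R, z)
    dRdz = dPdR / dPdz                 # Equation (16): streamline slope
    dsdz = np.hypot(dRdz, 1.0)         # arc length accumulated along the streamline
    return [dRdz, dsdz]

# footpoint at (R0, z0) = (4 kpc, 1 kpc), integrated upward to z = 30 kpc
sol = solve_ivp(rhs, (1.0, 30.0), [4.0, 0.0], dense_output=True, rtol=1e-8)
for z in (1, 5, 10, 30):
    R, s = sol.sol(z)
    print(f"z = {z:4.1f} kpc: R = {R:5.2f} kpc, s = {s:5.2f} kpc")
```

For this spherically symmetric toy Ψ, the slope reduces to R/z, so the streamlines come out radial; a realistic disk-bulge-halo Ψ bends the streamlines, as in Figure 1.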
In the same way, the cosmic ray energy equation (Equation 8) becomes

Π = Π_c [ (u_c + v_A,c) A_c / ((u + v_A) A) ]^(γ_cr); (22)

for n_cr the cosmic ray number density, this is consistent with conservation of the flux of cosmic ray particles, (u + v_A) A n_cr = const., together with the relation Π ∝ n_cr^(γ_cr). Since ∇·B = 0 and B_p = Bŝ, d_s ln B = −d_s ln A, so B ∝ A^(−1) and the Alfvén speed varies as

v_A = v_A,c [u A_c / (u_c A)]^(1/2). (24)

This expresses the combined conservation of magnetic flux and mass flux. Note that the ratio of the Alfvén speed to the wind speed evolves as

v_A/u = (v_A,c/u_c) [u_c A_c / (u A)]^(1/2). (25)

For an accelerating wind whose streamlines are opening, both u and A monotonically increase with s while ρ decreases, so v_A/u must decrease with increasing s.

One-dimensional Steady Wind Equation

Applying the assumptions described in § 2.1 and § 2.2 to Equation 11, the poloidal momentum equation becomes

u d_s u = −c_s² d_s ln ρ − (1/ρ) d_s Π − d_s Ψ. (26)

After some manipulation, we find

(u² − C_eff²) d_s ln u = C_eff² d_s ln A − d_s Ψ. (27)

Here we have defined an effective sound speed C_eff, including the effects of both gas and cosmic ray pressure, by the expression

C_eff² ≡ c_s² + γ_cr (Π/ρ) (u + v_A/2)/(u + v_A). (28)

Using Equation 34 below for Π/ρ, C_eff² in Equation 28 can also be written entirely as a function of u and A. We define a gravitational velocity V_g by the expression

V_g² ≡ d_s Ψ / d_s ln A. (31)

We note that if streamlines are radial and the centrifugal term in Ψ is negligible, V_g² = r d_r Φ/2 = v_c²(r)/2 for v_c(r) the circular velocity at distance r. Thus, if the circular velocity is nearly constant at a value characterized by the galaxy's dark matter halo, V_g is also nearly constant. With the above definitions, the ordinary differential equation that describes the steady-state wind is given by

d_s ln u = (C_eff² − V_g²) d_s ln A / (u² − C_eff²). (32)

Written in this way, the wind equation (Equation 32) has the same form as that of a classical Parker wind in a Keplerian potential, taking s → r, C_eff² → dP/dρ ≡ c_s², V_g² → (1/2)GM/r, and d_s ln A → 2/r. In the case of general rather than radial streamlines, it is convenient to use z rather than s as the independent variable, in which case the wind equation may be written

d_z ln u = (C_eff² − V_g²) d_z ln A / (u² − C_eff²). (33)

We note that the density ρ (or gas pressure ρc_s²) appears in the wind equation only in ratios with the magnetic pressure (through v_A) and the cosmic ray pressure. For integration of the wind equation, we therefore only require the combination

Π/ρ = (Π_c/ρ_c) [uA/(u_c A_c)] [ (u_c + v_A,c) A_c / ((u + v_A) A) ]^(γ_cr) (34)

rather than Equations 21 and 22 separately. To obtain wind solutions, we evolve u, A, and the streamline using Equation 33, Equation 19, and Equation 16 as a set of three coupled ordinary differential equations. For any point on the streamline where we have u and A, we find v_A and Π/ρ in terms of u and A via Equation 24 and Equation 34, respectively.

Critical Point and Integration Method

A physically realistic wind begins close to the galactic disk with a velocity u that is low compared to the effective sound speed C_eff and the gravitational speed V_g. From Equation 32, for an accelerating wind with d_s u > 0, it must be true that V_g > C_eff for u < C_eff, and V_g < C_eff for u > C_eff. If the fluid is to achieve speeds that will allow it to escape into the galaxy's halo, u must exceed both C_eff and V_g. Since d_s ln A is set by the shape of the potential Ψ, it is in general non-zero. Thus, for the flow to avoid singularities (i.e., for d_s u never to be infinite), at the critical point where u = C_eff it must also be true that V_g = C_eff. From Equation 28, one can show (see Appendix B) that, for γ_cr = 4/3, C_eff will increase outward (as ρ decreases) whenever v_A/u > 0.64. From Equation 25, v_A/u is strictly decreasing with s if ρ is decreasing, so provided that v_A/u > 0.64 at the critical point, C_eff will secularly increase from the footpoint up to the critical point.
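A minimal numerical illustration of the effective sound speed (Equation 28) and the sign structure of the wind equation (Equation 32); all velocities are illustrative values in km/s, with γ_cr = 4/3:

```python
GAMMA_CR = 4.0 / 3.0
C_S = 10.0                     # isothermal gas sound speed [km/s]

def c_eff(u, v_A, Pi_over_rho):
    """C_eff^2 = c_s^2 + gamma_cr*(Pi/rho)*(u + v_A/2)/(u + v_A), Equation (28)."""
    c2 = C_S**2 + GAMMA_CR * Pi_over_rho * (u + 0.5 * v_A) / (u + v_A)
    return c2 ** 0.5

def dlnu_dlnA(u, v_A, Pi_over_rho, V_g):
    """Right-hand-side structure of Equation (32): d ln u / d ln A."""
    C2 = c_eff(u, v_A, Pi_over_rho) ** 2
    return (C2 - V_g**2) / (u**2 - C2)

# subsonic footpoint (u << C_eff < V_g): numerator and denominator are both
# negative, so d ln u/d ln A > 0 and the flow accelerates as A opens.
print(dlnu_dlnA(u=20.0, v_A=100.0, Pi_over_rho=120.0**2, V_g=180.0))
```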
From Equation 25, v_A/u is strictly decreasing with s if ρ is decreasing, so provided that v_A/u > 0.64 at the critical point, C_eff will secularly increase from the footpoint up to the critical point. A schematic showing u, C_eff, and V_g relative to one another as a function of streamline distance, including a critical transition, is shown in Figure 2.³

For a given galactic potential and streamline shape, the location of the critical transition R = R_c, z = Z_c fully specifies the value of V_{g,c}. Thus, a given location for the critical point also specifies the fluid velocity u_c and value of C_{eff,c} at that point. We obtain wind solutions to our set of ODEs with the following procedure: Given some desired footpoint (R_0, Z_0) in the effective potential, we pre-compute the streamline which passes through that footpoint by integrating Equation 16 outward. Then, one may choose some point (R_c, Z_c) along that streamline to be the critical point; this also specifies the values of V_{g,c} = u_c = C_{eff,c} and A_c based on the potential and streamline shape at the critical point. Given (R_c, Z_c), one may select a value of the Alfvén speed at the critical point, v_{A,c}. Then, applying Equation 28 at the critical point yields Π_c/ρ_c in terms of V_{g,c}, v_{A,c}, and c_s. With all the fluid variables known at the critical point, the coupled ODEs may be integrated back to the footpoint (R_0, Z_0) according to the procedure described at the end of § 2.3. When the streamline footpoint is reached, the starting "ISM conditions" u_0, v_{A,0}, and Π_0/ρ_0 that are consistent with the selected critical point are read off of the solution.

For each footpoint, a variety of solutions can be attained by (1) varying the critical point location (R_c, Z_c) along the streamline, and (2) varying the Alfvén velocity at the critical point, v_{A,c}. In total, this implies two degrees of freedom for each footpoint and streamline shape. Equivalently, the two degrees of freedom also represent choosing the footpoint values of Π_0/ρ_0 and v_{A,0}² = B_0²/(4πρ_0), with u_0 the unique value for which a solution is able to pass through a critical point. Thus, we can explore a range of ISM properties given a footpoint, and can use 2-D root-finding to locate wind solutions (including the value u_0) at particular points in Π_0/ρ_0 and B_0²/(4πρ_0) space, while using the critical point location and v_{A,c} as inputs. More generally, any two of the three footpoint velocities (Π_0/ρ_0)^{1/2}, v_{A,0}, u_0 can be chosen to parameterize the space of possible solutions, with the third velocity constrained by the requirement that the flow makes a critical transition.

Figure 2. A schematic comparing the behavior near the critical point of an isothermal Parker wind in a Keplerian potential (left) to that of a wind driven by cosmic ray pressure in a galactic potential (right). Loci depicting the gravitational velocity V_g, effective sound speed C_eff, and wind velocity u as a function of distance along the streamline s are shown. Note that all three curves intersect at the critical point (or sonic point). Also note that in each case, u < C_eff < V_g at low s, and V_g < C_eff < u at high s, consistent with the wind equation (Equation 32) for an accelerating flow. For the schematic Parker wind depicted, C_eff is constant (isothermal) and V_g is decreasing (Keplerian).
For the schematic cosmic ray driven wind, V_g is nearly constant (galactic potentials have close to flat rotation curves), which necessitates an increasing C_eff to enable a critical transition.

³ More generally, if P_e ∝ ρ^{γ_e}, C_eff² ≡ dP_e/dρ ∝ ρ^{γ_e−1}, so a critical transition is only possible in V_g ∼ const galactic potentials if γ_e < 1, whereas critical transitions are possible in Keplerian potentials (V_g ∝ r^{−1/2}) for γ_e ≥ 1.

To initiate integration near the critical point, we apply L'Hôpital's rule to the right-hand side of Equation 32 (Equation 36), where we use u_c = V_{g,c} = C_{eff,c} at the critical point and the partial derivatives with respect to u assume holding A constant and vice versa. Note that C_eff can be written as a function of u and A. This yields a quadratic which must be solved for d_s u, after computing ∂_A C_eff, ∂_u C_eff, d_s A, and d_s V_g (see Appendix B). The two possible solutions are a decelerating wind and an accelerating wind, and the accelerating solution is taken.

Alternatively, using the properties of the solution topology, different values of f(u, s) = d_s u can be tested. Each value of d_s u will result in some u′ = u_c − (d_s u)Δs at a new point s′ = s_c − Δs. Then, taking this value of u′ and position on the streamline s′ = s_c − Δs, the derivative f(u′, s′) can be calculated. The true f(u, s) = d_s u will be a fixed point such that f(u, s) = f(u − f(u, s)Δs, s − Δs), and can be numerically found. This only holds true for the true wind solution passing through the critical transition, and does not hold true for the breeze solutions, due to the solution topology of wind flows. Any error in this technique is comparable to a shooting-technique error, as even an order-unity error in d_s u leads to a point within Δs d_s u of the critical point. That is, we begin near the sonic point in (s, u) space, and as long as the initial step Δs is chosen to be small, this avoids the sensitive nature of d_s u near the sonic point and gives us an accurate wind. Integration can proceed directly from there.

From Equation 22, Equation 31, and Equation 32 it is straightforward to show that Equation 37 holds; with Equation 30, Equation 37 then becomes Equation 38. For an isothermal equation of state for the gas, dh_g = dP/ρ for gas enthalpy h_g = c_s² ln ρ. We can formally define a cosmic ray enthalpy h_cr via dh_cr = dΠ/ρ. With this definition we have

(1/2)u² + h_g + h_cr + Ψ = B,   (39)

for Bernoulli parameter B. In general, Equation 29 does not yield a simple analytic form for h_cr. However, in the limit of either u ≫ v_A or u ≪ v_A we have Π ∝ ρ^{γ_cr} or Π ∝ ρ^{γ_cr/2}, respectively, such that

h_cr = [γ_cr/(γ_cr − 1)] Π/ρ   or   h_cr = [(γ_cr/2)/(γ_cr/2 − 1)] Π/ρ   (40)

in the two limiting cases. The case u ≫ v_A has the same characteristic behavior as gas enthalpy, in that h_cr is positive and both h_cr and C_eff decrease in magnitude as ρ decreases. The limit u ≪ v_A, which is more relevant for understanding wind solutions inside the critical point, has instead very different behavior: h_cr is negative, and both h_cr and C_eff increase in magnitude as ρ decreases. It is this behavior for h_cr and C_eff that allows u to increase and smoothly pass through a critical point where u = V_g = C_eff even when V_g is nearly flat in s (see Figure 2).

Model Specification

Our goal is to explore the dependence of possible wind properties, and especially mass-loss rates, on the galactic environment. Winds will be affected by both the properties of the ISM in which the wind originates, and the galactic potential in which it is accelerated. To represent a range of galactic potentials, we adopt the general form of Bovy (2015) for the Milky Way potential.
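The critical-point start-up described above can be illustrated on the classical isothermal Parker wind, which has the same solution topology. The sketch below root-finds the slope g = du/dr at the sonic point satisfying the fixed-point condition g = f(u_c − gΔr, r_c − Δr), then integrates outward; the code units and step sizes are assumed for illustration. The recovered slope matches the L'Hôpital value c_s/r_c for the accelerating branch.

```python
# Fixed-point start at the critical point, demonstrated on a Parker wind.
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

GM, cs = 1.0, 1.0                      # code units (assumed)
rc, uc = GM / (2 * cs**2), cs          # Parker critical (sonic) point

def f(u, r):
    """Wind-equation slope du/dr, regular away from the critical point."""
    return u * (2 * cs**2 / r - GM / r**2) / (u**2 - cs**2)

dr = 1e-5 * rc
g = brentq(lambda g: g - f(uc - g * dr, rc - dr),
           0.01 * cs / rc, 100 * cs / rc)             # fixed-point condition
print("fixed-point slope:", g, "  analytic (L'Hopital):", cs / rc)

# Step off the critical point and integrate outward along the wind branch.
sol = solve_ivp(lambda r, y: [f(y[0], r)], (rc + dr, 20 * rc),
                [uc + g * dr], rtol=1e-8)
print("u(20 r_c) =", sol.y[0, -1], "(supersonic, accelerating branch)")
```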
This includes a power-law bulge, a Miyamoto–Nagai disk, and an NFW dark matter halo. To allow for a range of galaxy masses and sizes, we also wish to consider potentials with varying virial radius R_vir and virial velocity V_H. To do this, we consider a family of Milky-Way-like potentials in which the mean density is the same, but the mass and virial velocity of the NFW halo vary with halo virial radius R_vir according to M_H ∝ R_vir³ and V_H² ∝ GM_H/R_vir ∝ R_vir², with V_H/km s⁻¹ = R_vir/kpc. The disk and bulge mass and size are rescaled in the same way. Within a given potential, we sample a few different footpoint locations, and for each footpoint, we consider a range of the angular momentum J (see Figure 1 and detailed parameter discussion in § 3.3). Each footpoint location R_0 and choice of angular momentum J/(ΩR_0²) defines a streamline. For each streamline, we explore a two-dimensional parameter space of the sonic point location z_c/R_vir and Alfvén speed at the critical point v_{A,c}. As discussed in § 2.4, this two-dimensional parameter space maps to a two-dimensional parameter space of footpoint initial conditions for the wind at a distance z = 1 kpc above the disk midplane.

The ISM in the coronal region may have a range of gas, magnetic, and cosmic ray pressures. These depend on the midplane ISM properties as well as the star formation activity, which drives a galactic fountain that circulates gas from the midplane to coronal regions. We non-dimensionalize the problem so that the three relevant pressures are captured as two ratios: the thermal gas pressure to cosmic ray pressure ratio P_0/Π_0 = c_s² (Π_0/ρ_0)⁻¹, and the magnetic pressure to cosmic ray pressure ratio B_0²/(8πΠ_0). We note that v_{A,0}² and Π_0/ρ_0 are obtained from outputs of the wind integration starting at the critical point and ending at the footpoint. We also non-dimensionalize all of the velocities as ratios with respect to c_s, which we set to be 10 km/s for a "cool" wind consisting of warm-phase ISM gas. We are interested in cases where the magnetic-to-cosmic ray pressure ratio brackets equipartition by an order of magnitude (above and below). Since this ratio is close to equipartition in the Solar neighborhood, and the scale heights of these components are large, we expect that at z = 1 kpc they remain roughly in equipartition.

Sample Wind Solutions

Examples of wind solutions for a dwarf galaxy halo with V_H = 50 km s⁻¹ and a Milky Way-like halo with V_H = 250 km s⁻¹ are shown in Figure 3 and Figure 4. For each halo potential, cases with initial launch velocity u_0 = 5 km s⁻¹ and u_0 = 50 km s⁻¹ are shown. In all cases, the footpoint cosmic ray pressure and magnetic pressure are chosen to be in equipartition. For the dwarf model, the footpoint radius is R_0 = 1 kpc, while for the Milky Way model the footpoint radius is R_0 = 4 kpc. The angular momentum parameter is set to J = 0. Specification of u_0 and B_0²/(8πΠ_0) selects a unique wind solution for a given halo potential and streamline. For all solutions shown, u secularly increases with distance, while V_g secularly decreases. C_eff increases outward inside the critical point, and then decreases at large distance. The Alfvén speed v_A exceeds u inside the critical point, but drops off to small values at large distance. The density ρ and cosmic ray pressure Π secularly decrease with distance. In detail, u becomes nearly constant at large distance, which for a radial flow implies that ρ ∝ (uA)⁻¹ ∝ r⁻².
Thus, from Equation 25, v_A ∝ uρ^{1/2} ∝ r⁻¹ at large distance, which in turn implies that the effective sound speed declines slowly, as C_eff ∝ ρ^{(γ_cr−1)/2} ∝ ρ^{1/6} ∝ r^{−1/3}, at large distance (modulo flattening due to c_s). Since both V_g and C_eff are equal at the sonic point and decrease slowly thereafter, they tend to be similar up until large distance, where C_eff ∼ c_s. The escape speed V_esc ≡ [−2(Ψ − Ψ_∞)]^{1/2} is larger than V_H but decreases with distance, so that u eventually exceeds V_esc, and in the absence of intervening halo gas, the wind would escape. In practice, wind propagation at large distance would ultimately be limited by interaction with surrounding halo gas.

Wind Parameter Exploration

We have extensively explored the parameter space of galaxies' potentials and footpoint ISM properties. In particular, we have considered potentials with V_H in the range 50–300 km s⁻¹. Our standard set of footpoint locations is R_0 = 1, 2, 4, 8, 16 kpc, and we vary the angular momentum parameter J by selecting values up to a maximum value for each footpoint, described in § 3.5. To explore a range of footpoint ISM conditions for each potential and each streamline, in practice we begin by sampling a grid of critical point locations and Alfvén speeds. Some of these points yield footpoint solutions that fall within a few orders of magnitude of equipartition between gas, magnetic field, and cosmic ray pressure. Interpolating between those points yields estimates for values of the critical point location and Alfvén speed whose corresponding winds begin near desired points in the space of footpoint pressure ratios. This allows us to fill in the pressure space even though integration begins from the critical point. Every computation yields either a wind accelerating through the sonic point or fails immediately by decelerating through the sonic point, which helps to delimit the boundaries of the space in which interesting wind solutions exist.

Here, we focus on wind solutions in which u secularly increases with distance. Since accelerating winds require C_{eff,0} < V_{g,0}, a lower limit to ρ_0 c_s²/Π_0 is set by conditions that yield C_{eff,0} = V_g ∼ V_H. If ρ_0/Π_0 is too low, C_{eff,0} exceeds V_{g,0}, and the wind does not accelerate. This lower limit is roughly illustrated by the black dashed horizontal line denoting Π_0/ρ_0 = (V_H² − c_s²) in the upper-left panels of Figure 5 and Figure 6. When v_{A,c}/u_c is small, C_eff is decreasing through the critical point (see Equation 35). Since V_g must decrease faster than C_eff for a critical transition to exist, small v_{A,c} ends up producing a sonic point at a large distance. But to yield a sonic point at large distance, C_{eff,0} must be large, and this implies small ρ_0 c_s²/Π_0. Thus, as we are not interested in solutions with sonic points at extremely large distance, this places another lower limit on ρ_0 c_s²/Π_0. For example, the lower left sector of Figure 6 is excluded by these considerations, as can be seen by the large values of z_c and the small values of v_{A,c}. Winds with large v_{A,c} tend to have strong acceleration, implying lower u_0 to reach a given V_{g,c} ∼ V_H. Although solutions to the wind equation exist for large v_{A,c}, we limit v_{A,c} to avoid unrealistically small u_0. This consideration excludes the upper right sector of Figure 6. For each wind solution, we are particularly interested in the mass-loss rate. Other parameters of interest are the critical point location and Alfvén speed.
In addition, to decide whether a given cosmic-ray driven wind solution can be realistically produced, it is important to consider the footpoint velocity u_0. Supernova-driven fountains can transfer warm ISM gas from the midplane to the corona, but the velocity of "fountain" gas at z ≳ 1 kpc distances above the midplane is typically ≲ 100 km s⁻¹.⁴ A cosmic-ray driven wind must be able to match its footpoint conditions to the available gas mass and momentum flux into the corona from below, which implies an upper limit on the value of u_0. In characterizing the mass loss produced in our wind solutions, we non-dimensionalize the mass flux by taking its ratio at the footpoint to Π_0/c_s. Considering only the z component of the wind velocity to get the mass loss per unit area of the galactic disk, we have

Σ̇_z c_s/Π_0 = (ρ_0 c_s²/Π_0)(u_{0,z}/c_s).   (41)

Thus, for a given ratio of gas-to-cosmic-ray pressure at the footpoint, the normalized mass flux is set by the normalized vertical component of the footpoint velocity, u_{0,z}/c_s.

[Figure 7 caption (partial): ... setting A_c/A_0 = 1 (dashed black curve), with reduction at larger R_0 in part due to lower ŝ·ẑ. The top right panel shows that the sonic Alfvén speed v_{A,c} scales approximately linearly with V_H (the dashed black line represents v_{A,c} = V_H), and is slightly larger for smaller R_0. The bottom left panel shows that the location of the critical point is typically not far from the launch point for winds with near-equipartition cosmic ray and magnetic pressure, with z_c ∝ V_H ∝ R_H (dashed line). Sonic points in more massive halos are further out because the wind must accelerate more to reach u = V_g. The bottom right panel shows that by the time the wind reaches the virial radius, it would reach a speed a few times greater than V_H.]

Examples showing the space of two-dimensional pressure ratios for which wind solutions have been found (on a given streamline in a given potential) are shown in Figure 5 and Figure 6. For each point in the identified wind solution space, values of the mass-loss rate, the footpoint velocity, the vertical distance to the critical point, and the Alfvén speed at the critical point are shown in color scale in separate panels. From the top two panels in Figure 5 and Figure 6, the mass-loss rate and initial wind velocity appear to be primarily a function of the gas density with very little dependence on the strength of the magnetic field, when magnetic and cosmic ray pressures are within an order of magnitude of equipartition. Furthermore, comparing winds from massive galaxies (large V_H) to dwarf galaxies shows that increasing V_H shifts the solution space towards lower density (lower ρ_0 c_s²/Π_0), and also leads to lower scaled mass loss (lower Σ̇_z c_s/Π_0). This is not qualitatively surprising, as the potential well is deeper (larger V_g ∼ V_H) in a more massive galaxy, and therefore larger C_eff is needed to drive outflows to reach escape speed. Since C_eff² ∼ Π/ρ, the mean density of winds in more massive galaxies must be lower if they are to successfully escape. In § 3.4, we demonstrate analytically and numerically that a relationship is expected between mass-loss rate and the ratio ρ_0 c_s²/Π_0. Then, from the definition of our dimensionless mass-loss rate, a relationship between the gas density and the mass-loss rate also fixes the initial wind velocity. Naively, it might seem surprising that there is a lower bound on the density (or an upper bound on the cosmic ray pressure) for which wind solutions exist in dwarf galaxies.
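A one-line encoding of the normalized mass flux in Equation 41 may help in reading Figures 5–7; the example inputs below are illustrative, not values from the solutions.

```python
# Scaled mass-loss rate (Eq. 41): set by the footpoint gas-to-cosmic-ray
# pressure ratio and the vertical launch speed.
def scaled_mass_flux(rho0_cs2_over_Pi0, u0z_over_cs):
    """Sigma_dot_z * c_s / Pi_0 = (rho_0 c_s^2 / Pi_0) * (u_{0,z} / c_s)."""
    return rho0_cs2_over_Pi0 * u0z_over_cs

# e.g. near-equipartition footpoint with u_{0,z} = 5 c_s = 50 km/s:
print(scaled_mass_flux(0.002, 5.0))   # -> 0.01, within the quoted 0.001-0.1 range
```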
However, the reason for this lower limit is that we are only interested in accelerating winds with low initial velocity. This requires C_{eff,0} < V_{g,0} at the footpoint, as discussed in § 2.4. Since C_eff² ∼ Π/ρ, there is an upper limit on what C_{eff,0}² (and hence the cosmic ray pressure) can be that is still consistent with a given (low) value of V_g ∼ V_H. Lower density winds with higher C_{eff,0} that are already escaping with u_0 > C_{eff,0} > V_{g,0} are mathematically allowed. However, these are not of interest for the present work, because they are not driven by cosmic ray pressure gradients above the main body of the ISM.

Some general characteristics of winds are illustrated in Figure 7. In this figure, we consider a range of halo velocities (V_H = 50–300 km s⁻¹) and footpoint radii (R_0 = 1, 4, 16 kpc). We show results of solutions for which the footpoint magnetic field is in equipartition with the cosmic ray pressure (B_0²/(8π) = Π_0) and the footpoint launch speed is u_0 = 50 km s⁻¹, with angular momentum J = 0 and footpoint height z_0 = 1 kpc. For each wind solution, we show the scaled mass-loss rate (Σ̇_z c_s/Π_0, Equation 41), the Alfvén speed at the critical point (v_{A,c}/c_s), the vertical distance of the critical point from the footpoint (z_c/z_0), and the flow velocity at large distance relative to the halo velocity (u[z = R_vir]/V_H). As might be expected, v_{A,c} and the wind velocity at the virial radius are roughly proportional to V_H, and z_c increases roughly linearly with R_vir. The critical point is relatively near the launch point when B_0²/(8π) = Π_0, as is also evident in Figure 5 and Figure 6. For a given V_H, Σ̇_z c_s/Π_0 is larger for smaller footpoint radius R_0, and the differential effect is largest at small V_H. The dependence of Σ̇_z c_s/Π_0 on R_0 is largely because streamlines are most vertical for small R_0. The scaled mass-loss rate Σ̇_z c_s/Π_0 decreases with V_H; we discuss the specific scaling behavior (dashed curve) in § 3.4.

Wind Scaling Relations

Equation 35 shows that C_eff increases outward provided v_A/u > 0.64, which implies that v_A/u ≳ 1, and in practice v_A/u ≫ 1 (see Figure 3, Figure 4) for most of the evolution between the footpoint and the critical point. In the limit of v_A ≫ u, Equation 23 becomes

Π (v_A A)^{γ_cr} = Π_0 (v_{A,0} A_0)^{γ_cr}, i.e., Π ∝ ρ^{γ_cr/2},   (42)

and the effective sound speed (see Equation 28 and Equation 29) becomes

C_eff² ≈ c_s² + (γ_cr/2) Π/ρ.   (43)

As discussed in § 2.5, this implies that for γ_cr = 4/3, C_eff² ∝ ρ^{−1/3}, which increases outward as ρ decreases outward. At the critical point, C_{eff,c}² = V_{g,c}² = u_c². Furthermore, since a typical galactic rotation curve is close to flat, V_{g,c} ∼ V_H, where V_H is a characteristic halo velocity. In particular, Figure 8 shows that the range of ratios is V_{g,c}/V_H = 0.9–1.5 for V_H = 50–300 km s⁻¹. Finally, conservation of mass flux implies ρ_0 u_0 A_0 = ρ_c u_c A_c. Combining these relations (and using γ_cr = 4/3), Equation 43 may be solved for the footpoint mass flux ratio or pressure ratio as

Σ̇ c_s/Π_0 = (2/3) (u_0/c_s)^{2/3} (c_s/V_{g,c})^{5/3} (A_c/A_0)^{1/3},   (44)

Π_0/(ρ_0 c_s²) = (3/2) (V_{g,c}/c_s)^{5/3} (u_0/c_s)^{1/3} (A_0/A_c)^{1/3}.   (45)

Equation 44 shows that for a fixed halo potential (V_H), at large ρ_0 c_s²/Π_0 the normalized mass flux Σ̇ c_s/Π_0 and footpoint velocity u_0/c_s must be small. This is consistent with the behavior evident in the numerical wind solution results shown in the top panels of Figure 5 and Figure 6 for V_H = 50 km s⁻¹ and V_H = 250 km s⁻¹, respectively. Also, since u_0 < V_{g,c} ∼ V_H and A_0 < A_c, from Equation 45 a lower limit on the footpoint density is given by ρ_0 c_s²/Π_0 > (2/3)(c_s/V_H)². This limit is roughly shown with a dashed horizontal line in Figure 5 and Figure 6.
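Since Equations 44–45 above are reconstructions from the surrounding text, the following sketch should be read as approximate: it inverts Equation 45 for the footpoint launch speed and then evaluates the Equation 44 mass flux, assuming A_c/A_0 = 1 and illustrative inputs.

```python
# Invert Eq. 45 for u0/cs, then Eq. 44 for the scaled mass flux (Ac/A0 = 1).
def u0_over_cs(Pi0_over_rho0cs2, Vgc_over_cs):
    """u0/cs = [(2/3) * (Pi0/(rho0 cs^2)) * (Vgc/cs)^(-5/3)]^3."""
    return ((2.0 / 3.0) * Pi0_over_rho0cs2 * Vgc_over_cs ** (-5.0 / 3.0)) ** 3

def mass_flux(Pi0_over_rho0cs2, Vgc_over_cs):
    """Sigma_dot * cs / Pi0 = (rho0 cs^2 / Pi0) * (u0 / cs)."""
    return u0_over_cs(Pi0_over_rho0cs2, Vgc_over_cs) / Pi0_over_rho0cs2

# Milky-Way-like example: V_gc = 25 cs (250 km/s), Pi0/(rho0 cs^2) = 300:
print(u0_over_cs(300.0, 25.0))   # ~0.8, i.e. u0 ~ 8 km/s
print(mass_flux(300.0, 25.0))    # ~0.003, within the quoted range
```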
The scaling relation in Equation 45 can be compared to the dependence of the footpoint pressure ratio Π_0/(ρ_0 c_s²) on V_H and u_0 found in our numerical wind solutions. Figure 9 shows the dependence of Π_0/(ρ_0 c_s²) on V_H for actual solutions of the wind equation with a series of u_0 values, compared to the analytic estimate Equation 45 taking A_c/A_0 = 1. Evidently, the analytic prediction is in quite good agreement with the numerical results. Figure 9 also shows that the solutions are insensitive to the footpoint radius. Dimensional analysis would suggest that if the momentum flux associated with the cosmic ray footpoint pressure, Π_0, is directly transferred to momentum flux in a wind with characteristic velocity ∼ V_H and density ∼ ρ_0, then one would naively expect Π_0/ρ_0 ∼ V_H². Red lines in each panel of Figure 9 show that this naive expectation is not bad as a zeroth-order estimate, but that it increasingly fails to fit the true wind solutions at high V_H and low u_0. Instead, the prediction Π_0/ρ_0 ∼ (3/2) V_{g,c}^{5/3} u_0^{1/3} of Equation 45 captures the joint dependence on V_H and u_0.

Equation 45 can be rearranged to provide an estimate for the "carrying capacity" mass flux in a galactic disk wind that originates in a coronal region where the cosmic ray pressure is Π_0 and ISM material at T ∼ 10^4 K is fed from below by a supernova-driven fountain flow with velocity u_0. This carrying capacity is

Σ̇ = (2/3) Π_0 u_0^{2/3} V_{g,c}^{−5/3} (A_c/A_0)^{1/3}.   (46)

Of course, u_0 < V_{g,c} ∼ V_H, so Σ̇ ≲ Π_0/V_H for low-velocity halos. We compare the carrying capacity to Σ̇_z in the top left panel of Figure 7. The difference is at most 10% for R_0 = 1 kpc, a factor of 2 for R_0 = 4 kpc, and a factor of 7 for R_0 = 16 kpc. The variation for different R_0 is primarily due to the geometric factor ŝ · ẑ. For smaller R_0, the streamline following the gravitational potential starting at z = 1 kpc is more vertical, whereas distant R_0 have more radial streamlines.

The relation in Equation 46 shows that winds driven by cosmic ray pressure are not expected to follow either the "momentum" (Σ̇_z ∝ V_H⁻¹) or "energy" (Σ̇_z ∝ V_H⁻²) scalings that have commonly been adopted in "subgrid" wind models in galaxy formation simulations (Somerville & Davé 2015). Instead, the scaling with V_H is intermediate between these two limits, and an additional dependence on the "feeding" velocity u_0 is also present. We emphasize that the far-field wind velocity does, however, scale nearly linearly with V_H, as shown in Figure 7. Finally, we remark that Equation 46 is the carrying capacity for winds driven by cosmic ray pressure, but more generally, for any driving effective pressure P_e, Equation 32 will still hold for C_eff² → dP_e/dρ (see Equation 30), and C_{eff,c}² = u_c² = V_{g,c}² must still hold at the critical point. If P_e ∝ ρ^{γ_e}, then for a galactic wind with V_{g,c} ∼ V_H the generalization of Equation 46 is

Σ̇ = γ_e P_{e,0} u_0^{γ_e} V_H^{−(1+γ_e)} (A_c/A_0)^{1−γ_e},   (47)

where P_{e,0} is the driving pressure at the footpoint. With C_eff² = γ_e (P_{e,0}/ρ_0)(ρ/ρ_0)^{γ_e−1}, 0 < γ_e < 1 is required for C_eff to increase with distance such that a steady, accelerating wind is able to make a critical transition in a V_g ∼ V_H = const galactic potential.⁵ Cosmic-ray driven winds have γ_e ≈ γ_cr/2 = 2/3 (inside the critical point). Equation 47 shows that any simple pressure-driven galactic disk wind will have a dependence on V_H between the "momentum-driven" and "energy-driven" scalings, i.e., ∝ V_H^{−(1+γ_e)} with 1 < 1 + γ_e < 2.
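A compact implementation of the carrying-capacity scaling (Equations 46–47 as reconstructed above), with γ_e = 2/3 for streaming-dominated cosmic rays; all inputs are illustrative and the prefactor should be treated as approximate.

```python
# Carrying-capacity estimate in units of Pi0/cs (dimensionless).
def scaled_capacity(u0, V_H, cs=10.0, gamma_e=2.0/3.0, Ac_over_A0=1.0):
    """Sigma_dot * cs / Pi0 = gamma_e * cs * u0^ge * V_H^-(1+ge) * (Ac/A0)^(1-ge),
    with all velocities in km/s."""
    return gamma_e * cs * u0**gamma_e * V_H**(-(1.0 + gamma_e)) \
        * Ac_over_A0**(1.0 - gamma_e)

print(scaled_capacity(50.0, 250.0))   # Milky-Way-like halo: ~0.009
print(scaled_capacity(50.0, 50.0))    # dwarf halo: ~0.13, much larger
```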
In contrast to the case of a galactic disk wind with an extended potential, a wind from a point mass (or any truncated mass distribution) has V_g decreasing outward ∝ r^{−1/2}, so that a steady wind with a critical transition may have C_eff also decrease outward, compatible with γ_e ≥ 1. This is a key distinction between pressure-driven Parker-type winds (which would include quasi-spherical galactic center winds for which the halo potential is unimportant) and galactic disk winds (see Figure 2).

Angular momentum and magnetic field dependence

Angular momentum of the flow has a small effect on the wind. The centrifugal force produces acceleration in the rotating frame in the R̂ direction, and as shown in § 2.1 this effect can be incorporated in an effective potential Ψ. The centrifugal force partly compensates for the inward force of gravity, which near the disk is primarily in the −R̂ direction. Since we assume streamlines follow the gradient of the effective potential, and angular momentum reduces the gradient of Ψ in the R̂ direction, the resulting streamlines are more vertical at higher J. This effect is shown in Figure 1. Since angular momentum opposes inward gravitational acceleration, it decreases V_g along the streamline. We do not explore large angular momentum J > 0.5 R_0 V_H because the effective potential produces a gradient that would be unrealistic for streamlines, turning around towards R = 0 at large z. For a nearly vertical streamline with large J, at z ≫ R the gravitational and centrifugal components of the effective potential gradient (which is related to the streamline direction ŝ = ∇Ψ/|∇Ψ| by assumption) respectively drop off as ẑ/z and −R̂ R_0²/R³. Since R remains roughly constant and z is increasing, the centrifugal term eventually dominates the streamline. This leads to a streamline which unrealistically turns towards R = 0 at large z if J > J_max. We numerically determine the maximum value J_max for each value of R_0 in Figure 1 and note that typically J_max > 0.5 R_0 V_H. Hence, we avoid those values. This consideration determines the range of streamlines depicted in Figure 1.

For a given mass flux Σ̇ = ρ_0 u_0 along streamlines, the mass-loss rate per unit area in the disk Σ̇_z is lower by a factor ŝ · ẑ = [(dR/dz)² + 1]^{−1/2}. More vertical streamlines, with smaller dR/dz, therefore have a larger Σ̇_z, other things being equal. By examining Figure 1, this effect is small for small footpoint radii R_0, since the fractional change in ŝ · ẑ is small for varying J.

Figure 11 shows results for mass-loss rates in two different halo potentials, at a range of footpoint locations, for varying angular momentum parameter J. The top panels show that the mass-loss rate per unit area in the disk depends more strongly on R_0 (and corresponding streamline geometry) than on the angular momentum J. The bottom panels show that larger R_0 cases correspond to larger Σ̇ (because V_g is slightly smaller at the critical point; see § 3.4). In comparison, the top panels show that the geometric effect is strong enough to reverse this trend for Σ̇_z = ŝ·ẑ Σ̇, with larger R_0 yielding smaller Σ̇_z. Note that increasing J decreases the upper limit on u_0 for which there is an accelerating solution.

[Figure 11 caption (partial): For all cases, Π_0 = B_0²/8π and u_0 = 50 km s⁻¹ at the footpoint at height z_0 = 1 kpc. Points with different colors correspond to streamlines with footpoint radii R_0 = 1, 4, 16 kpc. The mass-loss rate increases slightly with J, but overall the effect of rotation is modest.]
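The geometric projection factor ŝ·ẑ = [1 + (dR/dz)²]^{−1/2} that converts the along-streamline flux Σ̇ to the per-unit-disk-area rate Σ̇_z is straightforward to encode; the slopes below are illustrative.

```python
# Projection of the streamline mass flux onto the disk plane.
import math

def s_dot_z(dRdz):
    """Projection factor s_hat . z_hat for a poloidal streamline slope dR/dz."""
    return 1.0 / math.sqrt(1.0 + dRdz**2)

def sigma_dot_z(sigma_dot, dRdz):
    """Mass-loss rate per unit disk area from the along-streamline flux."""
    return s_dot_z(dRdz) * sigma_dot

# Nearly vertical (inner-disk) vs strongly inclined (outer-disk) streamlines:
for slope in (0.1, 1.0, 4.0):
    print(f"dR/dz = {slope:3.1f}  ->  s.z = {s_dot_z(slope):5.3f}")
```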
For example, at large values of J/(R_0 V_H) and fixed u_0 = 50 km s⁻¹, accelerating solutions exist for high-V_H halos but not low-V_H halos, as evident in Figure 11.

In this work, we have ignored the toroidal component of the magnetic field and any associated magnetic stresses. Work by Zirakashvili et al. (1996) includes these magnetic forces in a rotating galaxy, finding that increasing the magnetic field strength by a factor of 3 leads to roughly 1.4–2 times more mass loss. If we included magnetic forces, they would provide an additional acceleration that could increase u², by an amount that depends on the Alfvén Mach number M_A and the azimuthal velocity u_φ. Both M_A and u_φ are small inside the critical point for the winds we study, so the acceleration from magnetic pressure forces would be small. Since we do not include magnetic forces, the magnetic field only affects winds through the value of the Alfvén speed v_A (associated with the poloidal field component), which controls the streaming rate of cosmic rays. This in turn affects the evolution of C_eff, which must increase relative to V_g to produce a critical point where C_eff = V_g. To have C_eff increase outward, v_A/u > 0.64 is required (see Equation 35). Since v_A relative to u only determines the effective adiabatic index of the cosmic ray fluid, v_A does not directly appear in the scaling relation Equation 45 (for sufficiently large v_A), and therefore the wind is expected to depend only weakly on the strength of the magnetic field B.

We find that in wind solutions the magnetic field strength at the base of the flow (B_0) indeed has a relatively small effect on the wind properties. This is evident in Figure 12, in which changing the magnetic pressure by three orders of magnitude leads to less than order-unity change in the mass-loss rate. This is also evident in the top left panel of Figure 5 and Figure 6. At smaller magnetic field strengths, increasing B_0 leads to increased mass loss, since a larger v_A allows a larger u_0 under the constraint that v_A/u must be large enough to produce an accelerating wind with a sonic transition.

Implications for Mass Loading of Galactic Winds

Mass fluxes for our wind solutions are all given in units of Π_0/c_s, with values in the range ∼ 0.001–0.1 in these units (see Figure 5, Figure 6, Figure 7, Figure 11, Figure 12). The physical value of the mass flux therefore depends on the cosmic ray pressure (or energy density) in the region where the wind originates. Consider as an example the Solar neighborhood, where the local cosmic ray pressure is P_cr ∼ 0.6 eV cm⁻³ (Grenier et al. 2015). Using c_s = 10 km s⁻¹, the dimensional factor for the mass-loss rate would be Π_0/c_s → P_cr/10 km s⁻¹ ∼ 0.15 M_⊙ kpc⁻² yr⁻¹. For Σ̇_z c_s/Π_0 ∼ 0.004, as might be appropriate for the Solar neighborhood with u_0 = 50 km s⁻¹ (see Figure 7), the result is Σ̇_z ∼ 5 × 10⁻⁴ M_⊙ kpc⁻² yr⁻¹. The corresponding footpoint number density of the wind at z = 1 kpc would be n_0 = 8 × 10⁻⁴ cm⁻³ (assuming a mean molecular weight of 1.4 m_H). This mass-loss rate is ∼ 20% of the observed star formation rate estimated in the Solar neighborhood, 2.5 × 10⁻³ M_⊙ kpc⁻² yr⁻¹ (Fuchs et al. 2009). More generally, we showed that the "carrying capacity" estimate in Equation 46 follows the numerical results quite well, especially for small R_0 (see Figure 7), so it is useful to rewrite it in dimensional form (Equation 48). Lower halo velocity V_H or higher feeding velocity u_0 increases the mass-loss rate.
The corresponding density of hydrogen nuclei in the wind at the footpoint in the launching region (above the main ISM disk) is given by Equation 49. Note that this density is much lower than the typical midplane density of both the cold and warm ISM, but based on numerical simulations (e.g., Kim & Ostriker 2017, submitted) is similar to mean densities of warm "fountain" gas in galactic disk corona regions.

Mass loss in galactic winds is often characterized in terms of the "mass loading," defined as the ratio of the local wind mass-loss rate to the local star formation rate, β ≡ Σ̇_wind/Σ_SFR, where Σ̇_wind = Σ̇_z in the present notation. Because cosmic rays are produced in the supernova remnants associated with explosions from young, massive stars, the cosmic ray pressure at the disk midplane likely scales with the star formation rate, P_cr = η_cr Σ_SFR. For the Solar neighborhood, η_cr ∼ 600 km s⁻¹. Other components of the midplane pressure, including the thermal pressure and turbulent kinetic and magnetic pressures, are expected to be proportional to Σ_SFR (Ostriker et al. 2010; Ostriker & Shetty 2011) with respective "feedback yield" coefficients η_th, η_turb, η_δB, etc., that can be computed with detailed numerical simulations of the ISM including star formation and feedback (Kim et al. 2011, 2013; Kim & Ostriker 2015), such that the total pressure is P_tot = η_tot Σ_SFR. Assuming Π_0 is comparable to the midplane cosmic ray pressure, we then have for the predicted mass-loading factor for cosmic-ray driven winds

β = (Σ̇_z c_s/Π_0)(η_cr/c_s),   (50)

which, using the carrying capacity of Equation 46 (and taking ŝ·ẑ ≈ 1), becomes

β ≈ (2/3) η_cr u_0^{2/3} V_H^{−5/3} (A_c/A_0)^{1/3}.   (51)

In applying Equation 50, numerical results for Σ̇_z c_s/Π_0 can be drawn from the figures, while Equation 51 comes from Equation 46. Assuming η_cr/c_s ∼ 100, the mass-loading factor for cosmic-ray driven winds will exceed unity when Σ̇_z c_s/Π_0 ≳ 0.01. From the top-left panels of Figure 5 and Figure 6, the mass loading is order unity or higher near equipartition (B_0²/(8πΠ_0) ∼ 1) for sufficiently low ρ_0, which corresponds to high u_0 (top-right panels of Figure 5 and Figure 6). From Equation 51, mass loading for cosmic-ray driven winds is expected to exceed unity in dwarf galaxies where V_H ≲ 200 km s⁻¹, provided u_0 ∼ 50 km s⁻¹ is consistent with galactic fountain flows that carry gas into the corona (see e.g. Kim & Ostriker 2017, submitted).

Finally, we emphasize that Equations 46, 48, and 50 represent carrying capacities, and hence are upper limits for the mass flux or mass loading of a cosmic-ray driven warm-gas wind that originates in disk corona regions and is fed by a galactic fountain from below. Of course, the wind mass-loss rate cannot exceed the mass feeding rate from below. While the general dependence of the fountain mass flux on local disk parameters is not presently known, current numerical MHD simulations of supernova-driven outflows do show a mass-loading factor of the warm fountain near unity at a height of a few times the warm-ISM scale height (Kim & Ostriker 2017, submitted; see also Martizzi et al. (2016) and Li et al. (2017)).

The asymptotic specific energy of the gaseous wind is (1/2)V_∞², where V_∞/V_H ∼ 2 from Figure 7. This implies that the asymptotic energy loading of the wind (defined as the ratio of wind energy to energy injected by supernovae) is then ∼ (V_H/500 km s⁻¹)² times the mass-loading factor, where we have assumed 100 M_⊙ in stars are formed for every 10⁵¹ erg of energy injected by supernovae.
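Putting the mass-loading pieces together, here is a sketch of the carrying-capacity form of β (Equation 51 as reconstructed above), using the Solar-neighborhood η_cr ∼ 600 km s⁻¹ quoted in the text and an assumed u_0 = 50 km s⁻¹; the prefactor inherits the uncertainty of the reconstruction.

```python
# Mass-loading estimate beta ~ (2/3) * eta_cr * u0^(2/3) * V_H^(-5/3),
# all velocities in km/s (Ac/A0 = 1 assumed).
def beta_capacity(u0, V_H, eta_cr=600.0):
    return (2.0 / 3.0) * eta_cr * u0**(2.0 / 3.0) * V_H**(-(5.0 / 3.0))

for V_H in (50.0, 100.0, 200.0, 300.0):
    print(f"V_H = {V_H:5.0f} km/s  ->  beta <= {beta_capacity(50.0, V_H):6.2f}")
# Mass loading crosses unity near V_H ~ 200 km/s, as stated in the text.
```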
Using the fiducial η_cr and u_0 in Equation 51, this yields an energy loading less than ∼ 10% for V_H < 300 km s⁻¹, as must be the case if the wind ultimately derives its power from cosmic rays that are accelerated in supernova remnants. However, we caution that η_cr in Equation 51 need not be a constant, and it is not known how it may depend on local ISM properties. The energy flux in cosmic rays at the base of the wind is (u_0 + v_{A,0}) 3Π_0 ẑ·ŝ. If we assume that this is of order 10% of the energy input rate from supernovae, (700 km s⁻¹)² Σ_SFR, this places a practical upper limit on the product (u_0 + v_{A,0}) η_cr ẑ·ŝ.

SUMMARY AND DISCUSSION

In this paper, we have used one-dimensional (1D) steady-state models to explore the properties of galactic disk winds driven by cosmic ray pressure. In contrast to previous studies of cosmic-ray driven disk winds using steady-state 1D idealizations (Ipavich 1975; Breitschwerdt et al. 1991), we adopt a streamline shape that is specifically motivated by "downhill" flow in a realistic galactic effective potential (including bulge, disk, halo, and a centrifugal term). Also, as our main interest is in understanding how large quantities of relatively cold gas may be accelerated to escape from a deep potential well, we adopt an isothermal equation of state with c_s = 10 km s⁻¹ (T ∼ 10^4 K) for which thermal pressure forces are negligible and cosmic ray pressure forces provide the needed acceleration, rather than considering hot outflows (as from galactic center starburst regions) that are driven by both thermal and cosmic ray pressure (e.g. Everett et al. 2008).

1. A key feature of winds driven by cosmic ray pressure is that the square of the effective sound speed C_eff² = dΠ/dρ increases ∝ ρ^{−1/3} with decreasing ρ when v_A/u is sufficiently large (see Equation 35 and § 3.4), and generally increases relative to the squared gravitational velocity V_g² = d_sΨ/d_s ln A inside the critical point (see Appendix B). In contrast, an adiabatic thermal wind cools as it expands and ρ drops, so that the sound speed strictly decreases outward as C_eff² ∝ ρ^{γ_e−1} for γ_e > 1. Thermal-pressure driven galactic disk winds face an inherent challenge, as V_g must decrease faster than C_eff in order to make a steady sonic transition, but a galactic potential including an extended dark matter halo has a nearly flat rotation curve with V_g² ∼ V_H² out to large radii. For cosmic rays, C_eff² increases outward because streaming at the Alfvén speed implies Π ∝ n_cr^{γ_cr} ∝ (v_A A)^{−γ_cr} ∝ ρ^{γ_cr/2} ∝ ρ^{2/3}.

2. Figure 2 shows the characteristic differences between galactic winds driven by cosmic ray pressure and classical Parker stellar winds, while Figure 3 and Figure 4 show examples of our full numerical solutions. After making a sonic transition, where C_{eff,c} = u_c = V_{g,c} ∼ V_H, acceleration slows and u flattens out, while v_A declines rapidly and C_eff declines slowly at large distance.

3. For B_0²/(8πΠ_0) = 1 and footpoint velocity u_0 = 50 km s⁻¹, over the full range of V_H we find that the mass-loss rate Σ̇_z = ρ_0 u_{0,z} ∼ (0.01–0.1) Π_0/c_s (decreasing at larger V_H and increasing slightly with R_0), the critical point is close to the disk, z_c ∼ 1–5 kpc (increasing linearly with V_H) with v_{A,c} ∼ V_H, and at the virial radius u is 2–3 times V_H (Figure 7).

4. We show that our numerical integration results are in good agreement with a simple analytic prediction relating footpoint properties of "successful" steady wind solutions to the halo velocity, as Π_0/(ρ_0 u_0^{1/3}) ≈ (3/2) V_{g,c}^{5/3} (Equation 45).
The footpoint velocity u 0 that enters the mass-loss estimate is presumably limited by the supernova-driven fountain flow that carries gas from the midplane to the "coronal" region above the disk. For galaxies with potentials similar to the Milky Way, Equation 51 suggests that the mass-loss rates for winds driven by cosmic ray pressure will be only slightly lower than the star formation rates. Mass loss could significantly exceed star formation for dwarf galaxies. An interesting feature of cosmic ray driven winds is their dependence on the halo velocity. Whereas nominally the wind mass loading β =Σ wind /Σ SFR ∝ V −1 H for "momentum driven" winds and β ∝ V −2 H for "energy driven" winds (Murray et al. 2005;Somerville & Davé 2015), Equation 51 argues that β ∝ V −5/3 H for galactic disk winds driven by cosmic ray pressure. This power law is in between the "momentum" and "energy" scalings, is intriguingly similar to that in observations by Chisholm et al. (2017), and is also consistent with other observations (see § 1). We remark that more generally, steady galactic disk winds driven by any gamma-law pressure force would have β ∝ V −(γe+1) H for 0 < γ e < 1. Our work has several limitations. For example, we do not include cosmic ray diffusion, and we do not include effects of magnetic pressure or tension forces on the flow. We also do not model the winds from a full disk but rather individual non-interacting streamlines. A full disk would have non-uniform structure and a distribution of cosmic ray pressures, gas densities, and launching velocities from gas motions. Our model is unable to incorporate possible effects of interaction between streamlines. Furthermore, we treat the gas as a single-phase medium, but in reality the warm medium in galactic disk coronal regions at z > ∼ kpc would have a volume filling factor below unity, with "warm fountain" gas intermixed with hot gas (e.g. Kim & Ostriker 2017, submitted). The effects of volume filling factor on mass loss are uncertain, especially as cosmic ray pressure forces on the gas are mediated by the interaction of both the cosmic rays and gas with magnetic fields. To move beyond these limitations will require full numerical MHD simulations of a multiphase ISM, including self-consistent star formation and feedback, with a cosmic ray treatment that includes streaming at the Alfvén speed along magnetic field lines. While our models are idealized in many respects, our results provide evidence that cosmic-ray driven winds may be quite important to the evolution of galaxies, especially at V H < ∼ 200 km s −1 . Our analysis makes clear the distinctive physics behind cosmic-ray driven winds, also providing scaling relations that may prove useful for tests of and comparisons to fully three-dimensional numerical implementations. With the possibility that cosmic ray pressure may drive more mass out of dwarf galaxies than is locked up in stars, there is strong motivation to include a realistic treatment of cosmic rays in future galaxy formation simulations. ACKNOWLEDGMENTS We are grateful to the referee, Ellen Zweibel, for an insightful report, and Eliot Quataert for helpful suggestions. This work was supported by the National Science Foundation under grant AST-1312006 and NASA under grant NNX17AG26G to ECO, and grant DGE-1148900 providing a Graduate Research Fellowship to SAM. A. ION NEUTRAL DAMPING To estimate the effect of ion-neutral damping we compare the streaming instability growth rate with the ion neutral damping rate. 
The growth rate is (Kulsrud & Pearce 1969)

Γ_CR ∼ Ω_0 (n_CR/n_i) f_D,   (A1)

for ion cyclotron frequency Ω_0, cosmic ray number density n_CR, ion number density n_i (corresponding to mass density ρ = μn_i), and mean drift velocity of the cosmic ray distribution v_D. We write f_D = (v_D/v_A) − 1. We note that

n_CR/n_i ∼ Π/(ρc²)   (A2)

and use Equation 45, so that at the base of the wind

n_CR/n_i ∼ (3/2) V_{g,c}^{5/3} u_0^{1/3} / c².   (A3)

The damping rate is (Kulsrud & Pearce 1969)

Γ_in = n_n ⟨σv⟩ / 2,   (A4)

for neutral number density n_n and rate coefficient ⟨σv⟩, where we assume a mostly-ionized medium. From Kulsrud & Cesarsky (1971), ⟨σv⟩ = 1.53 to 8.40 × 10⁻⁹ cm³ s⁻¹ for T = 100 to 10⁴ K. Setting Γ_CR > Γ_in as the condition for ion-neutral damping to be ignored, this requires

n_n < 2 Ω_0 (n_CR/n_i) f_D / ⟨σv⟩.   (A5)

This says that ion-neutral damping may be neglected provided that n_n/f_D is not too large. If n_n is small, that means that f_D can also be very small (i.e., v_D → v_A); larger n_n would require larger drift. A lower estimate, taking f_D ∼ 1, V_H = 50 km s⁻¹, and a rate coefficient of 10⁻⁸ cm³ s⁻¹, gives n_n ≲ 0.03 cm⁻³. This is easily satisfied for the parameter regime we consider, since even the ion density is only ∼ 10⁻³ cm⁻³ for Σ̇_z ∼ 10⁻³ M_⊙ kpc⁻² yr⁻¹ (see also Equation 49 more generally). The mass-loss rate would have to be very high, and the neutral fraction very large, for ion-neutral damping to be significant. Finally, we note that for primarily-neutral gas in higher density clouds, ion-neutral collisional damping is much stronger, and cosmic rays are therefore expected to stream rapidly through such clouds, whether within the ISM or in galactic winds (Everett & Zweibel 2011).

Since C_eff² − c_s² is positive and d_s ln ρ < 0 (ρ decreases outward), C_eff will increase outward (d_s C_eff² > 0) provided that the sign of the right-hand side is negative. For γ_cr = 4/3, this is true for v_A/u > 0.64. The linear dependence of ρ⁻¹ on u and A also allows us to simplify our treatment of the sonic transition (§ 2.4) from Equation 36. That is, (d_s A)∂_A = (A d_s ln A)∂_A = (A d_s ln A)(∂_A ρ⁻¹)∂_{ρ⁻¹} = −(d_s ln A)∂_{ln ρ}. Similarly, ∂_u = (ρ⁻¹/u)∂_{ρ⁻¹} = −(1/u)∂_{ln ρ}. The partial derivatives with respect to u assume holding A constant and vice versa. Solving Equation 36 for d_s u, the behavior of the wind at the critical transition is then given by a quadratic,

a (d_s u)² + b (d_s u) + c = 0,   (B7)

with a solution

d_s u = [−b − (b² − 4ac)^{1/2}] / (2a).   (B8)

Since C_eff changes slowly, a < 0. Hence, there is an accelerating wind passing through the sonic point whenever b > 0, so that d_s u > 0. This corresponds to when ∂_{ρ⁻¹} C_eff > 0. However, it is also possible to attain d_s u > 0 when b < 0, as long as −4ac > 0, so that the discriminant exceeds b². This corresponds to −∂_{ln ρ} C_eff d_s ln A > d_s V_g. This is simply a mathematical demonstration of the qualitative property that the wind begins with u_0 < C_{eff,0} < V_{g,0} and evolves so that eventually V_g < C_eff < u. In order for C_eff and V_g to change order, C_eff must increase relative to V_g. This concept is roughly illustrated in Figure 2. For a typical galactic potential with a nearly flat rotation curve, V_g slightly decreases and is nearly constant. Thus, it is sufficient for −∂_{ln ρ} C_eff > 0, so at the critical transition point it is necessary for v_A ≳ 0.64 u. Before this point, since v_A/u is strictly decreasing, v_A/u > 0.64 throughout the evolution of an accelerating wind with a smooth sonic transition.
Another possible family of accelerating solutions to Equation B7 under the assumption a < 0 is obtained by taking the positive sign of the radical in Equation B8. This requires b > 0, so that −∂_{ln ρ} C_eff > 0 and hence d_s C_eff > 0, and simultaneously −4ac < 0, so that d_s V_g > −∂_{ln ρ} C_eff (d_s ln A) > 0. Again, since b > 0, this leads to v_A ≳ 0.64 u. For such sonic point conditions, two branches of solutions are possible, but this second branch of solutions only occurs for gravitational potentials where V_g is increasing.
Evaluation of fat sources (lecithin, mono-glyceride and mono-diglyceride) in weaned pigs: Apparent total tract and ileal nutrient digestibilities

Background
This study was conducted to investigate the effects of lecithin, mono-glyceride and mono-diglyceride on apparent total tract and ileal nutrient digestibilities in nursery pigs. Twenty [(Landrace × Yorkshire) × Duroc] barrows were surgically fitted with simple T-cannulas. Dietary treatments included 1) CON (basal diet: soy oil), 2) LO (lecithin 0.5%), 3) MO (mono-glyceride 0.5%), 4) MG (mono-glyceride 1.0%) and 5) MDG (mono-diglyceride 1.0%).

Results
In apparent total tract nutrient digestibility, dry matter (DM) and gross energy (GE) digestibilities of the MDG treatment were higher than those of the LO and MG treatments (p<0.05). In nitrogen (N) digestibility, the LO treatment showed the lowest value compared to the others (p<0.05). The digestibility of crude fat was higher in the MDG treatment than in the CON and LO treatments (p<0.05). In apparent ileal nutrient digestibility, DM digestibility was higher in the MDG treatment than in the LO and MG treatments (p<0.05). GE digestibility was higher in the MDG treatment than in the LO, MO and MG treatments (p<0.05). N digestibility of the MDG treatment was greater than that of the LO treatment (p<0.05). Also, the digestibility of crude fat was higher in the MDG treatment than in the CON and LO treatments (p<0.05).

Conclusion
In conclusion, mono-diglyceride can increase apparent total tract and apparent ileal nutrient digestibilities of DM, GE, N and crude fat.

Introduction
Dietary fat utilization by nursery pigs, especially during the first week after weaning, is limited due to insufficient digestion and fat absorption (Cera et al., 1988). The capacity of the small intestine to absorb micellar lipid exceeds the normal influx into the gut in piglets (Freeman et al., 1968). Therefore, entry of fatty acids into the micellar phase probably limits fatty acid digestibility (Bayley & Lewis, 1963). During the post-weaning period, it is important to provide a highly digestible fat source. Lecithin (phosphatidyl choline) is a phospholipid that is extracted commercially from soybeans and promotes the incorporation of fatty acids into micelles. Lecithin increased the apparent digestibility of dietary fat in diets fed to calves and humans (Aldersberg & Sobotka, 1943; Hopkins et al., 1959). However, dietary lecithin had no influence on apparent ileal or overall digestibility in the small intestine and whole digestive tract of pigs (Overland et al., 1993). Monoglycerides are absorbed and utilized directly in the small intestine (Mattson & Beck, 1956). A diet containing monoglyceride increased fat digestibility; however, there were no significant differences in digestibilities of DM (dry matter), N (nitrogen) and DE (digestible energy) (Min et al., 2006). Also, monoglyceride improved absorption of palmitic acid in chickens (Garrett & Young, 1975). The addition of lecithin or monoglyceride may enhance the utilization of fat as well as provide a highly digestible energy source. Therefore, the objective of this study was to determine the effects of lecithin and monoglyceride on the apparent ileal and total tract DM, N, GE (gross energy) and crude fat digestibilities in weaning pigs.

Materials and Methods
The Animal Care and Use Committee of Dankook University approved all of the experimental protocols used in the current study.

Experimental design and diets
Pigs were blocked by initial body weight and randomly allocated to one of five dietary treatments in a randomized complete block design.
There were five replications per treatment. Pigs were adapted to the experimental diets for 4 days, with 2 d (12 h/d) of ileal digesta collection. The daily feed intake allowance was 0.05 × BW^0.9, as proposed by Armstrong & Mitchell (1955). The daily feed allotment was offered as two meals at 12 h intervals (8:00 a.m. and 8:00 p.m.). Dietary treatments included 1) CON (basal diet: soy oil), 2) LO (lecithin 0.5%), 3) MO (mono-glyceride 0.5%), 4) MG (mono-glyceride 1.0%) and 5) MDG (mono-diglyceride 1.0%). Diet composition is shown in Table 1. The diets were formulated to meet or exceed the nutrient requirements recommended by NRC (1998). The fatty acid composition of lecithin, monoglyceride and mono-diglyceride is presented in Table 2. Experimental diets were formulated to contain 3,400 kcal/kg of ME, 21.50% of CP and 1.42% of lysine. Chromic oxide was added (0.2% in the diet) as an indigestible marker to allow digestibility determinations. Pigs were allowed to consume water ad libitum from a nipple waterer.

Sampling and measurements
Ileal digesta and feces were collected during the 12 h period between the morning and evening feedings for the last 4 d (2 d ileal and 2 d fecal) of the collection period. Ileal digesta were collected into plastic bags attached to the cannulas. Every 20 min the digesta were emptied into plastic containers and placed over ice. The collected digesta were pooled and frozen until being lyophilized and ground. Feed, fecal and ileal digesta were analyzed for DM, N and fat concentration (AOAC, 1995). Chromium was determined by UV absorption spectrophotometry (Shimadzu, UV-1201, Japan), and apparent ileal digestibilities of DM and N were calculated using the indirect method. Gross energy was analyzed by oxygen bomb calorimeter (Parr, 6100, USA).

Statistical analysis
In this experiment, all data were analyzed in accordance with a randomized complete block design using the GLM procedures of SAS (1996), with each pen comprising one experimental unit. Data were compared according to the means of treatments via Duncan's multiple range test (Duncan, 1955).

Apparent total tract nutrient digestibility
In apparent total tract nutrient digestibility (Table 3), DM and GE digestibilities of the MDG treatment were higher than those of the LO and MG treatments (p<0.05). N digestibility was the lowest in the LO treatment compared to the others (p<0.05). The digestibility of crude fat was higher in the MDG treatment than in the CON and LO treatments (p<0.05). Vegetable oil has higher digestibility than animal fat during the initial weeks of the post-weaning period (Cera et al., 1989). Lecithin is a phospholipid that is extracted commercially from soybeans. There was no interaction between lecithin and soy oil on the apparent digestibility of fat in piglets, and lecithin had no significant effect on the apparent digestibility of DM, GE and N (Overland et al., 1993). However, there was a greater percentage of N retained as a percentage of intake. In this experiment, the lecithin treatment did not show a significant difference in apparent nutrient digestibility of DM and GE compared to the soybean oil treatment, while N digestibility was decreased, which matches the study by Jones et al. (1992). The results from the research mentioned above indicated that N digestibility was reduced for pigs fed tallow plus lysolecithin. When unsaturated fatty acids were added with monoglyceride, the absorption of long-chain saturated fatty acids was improved (Young & Garrett, 1963).
Similarly, in the MO and MG treatments, the apparent total tract digestibility of fat was significantly higher than in the control treatment.

Apparent ileal nutrient digestibility
In apparent ileal nutrient digestibility (Table 4), DM digestibility was higher in the MDG treatment than in the LO and MG treatments (p<0.05). GE digestibility was higher in the MDG treatment than in the LO, MO and MG treatments (p<0.05). N digestibility of the MDG treatment was greater than that of the LO treatment (p<0.05). Also, the digestibility of crude fat was higher in the MDG treatment than in the CON and LO treatments (p<0.05). Lecithin is a type of exogenous emulsifying agent for lipids. Its major function is emulsification, which means transforming fat into fat micelles. In this experiment, the LO treatment did not show a significant difference in apparent ileal nutrient digestibility of DM, GE and N compared to the soy oil treatment. These results are consistent with the studies of Jones et al. (1992), who used soybean oil, tallow, lard, and coconut oil, with lysolecithin as 10% of the added fat. Also, the overall digestibilities of DM, GE and N were not affected by lecithin supplementation, and lecithin could not increase the digestibility of soybean oil (Overland et al., 1993). This study showed that the apparent ileal nutrient digestibility of fat was not affected by the monoglyceride treatments. A diet containing monoglyceride (12.5 and 25%) showed increased fat digestibility compared to a diet containing dried palm oil powder, and no significant difference compared to a diet containing soy oil (Min et al., 2006). Also, the apparent N, DM, and GE digestibilities were not affected by treatments (soybean oil, tallow, lecithin and monoglyceride) (Jones et al., 1992). This experiment showed similar results, and there was no effect among the SO (soy oil), LO, MO and MG treatments on apparent ileal nutrient digestibilities of DM, GE, N and fat. Soy oil contains more unsaturated, long-chain fatty acids (Li et al., 1990), which makes it easier to enter fat micelles and be digested and absorbed. The apparent fat digestibility was greater in diets containing medium-chain triglyceride (MCT) or coconut oil compared to soybean oil or roasted soybean diets during the first 2 weeks post-weaning (Cera et al., 1990). In conclusion, mono-diglyceride could improve the apparent total tract and ileal nutrient digestibility of DM, GE, N and fat.
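For readers unfamiliar with the indirect method, the sketch below applies the standard index-marker (chromic oxide) digestibility formula that underlies the values reported in Tables 3 and 4; the concentrations are invented for illustration and are not data from this study.

```python
# Indirect (index-marker) apparent digestibility calculation.
def apparent_digestibility(nutrient_diet, nutrient_output, cr_diet, cr_output):
    """Apparent digestibility (%) via the indigestible Cr2O3 marker ratio."""
    return 100.0 * (1.0 - (cr_diet / cr_output) * (nutrient_output / nutrient_diet))

# Example: 0.2% Cr2O3 in the diet concentrating to 0.8% in feces, with
# dry-matter-basis nutrient contents of 21.5% (diet) and 30% (feces):
print(round(apparent_digestibility(21.5, 30.0, 0.2, 0.8), 1), "%")  # -> 65.1 %
```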
Torque equilibrium spin wave theory study of anisotropy and Dzyaloshinskii-Moriya interaction effects on the indirect K$-$ edge RIXS spectrum of a triangular lattice antiferromagnet We apply the recently formulated torque equilibrium spin wave theory (TESWT) to compute the $1/S$-order interacting $K$ -edge bimagnon resonant inelastic x-ray scattering (RIXS) spectra of an anisotropic triangular lattice antiferromagnet with Dzyaloshinskii-Moriya (DM) interaction. We extend the interacting torque equilibrium formalism, incorporating the effects of DM interaction, to appropriately account for the zero-point quantum fluctuation that manifests as the emergence of spin Casimir effect in a noncollinear spin spiral state. Using inelastic neutron scattering data from Cs$_2$CuCl$_4$ we fit the 1/S corrected TESWT dispersion to extract exchange and DM interaction parameters. We use these new fit coefficients alongside other relevant model parameters to investigate, compare, and contrast the effects of spatial anisotropy and DM interaction on the RIXS spectra at various points across the magnetic Brillouin zone. We highlight the key features of the bi- and trimagnon RIXS spectrum at the two inequivalent rotonlike points, $M(0,2 \pi/\sqrt{3})$ and $M^{\prime}(\pi,\pi/\sqrt{3})$, whose behavior is quite different from an isotropic triangular lattice system. While the roton RIXS spectrum at the $M$ point undergoes a spectral downshift with increasing anisotropy, the peak at the $M^\prime$ location loses its spectral strength without any shift. With the inclusion of DM interaction the spiral phase is more stable and the peak at both $M$ and $M^\prime$ point exhibits a spectral upshift. Our calculation offers a practical example of how to calculate interacting RIXS spectra in a non-collinear quantum magnet using TESWT. Our findings provide an opportunity to experimentally test the predictions of interacting TESWT formalism using RIXS, a spectroscopic method currently in vogue. I. INTRODUCTION In a recent publication Cheng et. al., Ref. 1, highlighted the features of the indirect K -edge resonant inelastic x−ray scattering (RIXS) bi-and trimagnon spectrum of an isotropic triangular lattice antiferromagnet (TLAF). The TLAF is known to possess a 120 • long range ordered state even after quantum fluctuations are considered [2][3][4][5][6][7][8][9][10][11][12][13][14]. The authors considered the self-energy corrections to the spin-wave spectrum to pinpoint the nontrivial effects of magnon damping and very weak spatial anisotropy on RIXS. It was shown that for a purely isotropic TLAF model, a multipeak RIXS spectrum appears which is primarily guided by the damping of the magnon modes. Interestingly enough it was demonstrated that the roton momentum point is immune to magnon damping (for the isotropic case) with the appearance of a single-peak RIXS spectrum. It was suggested that this feature could be utilized as an experimental signature to search for or detect the presence of roton like excitations in the lattice. However, including XXZ anisotropy leads to additional peak splitting, including at the roton wave vector. At present no theoretical guidance exists for experimentalists on how to interpret the RIXS spectrum of the ordered phase in a geometrically frustrated triangular lattice quantum magnet, though a proposal has been put forward to detect spin-chirality terms in triangular-lattice Mott insulators via RIXS [15]. 
Furthermore, as discussed in this article, the existing spin wave theory formulation used for the isotropic case fails beyond the isotropic point and when Dzyaloshinskii-Moriya (DM) interaction is included in the model. Lately, the nature of the ground and excited states of the TLAF has garnered some attention [16-25]. A high-magnetic-field phase diagram study of the TLAF has also been performed [26]. An appropriate theoretical treatment of interactions in a TLAF must consider spin wave quantum fluctuation effects [27]. Zero-point quantum fluctuations of a noncollinear ordered quantum magnet give rise to the spin Casimir effect [28,29]. As a spin analog of the Casimir effect in vacuum, the spin Casimir effect describes the various macroscopic Casimir forces and torques that can potentially emerge from a quantum spin system. The physical consequence of the Casimir torque, generated by the underlying lattice anisotropy, is a modification of the ordering wave vector, which becomes much smaller than its classical value. This modification can cause the spin spiral state to become unstable, in turn rendering the standard spin wave theory expansion (1/S-SWT) inapplicable. Thus, the generic interacting spin wave theory is not appropriate. To remedy the singular behavior (which is not a precursor to the onset of quantum disordered phases) that naturally arises in noncollinear systems due to the presence of the spin Casimir torque, Du et al. [28,29] proposed the torque equilibrium spin-wave theory (TESWT). The regularization scheme of the TESWT formalism removes the naturally occurring divergences within the interacting 1/S-SWT formalism of the anisotropic quantum lattice model. It was shown that TESWT gives a final ordering vector much closer to the results of the series expansion (SE) and modified spin wave theory (MSWT) methods [18,30]. Furthermore, its prediction of the phase diagram is consistent with previous numerical studies [18,30].

Historically, the concept of a roton minimum and a rotonlike point in the TLAF was introduced by Zheng et al. [31,32]. Using the SE method, the authors identified a local minimum in the magnon dispersion at the high-symmetry M point, (π, π/√3). Drawing an analogy with the similar dip (local minimum) observed in the excitation spectra of superfluid ⁴He [33] and the fractional quantum Hall effect [34], the authors proposed the "roton" nomenclature to describe the minimum in the magnon dispersion. The dip in the spectrum is also present at the other high-symmetry M point, (0, 2π/√3), in the middle of the magnetic Brillouin zone (MBZ) face edge. Zheng et al. noted that a roton minimum is absent in the linear spin wave theory (LSWT) spectrum. Thus, the occurrence of the rotonlike point is a consequence of quantum fluctuations arising in a frustrated magnetic material [35,36]. In a subsequent publication the concept of the rotonlike point was extended to the case of an anisotropic lattice by Fjaerestad et al. [27]. Additionally, a square lattice system with J′/J > 2 has also been predicted to support the roton minima [31,35]. Further support for the roton feature was provided by the 1/S-SWT study of Starykh et al. [37], on the basis of which it was proposed that rotons are part of a global renormalization (a weak local minimum), with large regions of (almost) flat dispersion.
The appearance of rotonlike minima, and of what was dubbed a roton excitation, has also been studied in an anisotropic spin-1/2 TLAF from the perspective of an algebraic vortex liquid theory [38,39]. Several anomalous roton minima were predicted in the excitation spectrum in the regime of lattice anisotropy where the canted Néel state appears. From the perspective of the algebraic vortex liquid theory, formulated in terms of fermionic vortices in a dual field theory, it was proposed that the roton is a vortex-antivortex excitation, thereby lending credence to the use of the word roton as an apt description. Rotons have also been predicted to exist in field-induced TLAF magnetic systems [40]: the field-induced transformations in the dynamical response of the XXZ model create rotonlike minima at the K point. Experimental evidence of the rotonlike point can be found in the recent inelastic neutron scattering (INS) spectrum of the α-CaCr₂O₄ system [10,11]. Examples of TLAFs in which anisotropy and DM interaction are present abound [6-12, 27, 41-43].

With advancements in the instrumental resolution of next-generation synchrotron radiation sources, RIXS spectroscopy presents itself as a novel experimental tool to investigate the nature of the bimagnon RIXS spectrum and the influence of the roton [44]. As a spectroscopic technique, RIXS has the ability to probe both single-magnon and multimagnon excitations across the entire MBZ [45-47]. Using RIXS it is possible to probe high-energy excitations in cuprates [48,49]. Considering the physics already studied within the context of RIXS on the TLAF, and the fact that departures from the isotropic triangular lattice geometry are the norm in a frustrated TLAF, the question arises: what is the influence of spatial anisotropy and DM interaction on the bi- and trimagnon K-edge indirect RIXS spectrum at the rotonlike points and at the other MBZ points of an anisotropic triangular lattice?

In this article, we utilize material parameters relevant to Cs₂CuCl₄ to elucidate the K-edge RIXS behavior of the rotonlike points and also the bimagnon behavior at the Y point. We apply TESWT to our quantum Heisenberg model with spatial anisotropy and DM interaction on a triangular lattice. Using TESWT up to first order in 1/S, we compute the final ordering vector, the spin-wave energy, and the phase diagram for different anisotropy parameters. We find that the phase diagram displays physically consistent behavior of the ordering wave vector Q, and that the presence of a relatively small DM interaction can make the spiral state more stable. We calculate the interplay of x-ray scattering and bi- and trimagnon excitations and find that the evolution of the RIXS spectra at the rotonlike points is nontrivial. In the isotropic case all the rotonlike points are identical due to the 60° rotation symmetry of the underlying isotropic triangular lattice. However, in the presence of symmetry-breaking DM interaction terms this equivalence breaks down, giving rise to two distinct points, M and M′ (see Fig. 9). Thus we investigate and track the evolution of the spectra at these two points separately. With increasing anisotropy the spectral weight at these points is suppressed, even though the rotonlike points lie outside the region of magnon damping. Additionally, we find that the RIXS spectrum at the rotonlike M point undergoes a spectral downshift.
However, for the M′ point the location of the peak is stable, albeit suppressed, as the strength of the perturbation is increased. We also track the bimagnon RIXS evolution at the Y point in the Brillouin zone to compare and contrast with the behavior at the rotonlike points. The spectrum at Y shows more peaks than at M or M′; thus, the roton excitation spectrum is more stable [1].

This paper is organized as follows. In Sec. II we introduce the model spin-1/2 anisotropic TLAF with DM interaction. In Sec. III A we state the spin wave formalism required to compute the wave vector renormalization (Sec. III B) and the renormalized dispersion (Sec. III C). In Sec. IV we extend the applicability of the TESWT formalism to include the effects of DM interaction: in Sec. IV A we elaborate on the TESWT method and compute the ordering vector and dispersion, in Sec. IV B we perform a TESWT INS fitting, and in Sec. IV C we calculate the phase diagram. In Sec. V we compute the indirect RIXS spectra: the noninteracting bi- and trimagnon spectrum (Sec. V A), the formalism for the interacting bimagnon RIXS spectrum including the quartic interactions (Sec. V B), the evolution of the roton energy, which provides a physical explanation of the trend exhibited by the RIXS spectrum with anisotropy and DM interaction (Sec. V C), and the results for the total indirect K-edge RIXS intensity (Sec. V D). Finally, in Sec. VI we provide our conclusions.

II. MODEL HAMILTONIAN

The antiferromagnetic Heisenberg model on the anisotropic triangular lattice is widely believed to describe Cs₂CuBr₄ and Cs₂CuCl₄ well [50]. While Cs₂CuCl₄ exhibits spin-liquid behavior over a broad temperature range [43,51], the Cs₂CuBr₄ compound exhibits a magnetically ordered ground state with spiral order in zero magnetic field [6]. As for α-CaCr₂O₄, though it is reported to have two inequivalent Cr³⁺ ions and four different exchange interactions, the nature of the distortion is such that the average of the exchange interactions along any direction is approximately equal.

We consider the spin-1/2 antiferromagnetic Heisenberg model on the anisotropic triangular lattice perturbed by a DM interaction, H = Σ_⟨ij⟩ J_ij S_i · S_j + H_DM, where ⟨ij⟩ refers to nearest-neighbor bonds on the triangular lattice, J_ij = J denotes the exchange constant along the horizontal bonds, and J_ij = J′ that along the diagonal bonds, see Fig. 1. The asymmetric DM interaction H_DM between neighboring spins consists of terms of the form D · (S_i × S_{i+δ}), where D = (0, D, 0) with D > 0, and δ₁,₂ are the nearest-neighbor vectors along the diagonal bonds, as shown in Fig. 1. In the classical limit, the spin operators are replaced by three-component vectors forming a spiral with ordering vector Q. The classical ground-state energy is then expressed through the structure factor λ_k = (1/3)[cos k_x + 2α cos(k_x/2) cos(√3 k_y/2)], where the dimensionless ratios α = J′/J and η = D/J denote the relative interaction strengths. For the determination of the ordering vector Q we have to minimize the classical ground-state energy, which amounts to finding the roots of its derivatives with respect to Q. Anticipating that this condition leads to a spiral along the x axis, Q = (Q₀, 0), in the absence of DM interaction we obtain the solution Q₀ = 2 arccos(−α/2). A priori, it is not clear whether the classical ordering vector correctly describes the long-range order in the quantum frustrated system; in fact, the classical wave vector is renormalized by quantum fluctuations, as will be discussed in Sec. IV A. (A small numerical check of this classical minimization is sketched below.)
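Since the classical minimization reduces to a one-variable problem for a spiral along x, it is easy to verify numerically. The following minimal sketch (our illustration, not the authors' code; J is set to 1 and η = 0 is assumed) minimizes λ_(Q,0) and compares the result against the closed form Q₀ = 2 arccos(−α/2):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lam(kx, ky, alpha):
    """Structure factor of the anisotropic triangular lattice (J along x, J' on diagonals)."""
    return (np.cos(kx) + 2.0 * alpha * np.cos(kx / 2.0) * np.cos(np.sqrt(3.0) * ky / 2.0)) / 3.0

def classical_Q0(alpha):
    """Classical ordering vector Q = (Q0, 0): minimize the classical energy ~ lam(Q, 0)."""
    res = minimize_scalar(lambda q: lam(q, 0.0, alpha),
                          bounds=(np.pi / 2, 2 * np.pi), method="bounded")
    return res.x

for alpha in (0.5, 1.0, 1.5):
    q_num = classical_Q0(alpha)
    q_exact = 2.0 * np.arccos(-alpha / 2.0)  # closed-form eta = 0 solution
    print(f"alpha={alpha}: Q0(numeric)={q_num:.4f}, 2*arccos(-alpha/2)={q_exact:.4f}")
```

At the isotropic point α = 1 both routes give Q₀ = 4π/3, the familiar 120° order.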
III. LINEAR SPIN-WAVE THEORY

A. 1/S expansion

Before we set up the spin-wave expansion, it is convenient to transform the spin components from the laboratory frame (x₀, z₀) to the rotating frame (x, z) through a rotation by the angle θ_i = Q · r_i. SWT then amounts to applying the Holstein-Primakoff (HP) transformation, with boson number operator n_i = a†_i a_i, to bosonize the rotating Hamiltonian. Under the assumption of diluteness of the HP boson gas, n_i/(2S) ≪ 1, expanding the square root to first order one arrives at the interacting spin-wave Hamiltonian H = E_cl + Σ_n H_n, where the first term is the classical energy and H_n denotes the terms of n-th power in the HP boson operators a† (a).

B. Quadratic terms: first-order corrected LSWT

After Fourier transformation we obtain the quadratic Hamiltonian H₂ in momentum space, characterized by coefficients A_k and B_k. Diagonalization of H₂ is performed with the canonical Bogoliubov transformation, with parameters u_k and v_k fixed in the standard way. As a result we obtain the linear spin-wave dispersion ε_k = √(A_k² − B_k²). It is noted that the magnon spectrum has zeros at k = 0, while a gap is opened at k = Q in the presence of DM interaction. The diagonalized H₂ contains a zero-point energy, which is the 1/S correction to the classical ground-state energy. Neglecting higher-order terms, a straightforward calculation then gives the 1/S correction ΔQ to the classical wave vector.

In Fig. 2 we display the variation of the ordering-wave-vector renormalization with lattice anisotropy, computed using LSWT, 1/S-corrected LSWT, and TESWT. It is clear that while the LSWT formulation extends the spiral phase region, the first-order correction from 1/S-LSWT gives an unphysical result as α → 2 at η = 0. Inclusion of the DM interaction rounds off the singularity, with an angle that exceeds 2π. The root cause of this divergence is the spin Casimir torque [28,29]: in a frustrated spiral system, the strong quantum fluctuation effect leads to failure of the first-order correction. In Sec. IV we will discuss and implement the TESWT approach, which offers a solution to this issue; the equations used to generate the TESWT results are reported in that section.

C. Cubic and quartic terms: renormalized dispersion

The 1/S correction to the spin wave dispersion has to be accounted for in a noncollinear structure, and the interplay with magnon decay, as it arises from the noncollinear structure, is also considered [52-54]. The three-boson term H₃ arises from the coupling between transverse and longitudinal fluctuations in the noncollinear spin structure [55]. Transforming to momentum space and performing the Bogoliubov transformation in H₃, we obtain the interaction terms expressed via the magnon operators, where we adopt the convention 1 = k₁, 2 = k₂, etc. The three-boson vertices are built from the amplitudes V̄_a and V̄_b. We notice that the three-magnon vertices are of order 1/√S relative to the linear spin-wave Hamiltonian, and they must occur in pairs in any self-energy or polarization diagram.
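Before moving on to the quartic terms, here is a concrete numerical handle on the LSWT dispersion ε_k. The closed form ε_k = 6JS√[(λ_k − λ_Q)((λ_{k+Q} + λ_{k−Q})/2 − λ_Q)] used below is the textbook spiral-state result for η = 0, which we assume here; it is not quoted in this form in the text, and the DM-induced gap at k = Q is omitted:

```python
import numpy as np

J, S = 1.0, 0.5

def lam(kx, ky, alpha):
    """Anisotropic triangular-lattice structure factor lambda_k."""
    return (np.cos(kx) + 2.0 * alpha * np.cos(kx / 2.0) * np.cos(np.sqrt(3.0) * ky / 2.0)) / 3.0

def eps_lswt(kx, ky, alpha, Q0):
    """Spiral-state LSWT dispersion (eta = 0), Goldstone zeros at k = 0 and k = Q."""
    lQ = lam(Q0, 0.0, alpha)
    a = lam(kx, ky, alpha) - lQ
    b = 0.5 * (lam(kx + Q0, ky, alpha) + lam(kx - Q0, ky, alpha)) - lQ
    return 6.0 * J * S * np.sqrt(np.maximum(a * b, 0.0))

alpha = 1.0
Q0 = 2.0 * np.arccos(-alpha / 2.0)                     # = 4*pi/3 at the isotropic point
print(eps_lswt(0.0, 2 * np.pi / np.sqrt(3), alpha, Q0))  # rotonlike point M
print(eps_lswt(np.pi, np.pi / np.sqrt(3), alpha, Q0))    # rotonlike point M'
```

At α = 1 both rotonlike points give ε = 2JS = 1.0 in these units, confirming their equivalence in the isotropic limit; scanning α away from 1 reproduces the inequivalent softening discussed in Sec. V C.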
The quartic term H₄ in the interacting spin-wave Hamiltonian (16) is handled by introducing mean-field averages of the boson bilinears: the Hartree-Fock decoupling of H₄ yields a quadratic Hamiltonian, from which we obtain the Hartree-Fock-corrected H₂ term. The remaining normal-ordered quartic term :H₄: in the quasiparticle representation describes the multimagnon interactions. In the hierarchy of the 1/S expansion, the term relevant for our calculations is the lowest-order irreducible two-magnon scattering amplitude, together with its vertex function.

With these ingredients, the effective 1/S interacting spin-wave Hamiltonian is expressed in terms of the magnon operators. At zero temperature the bare magnon propagator is G₀(k, ω) = [ω − ε_k + i0⁺]⁻¹, and the first-order 1/S correction to the magnon energy is determined by the Dyson equation. The on-shell solution consists of setting ω = ε_k in the self-energies; Eqs. (51) and (52) then lead to the 1/S-renormalized spectrum, where ω̃_k = Re[ω_k] is the renormalized spin-wave energy and Γ_k = −Im[ω_k] represents the magnon decay rate. In Fig. 3 we plot the 1/S LSWT dispersion of Cs₂CuCl₄ [27].

IV. TORQUE EQUILIBRIUM SPIN WAVE THEORY

Zero-point quantum fluctuations in a noncollinear ordered spin structure can lead to deviations of the measured ordering wave vector from the classical one. The correction emerging from the spin Casimir effect is usually neglected, but it was recently shown that this neglect is not justified. In Du et al. [28,29] it was clearly established that in certain situations standard spin wave theory is no longer applicable because of the spin Casimir effect, even when the system is long-range ordered. An important consequence of these quantum fluctuations is that the spiral state can become unstable, which is different from the case of long-range-order melting. As mentioned earlier, the classical signatures of these instabilities are the divergence of the ordering wave vector at the quantum critical point and the strongly singular one-loop expansions of the energy spectrum and the sublattice magnetization. In this section we extend the applicability of the TESWT formalism to include the effects of DM interaction in an anisotropic TLAF. Using INS experimental data from Cs₂CuCl₄ [51], we obtain fitting parameters for the exchange constants and DM interactions utilized in the subsequent indirect K-edge RIXS calculations.

A. TESWT formalism

The spin Casimir effect changes the classical ground state to a new saddle point, and this new ground state can be unambiguously determined once we compute the value of Q. The ordinary approach is to consider the 1/S correction ΔQ, as we showed in Sec. III B; however, that method gives an unphysical result (see Fig. 2): as α → 2, the 1/S correction ΔQ becomes infinite. The basic idea of TESWT is instead to minimize the ground-state energy. The spin Casimir torque T_sc(Q) is defined as the gradient, with respect to the ordering vector, of the zero-point energy evaluated in the quasiparticle vacuum state |Ψ_vac⟩. The torque equilibrium condition then balances this quantum torque against the classical torque derived from H_cl = E₀(Q), where Q is the final ordering vector and E₀(Q) is the classical energy. Using the fact that the spin-wave spectral function ε_k is only well defined at Q_cl, for convenience of calculation we seek a system whose classical ordering vector is Q. Thus we shift the functions that depend on the classical ordering vector Q_cl to Q: the shifted H̃₂, Ã_k, and B̃_k are the corresponding functions of another spin system whose classical ordering vector Q̃_cl equals Q. (A schematic numerical illustration of the torque-balance idea is sketched below.)
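The following schematic sketch is ours, not the authors' implementation: it uses the η = 0 spiral dispersion from the previous sketch, approximates the zero-point energy by ½Σ_k ε_k on a coarse grid, and omits the counterterms and full self-consistency of TESWT. It simply finds the Q at which the classical and spin Casimir torques cancel:

```python
import numpy as np
from scipy.optimize import brentq

J, S = 1.0, 0.5

def lam(kx, ky, alpha):
    return (np.cos(kx) + 2 * alpha * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2)) / 3.0

def e_total(Q0, alpha, n=48):
    """Classical + zero-point energy per site for a spiral Q = (Q0, 0), eta = 0."""
    kx = np.linspace(-np.pi, np.pi, n, endpoint=False)
    ky = np.linspace(-2 * np.pi / np.sqrt(3), 2 * np.pi / np.sqrt(3), n, endpoint=False)
    KX, KY = np.meshgrid(kx, ky)
    lQ = lam(Q0, 0.0, alpha)
    a = lam(KX, KY, alpha) - lQ
    b = 0.5 * (lam(KX + Q0, KY, alpha) + lam(KX - Q0, KY, alpha)) - lQ
    eps = 6 * J * S * np.sqrt(np.maximum(a * b, 0.0))
    e_cl = 3 * J * S**2 * lQ          # classical energy per site
    return e_cl + 0.5 * np.mean(eps)  # crude O(S) zero-point term (full TESWT differs)

def torque(Q0, alpha, h=1e-4):
    """Total torque dE/dQ by central differences; equilibrium means torque = 0."""
    return (e_total(Q0 + h, alpha) - e_total(Q0 - h, alpha)) / (2 * h)

alpha = 0.8
Q_cl = 2 * np.arccos(-alpha / 2)
# widen the bracket if the sign change falls outside it for other parameters
Q_eq = brentq(torque, Q_cl - 0.6, Q_cl + 0.6, args=(alpha,))
print(f"classical Q0 = {Q_cl:.4f}, torque-equilibrium Q = {Q_eq:.4f}")
```

The shift Q_eq − Q_cl is the sketch's stand-in for the spin Casimir correction; the published TESWT additionally enforces the self-consistency on (α̃, η̃) described next.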
The counterterm is given by H₂^c, whose effects are incorporated in the A_k (B_k) coefficients through A_k^c (B_k^c). In principle, many combinations of (α̃, η̃) satisfy this condition; since η/α is small, within perturbation theory we believe η̃ = η is a reasonable choice. The new parameters can then be deduced by solving a set of self-consistent equations, and the spin Casimir torque is expressed approximately as T_sc(Q) ≈ T̃_sc(Q). The torque equilibrium equation, Eq. (55), can then be rewritten accordingly. Note that the exchange parameters on the left-hand side of the equation are the exact α, η, while the parameters on the right-hand side are approximated through α̃ = −2 cos(Q/2) − η̃ cot(Q/2). We solve this equation numerically and give the results in Fig. 2. If there is no DM interaction, TESWT gives Q = 2π for α ≥ 1.2, similar to the results of numerical methods [18,30]. The LSWT, however, gives a wider region for the spiral phase and cannot describe the region 1.2 ≤ α ≤ 2. As anticipated, even a small DM interaction, η = 0.05, changes the final ordering vector: the DM interaction improves the stabilization of the spiral order and enlarges its region of validity.

We diagonalize H̃₂(α̃, η̃, Q) and treat H₂^c as a counterterm. Since we are considering a 1/S theory, we neglect the counterterm contributions from H₃^c and H₄^c [28,29]. Following the procedure outlined in Sec. III, the effective TESWT Hamiltonian takes the same form, with every quantity F̃ standing for F(α̃, η̃, Q) (F an arbitrary operator). Thus we have shifted the classical ordering vector Q_cl to the final ordering vector Q using TESWT, and the first-order 1/S-corrected magnon dispersion is modified accordingly, Eq. (64).

B. INS fitting

As discussed above, with anisotropy the application of the 1/S-LSWT formalism is tricky, yet the application of TESWT requires magnetic interaction parameters computed within that formalism. The most direct way to achieve this goal is to compare the theoretical dispersion with experimental data. We fit the INS data of Cs₂CuCl₄ [51] to Eq. (64) using iterative least-squares estimation, both by TESWT and by 1/S-LSWT. Our fitting parameters, along with results from other sources, are reported in Table I, and our dispersion-line fits are shown in Fig. 3. The absence of higher-order terms within our TESWT could be a source of disagreement with the series expansion results [27]. While it may be fruitful to investigate the above-mentioned discrepancy, within the context of our RIXS calculation we do not expect improved interaction constants to bring about significant qualitative or quantitative differences.

C. Sublattice magnetization

Next, we study the phase diagram of the anisotropic triangular lattice. In a spin system, the sublattice magnetization can describe the phase transition behavior. The second-order correction to the sublattice magnetization contributes little to the result; thus, we consider only the first-order correction. In Fig. 4 we plot the variation of the sublattice magnetization ⟨S⟩ with spatial anisotropy. Our result without DM interaction is consistent with previous numerical studies [18,30]. Consistent with our previous analysis of Fig. 2, the spiral order is destroyed at α ≥ 1.2. In addition, the spiral order is unstable at α ≤ 0.5, consistent with modified spin wave results [18]. The DM interaction, which originates from spin-orbit coupling, helps to generate a noncollinear spin ground state.
It is evident from Fig. 4 that, as η grows, the phase-transition point in the region α ≤ 0.5 recedes until it disappears. At the opposite end, the sublattice magnetization recovers, thereby making the α ≥ 1.2 zone less susceptible to quantum fluctuations.

Our focus in this article is on the multimagnon RIXS spectrum in the spiral phase; thus we can use the computed phase diagram to extract an appropriate choice of parameters. We find that TESWT not only gives a consistent physical estimate of the final ordering vector but also correctly predicts the phase diagram of an anisotropic TLAF, helping to better understand the behavior of the spiral ground state of such a geometrically frustrated system.

V. INDIRECT K-EDGE RIXS

A. Noninteracting bi- and trimagnon RIXS

In this section we calculate the bi- and trimagnon RIXS spectrum. The results in this part use TESWT, while the LSWT approach is shown in Appendix A. The indirect RIXS scattering operator R_q is given in Refs. [57,58], where q is the scattering momentum. In the quasiparticle representation, the magnon-creation part of the RIXS scattering operator defines the bimagnon and trimagnon scattering matrix elements. We neglect corrections from magnon interactions for the trimagnon intensity, as these appear at 1/S² order. Next, using Eqs. (A4) and (A5) stated in Appendix A, we obtain the expressions for the noninteracting bimagnon intensity I₂(q, ω) and the trimagnon intensity I₃(q, ω), where ω_k^(0) denotes the energy of the bare two-magnon pair. In Fig. 5 we display our results for the noninteracting bi- and trimagnon RIXS spectra at various points across the MBZ. Overall, the agreement between the LSWT and TESWT formalisms is reasonable, although our TESWT result generates more peaks in the bimagnon intensity. We note that in the isotropic regime α = 1 our TESWT results are identical to those of the LSWT formalism, since the final ordering vector Q equals the classical vector Q_cl, see Fig. 11. As discussed earlier, TESWT is the physically correct formalism in the presence of anisotropy.

B. Interacting bimagnon RIXS spectra

We now proceed with the analysis of the 1/S correction to the two-magnon Green's function by taking into account both the self-energy correction to the single-magnon propagator G, according to the Dyson equation, and the vertex insertions to the two-magnon bubble. Using the procedure outlined in our prior work [1] and the Feynman rules in momentum space, we obtain the equations for the two-particle propagator and the associated vertex function, where the basic one-magnon propagator up to 1/S order now includes the self-energy. The lowest-order two-particle irreducible interaction vertex in Fig. 6(c) consists of the frequency-independent four-point vertex Ṽ₄, coming from the quartic Hamiltonian, and four further vertices Ṽ₃^(a-d) of the same 1/S order, which are assembled from two three-point vertices and one frequency-dependent propagator. In the above we have retained only the bare propagator G₀ for each intermediate line in Ṽ₃^(a-d), in the spirit of the 1/S expansion. Note that the vertex expressions here differ from those of the traditional 1/S-SWT approach [1]: they are shifted to the correct TESWT wave vector, as indicated by the tilde notation. Based on the above generalization, we now derive the final solution for the interacting RIXS intensity from the ladder-approximation Bethe-Salpeter (BS) equation. We adopt a numerical approach to compute the interacting bimagnon RIXS intensity; a schematic matrix implementation is sketched below.
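The matrix structure of the ladder (BS) sum described next, Γ = (1 − VΠ)⁻¹ on an N-point grid, can be sketched in a few lines. The inputs below are random placeholders standing in for the actual two-magnon energies, RIXS matrix elements, and irreducible vertices (a structural sketch, not the paper's production code):

```python
import numpy as np

def ladder_rixs_chi(omega, eps, M, V, eta=0.05):
    """Schematic ladder (Bethe-Salpeter) sum on an N-point BZ grid.

    eps : (N,) bare two-magnon energies omega^(0)_m
    M   : (N,) bimagnon RIXS matrix elements
    V   : (N, N) two-particle irreducible vertex
    Returns an Im chi(q, omega)-like spectral weight in the ladder approximation.
    """
    N = len(eps)
    # diagonal two-magnon propagator Pi (no vertex correction), 1/N from the BZ sum
    Pi = np.diag(1.0 / (omega - eps + 1j * eta)) / N
    Gamma = np.linalg.solve(np.eye(N) - V @ Pi, np.eye(N))  # Gamma = (1 - V Pi)^(-1)
    chi = M.conj() @ Pi @ Gamma @ M
    return -chi.imag / np.pi

# toy demo with placeholder inputs, for structure only
rng = np.random.default_rng(0)
N = 64
eps = rng.uniform(0.5, 2.5, N)
M = rng.normal(size=N)
V = 0.1 * rng.normal(size=(N, N)) / N
w = np.linspace(0.0, 3.0, 300)
intensity = [ladder_rixs_chi(wi, eps, M, V) for wi in w]
```

The single matrix solve per frequency is what makes the ladder resummation numerically cheap compared with iterating the BS equation directly.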
We assume that two on-shell magnons are created and annihilated in the repeated ladder scattering process, with ω close to the bare two-magnon energy ω^(0). Substituting (73) and (74) into (A6) yields the renormalized two-magnon propagator in the absence of vertex corrections. To proceed further we divide the BZ into N points and replace the continuous momenta (k, k′, k₁) with discrete variables (m, n, l). Adopting the matrix notation Γ = (1 − VΠ)⁻¹, we obtain the final form of the χ̂ matrix in terms of N × N matrices, from which the interacting bimagnon RIXS susceptibility is computed.

Using these expressions, in Fig. 7 we show the spectra at the Y point. The first panel is a reproduction of our previous result reported in Ref. 1. In Fig. 7(b) we display the TESWT RIXS result for Cs₂CuCl₄. Compared to the isotropic case, or to the other anisotropic situations in panels (c) and (d), this spectrum is substantially broadened. With enhanced anisotropy the lattice can be envisioned as disintegrating into a set of loosely coupled chains; thus, instead of bimagnons, one can expect the emergence of spinons, as in 1d systems, and 1d RIXS has been able to capture multi-spinon excitations [60,61]. The predicted RIXS spectral feature could therefore be used to confirm quasi-1d to 2d dimensional-crossover features of Cs₂CuCl₄ [62]. Comparing Figs. 7(c) and 7(d), we can assess the effect of including a tiny DM interaction: there is a prominent low-energy peak with a relatively muted higher-energy response. This tiny DM interaction does not bring about any spectral down- or upshift; the spectral weight is simply redistributed.

C. RIXS signatures at roton points

In Fig. 8 we display the variation of the interacting RIXS intensity at the two anisotropic roton points, q = M and q = M′, with varying lattice anisotropy and DM interaction. The anisotropy parameter choices ensure that the TLAF does not decouple into a set of loosely coupled 1d chains, where the bosonization description has been shown to apply [62]. The upper panels, Figs. 8(a) and 8(b), are results for zero DM interaction. Note that the two spectra coincide in the isotropic limit, since the two roton points are equivalent due to the C₃ᵥ symmetry of the isotropic triangular lattice [1], while they evolve differently in the presence of spatial anisotropy.

To gain insight into the roton behavior of the RIXS spectra we track the evolution of the roton minimum in the single-magnon dispersion along Γ → M and Γ → M′, both parallel and perpendicular to the MBZ path, see Fig. 9. A bimagnon excitation requires an energy ω_{k+q} + ω_k. We notice that the one-magnon dispersion near M displays more sensitivity than that near M′; this asymmetric sensitivity of the dispersion stiffness explains the origin of the differing roton RIXS behavior. Increasing anisotropy reduces the one-magnon energy (softening) near the M point (first column in Fig. 9), thus leading to the spectral downshift in Fig. 8, whereas for the M′ point the overall energy scale of the dispersion is not affected (third column in Fig. 9): we observe neither a drastic hardening nor a softening, so the RIXS spectrum holds steady without any shift. The softening and subsequent flattening of the dispersion at the M point suggests that for the anisotropic TLAF the roton feature is retained more at the M point than at M′.
However, inclusion of the DM interaction increases the one-magnon energy near both the M and M′ points (second and fourth columns in Fig. 9), introducing a spectral upshift. This can be understood from the fact that the DM interaction introduces a gap; more energy is then required to create a single magnon and, in turn, a bimagnon excitation. The evolution of the spectral height in Fig. 8 can also be explained. As anisotropy weakens the coupling between the TLAF spins, transforming the material toward a quasi-1d spin chain, it becomes more difficult to create a bimagnon excitation. In RIXS this causes a decrease in the value of the bimagnon scattering matrix element |M̃(k+q, −k)|, in turn leading to a reduction in the spectral weight, see Figs. 8(a) and 8(b). On the contrary, the presence of the DM interaction encourages interactions beyond the traditional Heisenberg type and thus assists the creation of bimagnons, see Fig. 8(c), where the spectral weight increases. But for the q = M′ point the actual nature of the magnon bands is not affected by the DM interaction, see the fourth column of Fig. 9; thus the height of the RIXS spectrum does not change with DM interaction. Note that in all of the above discussion we have assumed that the triangular lattice does not break down into a set of coupled 1d spin chains. The α = 0.5 RIXS spectra could well describe the Cs₂CuBr₄ compound.

D. Total RIXS

In Fig. 10 we report the total RIXS spectrum for Cs₂CuCl₄ with the TESWT fitting parameters. The total RIXS spectrum comprises the bi- and trimagnon responses; we use Eqs. (A4) and (A5) to compute the spectrum, summing the interacting bimagnon intensity (Eq. (88)) and the noninteracting trimagnon intensity (Eq. (72)). As expected, the trimagnon peak is located at a higher energy than the bimagnon response. In the responses for the M and Y points the main peaks are separated, see Figs. 10(a) and 10(c). At the M′ point in Fig. 10(b), a small bimagnon peak is visible while the main bi- and trimagnon peaks are mixed. We note that the height of the bimagnon spectrum undergoes a particular evolution: the bimagnon response has appreciable height near the boundary of the MBZ (the M and M′ points) but vanishes close to the center of the MBZ (the Y point). A similar trend for the bimagnon can also be observed in Figs. 5 and 11. This is due to the behavior of the RIXS scattering element in the indirect K-edge RIXS scattering operator of Eq. (67): for a wave vector q close to the high-symmetry Γ point, the bimagnon matrix element arising from R_q gives a vanishingly small contribution. Thus the spectral weight of the bimagnon is substantially weakened near the Γ point. Without DM interaction, the contribution at the Γ point in the isotropic TLAF comes purely from the trimagnon excitations, see Fig. 11(a). The above observations on the total RIXS spectrum should be helpful in distinguishing the contributions of the two different multimagnon excitations; a toy illustration of how the two channels combine is sketched below.
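As a purely illustrative toy (our addition; the peak energies and weights are made up, not computed), the way the bimagnon and trimagnon channels stack into a total spectrum can be mimicked by summing broadened peaks:

```python
import numpy as np

def total_rixs(omega, peaks_bi, peaks_tri, sigma=0.08):
    """Toy total RIXS: Gaussian-broadened bimagnon + trimagnon sticks.

    peaks_bi / peaks_tri: lists of (energy, weight) pairs for each channel.
    """
    intensity = np.zeros_like(omega)
    for energy, weight in list(peaks_bi) + list(peaks_tri):
        intensity += weight * np.exp(-0.5 * ((omega - energy) / sigma) ** 2)
    return intensity / (sigma * np.sqrt(2 * np.pi))

w = np.linspace(0.0, 4.0, 400)
# trimagnon stick placed above the bimagnon one, as in Fig. 10
spectrum = total_rixs(w, peaks_bi=[(1.1, 0.8)], peaks_tri=[(2.0, 0.3)])
```

When the two sticks are well separated the channels are trivially distinguishable; the mixed-peak situation of Fig. 10(b) corresponds to overlapping Gaussians in this picture.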
VI. CONCLUSION

Due to the possible realization of various unusual ordered or disordered phases, frustrated magnetism is an active area of research in condensed matter physics [63]. Traditionally, information on the magnetic ground state and single-magnon excitations is inferred from inelastic neutron scattering (INS) experiments [43,64]. However, with the advent of RIXS spectroscopy, experimentalists now have a probe that can comprehensively investigate a wide range of energy and momentum values in the MBZ.

In this article, we have demonstrated the application of a recently proposed spin-wave theory scheme, TESWT, to the indirect K-edge RIXS. As highlighted in this paper, it is not a trivial matter to ensure that the sanctity of the spin spiral state is preserved. We performed a TESWT fitting of Cs₂CuCl₄ INS data, which gives α ≈ 0.316 and η ≈ 0.025. Using these realistic parameters we computed the indirect K-edge bi- and trimagnon RIXS spectra within the TESWT formalism. Our results confirm that, in contrast to the isotropic model, quantum fluctuations in the noncollinear anisotropic TLAF can generate divergent fluctuations with drastic effects on the magnetic phase diagram. We find that the behavior of the RIXS spectra is influenced by the occurrence of two inequivalent rotonlike points, M(0, 2π/√3) and M′(π, π/√3): while the roton RIXS spectrum at the M point undergoes a spectral downshift with increasing anisotropy, the peak at M′ is suppressed without exhibiting any shift. We believe that in the anisotropic case the M point retains more of the roton feature. Finally, we find that in the total RIXS spectra the features of the bimagnon and the trimagnon are clearly different and thus can be easily distinguished within an experimental setting. While resolution issues still plague the K-edge, we hope the calculations in this paper and our past publication [1] will inspire experimentalists to improve resolution and test our predicted K-edge RIXS behavior.

In conclusion, our theoretical investigation of the indirect RIXS intensity in spiral antiferromagnets on the anisotropic triangular lattice demonstrates that RIXS has the potential to probe and provide a comprehensive characterization of the dispersive bimagnon and trimagnon excitations in the TLAF across the entire BZ, far beyond the capabilities of traditional low-energy optical techniques [41,42,65,66].

APPENDIX A

In Appendix A the multimagnon correlation functions are defined through time-ordered products, where T is the time-ordering operator and ⟨···⟩ denotes the ground-state average. Using Eqs. (A8) and (A9), we can compute the noninteracting and the interacting RIXS spectra. The noninteracting spectrum can be calculated by applying Wick's theorem to Eqs. (A8) and (A9); the final expressions are stated in Eqs. (71) and (72).
Skeletal Muscle AMP-activated Protein Kinase Is Essential for the Metabolic Response to Exercise in Vivo*

AMP-activated protein kinase (AMPK) has been postulated as a super-metabolic regulator, thought to exert numerous effects on skeletal muscle function, metabolism, and enzymatic signaling. Despite these assertions, little is known regarding the direct role(s) of AMPK in vivo, and results obtained in vitro or in situ are conflicting. Using a chronically catheterized mouse model (carotid artery and jugular vein), we show that AMPK regulates skeletal muscle metabolism in vivo at several levels, with the result that a deficit in AMPK activity markedly impairs exercise tolerance. Compared with wild-type littermates at the same relative exercise capacity, vascular glucose delivery and skeletal muscle glucose uptake were impaired, skeletal muscle ATP degradation was accelerated, and arterial lactate concentrations were increased in mice expressing a kinase-dead AMPKα2 subunit (α2-KD) in skeletal muscle. Nitric-oxide synthase (NOS) activity was significantly impaired at rest and in response to exercise in α2-KD mice; expression of neuronal NOS (NOSµ) was also reduced. Moreover, complex I and IV activities of the electron transport chain were impaired 32 ± 8 and 50 ± 7%, respectively, in skeletal muscle of α2-KD mice (p < 0.05 versus wild type), indicative of impaired mitochondrial function. Thus, AMPK regulates neuronal NOSµ expression, NOS activity, and mitochondrial function in skeletal muscle. In addition, these results clarify the role of AMPK in the control of muscle glucose uptake during exercise. Collectively, these findings demonstrate that AMPK is central to substrate metabolism in vivo, which has important implications for exercise tolerance in health and in certain disease states characterized by impaired AMPK activation in skeletal muscle.

The ubiquitously expressed serine/threonine AMP-activated protein kinase (AMPK) is an αβγ heterotrimer postulated to play a key role in the response to energetic stress (1,2) because of its sensitivity to increased cellular AMP levels (3). Pharmacological activation of AMPK (primarily via the AMP analogue ZMP) increases catabolic processes such as GLUT4 translocation (4,5), glucose uptake (6,7), long chain fatty acid (LCFA) uptake (8), and substrate oxidation (6). Concomitantly, pharmacological activation of AMPK inhibits anabolic processes, and in skeletal muscle genetic reduction of the catalytic AMPKα2 subunit eliminates these pharmacological effects (9-12). Thus, AMPK has been proposed to act as a metabolic master switch (2,13,14).

Physiologically, exercise at intensities sufficient to increase free cytosolic AMP (AMPfree) levels is a potent stimulus of AMPK, preferentially activating AMPKα2 in skeletal muscle (15-17). The metabolic profile of skeletal muscle during moderate- to high-intensity exercise is remarkably similar to that of skeletal muscle in which AMPK has been pharmacologically activated (i.e. increases in catabolic processes). This is consistent with the hypothesis that AMPK activation is required for the metabolic response to increased cellular stress. Given this, it is surprising that the direct role(s) of skeletal muscle AMPK during exercise under physiological in vivo conditions remains unknown. A number of studies have tried to attribute causality to the AMPK and metabolic responses to exercise using transgenic models.
In mouse models in which AMPKα2 protein expression and/or activity has been impaired, contractions performed in isolated skeletal muscle in vitro, ex vivo, or in situ have demonstrated that skeletal muscle glucose uptake (MGU) is normal (9,10), partially impaired (11,18), or ablated (19). Furthermore, ex vivo skeletal muscle LCFA uptake and oxidation in response to contraction appear to be AMPK-independent (20,21). A key limitation of these studies is that the experimental models were not physiological. Under in vivo conditions, mice expressing a kinase-dead (18) or inactive (22) AMPKα2 subunit in cardiac and skeletal muscle have impaired voluntary and maximal physical activity, respectively, indicative of a physiological role for AMPK during exercise. In this context, obese non-diabetic and diabetic individuals have impaired skeletal muscle AMPK activation during moderate-intensity exercise (23) as well as during the post-exercise period (24), yet the contribution of this impairment to the disease state is unclear. Thus, in vivo studies are essential to define the role of AMPK in skeletal muscle during exercise. Physical exercise of a moderate intensity is an effective adjunct treatment for chronic metabolic diseases such as obesity and type 2 diabetes (25). Given the importance of elucidating the molecular mechanism(s) regulating skeletal muscle substrate metabolism during exercise and the putative role of AMPK as a critical mediator in this process, we tested the hypothesis that AMPKα2 is functionally linked to substrate metabolism in vivo.

EXPERIMENTAL PROCEDURES

Animal Maintenance-All procedures were approved by the Vanderbilt University Animal Care and Use Committee. Male and female C57BL/6J mice expressing a kinase-dead AMPKα2 subunit (α2-KD) in cardiac and skeletal muscle (18) and wild-type (WT) littermate mice were studied. Twenty-one days after birth, littermates were separated by gender, maintained in microisolator cages, fed a standard chow diet (5.5% fat by weight; 5001 Laboratory Rodent Diet, Purina), and had access to water ad libitum. All mice were studied at 16 weeks of age.

Exercise Stress Test-Peak oxygen consumption (VO2peak) was assessed using an exercise stress test protocol. Two days prior to the exercise stress test, all mice were acclimatized to treadmill running by performing 10 min of exercise at a speed of 10 m·min⁻¹ (0% incline). To determine VO2peak, mice were placed in an enclosed single-lane treadmill connected to Oxymax oxygen (O2) and carbon dioxide (CO2) sensors (Columbus Instruments, Columbus, OH). Following a 30-min basal period, mice commenced running at 10 m·min⁻¹ on a 0% incline. Running speed was increased by 4 m·min⁻¹ every 3 min until mice reached exhaustion, defined as the time point at which mice remained at the back of the treadmill on a shock grid for >5 s. O2 consumption and CO2 production were assessed at 30-s intervals throughout the basal and exercise periods. Basal values are representative of the final 10 min of the basal period. Prior to the VO2peak test, body weight was measured, and body composition was assessed using an mq10 NMR analyzer (Bruker Optics, The Woodlands, TX). Given that changes in whole-body VO2 during exercise closely reflect changes occurring within exercising muscle (26), all oxygen consumption measurements were expressed per kg of lean body mass (kgLBM).
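Concretely, the ramp protocol and the lean-mass normalization amount to simple arithmetic. The sketch below is our illustration (function names and the example numbers are ours, not from the paper):

```python
import numpy as np

def vo2_per_kg_lbm(vo2_ml_min, body_mass_g, lean_frac):
    """Normalize whole-body VO2 (ml/min) to lean body mass (ml per kg LBM per min)."""
    lbm_kg = body_mass_g / 1000.0 * lean_frac
    return vo2_ml_min / lbm_kg

def ramp_speed(t_min, start=10.0, step=4.0, every=3.0):
    """Treadmill speed (m/min): start at 10 m/min, +4 m/min every 3 min, as in the protocol."""
    return start + step * np.floor(t_min / every)

# example: a 25 g mouse with ~77% lean mass consuming 2.7 ml O2/min
print(vo2_per_kg_lbm(2.7, 25.0, 0.77))  # ~140 ml per kg LBM per min, near the WT VO2peak
print(ramp_speed(7.5))                  # 18 m/min during the third 3-min stage
```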
Metabolic Experiments-Following the exercise stress test, surgical procedures were performed as described previously (27) to catheterize the left common carotid artery and right jugular vein for sampling and infusions, respectively. The catheters were exteriorized, sealed with stainless steel plugs, and kept patent with saline containing 200 units·ml⁻¹ heparin and 5 mg·ml⁻¹ ampicillin. Mice were housed individually post-surgery, and body weight was recorded daily. Five days following surgery, all mice performed a 10-min bout of exercise at their pre-determined experimental running speed (see below). Experiments were performed 2 days later. Approximately 1 h prior to the experiment, Micro-Renathane tubing was connected to the exteriorized catheters, and all mice were placed in the enclosed treadmill to acclimate to the environment. At t = 0 min, a baseline arterial blood sample was taken for the measurement of arterial glucose, plasma insulin, plasma non-esterified fatty acids (NEFA), plasma lactate, and hematocrit. Mice then remained sedentary or performed a single bout of exercise. Sedentary mice were allowed to move freely in the stationary treadmill for 30 min. Mice that exercised were divided into three groups as follows: 1) α2-KD mice performed a maximum of 30 min of treadmill exercise at 70% of their maximum running speed; 2) WT mice ran at the same absolute running speed as α2-KD mice; 3) WT mice ran at the same relative intensity as α2-KD mice. Running time was matched between groups. In all mice, a bolus containing 13 µCi of 2-[14C]deoxyglucose (2-[14C]DG) and 26 µCi of [9,10-3H]-(R)-2-bromopalmitate (3H-R-BrP) was injected into the jugular vein at t = 5 min to provide an index of tissue-specific glucose and LCFA uptake and clearance, respectively. At t = 7, 10, 15, and 20 min, arterial blood was sampled to determine blood glucose, plasma NEFA, plasma lactate, and plasma 2-[14C]DG and 3H-R-BrP. Hematocrit was measured at t = 20 min, and at t = 30 min or at exhaustion, arterial blood was taken for the measurement of blood glucose, plasma insulin, plasma NEFA, plasma lactate, plasma 2-[14C]DG, and 3H-R-BrP. Following the final arterial blood sample, 50 µl of yellow DYE-TRAK microspheres (15 µm; Triton Technology Inc., San Diego) were injected into the carotid artery, followed by a small flush of saline, to assess the percentage of cardiac output to the gastrocnemius (%QG) and to the left and right kidney. Mice were then anesthetized with an arterial infusion of sodium pentobarbital (3 mg). The soleus, right gastrocnemius, superficial vastus lateralis (SVL), heart, and brain were rapidly excised, frozen in liquid nitrogen, and stored at −70 °C. The left gastrocnemius and the left and right kidney were placed into 15-ml polypropylene tubes and stored at 4 °C prior to microsphere analysis.

Echocardiography-Transthoracic echocardiograms were performed as described previously (28). Mice were acclimated to the procedure over 3 days. Immediately following treadmill exercise, two-dimensional targeted M-mode echocardiographic images were obtained at the level of the papillary muscles from the parasternal short-axis view and recorded at a speed of 150 cm/s for the measurement of heart rate. Echocardiograms were completed within 72 ± 13 s after exercise.
Left ventricular wall thickness, end-diastolic measurements, and left ventricular end-systolic dimensions were determined as described previously (28) and are the average of three to five consecutive selected sinus beats using the leading-edge technique. Heart rate was determined from the cardiac cycles recorded on the M-mode tracing.

Plasma and Tissue Radioactivity-Plasma 2-[14C]DG radioactivity was assessed by liquid scintillation counting following deproteinization with 0.3 N Ba(OH)2 and 0.3 N ZnSO4 as described previously (29). Plasma 3H-R-BrP radioactivity was determined directly from the plasma via liquid scintillation counting. Tissue 2-[14C]DG and 3H-R-BrP were determined using a modified method of Folch et al. (30). Chloroform:methanol (2:1) was added to a portion of tissue that had been crushed in liquid nitrogen using a mortar and pestle, homogenized on ice, and stored at 4 °C for 60 min. KCl (0.1 M) was then added to the homogenate, and samples were centrifuged at 3500 × g for 15 min. The upper aqueous phase (containing 2-[14C]DG) was used to determine 2-[14C]DG-P as described previously (29). A portion of the lower lipid phase (containing 3H-R-BrP) was used to determine tissue 3H-R-BrP content (31).

Plasma Hormones and Metabolites-Immunoreactive plasma insulin was assayed with a double-antibody method (32), and plasma NEFA were measured spectrophotometrically using an enzymatic colorimetric assay (NEFA C kit, Wako Chemicals Inc.). Plasma lactate was determined enzymatically (33), with lithium L-lactate (Sigma) used as the standard. Arterial glucose levels were determined directly from ~5 µl of arterial blood using an ACCU-CHEK Advantage monitor (Roche Diagnostics).

Muscle Metabolites-For muscle glycogen determination, 2 M HCl was added to a portion (~10 mg) of crushed tissue, which was then incubated at 100 °C for 2 h and neutralized with 0.667 M NaOH. Glucose units were determined using an enzymatic fluorometric method (33). Muscle lactate, PCr, Cr, and ATP were analyzed from ~20 mg of crushed tissue using enzymatic fluorometric techniques (33). ADPfree and AMPfree were calculated as described previously (34).

Microsphere Isolation-Tissues were digested overnight in 1 M KOH at 60 °C. Following sonication with Triton X-100, microspheres were suspended in ethanol containing 0.2% (v/v) HCl, followed by ethanol. The microsphere:ethanol solution was evaporated at room temperature, and 200 µl of N,N-dimethylformamide (Sigma) was added to elute the fluorescent dye from the microspheres. The absorbance of the N,N-dimethylformamide solution was determined at 450 nm.

AMPK and NOS Activity Assays-AMPKα2 and -α1 were sequentially immunoprecipitated using 200 µg of protein, 2 µg of a rabbit AMPKα2 polyclonal antibody (Abcam), 2 µl of a rabbit AMPKα1 monoclonal antibody (Abcam), and immobilized Recomb protein A beads (Pierce). AMPK activity in the immune complexes was measured for 24 min at 30 °C (within the pre-determined linear range) in the presence of 200 µM AMP and calculated as picomoles of phosphate incorporated into the SAMS peptide (100 µM; GenWay Biotech) per min per mg of protein subjected to immunoprecipitation. NOS activity was measured on gastrocnemius and SVL muscle; samples were homogenized in lysis buffer, and 5 µl of sample (~70 µg of protein) was added to pre-heated assay buffer.

OXPHOS Activity Assays-Post-600 × g supernatants of gastrocnemius muscle were prepared as described previously (35).
Briefly, frozen samples were homogenized in 120 mM KCl, 20 mM HEPES (pH 7.4), 2 mM MgCl2, 1 mM EGTA, and 5 mg/ml bovine serum albumin and centrifuged twice at 600 × g for 10 min at 4 °C. The second supernatant was stored in 2 µg/µl aliquots at −70 °C. All assays were performed at 30 °C in a final volume of 1 ml using a SpectraMax Plus 384 spectrophotometer (Molecular Devices). Prior to measurement of complex I, I + III, II, and II + III activities, samples were diluted 1:1 in hypotonic media (final concentration of 25 mM potassium phosphate (pH 7.2), 5 mM MgCl2) and freeze-thawed three times. Complex I activity (NADH:ubiquinone oxidoreductase; EC 1.6.5.3) was measured by following the decrease in absorbance due to the oxidation of NADH at 340 nm, with 425 nm as the reference wavelength (ε = 6.81 mM⁻¹·cm⁻¹) (35). The reaction was initiated by adding 30 µg of protein to the assay buffer (25 mM potassium phosphate (pH 7.2), 5 mM MgCl2, 2 mM KCN, 2.5 mg/ml bovine serum albumin (fraction V), 130 µM NADH, 65 µM decylubiquinone, 2 µg/ml antimycin A) and monitored for 5 min. Rotenone (2 µg/ml) was then added, and the reaction was monitored for 3 min; complex I activity is the difference between the total enzymatic rate and the rate obtained in the presence of rotenone. Complex I + III (NADH-cytochrome c oxidoreductase) activity was determined as described previously (36) with minor modifications. The reaction was initiated by adding 30 µg of protein to the assay buffer (50 mM potassium phosphate (pH 7.2), 80 µM cytochrome c (bovine heart), 130 µM NADH, 2 mM KCN, 5 mM MgCl2). The increase in absorbance due to the reduction of ferricytochrome c (ε = 19 mM⁻¹·cm⁻¹) was monitored for 3 min at 550 nm with 580 nm as the reference wavelength. Rotenone (2 µg/ml) was added, and the reaction was monitored for a further 3 min; complex I + III activity is the rotenone-sensitive rate. Complex IV activity (cytochrome c oxidase; EC 1.9.3.1) was measured by following the decrease in absorbance at 550 nm due to the oxidation of ferrocytochrome c, with 580 nm as the reference wavelength (ε = 19.1 mM⁻¹·cm⁻¹) (35). Samples (10 µg) were added to 20 mM potassium phosphate, 15 µM ferrocytochrome c, and 450 µM n-dodecyl β-D-maltoside, and the reaction was monitored for 30 s; complex IV activity was calculated from the initial rate. Ferrocytochrome c was prepared by adding 5 mM dithiothreitol to 200 µM ferricytochrome c. After 20 min, the 550 nm/565 nm absorbance ratio was determined, and the cytochrome c was considered reduced if the ratio was between 10 and 20. Citrate synthase was measured on 10 µg of sample as described by Barrientos (37).

Calculations-The tissue-specific clearances of 2-[14C]DG and 3H-R-BrP (Kg and Kf, respectively) and the metabolic indices for glucose and LCFA (Rg and Rf) were calculated as described previously (38). Kg and Kf are used as concentration-independent indices of muscle glucose and LCFA uptake, respectively; Rg and Rf are the corresponding concentration-dependent indices. Percent cardiac output was calculated from fluorescent intensity as described previously (39) and is expressed as percent cardiac output to the tissue (%QT), where %QT = (fT/fRef)·(tissue_average/tissue_mouse), and fT and fRef are the fluorescent intensities of the tissue and reference sample, respectively. Adequacy of microsphere mixing was assumed if %Q to the left and right kidney agreed within 10%. Of the 43 mice infused with microspheres, 34 met the inclusion criteria for analysis.
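Because each complex activity is extracted from an absorbance slope via the Beer-Lambert law, the arithmetic is compact. The sketch below is ours (the function name and the example numbers are illustrative, not taken from the paper); it shows the rotenone-sensitive complex I case using the extinction coefficients quoted above:

```python
def etc_activity_nmol_min_mg(dA_per_min, eps_mM_cm, protein_mg, path_cm=1.0, volume_ml=1.0):
    """Specific activity (nmol/min/mg) from an absorbance slope via Beer-Lambert.

    dA_per_min : absorbance change per minute (for complex I, use the
                 rotenone-sensitive portion: total rate minus rotenone rate)
    eps_mM_cm  : extinction coefficient in mM^-1 cm^-1 (6.81 for NADH at
                 340/425 nm; 19.1 for ferrocytochrome c at 550/580 nm)
    """
    rate_mM_per_min = dA_per_min / (eps_mM_cm * path_cm)  # concentration change per min
    nmol_per_min = rate_mM_per_min * volume_ml * 1000.0   # mM x ml = umol, then -> nmol
    return nmol_per_min / protein_mg

# hypothetical complex I run: total slope 0.030 A/min, rotenone-insensitive 0.008 A/min,
# 30 ug (0.030 mg) protein in the 1-ml cuvette
print(etc_activity_nmol_min_mg(0.030 - 0.008, 6.81, 0.030))  # ~108 nmol/min/mg
```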
The amount of 2-[14C]DG-P present in the gastrocnemius muscle and the amount of microspheres trapped within the gastrocnemius muscle were used to determine the glucose tissue extraction index (TEI). The glucose TEI was calculated as the percentage of 2-[14C]DG-P (expressed relative to the amount infused) divided by the percentage of microspheres (expressed relative to the amount infused). For the echocardiography experiments, an index linearly related to cardiac output was calculated as heart rate × (diastolic left ventricular internal dimension³ − systolic left ventricular internal dimension³) (28).

Statistical Analyses-Data are means ± S.E. Statistical analysis was performed using a Student's t test, one-way analysis of variance (ANOVA), one-way repeated-measures ANOVA, or two-way repeated-measures ANOVA, where appropriate, with the statistical software package SigmaStat. If the ANOVA was significant (p < 0.05), specific differences were located using Fisher's least significant difference test.

RESULTS

Exercise Capacity and Oxygen Consumption in Vivo Are Impaired in α2-KD Mice during an Exercise Stress Test-At 16 weeks of age, no significant differences were observed between α2-KD mice and WT littermates with respect to body weight (24 ± 2 versus 25 ± 1 g for WT and α2-KD, respectively), muscle mass (77 ± 1 versus 78 ± 2% body weight), or fat mass (8.5 ± 1.4 versus 9.4 ± 0.4% body weight). Basal VO2 was similar between genotypes (78 ± 5 versus 79 ± 6 ml·kgLBM⁻¹·min⁻¹ for WT and α2-KD, respectively), as was the respiratory exchange ratio (0.77 ± 0.02 versus 0.77 ± 0.01). During the exercise stress test, α2-KD mice displayed marked exercise intolerance, as seen by impairments in maximum running speed (38 ± 1 versus 21 ± 1 m·min⁻¹ for WT and α2-KD, respectively; p < 0.001) and running time (23 ± 1 versus 10 ± 1 min; p < 0.001). VO2 during the stress test increased at a similar rate in WT and α2-KD mice (Fig. 1A); however, VO2peak was reduced in α2-KD mice (142 ± 2 versus 113 ± 4 ml·kgLBM⁻¹·min⁻¹; p < 0.001). As a result, α2-KD mice were exercising at a greater percentage of VO2peak than WT mice at any given absolute work rate (supplemental Table S1). The respiratory exchange ratio was similar between genotypes at exhaustion (0.89 ± 0.01 versus 0.90 ± 0.03). At a VO2 of ~90 ml·kgLBM⁻¹·min⁻¹, VCO2 increased disproportionately compared with VO2 in WT mice (Fig. 1B), reflecting a change in the substrates utilized or the onset of acidosis. This effect was not apparent in α2-KD mice (Fig. 1C).

Acute Exercise Experiment, Controlling for Relative and Absolute Exercise Intensity-To examine the role of AMPKα2 in the regulation of skeletal muscle metabolic flux in vivo, α2-KD mice performed a single bout of treadmill exercise at 70% of their maximum running speed (α2-KD70%). Because of the difference in maximum running speed between the genotypes, WT mice that exercised at the same absolute speed as α2-KD70% did so at ~45% of their maximum running speed (WT45%; supplemental Table S2). To best equate results to α2-KD70%, a second group of WT mice was exercised at 70% of their maximum running speed (WT70%; supplemental Table S2). As demonstrated in the results that follow, controlling for absolute and relative exercise intensity is essential for interpretation of the physiological and metabolic responses to exercise in vivo.
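The isotopic indices and the TEI defined under "Calculations" reduce to simple ratios. The following minimal sketch is ours (refs. 29, 38, and 39 give the authoritative equations; names and units here are our assumptions):

```python
import numpy as np

def clearance_Kg(tissue_dgp_dpm_per_g, plasma_dpm_per_ul, t_min):
    """Kg (ml/g/min): tissue 2-[14C]DG-P divided by the integrated plasma
    2-[14C]DG exposure (trapezoidal AUC of dpm/ml over sampling times)."""
    auc = np.trapz(np.asarray(plasma_dpm_per_ul) * 1000.0, t_min)  # dpm*min/ml
    return tissue_dgp_dpm_per_g / auc

def metabolic_index_Rg(Kg, glucose_mM):
    """Rg: concentration-dependent uptake index, Kg times mean arterial glucose."""
    return Kg * glucose_mM

def glucose_TEI(pct_dgp_of_infused, pct_microspheres_of_infused):
    """Tissue extraction index: fractional 2-DG-P retention per fractional flow."""
    return pct_dgp_of_infused / pct_microspheres_of_infused

# hypothetical numbers: plasma tracer decaying over the sampling times after the bolus
t = [7.0, 10.0, 15.0, 20.0, 30.0]
plasma = [120.0, 80.0, 45.0, 28.0, 12.0]
Kg = clearance_Kg(9.0e4, plasma, t)
print(Kg, metabolic_index_Rg(Kg, 8.0))
```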
AMPKα Protein Expression and AMPK Activity Are Impaired in Skeletal Muscle of α2-KD Mice-Similar to other muscle groups (11), expression of the α2-KD subunit in gastrocnemius muscle was increased relative to native AMPKα2 (98 ± 8% higher in α2-KD compared with WT, p < 0.01; supplemental Fig. S1A). A concomitant decrease in AMPKα1 expression was observed in the gastrocnemius of α2-KD mice (51 ± 11% lower in α2-KD compared with WT, p < 0.02; supplemental Fig. S1A). Similar findings for AMPKα2 and α1 expression were observed in SVL muscle (data not shown). In gastrocnemius muscle of WT mice, AMPKα2 (supplemental Fig. S1B) and AMPKα1 activities (supplemental Fig. S1C) increased in an intensity-dependent manner. AMPKα2 and -α1 activities were barely detectable in the gastrocnemius of α2-KD mice under sedentary conditions and did not change in response to exercise. Gastrocnemius acetyl-CoA carboxylase-β Ser221 phosphorylation was similar between genotypes at rest and increased to a similar extent in all groups in response to exercise (supplemental Fig. S1D).

Skeletal Muscle ATP Concentrations Decrease in α2-KD Mice during Exercise in Vivo-In response to exercise, no significant changes in ATP were observed in the gastrocnemius of WT45% or WT70% (Table 1). In contrast, exercise significantly decreased gastrocnemius ATP levels in α2-KD70%. Lactate and creatine (Cr) significantly increased, whereas phosphocreatine (PCr), PCr:(PCr + Cr), and glycogen significantly decreased during exercise in all groups (Table 1). In α2-KD70% and WT70%, ADPfree, AMPfree, and AMPfree:ATP all increased in response to exercise (Table 1). The similar increases in AMPfree and AMPfree:ATP observed between α2-KD70% and WT70% show that, by these criteria, cellular stress was equally elevated in these groups compared with WT45%. This finding emphasizes the need to exercise mice at the same relative work intensity to obtain comparable energetic responses in vivo.

Arterial Metabolites and Hormones Are Altered in α2-KD Mice at Rest and during Steady-State Exercise in Vivo-An increase in exercise intensity resulted in significantly lower arterial glucose levels in WT mice (Fig. 2A). Compared with WT70%, arterial glucose levels during exercise were significantly greater in α2-KD70%. Arterial NEFAs (Fig. 2B) and insulin (Fig. 2C) decreased to similar concentrations in all groups during exercise. Although no differences in basal insulin levels were observed between individual groups, basal insulin levels were greater in α2-KD70% compared with the combined average of all WT mice (98 ± 6 versus 71 ± 7 pM, p < 0.05). Arterial lactate increased over time in all exercise groups (Fig. 2D), and a significant group effect was observed with α2-KD70% > WT70% > WT45% (p < 0.01).

Indices of Glucose Uptake, but Not LCFA Uptake, Are Impaired in Skeletal Muscle of α2-KD Mice during Exercise in Vivo-In WT mice, an increase in exercise intensity increased the plasma disappearance of 2-[14C]DG at 7 and 10 min (Fig. 3A). Gastrocnemius Kg (Fig. 3B) and Rg (Fig. 3C) also increased in WT70% compared with WT45%. At the same relative exercise intensity, the disappearance of plasma 2-[14C]DG was attenuated at 7 min in α2-KD70% mice when compared with WT70%. In α2-KD70% mice, gastrocnemius Kg was impaired by ~60% when compared with WT70% mice (Fig. 3B).
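The ADPfree and AMPfree values in Table 1 are derived quantities rather than direct measurements. Reference (34) describes the authoritative method; the sketch below uses the creatine kinase (CK) and adenylate kinase (AK) equilibria with typical literature constants, which are our assumption, not values taken from this paper:

```python
def free_adp_amp(atp_mM, pcr_mM, cr_mM, pH=7.0, K_ck=1.66e9, K_ak=1.05):
    """Estimate ADPfree and AMPfree (mM) from CK and AK equilibria.

    CK: K_ck = [ATP][Cr] / ([ADP][PCr][H+]),  K_ck in M^-1 (assumed value)
    AK: K_ak = [ATP][AMP] / [ADP]^2           (assumed value)
    """
    h = 10.0 ** (-pH)                               # [H+] in M
    adp = (atp_mM * cr_mM) / (pcr_mM * h * K_ck)    # mM
    amp = K_ak * adp ** 2 / atp_mM                  # mM
    return adp, amp

# hypothetical resting-muscle values: 5 mM ATP, 15 mM PCr, 10 mM Cr
adp, amp = free_adp_amp(atp_mM=5.0, pcr_mM=15.0, cr_mM=10.0)
print(f"ADPfree = {adp * 1e3:.1f} uM, AMPfree = {amp * 1e6:.1f} nM")  # ~20 uM, ~85 nM
```

With exercise, the fall in PCr and rise in Cr drive ADPfree and especially AMPfree (quadratic in ADPfree) sharply upward, which is why AMPfree:ATP is the sensitive stress index used above.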
Arterial Metabolites and Hormones Are Altered in α2-KD Mice at Rest and during Steady State Exercise in Vivo-An increase in exercise intensity resulted in significantly lower arterial glucose levels in WT mice (Fig. 2A). Compared with WT70%, arterial glucose levels during exercise were significantly greater in α2-KD70%. Arterial NEFAs (Fig. 2B) and insulin (Fig. 2C) decreased to similar concentrations in all groups during exercise. Although no differences in basal insulin levels were observed between individual groups, basal insulin levels were greater in α2-KD70% compared with the combined average of all WT mice (98 ± 6 versus 71 ± 7 pM, p < 0.05). Arterial lactate increased over time in all exercise groups (Fig. 2D), and a significant group effect was observed with α2-KD70% > WT70% > WT45% (p < 0.01).

Indices of Glucose Uptake, but Not LCFA Uptake, Are Impaired in Skeletal Muscle of α2-KD Mice during Exercise in Vivo-In WT mice, an increase in exercise intensity increased the plasma disappearance of 2-[¹⁴C]DG at 7 and 10 min (Fig. 3A). Gastrocnemius Kg (Fig. 3B) and Rg (Fig. 3C) also increased in WT70% compared with WT45%. At the same relative exercise intensity, the disappearance of plasma 2-[¹⁴C]DG was attenuated at 7 min in α2-KD70% mice when compared with WT70%. In α2-KD70% mice, gastrocnemius Kg was impaired by ~60% when compared with WT70% mice (Fig. 3B). Gastrocnemius Rg during exercise was also impaired ~35% in α2-KD70% mice when compared with WT70% (Fig. 3C).

FIGURE 1. Oxygen consumption is impaired in 16-week-old chow-fed C57BL/6J mice expressing a kinase-dead form of AMP-activated protein kinase α2 (α2-KD) in cardiac and skeletal muscle. Compared with WT littermates, the increase in oxygen consumption (ΔVO2) during an exercise stress test is attenuated in α2-KD mice (A). B and C, carbon dioxide production (VCO2) during an exercise stress test was plotted against VO2 for WT and α2-KD mice, respectively. Note the change of slope of VCO2 versus VO2 in WT mice (denoted by the arrow) that is not present in α2-KD mice. Data are mean ± S.E. for n = 8-9. kg LBM indicates kilograms of lean body mass.

TABLE 1. Measured and calculated metabolites (normalized to total creatine levels) and glycogen at rest and immediately following exercise in gastrocnemius muscle of 16-week-old chow-fed C57BL/6J mice expressing WT or a kinase-dead form of AMP-activated protein kinase α2 (α2-KD) in cardiac and skeletal muscle. Data are mean ± S.E. for n = 5-7 per group.

An increase in exercise intensity tended to increase Kg in the soleus of WT mice (p = 0.07 for WT70% versus WT45%; supplemental Fig. S2A), whereas Kg in WT70% was significantly greater than WT45% in SVL (supplemental Fig. S2B). These results paralleled findings observed for Rg in soleus (supplemental Fig. S2C) and SVL (supplemental Fig. S2D). As with the gastrocnemius, Kg in soleus and SVL was impaired ~30% in α2-KD70% mice when compared with WT70%; however, soleus and SVL Rg was similar between α2-KD70% and WT70%. Taken together, these findings show that glucose concentration-dependent (Rg) and -independent (Kg) indices of MGU are impaired in α2-KD mice during exercise in vivo compared with WT mice exercising at the same relative intensity. The finding that MGU was greater in WT70% compared with WT45% shows for the first time that the 2-[¹⁴C]DG method (38) can be used to determine the effect of different exercise intensities on multiple muscle groups in vivo.

Indices of LCFA clearance (Kf) and uptake (Rf) are shown in supplemental Fig. S3. Kf increased to similar rates during exercise in soleus, gastrocnemius, and SVL of α2-KD70% and WT70%. In WT45%, Kf responses were generally reduced. Rf significantly increased in soleus, gastrocnemius, and SVL of α2-KD70%. In WT70%, Rf significantly increased in soleus and gastrocnemius, whereas Rf was elevated in gastrocnemius of WT45%. Given that Kf and Rf increased normally in response to exercise in α2-KD mice, it can be concluded that AMPKα2 is not essential for skeletal muscle LCFA uptake during exercise in vivo. This is in agreement with previous studies performed ex vivo (20, 21).

Percent Cardiac Output to Skeletal Muscle Is Altered in α2-KD Mice at Rest and during Exercise in Vivo-Under basal conditions %QG was ~2.5-fold greater in α2-KD mice compared with WT mice (Fig. 3D). Exercise increased %QG in WT45% (~4.5-fold) and WT70% (~4-fold). Exercise did not alter %QG in α2-KD70%. The glucose TEI did not differ between α2-KD and WT mice at rest (Fig. 3E). Exercise increased the glucose TEI to a similar extent in α2-KD70% and WT70%, demonstrating that the impairment in MGU seen in the gastrocnemius of α2-KD70% compared with WT70% during exercise was likely due to reduced substrate delivery (i.e. %QG). The TEI did not increase in WT45%, demonstrating that in WT mice the extraction of glucose by skeletal muscle is accelerated as exercise intensity increases.
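A brief note on the two glucose indices used throughout these Results (our restatement of the usual 2-deoxyglucose kinetics, offered as an assumption rather than a quotation of the paper's Methods): the two indices are linked by the arterial glucose concentration,

\[
R_{g} = K_{g}\,[\mathrm{glucose}]_{\mathrm{arterial}},
\]

so Kg behaves as a glucose-concentration-independent clearance index, whereas Rg scales with the prevailing arterial glucose level. This is one way the elevated arterial glucose of α2-KD70% mice (Fig. 2A) could leave Rg nearer to WT values even where Kg is clearly impaired.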
Cardiac Fuel Uptake and Function during Exercise in Vivo Are Not Impaired in α2-KD Mice-Cardiac Kg was similar between genotypes at rest, and exercise did not significantly increase Kg in any group (supplemental Fig. S4A). Cardiac Rg was also similar between genotypes at rest, and exercise significantly increased cardiac Rg in α2-KD70% and WT70% (supplemental Fig. S4B). Cardiac Rg did not increase during exercise in WT45% and was significantly less than cardiac Rg in α2-KD70% and WT70%. No significant differences were observed with respect to Kf or Rf in cardiac muscle between any of the three groups (data not shown). Heart rate (683 ± 12 versus 669 ± 9 versus 650 ± 5 beats·min⁻¹ for WT45%, WT70%, and α2-KD70%, respectively) and cardiac output (14 ± 1 versus 14 ± 1 versus 15 ± 1 ml·min⁻¹ for WT45%, WT70%, and α2-KD70%, respectively) were similar between groups in response to exercise. Thus, a kinase-dead AMPKα2 subunit in cardiac muscle does not adversely affect substrate uptake or cardiac function in response to exercise.

In the basal state, total NOS activity was ~35% lower in gastrocnemius of α2-KD mice (Fig. 4B). Exercise increased NOS activity in WT70% but not in either WT45% or α2-KD70%. Basal NOS activity was also impaired in SVL muscle of α2-KD mice (supplemental Fig. S5); however, exercise did not alter SVL NOS activity in any group, a finding that may be related to less recruitment of this muscle (i.e. attenuated Rg and Kg when compared with gastrocnemius muscle). Thus, AMPK is required for full expression of nNOS, as well as for NOS activity at rest and in response to exercise. The observation that NOS activity increased in gastrocnemius of WT70% but not WT45% shows that NOS activity is sensitive to exercise intensity.

Activities of Specific Electron Transport Chain (ETC) Complexes Are Reduced in Skeletal Muscle of α2-KD Mice-The finding that exercise capacity, VO2peak, and ATP generation are impaired, while changes in arterial lactate levels are accelerated, in α2-KD mice during exercise despite normal extraction of glucose in skeletal muscle led us to hypothesize that mitochondrial function is impaired in these mice. Support for this hypothesis comes from the finding that a reduction in nNOS protein expression, such as seen in the present study, is associated with impaired activity of enzymes involved in skeletal muscle OXPHOS (41, 42). As shown in Table 2, complex I and complex IV activities of the ETC were significantly impaired in sedentary α2-KD mice when compared with WT mice, whereas no changes were observed for complex I + III, II, or II + III activities. Identical findings were observed if complex activities were normalized to citrate synthase levels, which did not differ between genotypes (50 ± 7 versus 52 ± 11 mol·min⁻¹·mg⁻¹ for WT and α2-KD, respectively). Thus, the impairment in skeletal muscle ETC complexes in α2-KD mice was not because of a nonspecific reduction in mitochondrial content, a finding that is in agreement with previous observations demonstrating no alteration in mitochondrial density, DNA, and other markers of mitochondrial content and biogenesis in gastrocnemius muscle of untrained α2-KD mice (43).

DISCUSSION

This study supports for the first time in vivo the hypothesis that AMPK is a critical mediator of the metabolic response to exercise.
We demonstrate that AMPK regulates skeletal muscle metabolism in vivo at multiple levels, with the overall result being that a defect in AMPKα2 subunit activity in skeletal muscle grossly impairs exercise tolerance. Without a functionally active AMPKα2 subunit, glucose uptake during exercise in vivo is impaired in different skeletal muscle groups of α2-KD mice compared with WT littermate mice exercising at the same relative intensity. This may be due in part to impaired substrate delivery to exercising muscle (estimated via %QG), as the glucose TEI was not different between the two genotypes. Specific ETC complex activities in skeletal muscle were also impaired in α2-KD mice. Taken together with the findings of decreased skeletal muscle ATP concentrations, greater arterial lactate accumulation, and reductions in VO2peak during maximal exercise in α2-KD mice, our findings suggest that the exercise intolerance in α2-KD mice is the result of impaired energy-producing oxidative pathways (see Fig. 5).

The novel finding that complex I and complex IV activities of the ETC were impaired in the gastrocnemius of α2-KD mice reveals new insight regarding the role of AMPK in skeletal muscle, and it provides a mechanism that could account for or contribute to the exercise intolerance observed in the α2-KD mouse. Complex I and complex IV represent the proximal and distal ETC complexes, respectively, and thus play an integral role in OXPHOS and the generation of ATP. A deficiency in complex I activity will lead to excess levels of NADH and a lack of NAD+, resulting in impaired Krebs cycle function and elevated blood lactate (44), the latter being observed in α2-KD mice during exercise in this study. A deficiency in complex IV activity would impair the proton gradient required for subsequent ATP synthesis (45), explaining the accelerated net ATP degradation observed in skeletal muscle of α2-KD mice during exercise in vivo. Importantly, the changes in complex I and complex IV activities in α2-KD mice occurred despite similar levels of citrate synthase activity when compared with WT mice. This agrees with previous findings showing that mitochondrial density, mitochondrial DNA, cytochrome c protein expression, δ-aminolevulinate synthase mRNA expression, and peroxisome proliferator-activated receptor γ coactivator-1α mRNA expression are similar in gastrocnemius muscle of untrained α2-KD and WT mice (43). Thus, a functionally inactive AMPKα2 subunit is sufficient to impair mitochondrial function, without adversely altering markers of muscle mitochondrial content.

Although OXPHOS capacity was impaired in skeletal muscle of α2-KD mice, it is unclear whether the α2-KD subunit per se was directly responsible for this phenomenon. A novel finding with important implications was that nNOS protein expression was impaired in skeletal muscle of α2-KD mice. This finding is supported by the close association between AMPKα2 and nNOS (40). A decrease in nNOS protein expression has been associated with impairments in OXPHOS. Indeed, in skeletal muscle of patients with amyotrophic lateral sclerosis, reduced nNOS expression is highly associated with impaired ETC complex activities (42). Similarly, in skeletal muscle of nnos⁻/⁻ mice, ETC complex activities are reduced (41).
Thus, the impairments in OXPHOS within skeletal muscle of α2-KD mice may be due to a direct impairment of AMPK or to indirect effects mediated by reductions in nNOS protein expression. The reduced nNOS expression may also have caused an impairment in muscle blood flow, as %QG did not increase in response to exercise in α2-KD mice, whereas an ~4-fold increase was observed in WT70% and WT45%. It has been shown that vasodilation in response to mild exercise is significantly impaired in animal models where nNOS is partially impaired or ablated in skeletal muscle (46). Likewise, Lau et al. (47) have shown that ~50% of contraction-induced arteriolar dilation in vitro is dependent on nNOS. Conversely, restoring nNOS at the sarcolemma of skeletal muscle significantly improves the exercise-induced increase in skeletal muscle perfusion (48). It is well known that contracting muscle releases nitric oxide (NO) (49, 50). Given that NOS activity in gastrocnemius of α2-KD mice was impaired in response to intense exercise, it is a plausible hypothesis that NO efflux from α2-KD mice was also impaired. NO is a potent stimulator of vasodilation (51), and as such impaired NOS activity during exercise may also have suppressed arteriolar relaxation in α2-KD mice. Aside from nNOS, it has been shown that the gastrocnemius of α2-KD mice contains significantly fewer capillaries compared with WT mice (52). Given that exercise normally causes a redistribution of blood flow toward contracting muscle (53), fewer capillaries in the gastrocnemius of α2-KD mice might also have resulted in less blood flow to this tissue during exercise.

We found for the first time that suppressed activation of AMPKα2 in skeletal muscle during exercise in vivo was associated with ~60 and ~35% reductions in the concentration-independent (Kg) and concentration-dependent (Rg) indices of MGU, respectively.

FIGURE 5. Our results show that skeletal MGU during exercise is dependent on AMPKα2 activation, as mice expressing α2-KD have impaired MGU when compared with WT mice at the same relative exercise intensity. The impaired MGU in α2-KD mice is at least partially because of reduced vasodilation, which arises from an inability of AMPK to activate NOS and thus stimulate NO production. The impairment in AMPKα2 activation and/or reductions in the skeletal muscle isoform of neuronal NOS (nNOS) also attenuate mitochondrial function. This reduces mitochondrial ATP generation and diverts glucose toward anaerobic ATP generation, resulting in elevated plasma lactate levels. The whole-body phenotype of these impairments is a reduction in exercise tolerance. G-6-P, glucose 6-phosphate.
The Security Versus Freedom Dilemma. An Empirical Study of the Spanish Case

One of the classic debates in public opinion, now more prevalent due to the COVID-19 pandemic, has been the dilemma between freedom and security. Following a theoretical review, this article sets out to establish the sociodemographic profiles and the variables that can correlate with and/or explain the inclination towards one or the other (that is, the dependent variable "freedom-security"), such as victimization or the assessment of surveillance. The analysis is based on the results of a survey prepared by the Center for Sociological Research (CIS, in Spanish) and administered to a sample of 5,920 Spaniards. The conclusions indicate that the majority inclination is for security, especially among older men with an elementary education attainment level and a right-wing ideology. Furthermore, although victimization correlated with the dependent variable, it was the perception of being a possible victim, rather than the actual experience of having been a victim, that led to a preference for security. Finally, the positive assessment of surveillance through technologies such as video cameras explains, or is strongly associated with, security, making it a promising line of research for future work and a means to improve the understanding of the analyzed dilemma.

INTRODUCTION

The COVID-19 pandemic is not the first event that has forced public opinion to consider the dilemma of freedom versus security in a world dominated by the influence of so-called new information and communication technologies. Currently, technological control is provoking debates around the right to privacy in the context of the surveillance society (Lyon, 2018; Lyon and Wood, 2021). There are precedents for the influence of information and communication technologies, the extent to which they can control or influence citizens and countries, and their effect on these actors when valuing one side over the other in balancing freedom versus security. By way of example, the following cases affected both the personal safety of private citizens and nation-states: the "Wikileaks" case in 2006; the "Snowden" case in 2013; the "Cambridge Analytica" case in 2014; the spying on Jeff Bezos by Saudi Arabia in 2019; and the most recent "Pegasus case," which was made public in 2021.

Currently, the incidence of the pandemic has had a more significant impact on control over citizens and a corresponding lower degree of freedom. An example of this is the research carried out by the Canadian Citizen Lab into internet censorship, wherein it analyzed how the Chinese authorities, through WeChat, used an artificial intelligence system capable of detecting the semantic meaning of texts. From 1 January to 15 February 2020, up to 516 keyword combinations were set to trigger censorship, automatically locking the server and preventing further communication (Ruan et al., 2020). According to "The COVID-19 Civic Freedom Tracker" database, developed by "The International Center for Not-for-Profit Law" (www.icnl.org), most of the restrictions applied by States as a result of the pandemic are: an increase in powers related to surveillance of citizens; suspension of rights; control over information; and delay of political elections.
The Spanish case is even more severe regarding freedom of information and the press, given that the Spanish government commissioned a government body, the Sociological Research Center, to include in its February Barometer a question on the possibility of limiting all information on the pandemic to official sources (González-Requena, 2020). Given these antecedents, this research aims to analyze the dilemma based on the opinions, attitudes, and behaviors of Spaniards with regard to freedom and security. This is a continued and constant dilemma in the field of sociology and the social sciences, starting from the analyses of the change from materialist to postmaterialist values worldwide as stated by Inglehart (Inglehart, 1990; Inglehart, 2018a), and particularized for the case of Spain by Díez Nicolás (2011, 2020). The working hypothesis established by Inglehart (1977), widely verified in countless investigations, was that societies and individuals that reach higher levels of personal security, including a lower level of crime and a higher level of economic security, tend to be oriented toward more libertarian or self-expressive values. This trend, however, is not valid for all countries, as shown by the different waves of the World Values Survey. In the case of Spain, there has been a decrease in the post-materialism index compared to the waves of 1990 and 2005, further verified by the most recent wave of 2014 (Díez Nicolás, 2020). This empirical inclination can be ascribed to factors that have made citizens perceive greater personal insecurity, such as, among others, the irruption of jihadist terrorism, the increase in organized crime and crime in general, the greater flexibility of the labor market and job insecurity, the uncertainty about the future pension model, the increase in crime and insecurity related to the internet and social networks, the real estate market, or, lately, the current global viral pandemic.

For this reason, we believe that, in the Spanish case, citizens will choose a greater or lesser degree of freedom depending on their perception of security. In this regard, we believe that security takes precedence over a greater or lesser degree of freedom; in other words, security is, to a greater extent, the dominant value over freedom. More specifically, and as a working hypothesis, we believe that historical, economic, geographical, or sociological influences and the perception that the majority of Spaniards have of citizen insecurity determine that security be valued more highly than freedom. In this instance, citizen insecurity refers to crime and other types of insecurity, such as economic, employment, health, or informational insecurity.

This study traces the most significant theories about security versus freedom. It presents an empirical investigation for the Spanish case, based on the 2016 CIS General Social Survey, in which, first, a descriptive analysis is carried out based on the more significant sociodemographic and socioeconomic variables. Secondly, an explanatory analysis is carried out using multiple regression models to discover whether Spaniards prefer security over freedom, using crime- and victimization-related variables (perception, opinion, attitude, and experience) as independent variables.

FREEDOM VERSUS SECURITY

In a globalized world, the interrelation and connectivity of countries, economies, and citizens are constant.
In this order of things, it is observed that the private sphere is ever decreasing, resulting in a smaller margin of freedom, whether for individuals or collectives, whereby citizens, in general, cannot control their information themselves, and the privacy of their information is constantly threatened. There is a general perception of vertiginous social change; hence the data mentioned above from the World Values Survey on the orientation of the most developed countries in recent waves towards more materialistic, scarcity, or survival values instead of values related to postmaterialism, self-expression, or emancipation (Inglehart, 2018b). Our research does not focus on the classic six or twelve items of materialist/postmaterialist values but rather on the debate between, on the one hand, freedom and accessibility to surveillance information and, on the other hand, security related to surveillance linked to citizen security, such as personal security against crime and victimization. The research question for the Spanish case is: Do Spaniards, in general, perceive a greater degree of citizen insecurity and thus accept lower degrees of freedom in return? Or, simply stated: Do Spaniards demand higher levels of security measures because they feel insecure?

It is not easy to define the concept of freedom in philosophical terms, as it is a polyhedral and contradictory word. However, the type of freedom at stake is easier to define, since it affects the collective. Two examples of freedom from the territorial and evaluative perspectives are the differing visions of American and European liberalism (Leonard, 2011) or Bauman's consumerist interpretation of capitalist liberalism (Bauman, 1989). Similarly, differences of perspective could be included from the academic stance of authors such as Bay (1958), Sen (2001), Skinner (2012), and Honneth (2015). The term freedom is contradictory and difficult to apply to specific realities, and is even more complicated when combined with the term security. In this sense, the questions posed are: What freedom? Freedom for whom? How much freedom? Freedom for what purpose? Inversely, the questions posed could be: What security? Security for whom? How much security? Or even, security for what purpose?

Suppose we place ourselves in the classic dilemma, positive versus negative freedom (Berlin, 2002; Rothbard, 2015), or, more recently, quantitative versus qualitative freedom (Dierksmeier, 2019). In that case, it is observed that the object of freedom passes from the individual/property dyad to a triad of the individual, one's own good, and the good of others. In this instance, we understand freedom as being able to carry out any individual or collective initiative, without any limitations or coercion, whether by the State or other individuals, and with premises and objectives that reinforce both one's own good and that of others. With this definition in mind, we believe that we can answer the questions previously formulated.

The concept of security has a similar or even greater number of facets than that of freedom. The most classic issue is that there are different types of security: national or state security, which ensures the protection of the State, and human security, which ensures the protection of individuals (Mack, 2005; Krause and Williams, 2016). Logically, to the two types of security mentioned above, the supranational system that is increasingly important in the globalized world should also be added.
Similarly, these supranational entities, together with nation-states, would also become subjects responsible for security. To these entities, we could also add other new actors such as NGOs or public opinion. References to national or state security or supranational security are logically interrelated; the denomination of collective security seems more logical. In December 2004, the High-level Panel of the United Nations Secretary-General on Threats, Challenges, and Change presented a report entitled "A more secure world: our shared responsibility." The report highlights six groups of threats to collective security: conflicts between states; violence within the state (civil wars, human rights abuses, and genocide); poverty, infectious diseases, and environmental degradation; nuclear, radiological, chemical, and biological weapons; terrorism; and transnational organized crime (Morillas, 2007). The UN Secretary-General, K. Annan, also pointed to the March 2005 document entitled "In larger freedom: Towards Development, Security and Human Rights for All," highlighting in point IV, "Freedom from Fear," that most of the victims of these new conflicts are civilians.

The above notwithstanding, the discussion remains constant, whether in reference to state security or human security. With regard to the former, many believe that the State is predominant in matters related to security, as it is the institution which must ensure it. Although individual citizens remain a definitive reference in this matter, it is the State that provides the necessary framework for the security of all. In the latter case, although human security is essentially focused on protecting individuals, there are two variants: the focus on "freedom from want" and the focus on "freedom from fear." In the first, human security is based on basic human needs, or more specifically, on threats to well-being in the spheres of human rights, religion, poverty, hunger, disease, epidemics, the environment, wars, education, and information. In the second, human security revolves around the elimination of all types of coercion, threat, and violence in the daily lives of individuals (Suhrke, 1999; Seiple and Hoover, 2004; Knox Thames et al., 2009; Seiple et al., 2015).

Bauman's sociological theory of liquid modernity and the nature of community extends the debate. Individuality increases freedom but does so at the expense of security and a sense of community. The concepts of "freedom versus security" and "individuality versus community" are simultaneously complementary and contradictory. Increasing either freedom or security comes at the expense of the other. The conflict between "security and freedom" and between "community and individuality" may never be resolved, but as they are equally indispensable values, we continue to search for a solution (Bauman, 2000; Bauman, 2001). In this sense, achieving a balance between freedom and security is probably impossible. The problem, however, is that when security is lacking, free agents are deprived of the trust without which freedom can hardly be exercised. When, on the contrary, it is freedom that is lacking, security feels like slavery or a prison (Bauman, 2005). In methodological terms, in this research we will consider information through the new information and communication technologies, which would essentially fit into the field of human security, with regard to both "freedom from want" and "freedom from fear."
Therefore, we understand security to be the central value that encompasses both the structure of human needs and its limitations due to coercion and threats in the daily lives of individuals. These definitions align with the United Nations Development Programme (1994) and, more specifically, with the idea that freedom also includes security. However, in operational terms, we believe that, among others, fear, insecurity, coercion of religious freedom, hunger, crime, and epidemics can constrain citizens, essentially because the survival instinct is more fundamental than freedom. As Kofi Annan, Secretary-General of the United Nations, states in point 14 of his report "In larger freedom: Towards Development, Security and Human Rights for All" of 21 March 2005: "The notion of larger freedom also encapsulates the idea that development, security, and human rights go hand in hand" (Annan, 2005). Grim and Finke (2012), in an empirical investigation in 200 countries, observed that when governments and various social groups restrict religious freedom, the possibilities of violent persecution, conflicts, instability, and terrorism increase.

MATERIALS AND METHODS

This work is based on the descriptive analysis of a survey administered in Spain at the start of 2016, in which the behavior of the dependent variable "freedom or security" is analyzed. The survey research was carried out by the Sociological Research Center (CIS) on a representative sample of adult Spaniards (see Table 1). The sample selection is based on a vast network of sampling points by municipality and a multistage sample selection system, culminating in face-to-face interviews. The sampling error was ±1.4% for the whole of the corresponding sample. All the methodological information of the survey, such as the technical sheet, questionnaire, data matrix, and descriptive results, is available for download at the corresponding link (see Table 1).

The instrument or questionnaire presents the study variable ("freedom-security") in the following literal way: "On a scale of 0-10, in which 0 means having full access to information even if it meant losing security, and 10 means having maximum security even if it meant losing access to information, where would you position yourself? [0 = Maximum access to information even if it meant losing security (Freedom); 10 = Maximum security even if it meant losing access to information (Security)]." The question or dependent variable used captures an attitude, a certain predisposition, or a simple opinion rather than values. The latter, according to Rokeach (1973), are important life goals or standards which serve as guiding principles in a person's life, while attitudes are learned predispositions to respond in a consistently favorable or unfavorable manner with respect to a given object (Fishbein and Ajzen, 1975).

The independent variables used include, on the one hand, traditional classificatory sociodemographic variables such as gender, age, marital status, subjective social class, ideology, education, size of locality, income, national identity, and religion. On the other hand, they include a set of questions related to security, such as victimization, having been a victim of a crime, reporting a crime, having engaged in delinquent behavior in youth, and the perception of potentially being a victim of a crime (Herranz and Fernández-Prados, 2019), and other questions related to freedom of information or privacy, such as internet use and the assessment of the presence of video cameras in public spaces.
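As a concrete illustration of how the instrument just described could be prepared for analysis, the following minimal Python sketch builds the 0-10 dependent variable and recodes a few dichotomous predictors. All file and column names here are hypothetical placeholders, not the CIS codebook's actual identifiers.

```python
import pandas as pd

# Hypothetical sketch: file and column names are placeholders,
# not the actual CIS variable names.
df = pd.read_csv("cis_survey_2016.csv")

# Dependent variable: 0 = maximum access to information (freedom),
# 10 = maximum security; keep only valid 0-10 answers.
df["freedom_security"] = pd.to_numeric(df["freedom_security_raw"], errors="coerce")
df = df[df["freedom_security"].between(0, 10)]

# Dichotomous recodes for the victimization-related predictors.
for col in ["victim_of_crime", "reported_crime", "delinquent_adolescence"]:
    df[col] = df[col].map({"yes": 1, "no": 0})

print(df["freedom_security"].agg(["mean", "std"]).round(2))  # sample-wide M, SD
```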
The data analysis follows a descriptive, correlational, and explanatory approach to the dependent variable being studied and is presented in three sections of the results. The descriptive analysis aims to draw a profile according to the sociodemographic variables and other "freedom-security" dilemma issues. The correlational analysis shows the relationships between the continuous variables and the study variable, and their orientation (either towards greater security or towards greater freedom). Finally, in the explanatory analysis, a table with two multiple regression models is presented: one with all the outstanding independent variables and the other with only those deemed to be significant.

RESULTS

Descriptive and Profile

The survey sample is composed mainly of women (51.5%) who are over 60 years old (28.4%), married (55.6%), of middle subjective social class (70.4%), and with a centrist ideology (33.9%), reflecting the most representative social characteristics of the Spanish population. Table 2 also contains the descriptive analysis, mean and standard deviation, of the dependent variable "freedom-security" for each of the sociodemographic and socioeconomic characteristic values of the sample. For the sample as a whole, the mean of the "freedom-security" variable is 6.4 on a scale of 0-10, with a standard deviation of 2.36. In essence, this means that the Spanish population leans towards "security." The profile where security tends to stand out corresponds to that of men (M = 6.6; SD = 2.28); over 60 years old (M = 6.9; SD = 2.29); widowed, divorced or separated (M = 6.7; SD = 2.44); low subjective social class (M = 6.5; SD = 2.38); with right-wing ideology (M = 6.8; SD = 2.3); elementary education (M = 7.0; SD = 2.22); rural locality (M = 6.5; SD = 2.34); low income (M = 6.7; SD = 2.34); identified as a Spanish national (M = 6.5; SD = 2.30); and practicing Catholic (M = 6.8; SD = 2.19).

Table 3 also shows the description of the variables related to safety or victimization and freedom or privacy that appear in the questionnaire. Thus, half the respondents said they had been the victim of a crime (50.7%), a third had reported a crime (33.6%), a fifth had engaged in delinquent or quasi-criminal behavior in adolescence (22%), and a tenth considered they were likely to be the victim of a crime (9.4%). Likewise, the vast majority considered surveillance cameras in public spaces to be very good (37.7%) or good (46.6%); finally, almost three-quarters used the internet (72.3%), and almost half the respondents used social networks (48.2%).

As in the previous table, the means and standard deviations for each variable are presented, and the values of the independent variables that lean towards either security or freedom are highlighted. Thus, the profile of those surveyed with higher means, who therefore lean more towards security, is of those who had never been the victim of a crime (M = 6.6; SD = 2.29); never reported a crime (M = 6.5; SD = 2.34); nor engaged in pre-delinquent behaviors in adolescence (M = 6.5; SD = 2.29); although they did consider that they were likely to be a victim of a crime (M = 6.7; SD = 2.36); strongly agreed with surveillance cameras (M = 6.8; SD = 2.26); and did not use the internet (M = 7.1; SD = 2.17).
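Continuing the hypothetical sketch above, the per-category means and standard deviations reported in Tables 2-3 amount to a grouped aggregation of the dependent variable:

```python
# Mean, SD and n of the 0-10 "freedom-security" scale within each category
# of a grouping variable (column names remain placeholders).
profile = (df.groupby("gender")["freedom_security"]
             .agg(["mean", "std", "count"])
             .round(2))
print(profile)  # compare with the profile values reported above
```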
Correlation Among Continuous Variables

The results of the correlation matrix between the dependent variable, "freedom-security," and the continuous sociodemographic variables and those related to victimization and privacy are shown in Table 4. Only the variable "nationalism" does not correlate with the study variable; all the others reach a significance of p < 0.001, except for size of locality and religious practice, with p < 0.01. Although it should be borne in mind that the n of the sample is high and can produce significant correlations with most of the variables, we can point to certain co-variations between the dependent variable and the remainder. That is, a desire for greater levels of security is related to older age, lower social class, more right-wing ideology, lower educational attainment, living in smaller localities, and greater religious observance. In addition, the demand for greater security shows a negative correlation with having been the victim of a crime, reporting a crime, and having engaged in pre-criminal or delinquent behaviors, and goes together with stronger agreement with surveillance cameras and lower use of social networks.

Explanatory and Regression Analysis

Finally, Table 5 shows the results of two multiple regression models: on the one hand, a model including all the variables used in the descriptive and correlational analyses (together with the non-significant variable of nationalism); and, on the other hand, a model including only those variables that had proven significant in the last step of this multivariate technique. Thus, the first model comprises 15 variables, attaining a low R-squared (R² = 0.068), and only five variables are significant within the model. The second model presents only those five significant variables in the final step. These reinforce the level of significance (they all reach p < 0.001) and increase the R-squared (R² = 0.080). These variables confirm a first approximation to a more detailed explanatory profile of predictive variables that lean towards security: male gender, older age, right-wing ideology, low educational level, and support for surveillance cameras in public spaces.
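As a sketch of the second regression model just described, restricted to the five predictors reported as significant, the following uses statsmodels; column names are again hypothetical placeholders.

```python
import statsmodels.formula.api as smf

# Model 2 sketch: gender, age, ideology (higher = more right-wing),
# education and support for surveillance cameras as predictors.
model = smf.ols(
    "freedom_security ~ C(gender) + age + ideology + education + cameras_support",
    data=df,
).fit()
print(model.rsquared)  # the reported value for Model 2 was 0.080
print(model.summary())
```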
DISCUSSION AND CONCLUSION

The principal hypothesis of the present study was that the majority trend of the population would lean towards security rather than freedom. This has been confirmed by the results in the case of Spain. In the seventh and most recent wave of the World Values Survey (2017-2021), which is still being developed, similar results are found for the set of 54 countries for which data were available, where 69.7% of the more than eighty thousand interviewees answered that security is more important than freedom. Only in three countries does freedom have a majority percentage: the United States, New Zealand, and Australia (Haerpfer et al., 2020). In this sense, comparisons with other international studies that include similar questions related to the freedom versus security debate, such as the European Social Survey and the International Social Survey Programme, as well as sociodemographic profiles and other social characteristics or explanatory factors, could be helpful to confirm or expand this hypothesis and trend.

The study conducted is not able to give a definitive answer about future trends in the population's preferences between freedom and security. Among other reasons, the research is cross-sectional and not longitudinal; moreover, it is limited to a single country under the influence of a global context. Certainly, it would be necessary to conduct or analyse longitudinal and international studies. The recent analysis of the World Values Survey shows a return to the values of loyalty, the primacy of security, distrust, and authoritarian populisms as a reaction to the values of tolerance and individual freedom (Norris and Inglehart, 2019). In short, culture seems to face the freedom-security dilemma as a historical pendulum, although our current context is priming security.

The relationship and correlation found in this article between victimization and the "freedom-security" debate provide at least two nuances. Firstly, contrary to what is expected, people who have been victims of crimes, those who have reported a crime, and those who had delinquent or pre-criminal behaviours in adolescence lean more towards freedom than security, while those who perceive themselves as likely targets of crime overwhelmingly opt for security. That is, the issue of security is related less to actual personal experiences or behaviours than to perceptions, thus connecting with the theory of securitization and de-securitization of Butler (2020), which states that major security issues such as terrorism, climate change, gender violence, or any conflict are constructed and deconstructed in political discourses and public opinion. In this sense, the inclusion of more variables related to the perception of insecurity in future studies could also be helpful to build more significant explanatory models with a stronger association.

In addition to the sociodemographic variables for which the association with security rather than freedom has been verified (male gender, older age, and lower level of educational attainment), ideology has behaved as a highly predictive variable, associating the right more with security. In contrast, the left was associated more closely with freedom. Azmanova (2020) points to a redefinition of the ideological panorama and the left-right axis as a consequence of the impact of globalization in Western societies, with winners who consider it an opportunity and losers who perceive it as a risk. The winners and supporters of globalization value its advantages for a more cosmopolitan lifestyle and an open economy, placing themselves in traditionally left-wing positions. The losers of globalization represent blue-collar workers, those who fear or are insecure about the opening of international markets and migration, defending positions of a certain economic patriotism, materialistic values, and ideological positions located on the right and extreme right (Azmanova, 2020).

Perhaps another fitting interpretation of the trend towards security comes from the interpretation of the consumer society, and by extension the network society, in the context of Bauman's sociological theory. In liquid modernity, consumer society replaces groups with an increasing number of "swarms," and the comfort of flying in a swarm derives from having security in numbers. The individual acts on the idea that when many have chosen to fly in the same direction, it must be a good and safe choice. In a "swarm" there is no exchange, cooperation or complementarity; there is only physical proximity and basic coordination in a given direction. Swarms have no leaders and no hierarchy of authority. They gather, disperse and reassemble from one event to the next, drawn by shifting and moving targets.
The swarm may "assign" leadership roles to particular members for a short period of time before they return to anonymity within the "swarm" (Bauman, 2007).

The role of new technologies requires a reflection that Manuel Castells (1996) pointed out in the last century when he differentiated between the mere information society and the informational society. In other words, information and communication technologies have been the basis for entering a new informational era after the industrial society. This radical social and cultural change situates the debate on the dilemma between freedom and security precisely in the development and trends of technologies. Thus, the great historical and current challenge, according to Carissa Véliz (2020), is to recover individual and collective privacy (freedom) in the face of the data economy (security) in the hands of large technology companies and governments. Shifting the debate from a mere technological issue to the realm of power relations makes the freedom-security dilemma an exciting ideological and philosophical topic.

The theses and consequences of the surveillance society proposed by David Lyon are reflected in the solid and significant association between the assessment of the presence of cameras in public spaces and the "freedom-security" debate. The results have confirmed that the preference for security is supported by those who defend the presence of public social control tools such as surveillance cameras. The current crisis caused by the COVID-19 pandemic has sparked the debate on "freedom-security" and the new mechanisms of social control, such as mobile phones and the tracking and surveillance applications used by States and technology companies (Taylor et al., 2020). In this way, the virtual space acquires increasing relevance to address the dilemma redefined as privacy versus cybersecurity.

Finally, the current crisis caused by the global pandemic points to an emphasis on security and new social challenges to be faced at the global, national, and individual levels (Varin, 2022). At the global level, it has increased tensions between the superpowers of China and the United States and demonstrated the unwillingness of rich countries to help much poorer countries when the health of their own populations is at risk. For countries, in some cases it has increased their tendency to fragment, and in others it has led to authoritarian rule that may well outlast the pandemic. And at the individual level, it has led to unprecedented forms of intervention, accentuating the growth of the "surveillance state" and the "quarantining" of rights and freedoms. In this context, in which the pandemic and the measures adopted have led to greater confrontation, polarization and socio-political control, the debate between security and freedom takes on greater interest and connotations of a political and ideological nature from the point of view of public perception and opinion (Fernández-Prados et al., 2021). In this way, the new context of the global pandemic crisis, the dilemma between freedom and security, and public opinion become a triad that will undoubtedly generate future lines of research.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Comparison of Risk Factors, Clinico-Radiological Profile and Outcome in Patients with Acute, Subacute and Chronic Cerebral Venous Sinus Thrombosis

Sir,

Cerebral venous sinus thrombosis (CVST) primarily affects the young and middle-aged population. It accounts for 0.5-1% of all strokes.[1] It has an annual incidence of 3-4/million population.[2] Due to its rarity, large population-based studies are sparse, although several case series have been reported from India.[3] Unlike arterial stroke, only one-third of CVST patients present acutely. Nearly half have subacute CVST, and one-fifth develop symptoms gradually over more than a month.[2] To date, no Indian study has discussed differences in the risk factors, clinical profile, neuroimaging findings, and outcome of acute, subacute, and chronic CVST. Herein, we have compared the same.

This retrospective study involved CVST patients at a tertiary care hospital in North India from May 2018 to March 2020. All CVST patients aged ≥18 years were included. CVST was confirmed by brain magnetic resonance imaging (MRI) and MR venography (MRV) or computed tomography scan of the brain venous sinuses (CTV). Patients with non-venous cerebral stroke and infection-related CVST were excluded.
Demographic and clinical features including risk factors, obstetric history in females, neuroimaging findings, treatment, and outcome details were collected. In-hospital complications, including the need for decompressive craniectomy, intensive care unit (ICU) admission and mechanical ventilation, were recorded. The Modified Rankin Score (mRS) was used to assess neurological severity and outcome at discharge and at 6-month follow-up, with a score of 0-1 defining "good functional outcome." We categorized patients into acute (<8 days), subacute (8-30 days) and chronic (>30 days) groups according to symptom duration at presentation. Hemoglobin <11 g/dl in pregnant females, <12 g/dl in non-pregnant females and <13 g/dl in males was considered anemic.[4] Hyperhomocysteinemia was defined as a plasma homocysteine level >15 μmol/L.[5,6]

Although the risk factors were comparable in the three groups, alcohol consumption was seen in a higher proportion of acute CVST. Post-partum state and anemia with iron deficiency were common in subacute CVST. Alcoholism has been reported in male CVST patients previously,[6] with dehydration, enhanced coagulability, and increased platelet reactivity likely precipitating acute CVST.[6] Subacute CVST in post-partum females appears related to a delay in seeking consultation due to a lack of awareness among primary physicians and the general population. Anemia with iron deficiency may result in thrombocytosis, reduced red blood cell deformability and increased viscosity, thereby contributing towards CVST.[7]

Headache was the most common presenting symptom in all three CVST groups, similar to previous reports.[6,8,9] Up to 50% of CVST patients develop seizures,[6,8-10] and seizures were seen in 58.3% of our patients. While most clinical features were comparable in the three groups, seizures manifested in a significantly higher proportion of acute CVST patients, probably related to increased parenchymal involvement, especially hemorrhagic infarction.

Neuroimaging showed superior sagittal sinus (SSS) and transverse sinus (TS) involvement in two-thirds of patients, comparable to the 54.3% and 48% involvement, respectively, reported previously.[6] While the SSS was the most commonly involved sinus in acute CVST, TS thrombosis was most frequent in subacute and chronic CVST. Since the SSS is the primary drainage site for cortical veins and CSF, its blockage may result in early decompensation and appearance of clinical symptoms. Development of adequate collaterals and gradual compensation due to a patent SSS might have delayed the symptoms despite TS involvement in subacute and chronic cases.

The majority of patients (92%) had a good functional outcome at 6 months. In-hospital mortality in 2 (5.6%) patients, both with acute CVST, was comparable to the 4-8% acute-phase mortality reported previously.[2,6] A single-center setting, retrospective design, and small sample size are the major limitations of our study.
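A minimal sketch restating the operational definitions from the Methods above (the thresholds are the study's; the function names are ours):

```python
def cvst_group(symptom_days: int) -> str:
    """Categorize CVST by symptom duration at presentation, as in the Methods:
    acute (<8 days), subacute (8-30 days), chronic (>30 days)."""
    if symptom_days < 8:
        return "acute"
    if symptom_days <= 30:
        return "subacute"
    return "chronic"

def good_outcome(mrs: int) -> bool:
    """Modified Rankin Score 0-1 defined 'good functional outcome'."""
    return mrs <= 1

def anemic(hb_g_dl: float, male: bool, pregnant: bool = False) -> bool:
    """Anemia cut-offs used in the study: <13 g/dl (men), <12 g/dl
    (non-pregnant women), <11 g/dl (pregnant women)."""
    if male:
        return hb_g_dl < 13
    return hb_g_dl < (11 if pregnant else 12)

print(cvst_group(12), good_outcome(1), anemic(11.5, male=False))  # subacute True True
```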
Neuroleptic malignant syndrome (NMS) is an infrequent and life-threatening adverse effect of antipsychotics, especially the typical antipsychotics.[1] NMS is characterized by delirium, muscular rigidity, fever, and autonomic nervous system dysregulation,[1] as shown in Figure 1. A meta-analysis showed an overall estimate of 0.991 cases per thousand people.[2] Various criteria have been designed to improve diagnostic accuracy.[1] However, NMS remains a diagnosis of exclusion, and atypical forms of NMS exist.[3] Though the presence of rigidity and elevated levels of creatine kinase (CK) characterize the illness, they are not specific for NMS.[3] A normal CK level and the absence of rigidity do not rule out NMS.[4]

A 20-year-old lady with a history of schizophrenia of five years' duration presented to the emergency room (ER) with a history of intentional excessive consumption of olanzapine (20 tablets). On examination, the patient was obtunded and dehydrated, with a poor Glasgow Coma Scale score (GCS 5), pupils bilaterally sluggishly reactive to light, and mute plantar responses bilaterally. However, there was no rigidity or any other localizing sign, and no signs of meningeal irritation. Her vitals showed a blood pressure of 90/60 mm Hg, a temperature of 101 °F, a pulse rate of 145/min, and a respiratory rate of 35/min. In view of her GCS, she was intubated and started on supportive medication. Given the history of consumption of a large dose of olanzapine, a diagnosis of NMS was considered. She was found to have leukocytosis (cultures negative and no focus of infection) and borderline elevation of CK levels. Troponin I levels and urine myoglobin were negative. Other laboratory parameters, CT brain and chest X-ray were all normal. The patient met the diagnosis of NMS according to the Nierenberg criteria (though not satisfying DSM-5 criteria).[1] Other NMS mimics were ruled out with appropriate investigations.
Effects of SA and H2O2 Mediated Endophytic Fungal Elicitors on Essential Oil in Suspension Cells of Cinnamomum longepaniculatum

Salicylic acid (SA) and hydrogen peroxide (H2O2) are signal molecules that play crucial roles in plant secondary metabolism. In order to explore their roles in mediating the effect of endophytic fungal elicitors on essential oil in the suspension cells of Cinnamomum longepaniculatum, an elicitor prepared from Penicillium sp. was used as the material in this experiment. Cell suspension culture was used to study the effects of SA and hydrogen peroxide on the essential oil in C. longepaniculatum suspension cells. The results showed that both an SA and an H2O2 signal pathway regulate essential oil synthesis in C. longepaniculatum suspension cells, but that the two pathways have no obvious sequential (upstream-downstream) order. Adding the endophytic fungal elicitor, AOPP and CAT at the same time reduced the essential oil synthesis induced by the elicitor in C. longepaniculatum suspension cells but did not completely inhibit it, indicating that endophytic fungal elicitors can also promote the synthesis of essential oil in the suspension cells through other signal transduction pathways.

Introduction

Cinnamomum longepaniculatum (Gamble) N. Chao is an evergreen tree of the genus Cinnamomum, and it is one of the second-class national key protected plants. It is mainly distributed in Sichuan and is rich in natural aromatic oils; every part of the plant is of value [1]. Further research found that the essential oil of C. longepaniculatum has analgesic, anti-cancer and antibacterial effects [2] [3]. A plant endophyte is a type of fungus that grows within a plant without causing any pathological symptoms during the entire growth cycle, or part of the growth cycle, of a healthy plant [4]. The first endophytic fungus was discovered in 1898. In 1993, Stierle et al. isolated an endophytic fungus (Taxomyces andreanae) capable of producing paclitaxel from the bark of Taxus brevifolia [5]. Although the number of researchers has increased, the main research directions are still focused on the secondary metabolites of endophytic fungi, the diversity of their biological activity, and the relationship between endophytic fungi and the host [6] [7] [8]. In recent years, researchers have isolated endophytic fungi from a variety of plants, and some can produce the same substances or components as the host [9]. Plant endophytic fungi can promote the growth of host plants, enhance host resistance and resilience, promote the synthesis and accumulation of effective substances or ingredients in plants, participate in the formation of essential substances in plants, etc. [10] [11]. Research to date has found that endophytic fungi of C. longepaniculatum can aid the synthesis and accumulation of essential oil in C. longepaniculatum cells, can increase the activity of the protective enzymes of the free radical scavenging system, and can further promote the up-regulation of the gene expression levels of key enzymes of monoterpene synthesis [12] [13]. So far, however, research on the mechanism by which endophytic fungi affect the essential oil of C. longepaniculatum has been insufficient, which limits, to a certain extent, our understanding of how endophytic fungi affect essential oil quality. Salicylic acid (SA) is a hydroxybenzoic acid, a phenolic compound that is commonly found in plants [14].
In plants, its biosynthesis proceeds mainly via the shikimic acid pathway and involves phenylalanine ammonia lyase (PAL). When biotic or abiotic stress factors induce necrosis symptoms in plants, PAL and other enzymes of this pathway are induced, and the combined action of these enzymes results in the accumulation of SA [15]. Today, as a signal molecule, H2O2 is receiving more and more attention [16]. At the same time, H2O2 is considered to be one of the main intracellular messenger substances in the process by which endophytic fungal elicitors induce plant cell defense responses. Recent studies have also begun to focus on the effects of H2O2 on the accumulation of various plant secondary metabolites, such as betulin [17] and atractylodin [18].

In order to better understand how endophytic fungal elicitors act through salicylic acid (SA) and hydrogen peroxide (H2O2) to mediate essential oil synthesis in C. longepaniculatum suspension cells, in this study the suspension cells of C. longepaniculatum were used as the research object to study the synthesis of essential oil from C. longepaniculatum cells (this study mainly examined 1,8-cineole) and the relationship between SA and H2O2 in mediating the endophytic fungal elicitor (2J1) as signal molecules that influence essential oil production in C. longepaniculatum suspension cells. It will also provide a reference for subsequent research on the signal transduction mechanisms by which endophytic fungal elicitors mediate the synthesis of monoterpenoids through other signal molecules.

Materials

The C. longepaniculatum was collected from the C. longepaniculatum base of Hongyan Mountain in Yibin, and an endophytic fungus, 2J1 (Penicillium commune), was isolated from the C. longepaniculatum plant and identified in an earlier stage of the work. Strains were stored frozen using the glycerol tube storage method.

Establishment of the Suspension System of C. longepaniculatum

Fresh, tender C. longepaniculatum leaves were soaked in detergent solution for 5 minutes and rinsed under running tap water, then soaked in 75% alcohol for 15 s and rinsed three times with sterile water; they were then disinfected with mercuric chloride solution for 8 minutes and finally rinsed at least 5 times with sterile water. The inoculated explants were cultured under light at about 25 °C, and the callus was subcultured twice after induction was complete. Well-grown, loose, brown callus was inoculated into a conical flask containing 50 mL of B5 liquid culture medium and cultured in the dark at about 25 °C with shaking at 120 r/min. Subculturing was performed once every 2 weeks, at least twice.

Preparation of Endophytic Fungal Elicitors

The 2J1 strain was inoculated onto PDA medium using the plate streak method, sealed with plastic wrap, and cultured at 28 °C for seven days. The activated endophytic fungus was then inoculated into PDA liquid culture medium and cultured in suspension with shaking at 130 r/min at 28 °C for 7 days. The mycelium was separated from the fermentation broth by filtering through gauze and disrupted ultrasonically. It was then mixed back into the fermentation broth and filtered under reduced pressure, and finally the filtrate was autoclaved at 121 °C for 20 minutes to prepare the endophytic fungal elicitor.
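Since the elicitor dose is expressed as total sugar, the calibration amounts to inverting a glucose standard curve. A minimal Python sketch, assuming the curve reported in the Methods paragraph that follows (y = 0.1436x + 0.0119), with y taken to be the measured absorbance and x the sugar concentration; the units of x are not stated in the text, so the numbers are purely illustrative:

```python
# Invert the reported glucose standard curve to estimate total sugar from a
# measured absorbance; assumes y = absorbance and x = sugar concentration.
SLOPE, INTERCEPT = 0.1436, 0.0119

def sugar_from_absorbance(absorbance: float, dilution: float = 1.0) -> float:
    """Back-calculate sugar concentration from the standard curve,
    correcting for any dilution applied before the colorimetric assay."""
    return (absorbance - INTERCEPT) / SLOPE * dilution

# An absorbance of 0.586 corresponds to ~4.0 concentration units.
print(round(sugar_from_absorbance(0.586), 2))
```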
The elicitor concentration was calibrated in terms of total sugar, and the sugar content of the endophytic fungal elicitor at the working concentration of 40 mg/L was determined by the anthrone-sulfuric acid method (glucose standard curve: y = 0.1436x + 0.0119, R² = 0.9919; a short calculation sketch is given below).
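As a minimal sketch of how this calibration is applied, the Python snippet below encodes the reported glucose standard curve and inverts it to estimate a sugar concentration from an absorbance reading; the absorbance value and the concentration units are illustrative assumptions, since the paper does not state them.

SLOPE = 0.1436      # slope of the reported glucose standard curve (y = 0.1436x + 0.0119)
INTERCEPT = 0.0119  # intercept; y is absorbance, x is sugar concentration

def absorbance(conc: float) -> float:
    """Forward standard curve: predicted absorbance for a given sugar concentration."""
    return SLOPE * conc + INTERCEPT

def concentration(reading: float) -> float:
    """Inverted curve: estimated sugar concentration for a measured absorbance."""
    return (reading - INTERCEPT) / SLOPE

reading = 0.75  # hypothetical absorbance of an elicitor sample
conc = concentration(reading)
print(f"Estimated sugar concentration: {conc:.3f} (in the curve's calibration units)")
# Round-trip check: converting the estimate back should reproduce the reading
assert abs(absorbance(conc) - reading) < 1e-9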
Data Measurement Methods

Accurately weigh 0.3 g of dried C. longepaniculatum suspension cells, add cyclohexane at a cyclohexane-to-cell ratio of 4:1 and cold-soak overnight, then perform ultrasonic extraction for 30 min. Centrifuge for 4 min at 5000 r/min, collect the supernatant, bring it to 5 mL, and determine the essential oil content by GC-MS. The chemiluminescence method was used to determine the H2O2 concentration [18], and the SA concentration was determined by high-performance liquid chromatography (HPLC) [19].

Exogenous Substance Addition

C. longepaniculatum suspension cells cultured to the 7th day were treated with the endophytic fungal elicitor 2J1 and with substances filtered through a 0.22 µm microporous membrane, namely SA, the SA synthesis inhibitor AOPP (L-α-aminooxy-β-phenylpropionic acid), H2O2, and catalase (CAT). Exogenous SA and H2O2 were added at 5 mmol/L, and AOPP and CAT at 20 mmol/L, 20 minutes before the fungal elicitor or exogenous substance was added. The suspension cells of each control group received an equal volume of PDA liquid medium.

In summary, when the suspension cells of C. longepaniculatum were treated with an appropriate amount of the endophytic fungal 2J1 elicitor, the best induction effect was achieved on the 21st day, with a significant increase in the essential oil produced; an induction time that was too long, however, was not conducive to the accumulation of essential oil in the C. longepaniculatum suspension cells.

The Role of SA and H2O2 in the Promotion of Essential Oil Synthesis in C. longepaniculatum Suspension Cells by Endophytic Fungal Elicitors

The endophytic fungal 2J1 elicitor, exogenous SA, 2J1 plus AOPP, 2J1 plus CAT, SA plus AOPP, and H2O2 plus CAT were added in order to detect the accumulation of SA and H2O2 and the synthesis of essential oil.

Interactions of SA and H2O2 in the Promotion of Essential Oil Synthesis in C. longepaniculatum Suspension Cells by Endophytic Fungal Elicitors

Although the above experimental results confirm that SA and H2O2 can act as signal molecules mediating the endophytic fungal elicitor's promotion of essential oil synthesis in C. longepaniculatum suspension cells, the relationship between the two during this mediation is not clear. In this study, the suspension cells of C. longepaniculatum were treated for 7 days, and a blank group, a control group (with the endophytic fungus 2J1 elicitor added) and 3 experimental groups (2J1 elicitor plus AOPP; 2J1 elicitor plus CAT; and 2J1 elicitor, AOPP and CAT added at the same time) were established to detect the accumulation of SA and H2O2. According to the experimental results, in the experimental group with the SA synthesis inhibitor AOPP added, the accumulation of SA and the synthesis of essential oil were reduced.

Discussion

Since endophytic fungal elicitors are extracellular materials and cannot directly enter the cell to act, they influence the secondary metabolism of plant cells through signal pathways: the elicitor is first recognised by and binds to specific plant receptors on the cell membrane, changing the state of the cell and promoting the production of specific intracellular messenger substances. These messenger substances regulate the expression of related genes in the nucleus through a series of signal transduction pathways; finally, the defensive secondary metabolic system is activated, leading to the synthesis of secondary metabolites [20]. Based on this theory, we can speculate that after treatment of C. longepaniculatum with the endophytic fungal 2J1 elicitor, the accumulated SA and H2O2 promote the early reactions of essential oil synthesis and, acting as intracellular messenger substances, undergo a series of reactions that promote essential oil synthesis in C. longepaniculatum. Studies have found that plant cells produce a variety of signal molecules after being stressed by factors such as elicitors, and respond to the external stress signal through corresponding signal pathways [21]. As research on signal molecules and signal transduction mechanisms has intensified, SA and H2O2 have been found to act as important signaling molecules in many plants. This study likewise shows that two signal pathways, the SA pathway and the H2O2 pathway, exist in C. longepaniculatum cells and simultaneously affect essential oil synthesis in C. longepaniculatum suspension cells. In studying how these two pathways mediate the promotion of essential oil synthesis by the endophytic fungal elicitor, it was found that adding the SA synthesis inhibitor AOPP had no significant effect on the H2O2 release caused by the 2J1 elicitor, and that adding the H2O2 quencher CAT did not significantly affect the accumulation of SA; there is thus no obvious upstream-downstream relationship between the two pathways. Although this study further examined the role of SA- and H2O2-mediated endophytic fungal elicitation in the synthesis of essential oil from C. longepaniculatum, three aspects merit further study. First, the crude fungal elicitor is the inactivated filtrate of homogenised hyphae, and endophytic fungal elicitors fall into 4 categories (oligosaccharides, glycoproteins, proteins and unsaturated fatty acids), so their composition is rather complex; in this experiment, oligosaccharides are speculated to be among the more likely active elicitors [22]. To fully understand the induction effect of the elicitor, it is necessary to further isolate and purify the crude endophytic fungus 2J1 elicitor, determine its structure, and explore a better preparation method [23].
Second, in this study only 1,8-cineole in the essential oil was detected as a proxy for the total amount of essential oil synthesised, so the detection of products was not comprehensive; the roles of the SA and H2O2 signal regulation pathways in the synthesis of different essential oil components could therefore be studied further. Finally, after double-inhibitor treatment with AOPP and CAT, the synthesis and accumulation of essential oil induced by the crude endophytic fungal elicitor in the suspension cells was reduced but not completely inhibited, indicating that pathways other than SA and H2O2 are also involved in the endophytic fungal 2J1 elicitor's promotion of essential oil accumulation in C. longepaniculatum suspension cells. The synthetic pathway is very complicated and requires further research.
Ursodeoxycholic acid attenuates hepatotoxicity of multidrug treatment of mycobacterial infections: A prospective pilot study

Background: Tuberculosis (TB) remains a global health problem. The application of rifampicin-based regimens for antimycobacterial therapy is hampered by their marked hepatotoxicity, which results in poor adherence and may contribute to prolonged therapy or treatment failure. The purpose of this prospective investigation was to evaluate the hepatoprotective effectiveness of oral ursodeoxycholic acid (UDCA) (250–500 mg TID) administered to TB- or non-TB mycobacterial (NTM)-infected patients with drug-induced hepatotoxicity and ongoing therapy. Methods: Study population: During 2009–2017, 27 patients (11 women, 16 men, aged 19–90 years; median age 44 years; 16 Caucasians, 10 Africans, 1 Asian) out of 285 patients with active TB (24/261) or NTM infections (3/24) treated at our TB center developed clinically relevant hepatotoxicity. Oral UDCA was administered to treat the hepatotoxicity. Results: Twenty-one of the 27 patients (77.8%) showed normalization of elevated enzymes (alanine aminotransferase and aspartate aminotransferase), alkaline phosphatase, and bilirubin while continuing TB treatment, and 5 patients (18.5%) demonstrated a significant reduction of liver enzymes. No change was observed in 1 patient (3.7%). Drug doses were not reduced in any patient, and all patients showed radiological and clinical improvement. There were no significant side effects. Conclusion: Oral administration of UDCA to TB patients developing anti-TB drug-induced liver injury may reverse hepatotoxicity in adults.

The management of patients is complicated by the need to interrupt treatment and rechallenge with the standard drugs, as liver injury may be transient in many cases. The occurrence of anti-TB drug-induced liver injury is unpredictable, although risk factors have been identified. [3] However, the effectiveness of hepatoprotective drugs is not well defined. [4][5][6][7] Ursodeoxycholic acid (UDCA), a naturally occurring hydrophilic bile acid, has been used mostly off-label for the treatment of a variety of acute and chronic liver diseases, including primary biliary cirrhosis (a licensed indication), primary sclerosing cholangitis, cystic fibrosis-related liver disease, pregnancy-induced intrahepatic cholestasis, and drug-induced liver injury. [8][9][10][11][12][13] Carefully designed experimental studies suggest a protective effect of UDCA pretreatment on isoniazid plus rifampicin-induced liver injury in mice. [14] Clinical data, however, are rare, and the effects of UDCA on anti-TB drug-induced hepatotoxicity are not clear. [15,16] Recently, Russian authors published a small randomized study of UDCA in children with TB and drug-induced hepatotoxicity and found a relevant hepatoprotective effect. [17] Moreover, UDCA is frequently used in Japan for isoniazid-induced acute liver injury in adult patients with TB infection, but the anecdotal data are inconsistent. [16,18] We conducted a prospective pilot study to assess the hepatoprotective effectiveness of oral UDCA in attenuating liver injury induced by standard TB therapy.
Methods

A total of 285 adult patients (age over 18 years) were diagnosed with and treated for active TB (n = 261) or pulmonary non-TB mycobacterial (NTM) infections (n = 24) at the TB center of the SRH Wald-Klinikum Gera between 2009 and 2017. All patients with newly diagnosed TB disease received the standard 6HR2ZE regimen; the drugs were dosed according to the German TB guideline for adults. [1] The diagnosis of pulmonary NTM infection was made by repeated isolation and identification of pathogens from the patients' lungs together with compatible clinical and radiological (computed tomography scan) features. Twelve patients with active TB were excluded from the analysis: four patients had preexisting chronic hepatitis or liver cirrhosis, one patient each suffered from malnutrition or advanced chronic kidney disease (stage IV), one patient experienced intolerance of first-line anti-TB drugs (other than hepatotoxicity), and five patients had received other potentially hepatoprotective drugs (N-acetyl-L-cysteine, corticosteroids).

Study design

We performed a prospective pilot study of patients who developed drug-induced liver injury while on treatment for active TB. All treatment decisions were based on common medical standards only. Oral UDCA was administered to these patients at initial doses of 250 mg TID to 500 mg TID. None of the patients had cholangitis, cholecystitis, or pancreatitis. The UDCA dose was reduced to 250 or 500 mg QD and finally withdrawn when the patients' liver enzymes returned toward normal.

Ethics

The investigations were conducted in accordance with the ethical principles that have their origin in the Declaration of Helsinki. The study protocol was approved by the Institutional Review Board (SRH WKG 35). Informed consent to analyze and publish the data in anonymous form was obtained from all patients. All consented to the off-label use of UDCA.

Study population

The 24 patients with active TB and the 3 patients with pulmonary NTM disease had normal ALT and bilirubin/AP values before the initiation of treatment. None of these patients had coinfection with HIV or hepatitis B or C, preexisting overt liver disease, current high alcohol intake, or renal failure, and none received drugs affecting liver function tests.

Definition of anti-TB drug-induced hepatotoxicity and injury patterns

Hepatotoxicity was defined as a treatment-emergent increase in (a) serum alanine aminotransferase (ALT) greater than three times (with symptoms) or five times (without symptoms) the upper limit of normal (ULN) and (b) alkaline phosphatase (AP) >2 ULN and/or bilirubin >2 ULN. Three biochemical patterns of injury were classified: hepatocellular, cholestatic, and mixed hepatocellular/cholestatic. These assignments refer to histologic features of injury but are usually defined on the basis of the pattern of serum liver enzyme elevations. Hepatocellular injury is suggested by markedly elevated serum ALT and AST levels while AP, gamma-glutamyl transpeptidase (GGT), and/or bilirubin are normal or only modestly increased; an "R" ratio of ALT to AP of 5 or more was used to define a hepatocellular pattern of injury. In cholestatic drug-induced liver injury, serum AP, GGT, and bilirubin are predominantly elevated, with an R ratio of ALT to AP of 2 or less. In mixed injury, with similar elevations of serum ALT and AP, an R ratio of ALT to AP between 2 and 5 was used. Liver biopsies performed in five patients with anti-TB drug-induced liver injury showed intrahepatic cholestasis.
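As a minimal sketch of the classification rule just described, the Python snippet below computes the R ratio from ALT and AP values normalized to their upper limits of normal and assigns an injury pattern; the function name and the example values are illustrative assumptions, not data from the study.

def injury_pattern(alt: float, alt_uln: float, ap: float, ap_uln: float) -> str:
    """Classify drug-induced liver injury from the R ratio.

    R = (ALT / ULN_ALT) / (AP / ULN_AP); per the thresholds in the text,
    R >= 5 is hepatocellular, R <= 2 is cholestatic, and 2 < R < 5 is mixed.
    """
    r = (alt / alt_uln) / (ap / ap_uln)
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

# Hypothetical example: ALT at 5x its ULN, AP at 3x its ULN -> R of about 1.67, cholestatic
print(injury_pattern(alt=250, alt_uln=50, ap=390, ap_uln=130))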
Monitoring of liver function tests

Liver function tests were performed at least twice weekly during the first week of multidrug therapy and thereafter as needed. Hematologic parameters and renal function were measured to document the safety of UDCA.

Results

The study population encompassed a wide age range (19–90 years); most of the patients were men, born in Germany, and had active TB of the lungs. Non-TB mycobacterial pulmonary infections were caused by Mycobacterium avium or M. intracellulare [Table 1].

Course of liver injury

Most patients (22 of 27) showed biochemical patterns of cholestatic liver disease (AP 2–4 ULN and/or bilirubin 2–3 ULN) within 1 to 2 weeks after starting anti-TB therapy. Patients with a biochemical pattern of hepatocellular liver injury had ALT levels of 5–6 ULN. At the diagnosis of anti-TB liver injury, none of the patients complained of jaundice, abdominal pain, ascites, or signs of encephalopathy, and none showed abnormal coagulation tests. In most patients, oral administration of UDCA was associated with a rapid decline (within 1 to 2 weeks) of elevated enzymes or bilirubin (normalization in 21 of 27 patients, significant reduction in 5 patients). No measurable effect was seen in 1 patient; however, this patient's enzymes did not increase further. By comparison, none of the 12 TB patients excluded from the UDCA investigations showed a reduction in the biochemical parameters of liver injury, and in the majority these even progressed. The hepatoprotective effect of UDCA in patients with anti-TB-induced liver injury occurred independently of age, gender, ethnicity, and type of mycobacterial infection.

Outcome

Hepatoprotection was effective in 96.3% of patients [Figure 1]. Anti-TB drug dosage was neither reduced nor discontinued in any of the patients. All showed radiological and clinical improvement of active TB or pulmonary NTM infection. There were no adverse clinical, biochemical, or hematologic side effects of UDCA.

Discussion

The main results of this prospective pilot study indicate that oral UDCA administered to patients with anti-TB therapy-induced liver injury may ameliorate elevated transaminases, AP, and bilirubin in most of these patients, independently of patient characteristics or type of mycobacterial infection. Transient mild changes in alanine aminotransferase and/or bilirubin are relatively common during antituberculotic chemotherapy and do not allow prediction of the further course or severity of hepatotoxicity. There is therefore no unanimous recommendation for the cutoff level of liver dysfunction necessitating modification of treatment (dose reduction, discontinuation, or exchange of certain drugs). Undoubtedly, ALT levels of five times the ULN or above, or AP or bilirubin levels greater than 2- or 3-fold the ULN, as in our patients, are usually not spontaneously reversible. Moreover, none of the TB patients with liver injury excluded from the investigations showed spontaneous normalization of liver injury parameters without UDCA; most of these patients needed a discontinuation or change of their TB regimen. UDCA has been shown to exert anticholestatic effects in various cholestatic disorders and may be safely used long term, for example in patients with cystic fibrosis, with very few side effects. Numerous potential mechanisms and sites of action of UDCA have been unraveled in clinical and experimental studies. The relative contribution of these mechanisms to the anticholestatic action of UDCA may depend on the type and stage of cholestatic injury.
Protection of injured cholangiocytes against the toxic effects of bile acids and stimulation of impaired hepatocellular secretion, mainly by posttranscriptional mechanisms, seem to be relevant in cholestasis. [4,12,14,19] Stimulation of impaired hepatocellular secretion and increases in bile flow could be crucial for the improvement of serum liver function tests, as in some forms of drug-induced liver injury. Our investigation may have limitations due to the small number and characteristics of the study population and the nonrandomized design of the study. However, the vast majority of our patients recovered from progressive liver injury with UDCA treatment and could continue anti-TB treatment without dose reduction. They experienced no known side effects, although some complained of a bile taste sensation while on high doses. Abdominal discomfort was reported frequently with ingestion of the antituberculous drugs, and additional effects of UDCA could not be discerned. A positive effect of UDCA has also been described in Russian pediatric TB patients with anti-TB-induced liver injury, as published recently in Russian. [17] The authors performed a randomized trial (UDCA vs. silymarin) in 77 children (3–14 years) and observed a more rapid and more frequent normalization of elevated ALT (>5 ULN) with UDCA than with silymarin treatment.

Conclusions

Oral administration of UDCA to TB patients developing anti-TB drug-induced liver injury may reverse hepatotoxicity in adults. There is an urgent need to perform a large trial to confirm the preliminary findings of our pilot study.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.
Developing the Snake and Ladder Game as Media for Teaching Vocabulary at MA Al Khoiriyah

This study aims to develop the snakes and ladders game as a learning medium for teaching descriptive text vocabulary to class X students of MA Al Khoiriyah Putukrejo, Gondanglegi, Malang. The research method used was the research and development (R&D) method with the ADDIE model, which consists of five stages: analysis, design, development, implementation, and evaluation. The population of this study was 20 students in class X of MA Al Khoiriyah, and the sample was taken using the cluster random sampling method. The instruments used in data collection were questionnaires and unstructured interviews. The results showed that the snakes and ladders learning medium for descriptive text vocabulary was very valid and effective in increasing students' learning motivation. With this learning medium, students are not only asked to answer questions in each box of the snakes and ladders board, but also to make sentences using the vocabulary they have translated. After the game is over, students must recall the new vocabulary that has been found and recorded on the blackboard by the teacher. This research is expected to make a positive contribution to the development of effective and innovative learning media for increasing student learning motivation.

Students are often confused when they have to arrange the words they want to use. This is evident from a case that occurred when I taught at a Madrasah Aliyah institution in the Malang area. I encountered several obstacles when presenting English material on grammar, namely the passive voice: students could understand how to use the formulas for passive sentences, and when I asked questions using language they often hear, they could turn those sentences into passive sentences well. Unfortunately, some students had difficulties when I gave them the task of composing their own examples of the passive voice material that had been explained, because they did not know the vocabulary they wanted to write and could not apply it when arranging sentences, which hindered them in doing their assignments. To solve this problem, it is necessary to develop vocabulary learning media that make it easier for students to acquire a large vocabulary during teaching and learning activities. Students can then enjoy continuing to learn, and this may reduce students' perception that acquiring a large vocabulary is difficult and requires a long process of memorization. That is why the researcher used the snake and ladder game as a learning medium: the game helps students master English vocabulary, making it easier for them to achieve mastery of the four English skills (writing, reading, speaking, listening), and it motivates students to increase their English vocabulary. Donmus (2010) reveals that the value of educational games in language education has been increasing, since they help to make language education entertaining (as cited in Azar, 2012: 240). Several previous studies relate to this problem. In the article "Developing Games to Learn English Vocabulary," Suastika Yulia Riska (2018) states that the material development stage produced a vocabulary learning medium named "English with I do." Expert validation involved media experts and material experts.
The percentage of media feasibility obtained from the media experts was 76.78%, or very feasible, while the percentage of material feasibility obtained from the material experts was 75%, or suitable for use as a learning medium. The second study, "Development of Mobile Game Applications to Increase Student Motivation in Learning English Vocabulary" (Elaish et al., 2019), investigates whether the developed Vocab Game can motivate native Arab students learning English as a foreign language to achieve better performance. The last, "Development of English Vocabulary Teaching Materials (Fun Vocabulary) Based on Total Physical Response (TPR) (Research and Development (RnD) for Grade IV Students at SDN Serang 3)," written by Bendang Pratiwi (2019), takes advantage of increasingly advanced technology in developing learning media by making educational games for learning vocabulary. The development of vocabulary games in the present study uses the snakes and ladders game, further developing learning media through the use of games, because games are media that benefit teachers and students in increasing language knowledge when material is taught in a fun way. Teachers can use technology to create games as learning media. Qumillaila (2017: 57) states that the use of technology and media in learning can help improve the quality of the learning itself. In this research, we explore the process of developing an English vocabulary learning game using the snake and ladder game at MA Al Khoiriyah, aiming to understand how such a game can assist students in learning English vocabulary. The results of this research are expected to provide insights and recommendations for the development of more effective and engaging learning methods to expand students' vocabulary.

RESEARCH METHOD

According to Sugiyono (2009), the research and development method is a research method used to produce certain products and test their effectiveness. To produce a particular product, a needs analysis is required (using survey or qualitative methods), and to test the effectiveness of the product so that it can function in the wider community, research is needed to test that effectiveness (using the experimental method).

Research Design

This study used a research and development (R&D) model to obtain results relevant to the objectives, producing a product in the form of snake and ladder vocabulary learning media for descriptive text material. First, this research entails looking at research findings relevant to the product being developed. The second step is to create a product based on these findings. Third, the product is field-tested in the environment where it will be used, and last, it is revised to address any flaws discovered during field testing (Putra et al., 2020).
The type of development carried out by the researcher follows the ADDIE development model. This method consists of five phases, namely analysis, design, development, implementation and evaluation, and provides a systematic process for producing effective learning material.

Population and Sample

The population of this research consists of 20 students from grade X at MA Al Khoiriyah. The sample was taken using the cluster random sampling method.

Instruments

One of the instruments used in data collection for the analysis stage was a questionnaire. The questionnaire used in the preliminary study was an open questionnaire for the teacher and students of grade X. The questionnaire for the final result was closed, meaning that respondents only chose the best option among the items by putting a checklist on the given answers. Unstructured interviews were conducted with the English teacher of grade X at Senior High School Al Khoiriyah and with five students of grade X (selected using a random sampling technique) in order to follow up the questionnaire results. A further questionnaire served as the expert judgment: it was given to a materials expert and a media expert (two validators) to obtain their opinions and suggestions about the developed materials. After the data were obtained, the validation results from the team of media and material experts were analysed using a Likert scale with the following scores: (1) Very Invalid, (2) Invalid, (3) Valid, (4) Very Valid. A student satisfaction questionnaire was used to find out the students' responses to the snake and ladder learning media for grade X at Senior High School Al Khoiriyah; it was a closed-ended questionnaire offering the answers "Very good", "Good", "Enough", "Less good" or "Bad".

Data Analysis

The data were analysed and displayed using descriptive quantitative analysis. The questionnaire data were displayed as percentages and described in words. To measure whether the product is feasible (product validity), the researcher used the eligibility formula P = (Σx / Σxi) × 100%, where Σx is the total score obtained and Σxi is the maximum possible score (Arikunto, 2010). In addition, in analysing the satisfaction questionnaire, the researcher used the percentage formula P = (f / N) × 100%, where f is the frequency of an answer and N is the total number of responses (Sugiyono, 2009). A short calculation sketch of both formulas is given below.
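As a minimal illustration, the Python sketch below computes the product-validity percentage and a response percentage using the two formulas above; the validator ratings and response counts are hypothetical examples, not the study's data.

def validity_percentage(scores, max_score=4):
    """Eligibility formula: P = (sum of obtained scores / maximum possible score) * 100%."""
    return sum(scores) / (max_score * len(scores)) * 100

def response_percentage(frequency, total):
    """Percentage formula: P = (f / N) * 100%."""
    return frequency / total * 100

# Hypothetical validator ratings on the 1-4 Likert scale for ten items
ratings = [4, 4, 3, 4, 3, 4, 4, 3, 4, 4]
print(f"Validity: {validity_percentage(ratings):.1f}%")       # 92.5%

# Hypothetical: 16 of 20 students chose "strongly agree" on one item
print(f"Strongly agree: {response_percentage(16, 20):.1f}%")  # 80.0%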
RESEARCH FINDINGS AND DISCUSSION

Research Findings

In the first stage, the researchers conducted field observations to analyse needs and identify problems by interviewing the English subject teacher at MA Al Khoiriyah. The results showed that teachers still mostly use conventional learning models in teaching English: the teacher carries out learning according to the lesson plan in the teacher's book and does not innovate to make English teaching more interesting and fun. This leaves students poorly motivated to learn the material, even though many students have difficulty mastering English, largely because the material presented is hard to understand given their limited vocabulary; this makes students reluctant to engage with English material that is very important for them to master. After validation by the experts, the researchers revised the product before it was used in the research trials with class X students of MA Al Khoiriyah Putukrejo Gondanglegi.

To provide an interesting and fun learning model for increasing students' learning motivation, especially with regard to vocabulary, the researchers created a game-based learning media design, namely the snakes and ladders game for teaching vocabulary, which can make students more active and cheerful during learning. With this snakes and ladders learning medium, students are not only required to answer the questions in each box of the snakes and ladders board; after answering, they must also make sentences using the vocabulary they have translated. In this way students learn to place vocabulary in the right sentences, making meaningful sentences related to the vocabulary itself. After the game is over there is a second challenge: to remember as many as possible of the new vocabulary words that have been found and recorded on the blackboard by the tutor. The winner of the first challenge is the participant who reaches the finish first, and the winner of the second challenge is the participant who can remember the most vocabulary. When the snakes and ladders learning media were implemented, the students showed high enthusiasm for learning with this new and quite interesting model, because the subject matter was delivered through the snakes and ladders game; this made it easier for them to understand the descriptive text material through the questions contained in the game, and they readily acquired new vocabulary from playing it. Evaluation took place during the expert validation, and improvements were made repeatedly until the product could be deemed suitable for use in the trial or research process in class X of MA Al Khoiriyah.

Student response to the snakes and ladders learning media

The student response questionnaire was administered in a trial involving 20 students (for the item "Image size and shape", for example, 16 students chose strongly agree, 3 chose agree and 1 chose disagree). The percentages of student responses are shown in Table 3: across the nine questionnaire questions, 81% of responses were "strongly agree" (4), 17% were "agree" (3), 2% were "disagree" (2), and there were no "strongly disagree" (1) responses. The results of the percentage of student responses in the large-group trial involving 20 students, together with the reliability test, show that each question can be considered valid because its percentage exceeds 70% (Table 4).

Table 5. Results of the data analysis of the satisfaction questionnaire

Raw scores:
Frequency  Score  Frequency x Score
2          42     84
9          44     396
2          40     80
4          43     172
1          38     38
1          41     41
1          37     37
Average: 848 / 20 = 42.4

As percentages:
Frequency  Score (%)  Frequency x Score
2          93         186
9          97         873
2          88         176
4          95         380
1          84         84
1          91         91
1          82         82
Percentage: 1872 / 20 = 93.6%

The percentage analysis of the satisfaction questionnaire thus yields an average value of 93.6% (Table 5); from this result it can be said that the students strongly agree with the development of the snakes and ladders learning media for teaching vocabulary at MA Al Khoiriyah Malang.

The expert validation items and scores were as follows.

Material aspects:
1. Ease of the snake and ladder game media in helping students' understanding of descriptive text: 4 (80%)
2. Snake and ladder game media increasing students' motivation to learn vocabulary: 5 (100%)
3. Accuracy of the questions used in the snake and ladder game media: 4 (80%)
4. The snake and ladder game as a learning medium: 5 (100%)
5. Enthusiasm of students in using the snake and ladder game media to learn vocabulary: 4 (80%)
6. Suitability of the material in the snake and ladder game media with the core and basic competences (KI and KD): 5 (100%)
7. Suitability of the material in the snake and ladder game media with the indicators and learning objectives: 5 (100%)

Media aspects:
1. Ease of operation of the media: 5 (100%)
2. Ease of use of language: 5 (100%)
3. Image variations: 5 (100%)
4. Clarity of writing: 5 (100%)
5. Colour compatibility: 5 (100%)
6. Image size and shape: 5 (100%)

Total: 1,240; average: 95.3

These results were obtained from the product validation stage, at which the team of experts offered criticism and suggestions for improving the snake and ladder learning media. The validation results obtained from presenting and processing the data can be seen in Table 6. The average value of the validation results is 89%; when consulted against Table 1, this places the data in the very valid category, so it can be concluded that the snakes and ladders learning media for teaching descriptive text vocabulary can be developed at MA Al Khoiriyah Putukrejo Gondanglegi Malang.

Discussion

This study used a research and development (R&D) model to obtain results relevant to the objectives, producing a product in the form of snake and ladder vocabulary learning media for descriptive text material. Development research can be understood as a process or series of steps for developing a new product or perfecting an existing one. According to Sugiyono (2009), research and development (R&D) is research used to produce a certain product. This is consistent with the main objective of research and development in education, which is not to formulate or test theory but to develop effective products for use in schools. This research concerns the development of snakes and ladders learning media for the descriptive text writing material in class X of MA Al Khoiriyah Gondanglegi Malang. The R&D steps include several stages, namely identifying potential and problems, data collection, product design, design validation, design improvement, expert team validation, product revision, product trial, final product revision and mass production of the snake and ladder media. However, the development of the snakes and ladders learning media for descriptive text material in this study covered only seven stages, without the trial of use, the revision of the final product or mass production. The average percentage given by the validators to the snakes and ladders learning media for teaching descriptive text vocabulary, across the two aspects, was 89%, a very valid category for use at MA Al Khoiriyah Gondanglegi Malang. As explained above, the snakes and ladders learning media can help English teachers in the teaching and learning process and are needed by students so that they do not become bored with learning English, especially vocabulary; the media also allow them to play games and study in a relaxed manner. Therefore, based on the results presented by the two expert teams, the media can be categorised as very valid for use at MA Al Khoiriyah Gondanglegi Malang.
With the snakes and ladders learning media for descriptive text material, the students of MA Al Khoiriyah Gondanglegi Malang find it easier to learn descriptive text material by connecting their imagination to the real world around the school. With the snakes and ladders learning media, students appear more active and are motivated to be enthusiastic in learning English, especially vocabulary. Learning media are expected to provide benefits including: 1) the material conveyed becomes clearer in meaning for students and is not merely verbal; 2) learning methods become more varied; 3) students become more active in learning English; 4) learning is more interesting; 5) time constraints are overcome. A questionnaire sheet is an information-gathering tool in which a number of written questions are posed to be answered in writing by the respondent. Questionnaire sheet 3 was used to collect the students' responses to the snakes and ladders learning media. The trial of the snakes and ladders media with students was carried out after the media had been revised based on the experts' suggestions and input; the media were then tried out on 20 students using the questionnaire. Based on Table 4.3, the scores of the 20 sample students for the snakes and ladders learning media on descriptive text material show that 81% of responses were "strongly agree" (4), 17% "agree" (3), 2% "disagree" (2), and none "strongly disagree" (1). Thus, based on the students' results, it can be concluded that the development of the snakes and ladders learning media for teaching descriptive text vocabulary at MA Al Khoiriyah Gondanglegi Malang received a positive response, with 81% strongly agreeing, so it can be said that most students agree with the development of the snakes and ladders learning media at MA Al Khoiriyah Gondanglegi Malang. This accords with research on the development of vocabulary snakes and ladders media to empower students' creative thinking skills at MA Al Khoiriyah Gondanglegi Malang, which states that such media meet the criteria of "very feasible", with percentages of 95.3% from the material experts, 84.6% from the media experts, and 96.15%, so it can be concluded that the snakes and ladders media developed are feasible.
Water in Krakow's Gardens, Parks and Areas of Greenery

In park and garden design, one of the most valued assets of the natural environment is water. Perceived as the source of life, it has always constituted an essential element of garden compositions, one that is both impressive and symbolic. In successive historical periods, designers expanded the possibilities of using water in garden layouts, and since the nineteenth century waterfront areas have been an important element in shaping systems of urban parks. This article characterises the role of water and waterfront areas in Krakow's gardens and areas of public greenery. It discusses the waterway systems that order the structure of the city, anthropogenic and natural pools, and the details that decorate park spaces. Among the most essential elements that have crystallised Krakow's urban layout are the so-called river parks. The presence of rivers in the city significantly improves its visual attractiveness: natural points, sequences of views and exposed places are highly distinct, alongside an attractive waterfront landscape with outstanding landmarks. This is confirmed by historical panoramas and by contemporary conceptual proposals for walking areas and boulevards. The river parks, which are linear, display highly diverse landscapes, the separate traditions of their places and identities of their own, including natural ones, and they are not limited to the valley of the Vistula, the main river. Natural and artificial bodies of water and their accompanying recreational areas, e.g. Zakrzówek, Bagry or Przylasek Rusiecki, have a significant share in shaping Krakow's areas of greenery. Water is also present in Krakow's gardens and parks. Its visual qualities, the charm of shimmering light, reflections and the dynamism of, among other things, water jets on a smooth water surface, as well as its sound, have all found their use. In manorial or palatial gardens and parks it often constituted an essential compositional element, e.g. in the garden of Łobzów and the parks of Dębniki or Prokocim, though it has not survived in any of them. Later it also became an important part of the programme of public parks, with pools or ponds located, among other places, in Planty Park and Park Jordana. To this day it brings joy and refreshment to users in the form of representative fountains and water jets. The article shows just how diverse the functions of water are in the composition of Krakow's areas of greenery, from the detail to the planning of its urban structure. It also notes that, particularly recently, the city has made a frontal turn towards the Vistula, after voices claimed that it had its back turned to the river; at present, public areas that open towards the water are being designed. Recently completed designs of waterfront areas in Krakow that resulted from competitions, such as the parks at Zakrzówek or near the Płaszów lake, are shown as well. This proves that water remains an inexhaustible source of inspiration and that its accessibility in areas of public greenery attracts large numbers of users.

Introduction

One of the most prized natural assets of the environment in the design of gardens and parks is water. Perceived as the source of life, it has always constituted an essential element of garden composition, one that is impressive and symbolic. Throughout history, designers have expanded the possibilities of its use in garden layouts.
Since the nineteenth century, waterfront areas have been an important element of the design of systems of municipal parks, as shown by, among others, F. L. Olmsted's designs for Boston or Rochester. The arrangement of areas of greenery, combined with ensuring flood safety, increased the attractiveness of land for development, facilitating the growth of cities and often determining their form or direction of expansion; waterfronts have become their hallmarks. Krakow, the capital of Lesser Poland and a former capital of the country, is an example of a gradual turn of the city's front towards water, with river parks having been an element of its development policy since the final decade of the twentieth century. Water introduced into Krakow's parks and public spaces is a good example of increasing the attractiveness of places while also demonstrating the specificity of their atmosphere.

Outline of the use of water in garden design and landscape architecture, and its symbolic qualities

Without water there is no garden: it gives life to plants. Since the most ancient times it has been a key part of gardens. The symbolic dimension of water, as the source of life and purification, the basis of all things and the vision of the rivers of paradise, belongs to the topos of the garden as Eden or paradise. Water took on particular significance as an allusion and a symbol, eliciting an atmosphere of contemplation. Buildings were reflected in it as in a mirror, and the sometimes hazy image was a reminder of transience and impermanence. Springing up or falling, and under specific circumstances forming rainbows, it provided unforgettable experiences. In classical mythology, natural caves with springs inside them were considered the homes of nymphs, the Greek spirits of nature, as well as of the muses. Nymphs were associated with the living stream of a river, its magic and strength, and resided in grottoes, wet places where water trickles down rocky surfaces. Water was also associated with river deities, naiads and tritons, which carry a wealth of iconography and references to the genius loci. Water also constitutes one of the main compositional elements. In geometric layouts it was present in the form of ornamental wells, pools, various types of fountains, automatons and water chains, canals, cascades and water parterres. During the Renaissance, garden hydraulics developed strongly as a field concerned with intricate ways of manipulating water, particularly with the use of pipes and fountains, or in forms falling like a curtain or a sheet of fabric. Various water features were used, such as water organs and automatons. In automaton theatres, water was used to move various mechanisms (animals, birds or figures) playing out a sort of spectacle. In Le Notre's Baroque gardens, visual perspectives were shaped with a large share of water mirrors and meticulously thought-out perspective effects. Water surfaces at different levels were conducive to mirror-like effects that magnified the appearance of spatiality and infinity and provided garden interiors with depth. Burbling water and thundering cascades were also used to build atmosphere. Charles Perrault wrote that water is the soul of gardens, without which they appear dead. In the eighteenth century, when nature triumphed in garden design, water also became an important element of garden landscapes.
Its visual qualities, the charm of shimmers of light, reflections and the dynamics of unbound water, together with its accompanying sounds, were all put to use. Rivers, streams, springs and waterfalls were employed, and where no water was present, it was introduced artificially. Important elements of artificially created landscapes also included levees, one of the basic tools with which eighteenth- and nineteenth-century landscape gardeners operated. "English rivers" also became common in gardens: ponds shaped in a manner similar to a river, with an elongated form and irregular shores. Larger water bodies were given distinct, irregular shorelines, and it almost became a rule to place islands on ponds and lakes. Along with expanding engineering knowledge, the possibilities of using water and waterfront areas expanded as well. Water still plays an important role in works of landscape architecture being created today, in both the visual and the environmental sense. It improves the quality of urban public spaces and becomes a driver of the development of river shores and water bodies, often serving as an important element in the recultivation of degraded areas. Thanks to interesting ideas for the use of water in parks or squares, such places often gain a genius loci of their own and become models of good design practice. Recent decades have also brought new tendencies aimed at increasing the environmental value of a place, e.g. by introducing rain gardens, renaturalising watercourses or implementing programmes of small-scale retention of surface runoff.

The significance of water in shaping the structure of Krakow's areas of greenery

The surface area of contemporary Krakow is 327 km², and its dimensions are 18 km in the north-south direction and 31 km in the east-west direction. Areas of the highest and high natural value together comprise ca. 10% of the city's surface, while areas that can be considered precious (which also include urban parks) comprise ca. 15% [1]. Krakow's location in the Vistula River valley, its varied topography, diverse plant cover and position at the edge of the Krakow-Częstochowa Upland and the Niepołomice Forest make up its unique natural and landscape values. Outstanding and varied works of architecture and urban planning add to this, ultimately resulting in a landscape that stands out not only on the national scale but on the global one as well. The presence of a river in the centre of a city significantly increases the attractiveness of the landscape: the number of natural points, view sequences and exposed locations increases, which is also an effect of topography. This is the case in Krakow, where the Vistula cuts through the city and smaller rivers and streams connect with it. Krakow's hydrological layout constitutes the basis for the design of a so-called system of river parks, green wedges that crystallise Krakow's urban layout. These linear waterside parks display varied landscapes, the separate traditions of various places and identities of their own, including natural ones. Systems of municipal greenery, by extending wedges of greenery beyond the city, connect with suburban regions to form a regional system of open areas [2].
In Krakow, as in many other cities, we can observe changes in the function of downtown areas formerly occupied by industrial structures, circulation-related grounds and storage areas. The possibility of developing these "reclaimed" areas as parks is also being exploited, e.g. the Stacja Wisła park in the former industrial district of Zabłocie. This part of the city changed drastically in the nineteenth century, from a suburban district into an industrial one, and is at present being actively revitalised [3]. A park with an area of 2 ha was built as an element of the Vistula river park, in a place formerly occupied by the infrastructure of the Kraków Wisła train station, from which it took its name. In 2016 a two-stage competition was concluded: 26 entries were submitted, from which the jury chose three. These were once again subjected to consultations, and the choice made by the jurors and the residents was unanimous. The best conceptual design was the one prepared by Michał Grzybowski, a second-cycle student of landscape architecture at the Cracow University of Technology. On 11 May 2018 another municipal park, the Stacja Wisła park, was officially opened. The park features 12 zones resulting from its functions and target users, including a recreational zone with a picnic site, a play zone with a natural playground, a flowery meadow, an urban farm with raised beds where residents can cultivate their own plants, a labyrinth, an open-air stage with a dancing space, and a bicycle parking space. The park also features a multifunctional pavilion designed in accordance with the precepts of sustainable development, i.e. featuring rain gardens. The Stacja Wisła park was recognised by the Association of Polish Town Planners (TUP) and won the "Newly created public space in greenery" category in a competition for the best public space, organised in cooperation with the Union of Polish Cities. In the same year the park also received a nomination for the prestigious Mies van der Rohe award, as one of 18 projects from Poland.

Case study: large water bodies in Krakow

Stagnant waters are a distinct element of Krakow's landscape. Some of them are natural water bodies; however, large anthropogenic lakes, of which Zakrzówek, Bagry, Staw Płaszowski, Przylasek Rusiecki and Zalew Nowohucki have the greatest surface areas, are among the largest and most popular with the city's residents. The existence of most of the stagnant water bodies is associated with the post-industrial heritage of the city: they are former mine grounds or quarries that were flooded by groundwater with a high water table. Each of the water bodies has its own character, however. Examples, along with selected properties concerning their size, are presented in the table below (tab. 1). It is appropriate to begin their characterisation with Zakrzówek, a water body with accompanying areas whose development has been a cause of controversy and conflict among numerous stakeholders for years. The site is part of the Eighth District of Dębniki, and its surroundings include the charming spot of Skałki Twardowskiego.
The water body itself was created in the place of a limestone quarry that was closed down in 1990; shortly afterwards the quarry was flooded with water, whose surface is currently surrounded by vertical limestone walls several dozen metres tall, partially overgrown with tall plants. The water itself takes on a turquoise shade (under the influence of natural chemical processes), and its depth reaches up to 30 metres in some places. The site is perceived as very charming: apart from the lake, one can also find climbing walls here or observe Krakow's panorama (the site has an excellent exposition of the Old Town). Visitors can not only distance themselves from the hustle and bustle of the city and come into contact with nature, but can also use the services of the scuba-diving school that operates on the lake. Its natural environment is a highly valuable aspect. Apart from the shade-giving dense tall greenery (reports of the Municipal Greenery Authority state that it comprises 48 species and over 7 thousand trees [4]), the area of the lake also includes xerothermic grasses [1], and the local fauna is an important aspect as well, mostly because of numerous species of butterflies and dragonflies. The development of Zakrzówek and its surrounding areas has remained a contentious issue for years. The cause was the conflicting interests of a developer (who bought a part of the site and planned to place commercial buildings there), the owners of neighbouring private lots, ecologists, and residents of the city who wanted to protect the local environment. The conflict, which lasted for over 10 years, ultimately ended with a buyout of the land by the city and a plan to make it accessible in the form of a park. Zakrzówek is also associated with other controversies: numerous cases of drowning and other accidents have been noted here (including broken limbs), as well as the incident of the "spring bonfire", when a Krakow-based student, using a social media website, gathered 22 thousand people for a massive party near the lake, leading to an illegal spontaneous mass event [5]. In 2016, the Board of Greenery of the City of Krakow, in cooperation with SARP, announced an urban-architectural competition for a programme and spatial concept for the Zakrzówek Park. The competition brief [6] included, among other things, the location of the Ecological Education Centre within the area designated for development, the zone of the arranged water sports space, and the location and form of the most important architectural and landscape elements. Great emphasis was also placed on minimising interference with the existing flora and fauna. Despite great interest in the subject, it was not possible to select a winner, as the jury declared that none of the projects met the competition's objectives, although the proposals presented were of a high standard [7]. The highest score (second place) was given to the work by Aldona Kret, Katarzyna Elwart, Katarzyna Janicka, Alina Ziemiańska and Weronika Jaworska. Among other things, the coherence of the proposed architectural forms was appreciated, as was the harmonious inclusion of the buildings in the terrain [8]. The failure to select a winner made it necessary to develop a separate land development concept, which was entrusted to the design studio F11 [9].
The F11 concept arranges simply shaped paths and various park functions, locating elements such as swimming pools, climbing spaces, picnic places and a dog run. The buildings of the Ecological Information and Water Sports Centres have been designed as one-storey facilities with gently flowing lines of wood-clad facades. In 2018, the process of cleaning up the area and preparing it for the construction of the park began. It is expected that the implementation may take about 3-4 years.

Another water body is Staw Płaszowski (the Płaszów Lake), which was created in place of a former gravel and clay pit; these materials were extracted for the construction of a railroad interchange near the water body. It is a lake located in the eastern part of Krakow, in the district of Płaszów, very close to Powstańców Śląskich Street. The reservoir lies in a part of the city that is valuable in terms of its landscape, in the vicinity of the former Liban quarry, Lasota Hill and the Mound of Krakus, which are well exposed to view from the paths around the lake. The place has high visual quality: the extensive lake, partially covered in reeds and surrounded by tall waterside greenery and the nearby hills, produces picturesque views. Due to its shallowness and its location next to public infrastructure (a clearway, big-box stores), Staw Płaszowski does not play the role of an urban bathing spot. Its primary asset is its natural environment, which has so far developed through natural succession. Along the border of the entire lake there are reed communities of the Phragmition alliance. These are floristically poor aggregative communities characteristic of the eutrophic shores of stagnant water bodies [10]. The areas to the north-west and east of the lake are covered with fresh oat-grass meadows (Arrhenatheretum elatioris typicum). These are floristically rich, anthropogenic communities on fertile, slightly damp mineral soils, characteristic of the lowland areas of Poland. They are among the most characteristic replacement plant communities on sites of mixed-species deciduous forest. They are dominated by meadow grasses such as the meadow oat-grass, as well as flowering legumes [10]. The medium-height and tall plant group includes the shore shrubbery that covers the remaining part of the area, in this case bearing the closest resemblance to a riparian poplar forest community. These are communities of primarily self-sown plants, with Acer negundo and Sambucus nigra as the dominant species.

The areas surrounding the lake also provide favourable conditions for the development of fauna. The area around Staw Płaszowski has been observed to feature 24 species of dragonfly, including one protected species, Sympecma paedisca. During avifauna studies performed in 2017 a total of 37 bird species were observed, among them the mute swan, the mallard, the great crested grebe, the Eurasian coot, the common tern, the common blackbird, the common swift, the cuckoo, the great reed warbler, the Eurasian blackcap, the magpie, the jackdaw, the rook, the crow, the seedeater and others. The invertebrate fauna of the water body is also quite rich, as many different groups of animals were observed in it, primarily insects whose larvae live in aquatic environments [11]. Staw Płaszowski has, however, also encountered problems with maintaining its use as an area of high ecological value.
Until recently, the periodic lowering of its water level was seen as a significant problem, one that can be attributed to numerous causes, including excessively dense development in the vicinity of the lake, the draining of excavations for the construction of new buildings, and atmospheric conditions (summer droughts) [12]. Littering is another problem: this is a space in the city centre that possesses no clear infrastructure, lighting or monitoring, but is overgrown with tall greenery, creating conditions suitable for the emergence of illegal waste dumping sites.

In 2016, the Municipal Greenery Authority of Krakow announced a competition for the development of the space surrounding the Płaszów Pond, whose main objective was to locate a park in this area (including land currently leased by two shopping centres, occupied by service yards and parking lots). The winner was the design team composed of Katarzyna Dorda, Karolina Porada and Joanna Szwed (fig. 1). The highest-rated concept divided the space into zones, including rest and recreation areas, an educational garden, a zone of natural succession, a zone of special protection of ecological values and a zone for the residents of the surrounding housing estates. The designed structures (including a café pavilion and platforms) were linked by a free-form system of paths. In the pond itself, the location of small breeding islands is envisaged. After the results were announced, part of the design team, in cooperation with the Land Arch studio run by Małgorzata Tujko, started to develop a detailed design for a part of the area comprising plots of land owned by the city. The new scope of the study covered areas located along the northern part of the pond, excluding the leased areas. The determination of the new park boundaries made it necessary to develop a new development concept, which in terms of functionality and composition refers back to the competition entry. Ultimately, it was planned to locate in the park four wooden rest platforms, arbours, two playgrounds of a naturalistic character and a free layout of paths. The recommendations of the ecologists cooperating with the designers, professor Roman Żurek and Karol Ciężak [11], were also taken into account. It was decided to exclude from clearing and development the north-western part of the complex, where an educational garden was originally planned. In the future, this area is to be cut off from the mainland with a moat and become a large ecological enclave, playing the role of a breeding island. In May and June 2018, cleanup works were carried out in the area covered by the executive design. Currently, the concept is awaiting implementation.

At a distance of around 1 km from Staw Płaszowski there is yet another anthropogenic water body, called Bagry, which is also a part of the Twelfth District of Podgórze. The lake is located between Lipska and Wielicka streets, a small distance away from the Kraków Prokocim railway station. Its origin is similar to the previous case, as it is a former gravel pit that has been flooded. The previously described Staw Płaszowski is also called Małe Bagry (Little Bagry) due to the numerous similarities between the two water bodies. This is particularly visible in terms of their environment: the swampy areas of Bagry are likewise overgrown with wetland reed and bulrush grasses, while the nearby land features tall and medium-height greenery with a character close to that of a riparian forest.
The water body has extensive infrastructure for sports and recreation. It offers jetties and water equipment rentals, as well as gastronomic premises that are open during the season. During the sailing season the Bagry lake is used for open-air events and competitions.

Another site, this time located in the Nowa Huta district, is Przylasek Rusiecki, a complex of 14 water bodies with a combined area of 86 ha. In this case, too, its creation is associated with human activity: the water bodies are located in a former bend of the Vistula River, where gravel was extracted in the 1950s and 1960s for the construction of the Lenin metallurgy plant (currently the T. Sendzimir metallurgy plant). Ecologists point out that the area is important for environmental reasons, as it constitutes the dwelling of protected animal species such as the great crested grebe, the mute swan, the Eurasian coot, the European green toad and the smooth newt [13]. The current atlas of Krakow's plant life also points to the area's high value in terms of natural greenery. The area of Przylasek Rusiecki is marked on maps as a place of occurrence of willow and poplar riparian communities (Salici-Populetum), reed communities (Phragmition), as well as aquatic plants [1]. Due to the picturesque water surfaces and the natural character of the accompanying greenery, the complex is an area of high visual and landscape value, thanks to which it has been nicknamed "Little Mazury". Up to 2017 the site was leased by the Krakow Fishing Association, which stocked its lakes with fish. After the agreement expired, plans were formulated to convert the wild area into a generally accessible public park, in accordance with the approved local spatial development plan [14], in which the water bodies and their accompanying areas were designated as areas of surface inland waters and landscaped greenery. In connection with this, work has begun on a conceptual proposal for the development of the park as a part of the Nowa Huta Przyszłości project [15]. The planned work includes clearing and landscaping the area, as well as the construction of a network of paths, pedestrian route sequences, and shared bicycle and pedestrian paths. Places for rest and recreation are also planned to appear: a sandy beach with demarcated bathing spots, an open space for events, as well as footbridges and a platform for launching small boats. The recreational infrastructure is meant to be supplemented by dressing rooms, toilets, as well as a water-sports equipment rental. The design is also meant to protect the landscape qualities and biodiversity of the area: interference with the natural environment is to be minimal and the planting design is based on native tree and shrub species.

Zalew Nowohucki is also a large anthropogenic water body, located in the district of Bieńczyce. It is an artificial lake created in the 1950s in accordance with a design by the engineer A. Ścigalski [16], which also covered the surrounding park space. It has a regular rectangular shape, accentuated by an island. The lake is continuously fed with water from the Dłubnia River, which flows along its eastern shore. In the 1950s and 1960s the site constituted the main recreational space for the residents of Nowa Huta, which was just being established at the time.
The period that followed saw the area become neglected: its infrastructure was damaged, the site was littered with rubbish, and its waters silted up. It was only at the start of the twenty-first century that a process of modernisation of the lake and its surroundings was initiated. Pedestrian and walking routes were delineated and renovated, the lake was deepened, and the park was fitted with new furniture. A sandy beach was prepared near a bathing spot, along with spaces organised for children and young people: playgrounds, a volleyball pitch and a tennis court. Fish have been introduced into the lake, which is now used by anglers. Furthermore, in 2016, as in the case of Przylasek Rusiecki, a conceptual design for further modernisation was presented as part of the Nowa Huta Przyszłości project. It assumes the construction of a new beach, the establishment of an information and history trail, the construction of an open-air gym, the renovation of the observation platform, the replacement of park furniture and the placement of chess tables [17]. Similarly to other water bodies in Krakow, Zalew Nowohucki is an area of high environmental value, primarily because of the fauna that lives there. This includes several species of birds which have their nests on the island (e.g. the mute swan, the mallard, the coot and the tern).

Water features in Krakow's parks

Water is also present in Krakow's gardens and parks. In manorial and palatial gardens it often constituted an essential compositional element, with most sites featuring a pond, initially a utilitarian one, which over time became ornamental. Examples are many: the royal garden in Łobzów, and residential gardens in Dębniki, Bieżanów, Prokocim and Skotniki. Unfortunately, due to neglect and damage during the post-war period, and despite the restoration of numerous parks, water bodies survived in only a handful of them, including in Skotniki. However, they do exist in historical gardens and public parks, considerably improving their attractiveness. Ponds and pools can be found at, among other places, Planty park, the first Krakow green ring, created as a result of the demolition of the city's fortifications. They can also be found in Park Krakowski, a former entertainment garden, and Park Jordana, the prototype of a children's garden in Poland. The pond with a birch-covered island at Planty park is one of its more picturesque fragments, considered to be Secessionist in character. It was funded by the municipal waterworks at the start of the twentieth century, at a time when Planty park was already almost a century old.

Another Krakow project in which water played an important role was Park Krakowski, established in 1885 [2]. It was established as a suburban park, placed on land rented from the army. The site included a former army swimming pool, which was converted into a public one over time, with a pond being built as well. Both pools were supplied with water from the Młynówka Królewska creek under an easement. During summer the pond was used for boat rides, while during winter it constituted an attraction in the form of an ice-skating rink, which was remembered by one of Krakow's residents in the following manner: "The so-called elegant audience would gather at the rink in Park Krakowski". A fountain appeared in the park a little later. A design was prepared before the war (1938-1939), in which the water layout was completely changed.
The significantly expanded lake was planned to have two islands: an elongated, pear-shaped one featuring trees, and a circular one with a bridge and a gazebo, surrounded by trees. The pond, which would narrow at the centre, was to be crossed by a small bridge under which boats rented at the jetty could sail [18]. The planned changes were described as follows: "The pond, as a reservoir of clean and running water, on an impermeable base, enlivened by swans, fish and aquatic plants, will be considerably expanded due to the fact that water constitutes a beautiful and attractive garden motif. The jetty for boats and a pavilion with a cafe, located adjacent to the water, will supplement the park. Furthermore, spaces for children to play, with a shallow pool and sandbox, were designed". The design that featured these assumptions was not fully completed. Only a pond with an island remained, with a fountain built on it after the war, in the form of a number of jets gushing out of a retaining wall.

There are also ornamental fountains and water jets in Krakow's parks, which are among the most beautiful decorations of areas of greenery, although they are not numerous. The modern fountain by Maria Jarema, whose form alludes to a grand piano and which was placed in Planty park in 2006, is undoubtedly a significant attraction. However, still too few such features are being built in areas of greenery.

Conclusions

The share of water in the design of areas of greenery is significant due to the river park system, a structural element of Krakow's urban layout. In the planning of land development, the impact of rivers, streams and water bodies in brownfields is a matter of great importance due to their recreational and tourism-related use. Water offers an entire array of possibilities: boating, kayaking, fishing, walking along environmental trails and many others, providing designers with a broad range of opportunities. Much is already happening, often as a result of competitions, which is a good model. Water bodies located in brownfield areas are starting to gain considerable significance in Krakow's structure: they are being landscaped, developed and protected. They are areas that are important from an ecological point of view, constituting habitats of wild birds, amphibians, lizards and small mammals. Natural plant communities featuring a high degree of diversity also play a significant role, from the wetland shrub communities of Przylasek Rusiecki to the xerothermic grasslands of Zakrzówek. Water bodies also constitute actively used places of recreation, despite the fact that not all of them have so far been equipped with proper infrastructure. The third aspect is the fact that some of them possess cultural value, being a sort of industrial legacy (an effect of land reclamation). There are still too few small water features in areas of greenery, even though they enjoy great popularity, increasing not only the aesthetic but also the environmental quality of life.

Summary

This short and incomplete review does not exhaust the subject, but it demonstrates, on the example of Krakow, just how varied the roles are that water can play in public areas of greenery and how broad the possibilities of its use: starting with the hydrographic network, which can constitute one of the basic elements of urban structure and of regional open areas, through water bodies adapted to the needs of rest and recreation, down to small-scale detail.
In Poland, water is still not fully used in shaping areas of public greenery, although this seems to be changing lately, as can be seen from the projects being initiated in Krakow. Historical examples provide an entire array of solutions and remain a largely untapped source of inspiration. Awareness of the high environmental value of waterfront areas is also increasing, which is why appropriately drafted local plans are being prepared (unfortunately, they still do not cover the whole of Krakow) and forms of protection for the most valuable areas are being introduced.
2019-09-19T09:09:07.392Z
2019-09-18T00:00:00.000
{ "year": 2019, "sha1": "bd141a3e0d97652a9dac9d9a9f6c9b32a27903b9", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/603/5/052038", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "372a5f3e9fc1e6ac2ba1fdb5fc60b1022fca0de0", "s2fieldsofstudy": [ "Environmental Science", "History" ], "extfieldsofstudy": [ "Geography", "Physics" ] }
252096889
pes2o/s2orc
v3-fos-license
Acute high altitude exposure, acclimatization and re-exposure on nocturnal breathing

Background: Effects of prolonged and repeated high-altitude exposure on oxygenation and control of breathing remain uncertain. We hypothesized that prolonged and repeated high-altitude exposure will improve altitude-induced deoxygenation and breathing instability. Methods: 21 healthy lowlanders, aged 18-30 y, underwent two 7-day sojourns at a high-altitude station in Chile (4-8 hrs/day at 5,050 m, nights at 2,900 m), separated by a 1-week recovery period at 520 m. Respiratory sleep studies recording mean nocturnal pulse oximetry (SpO2), oxygen desaturation index (ODI, >3% dips in SpO2), breathing patterns and subjective sleep quality by visual analog scale (SQ-VAS, 0-100% with increasing quality) were evaluated at 520 m and during nights 1 and 6 at 2,900 m in the 1st and 2nd altitude sojourn. Results: At 520 m, mean ± SD nocturnal SpO2 was 94 ± 1%, ODI 2.2 ± 1.2/h, SQ-VAS 59 ± 20%. Corresponding values at 2,900 m, 1st sojourn, night 1 were: SpO2 86 ± 2%, ODI 23.4 ± 22.8/h, SQ-VAS 39 ± 23%; 1st sojourn, night 6: SpO2 90 ± 1%, ODI 7.3 ± 4.4/h, SQ-VAS 55 ± 20% (p < 0.05, all differences within corresponding variables). Mean differences (Δ, 95%CI) in acute effects (2,900 m, night 1, vs 520 m) between 2nd vs 1st altitude sojourn were: ΔSpO2 0% (-1 to 1), ΔODI -9.2/h (-18.0 to -0.5), ΔSQ-VAS 10% (-6 to 27); differences in acclimatization (changes night 6 vs 1) between 2nd vs 1st sojourn at 2,900 m were: ΔSpO2 -1% (-2 to 0), ΔODI 11.1/h (2.5 to 19.7), ΔSQ-VAS -15% (-31 to 1). Conclusion: Acute high-altitude exposure induced nocturnal hypoxemia, cyclic deoxygenations and impaired sleep quality. Acclimatization mitigated these effects. After recovery at 520 m, repeated exposure diminished high-altitude-induced deoxygenation and breathing instability, suggesting some retention of adaptation induced by the first altitude sojourn, while subjective sleep quality remained similarly impaired.

Introduction

Recent developments in means of transport and infrastructure make traveling to high altitude increasingly common. Many settlements worldwide are located at high altitudes (above 2,500 m), with regular working places up to more than 5,000 m. In particular, astronomical observatories and mines are located at very high altitudes. Working schedules of professionals at such sites may require rapid ascent from low altitude and subsequent shifts of one or several days, or repeated shifts of several days alternating with periods of recovery at low altitude. The effects of prolonged or repeated exposure to high altitude have not been extensively studied but may include discomfort and high-altitude illness (Luks et al., 2017). Even though physiological acclimatization tends to counteract the effects of the reduced inspired PO2 and consecutive hypoxemia, depending on the altitude reached and individual susceptibility, this response may not prevent adverse consequences, or it may itself be perceived as uncomfortable. Thus, at altitudes >1,500 m, high altitude periodic breathing, an oscillatory pattern of waxing and waning of ventilation with periods of hyperventilation alternating with central apneas or hypopneas, is commonly observed in healthy subjects during sleep and sometimes also during wakefulness and even during physical exertion (Latshang et al., 2013a; Bloch et al., 2015).
It may be associated with frequent arousals from sleep with a distressing sense of suffocation that prevents revitalizing rest, and it may impair daytime performance (Latshang et al., 2013b). Although high altitude periodic breathing has been well known for many years (West et al., 1985), its evolution during a sojourn of a few days, or during repeated sojourns at high altitude, is incompletely understood. Moreover, the consequences of high altitude periodic breathing in terms of daytime performance in real working schedules of people at such altitudes remain elusive even though they are clinically highly relevant. Therefore, the purpose of the current study was to evaluate in healthy volunteers the effects of high altitude on nocturnal oxygenation and the breathing pattern during acute exposure, during prolonged exposure with daily ascents to very high altitude, and during re-exposure. This pattern of altitude exposure was selected to mimic that of professionals in high altitude work places. We hypothesized (a) that acute high altitude exposure induces pronounced nocturnal hypoxemia and periodic breathing that improves over the course of a few days and nights at high altitude, and (b) that re-exposure to high altitude after 1 week near sea level induces less pronounced hypoxemia and periodic breathing compared to the first exposure, suggesting retention of some altitude-induced physiological changes.

Ethical approval

All participants gave written informed consent and the study was approved by the University of Calgary, Canada, Conjoint Health Research Ethic Board (CHREB ID: REB15-2709) and the Cantonal Ethic Committee Zurich, Switzerland (REQ-2016-00048). The trial was registered at ClinicalTrials.gov (NCT02730143).

Study design and setting

This prospective study carried out in Chile comprised 3 consecutive 7-day periods. Participants spent two 7-day cycles at high altitude and recovered for 7 days near sea level in-between (Figure 1). After baseline measurements in Santiago de Chile (LA, 520 m, 1,706 ft, barometric pressure 709 mmHg), participants travelled for 2 h by plane and 2 h by bus to the Atacama Large Millimeter-Submillimeter Array (ALMA) Operation Support Facility (ASF, 2,900 m, 9,514 ft, barometric pressure 542 mmHg), where they spent the next 7 nights. During daytime over this period, they travelled by car within 45 min to the ALMA Operation Site (AOS, 5,050 m, 16,568 ft, barometric pressure 419 mmHg) and stayed there for 4-8 h without oxygen supplementation. This exposure pattern is similar to that of workers at the telescope station. After the first 7-day cycle at high altitude, participants travelled back to 520 m for 7 days of recovery. A second high altitude sojourn, Cycle 2, with a protocol identical to that of Cycle 1, concluded the study. At 520 m and during the 1st and 6th night at 2,900 m, respiratory sleep studies were performed, followed by daytime evaluations in the morning at 2,900 m and additional evaluations over the course of the day at 5,050 m, as illustrated in Figure 1. The study was performed in conjunction with examinations on cognitive performance published previously (Pun et al., 2018a; Pun et al., 2018b). Participant characteristics and reaction time have been presented in the cited reports. Data from sleep studies, the focus of the current paper, have not been published.

Participants

Healthy men and women were recruited in the area of Calgary, Canada, and Zurich, Switzerland.
Exclusion criteria included a previous history of intolerance to altitudes <3,000 m, current pregnancy, or any health impairments which required regular treatment. All participants underwent clinical examinations prior to the altitude sojourns. During the expedition, caffeine consumption was allowed, while alcohol or any medication (in particular acetazolamide, among others) were prohibited.

Measurements

Prior to the baseline measurement at 520 m, a familiarization session was performed during the preceding day/night in all participants.

Respiratory sleep studies

Recordings were performed from 22:00 to 06:00 (time in bed, TIB). Measurements included finger pulse oximetry (SpO2), nasal cannula pressure swings, thoracic and abdominal excursions by inductance plethysmography, electrocardiogram (ECG) and body position (AlicePDx, Philips AG Respironics, Zofingen, Switzerland). Breathing disturbances were scored as reported previously (AASM, 1999; Bloch et al., 2010). Apneas/hypopneas were defined as a >50% reduction in nasal pressure swings or chest wall excursions for ≥10 s. Obstructive apneas/hypopneas were scored if asynchronous or paradoxical chest wall excursions suggested continued effort during the event, or if a flattened inspiratory portion of the nasal pressure curve suggested flow limitation. Central apneas/hypopneas were scored in the absence of criteria of obstructive events. Three or more consecutive central apneas/hypopneas with a duration of ≥5 s were scored as periodic breathing. The apnea/hypopnea index (AHI) was defined as the number of events/h TIB, the oxygen desaturation index (ODI) as the number of SpO2 desaturation dips >3%/h TIB.

Clinical examination, questionnaires and spirometry

At 2,900 m, blood pressure, pulse rate, weight and general well-being were evaluated before breakfast in the morning after sleep studies. Participants rated their sleepiness on the Karolinska Sleepiness Scale, ranging from 1 (very awake) to 9 (very tired) (Kaida et al., 2006). Subjective sleep quality was assessed by a visual analog scale ranging from 0 (extremely bad) to 100 mm (excellent). Insomnia was assessed by asking participants to estimate the time until falling asleep, the number of awakenings and the total time spent awake during the night. Daytime assessments at 5,050 m included spirometries with reference values of the Global Lung Function Initiative (Quanjer et al., 2012), clinical examinations and assessment of sniff nasal inspiratory pressure (SNIP) (Laveneziana et al., 2019) after several hours of exposure to very high altitude.

FIGURE 1. Overview of study design and data collection. The consecutive phases were: baseline period at 520 m; first high altitude sojourn (Cycle 1, days 4-10, with nights spent at 2,900 m, days at 5,050 m); recovery period at 520 m, days 11-17; second altitude sojourn (Cycle 2, days 18-24). Assessments included overnight sleep studies (N) and daytime assessments (D). Effects of acute high altitude exposure were evaluated in Cycle 1 during night 1 and day 1 (C1, N1; C1, D1) in comparison to the baseline night and day (N, D); effects of acclimatization in Cycle 1 were evaluated in night 6 and day 6 (C1, N6; C1, D6) in comparison to the first night and day. Corresponding evaluations took place during the high altitude re-exposure in Cycle 2. The acute effects of high altitude re-exposure were evaluated in Cycle 2 in comparison to overnight and daytime assessments at the end of the low altitude recovery period.
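The event-scoring definitions above are essentially algorithmic: the ODI, for example, counts peak-to-trough SpO2 drops exceeding 3% and normalizes them per hour of time in bed. The study scored events with the AlicePDx system; the sketch below is only a minimal illustration of the ODI definition, assuming a uniformly sampled oximetry trace and a simple re-arming rule (recovery to within 1% of the preceding baseline) that is not part of the published scoring criteria.

```python
import numpy as np

def desaturation_index(spo2, fs_hz, dip_threshold=3.0):
    """Count SpO2 desaturation dips exceeding dip_threshold (in %)
    and normalize per hour of recording (here: time in bed, TIB)."""
    spo2 = np.asarray(spo2, dtype=float)
    events = 0
    baseline = spo2[0]      # running local maximum = resaturation level
    in_dip = False
    for v in spo2[1:]:
        if not in_dip:
            baseline = max(baseline, v)
            if baseline - v > dip_threshold:
                events += 1          # >3% dip relative to preceding baseline
                in_dip = True
        elif v >= baseline - 1.0:    # assumed re-arming rule (see above)
            in_dip = False
            baseline = v
    hours_tib = len(spo2) / fs_hz / 3600.0
    return events / hours_tib

# Synthetic check: 8 h of 1-Hz oximetry with one 5% dip every 24 min
t = np.arange(8 * 3600)
trace = 94.0 - 5.0 * (np.sin(2 * np.pi * t / 1440) > 0.99)
print(desaturation_index(trace, fs_hz=1.0))   # ~2.5 events/h
```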
Outcomes

The main outcome was the mean nocturnal SpO2 over the course of high altitude exposure during two sojourns at 2,900 m compared to 520 m. Secondary outcomes were further indices of oxygenation and of high altitude periodic breathing, such as AHI and ODI, and results from clinical assessments, sleep-related questionnaires and lung function.

Sample size

To detect a minimal difference in nocturnal SpO2 of 2% (SD of 3%) with a power of 80%, an alpha level of 0.05 and a dropout rate of 10%, 21 participants were required. This sample size also allowed detection of changes in important secondary outcomes, such as a difference in AHI or ODI between acute exposures to high altitude in Cycle 2 vs Cycle 1 of 10 events/h, assuming a SD of 15 events/h.

Statistical analysis

According to the intention-to-treat principle, missing data in the primary outcome (SpO2) were replaced by multiple imputations (n = 20) using regression models with chained equations including anthropometrics, altitude location, Cycle and examination day as independent predictors (Zhang, 2016). Occasional missing data in secondary outcomes were not replaced. Data are presented as means ± SD. Mean differences and 95% confidence intervals were computed using mixed linear regression models with outcomes as dependent variables and altitude location, Cycle and examination day as independent variables. For the primary outcome SpO2, Bonferroni corrections to account for 2 comparisons (differences between 2nd vs 1st Cycle in acute altitude effects and acclimatization effects) were applied. For secondary outcomes, only hypothesis-based comparisons were performed to minimize the false positive discovery rate. Statistical significance was assumed at p < 0.05 and if 95% confidence intervals of mean differences did not include zero.

Results

All 21 healthy volunteers completed the study. Baseline characteristics are presented in Table 1. No adverse events requiring therapeutic interventions were reported. Data from all 21 participants were analyzed. The results are displayed in Figure 2 (individual trends in Supplementary Figure S1) and numerically reported in Tables 2 and 3.

Respiratory sleep studies

Among 126 assessments of the main outcome, missing data had to be replaced by multiple imputations in 3 instances (2%). Baseline sleep studies revealed normal indices of nocturnal oxygenation and a normal AHI. In the first night at 2,900 m (Cycle 1, night 1), there was a significant decrease in nocturnal oxygenation, reflected in a lower mean nocturnal SpO2 and longer night-time spent with SpO2 <95%, <90% and <85%. Compared to 520 m, the ODI and AHI were increased in the first night at 2,900 m, related to the emergence of periodic breathing with central apneas/hypopneas. After 5 additional nights at 2,900 m (Cycle 1, night 6), SpO2, heart rate, ODI and AHI had partially normalized (p < 0.05, Cycle 1, night 6 vs night 1, all comparisons). At the end of the recovery period at 520 m, the second baseline sleep studies revealed a higher nocturnal SpO2 and a lower heart rate compared to the first baseline. After 7 nights at 520 m, during re-exposure to high altitude in Cycle 2, the sleep study in the first night at 2,900 m (Cycle 2, night 1) again revealed decreased pulse-oximetric indices of oxygenation and increased ODI and heart rate (p < 0.05 compared to 520 m), but these changes, although similar compared to Cycle 1, resulted in less altitude-induced deterioration (Table 2).
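The sample-size statement in the Methods above can be reproduced with the standard normal-approximation formula for detecting a mean difference; a minimal sketch, noting that the authors' exact formula and rounding conventions are not reported, so the paired-difference form used here is an assumption:

```python
from math import ceil
from scipy.stats import norm

delta, sd = 2.0, 3.0              # minimal SpO2 difference (%) and its SD
alpha, power, dropout = 0.05, 0.80, 0.10

z_a = norm.ppf(1 - alpha / 2)     # two-sided test
z_b = norm.ppf(power)
n_raw = ((z_a + z_b) * sd / delta) ** 2      # ~17.7 participants
n_total = ceil(ceil(n_raw) / (1 - dropout))  # inflate for 10% dropout
print(n_raw, n_total)   # 17.7 -> 20; rounding choices give 20-21, in line
                        # with the 21 participants enrolled
```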
After 5 nights in Cycle 2 at 2,900 m, SpO2 and ODI had partially improved, but the changes were less pronounced than those in Cycle 1, as the initial deviations of the values from the low altitude baseline were less prominent than those in Cycle 1, night 1 (Figure 2, Supplementary Figure S1).

TABLE 1. Values are counts and medians (quartiles).

FIGURE 2. Effect of acute high-altitude exposure, acclimatization and re-exposure on indices of nocturnal oxygenation (A-C) and heart rate (D). In panels (A-D), mean ± SD values at 520 m and at 2,900 m, nights 1 and 6, in Cycles 1 and 2 are shown. Panels A′-D′ illustrate changes in variables with acute ascent in Cycles 1 and 2 (vector arrows C1 and C2) along with their mean difference (Diff.) and 95% confidence interval. Panels A″-D″ illustrate changes with acclimatization in Cycles 1 and 2 along with their mean difference and 95% confidence interval. *p < 0.05 vs 520 m in corresponding Cycle (acute altitude effect); ‡p < 0.05 vs 1st night at 2,900 m in corresponding Cycle (acclimatization effect); †p < 0.05 vs corresponding Baseline or value in Cycle 1, respectively (repeated exposure effect); **p < 0.05 for differences between Cycles.

Subjective sleep assessment

Compared to the low altitude baseline, the first acute exposure to 2,900 m was associated with a reduced subjective sleep quality, a longer estimated time to fall asleep and a longer time spent awake during the night. These changes partially improved with acclimatization over 5 days (Table 2). After returning to 520 m, subjective sleep quality, estimated time spent awake and number of awakenings were similar to those in the first baseline night at 520 m.

Daytime evaluation

Ascent to 2,900 m and 5,050 m in Cycle 1 was associated with an increase in heart rate and a reduction in forced vital capacity (FVC) and SNIP on day 1 at 5,050 m. These values did not significantly change over the altitude sojourn in Cycle 1. In Cycle 2 at 5,050 m, heart rate and blood pressure remained unchanged, but reductions in FVC and SNIP were noted in comparison to the second baseline. Over the course of the altitude sojourn, daytime assessments revealed no significant changes in heart rate, blood pressure and lung function.

Discussion

We performed a comprehensive prospective clinical and physiological study in young, healthy volunteers who spent two periods (Cycles) of 7 nights at 2,900 m combined with daytime sojourns at 5,050 m, with a 7-day recovery period near sea level in-between. Compared to baseline near sea level, the main findings during the first Cycle at high altitude included pronounced nocturnal hypoxemia and periodic breathing associated with impaired subjective sleep quality after arrival at 2,900 m. These acute altitude effects partially improved with acclimatization over the course of the stay at high altitude. After low-altitude recovery, evaluations during the second high-altitude Cycle revealed less pronounced acute physiological and subjective effects, suggesting some retention of adaptation induced by the first altitude sojourn. Our findings are novel and clinically important as they may help to optimize initial and longer term work schedules and the use of measures (such as oxygen or medication, for example) that reduce adverse effects of altitude in professionals at high altitude work places.

Acute exposure to high altitude (Cycle 1)

In a previous study of lowlanders at 2,590 m, altitude-induced nocturnal hypoxemia and periodic breathing were associated with an impairment in daytime psychomotor vigilance (PVT) reaction time (Latshang et al., 2013b).
In 16 mountaineers studied at a higher altitude of 4,559 m, hypoxemia and periodic breathing were more pronounced (median SpO2 67%, AHI 60.9/h), and the proportion of deep sleep was reduced more (reduction of NREM stages 3 and 4 by 12% vs sea level), than at 2,590 m. In the current study, nocturnal SpO2 (86%) and the AHI (26.6/h) were between the values reported at 2,590 m and 4,559 m, illustrating the altitude-dependence of hypoxemia and breathing disturbances. Because of logistic constraints, neurophysiologic monitoring of sleep was not feasible in the current study. Therefore, we are unable to assess potential interactions among breathing and sleep disturbances that may have modified the observed changes in AHI. Expectedly, the altitude-induced rise in AHI was predominantly due to central apneas/hypopneas, but a slight increase in obstructive events was also observed (Table 2). We propose that the mechanisms promoting high altitude periodic breathing mentioned above may also have enhanced the propensity of certain subjects with sleep-induced upper airway instability to experience some obstructive events, as shown previously at lower altitude (Latshang et al., 2013b). In the current study, subjectively perceived sleep quality assessed by VAS was impaired during the first night at 2,900 m (Table 2). Presumably, sleep disruption and hypoxemia at high altitude account for the impaired daytime vigilance and cognitive performance reported previously in the participants of the current study (Pun et al., 2018a; Pun et al., 2018b), but this was not associated with subjectively perceived sleepiness. Altitude exposure may therefore have differential effects on perceived sleep quality, sleepiness and cognitive performance, and/or the instruments used to evaluate these outcomes may vary in their responsiveness to effects of hypoxia. Spirometry at 5,050 m revealed a reduced FVC in association with a reduced SNIP, suggesting an altitude-related reduction in inspiratory muscle strength and, possibly, interstitial pulmonary fluid accumulation, as reported previously (Clarenbach et al., 2012). These changes may alter the plant gain of the respiratory feedback control system and thereby modulate the propensity to periodic breathing (Dempsey, 2005), although our data do not allow corroboration of this hypothesis. The higher forced expiratory volume in 1 s/FVC ratio at 5,050 m compared to near sea level is consistent with a reduced air density and an associated reduction in airflow resistance.

Acclimatization to high altitude (Cycle 1)

The time course of changes in ventilatory control induced by a prolonged stay at high altitude has not been extensively studied. While some studies found an increase of periodic breathing during high altitude acclimatization despite improving oxygenation (Salvaggio et al., 1998; Bloch et al., 2010; Nussbaumer-Ochsner et al., 2012a; Burgess et al., 2013), others have shown no change or a decrease in periodic breathing (Latshang et al., 2013b; Tseng et al., 2015). This variation has been suggested to depend on altitude, i.e., the degree of hypoxia (Ainslie et al., 2013; Bloch et al., 2015). Thus, in 34 mountaineers who climbed from 3,750 m to 7,546 m within 19-20 days, periodic breathing continuously increased even though SpO2 improved over the same period (Bloch et al., 2010).
Conversely, in the current study, nocturnal SpO2, AHI and subjective sleep quality were significantly improved after spending 6 nights at 2,900 m and days at 5,050 m, supporting a positive role of acclimatization to these two alternating hypoxic stimuli in stabilizing the breathing pattern and sleep quality at 2,900 m. The relative importance of the degree and the pattern of hypoxic stimuli experienced during wakefulness and sleep on changes in control of breathing during a prolonged stay at high altitude requires further study. As shown in our previous report (Pun et al., 2018a), the reduced PVT reaction time after acute exposure to 5,050 m improved as well with acclimatization, supporting a role of nocturnal oxygenation and breathing pattern in next-day cognitive performance.

Re-exposure to high altitude (Cycle 2)

During re-exposure to 2,900 m we observed milder acute altitude effects on SpO2, in particular on indices of severe deoxygenation, and on ODI, compared to Cycle 1. This suggests that the physiologic acclimatization achieved during Cycle 1 was partially retained during low altitude recovery until the beginning of the second altitude ascent, thereby mitigating acute effects of re-exposure to hypoxia. The milder acute altitude effects during re-exposure might be due to a higher ventilatory responsiveness to CO2 and, thus, improved oxygenation during re-exposure, as suggested by data obtained in 21 lowlanders re-exposed to 5,260 m after a previous stay at that altitude and a 16-day sojourn near sea level (Fan et al., 2014). CO2 responsiveness was not measured in our study, but the higher SpO2 after acclimatization is consistent with this assumption. Since acute, altitude-induced changes in spirometry and SNIP were similar in the 1st and 2nd Cycle (Table 3), we have no evidence that differences in periodic breathing between the Cycles were due to differences in the plant gain of the respiratory feedback control system. Acclimatization effects during the stay at high altitude in Cycle 2 were less pronounced than those during Cycle 1, which can be explained by the milder acute altitude effects and a ceiling or floor effect in certain variables, such as SpO2 and the percentage of time spent with periodic breathing, which showed less pronounced deviations from normal values in the beginning of Cycle 2 and could therefore not improve further to the same extent as in Cycle 1. Subjective sleep quality in the first night at 2,900 m in Cycle 2 was similar compared to 2,900 m in Cycle 1; however, the estimated time to fall asleep improved during Cycle 2, suggesting some improvement in the altitude-related prolongation of sleep latency observed in Cycle 1.

Limitations

Study participants were young, healthy and active individuals and may not be representative of high altitude workers of older age or with pre-existing illness at the altitude of the ALMA or at other elevations. Whether one or several additional successive high/low altitude cycles, or different lengths of exposure and break, would optimize respiratory acclimatization requires further studies. As we did not assess sleep by neurophysiological monitoring, we report nocturnal respiratory events in reference to time in bed. This may have resulted in some underestimation of AHI, ODI and of altitude effects on these variables compared to corresponding indices referenced to total sleep time, since altitude is known to reduce sleep efficiency (Bloch et al., 2015).
Conclusion

We showed that acclimatization and repeated exposure to high altitude, simulating a typical work/leisure schedule of professionals at high altitude, mitigated acute physiological changes in terms of hypoxemia and cyclic deoxygenations during sleep, owing to partial retention of the acclimatization achieved during a preceding altitude sojourn. These findings, combined with the improved cognitive performance observed during a second altitude sojourn in the same participants (Pun et al., 2018a; Pun et al., 2018b), indicate that the studied working schedule ('7 days sleeping high, working higher'), followed by a second high altitude sleep/working cycle after a 7-day recovery period near sea level, reduces adverse effects of extreme altitude.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the University of Calgary, Canada, Conjoint Health Research Ethic Board (CHREB ID: REB15-2709) and the Cantonal Ethic Committee Zurich, Switzerland (REQ-2016-00048). The patients/participants provided their written informed consent to participate in this study.
2022-09-07T15:01:20.812Z
2022-09-05T00:00:00.000
{ "year": 2022, "sha1": "78e6c8cf2e54f73d4554122819fece94939a7e88", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "4ef05e736ba21a0eb6538c62e2811e1aae0f146f", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213717126
pes2o/s2orc
v3-fos-license
Organoid culture systems: Products supporting one of the biggest revolutions in biological research

Organoids are stem cell-derived structures that are generated in three-dimensional tissue culture. They are unique since they exhibit a high degree of self-organization and differentiation, and thus recapitulate many of the features of the tissues from which they were derived. Because of this, organoids are now firmly established as an essential tool in medical research, and have the potential to drastically reduce the number of animals required for experimentation.

Introduction

Organoids are stem cell-derived structures that are generated in three-dimensional tissue culture. Organoids are unique since they exhibit an advanced degree of self-organization and differentiation, and thus recapitulate many of the features of the tissues from which they are derived. Because of their similarity to their in vivo counterparts, organoids are emerging as the preferred model system for studying tissue development, homoeostasis, and diseased states in vitro. Organoid technology is widely recognized as one of the greatest technological breakthroughs in basic biological research in the last decade. For these reasons, this technology holds tremendous potential to complement 2D-culture methods and greatly reduce the use of animal models in research. STEMCELL Technologies is committed to developing culture media and providing tools that efficiently support the generation and maintenance of multiple organoid culture systems.

IntestiCult™ OGM (Mouse)

Upper intestines of C57BL/6J mice were dissected and washed several times in PBS. The intestinal fragments were then incubated for 20 minutes with Gentle Cell Dissociation Reagent (GCDR) at room temperature (RT) to separate the crypts and villi from the intestinal basal surface. The crypts were then isolated from the villi through centrifugation, counted and re-suspended in a 50:50 mixture of Corning® Matrigel® and IntestiCult™ OGM at 6,000 crypts/mL. A 50 µL droplet of the suspension was gently placed into the center of each well of a pre-warmed 12-well culture plate, creating a dome containing ~300 crypts/well. The domes were solidified at 37°C for 5 min and the wells were then flooded with 750 µL of IntestiCult™ OGM. Crypts were cultured at 37°C for 4-7 days with 3 medium changes per week. After 7 days, organoids were passaged by treating cultures with GCDR for 15 min at RT, followed by mechanical disruption into smaller aggregates. The resultant suspension was mixed with IntestiCult™ OGM at a 1:6 ratio and then re-plated as above to establish secondary cultures. This protocol was repeated to generate long-term cultures.

IntestiCult™ OGM (Human)

Human intestinal crypts were isolated by incubating human small intestine and colon tissue samples with GCDR for 30 minutes at 4°C with gentle agitation. The liberated crypts were counted and plated at 1,000 crypts per Corning® Matrigel® dome, flooded with 750 µL IntestiCult™ OGM (Human) and cultured at 37°C with 3 medium changes per week. The mature organoids were expanded by harvesting, dissociation by manual agitation and a 10 min GCDR incubation, and re-plating at a 1:4 split ratio every 10-12 days over 15 passages.

STEMdiff™ Cerebral Organoid Kit

This kit contains 2 basal media and 5 supplements, which are combined to prepare four separate complete media corresponding to the 4 stages of cerebral organoid formation.
Human pluripotent stem cells (hPSCs) maintained in mTeSR™1 were dissociated into single-cell suspensions and cultured in Embryoid Body (EB) Formation Medium (days 1-5, Stage 1). The resulting EBs were then transferred to Induction Medium (days 6-7, Stage 2); next, they were expanded by embedding in Corning® Matrigel® and cultured in Expansion Medium (days 7-10, Stage 3). The expanded organoids were then cultured in Maturation Medium, with agitation, for extended periods of time (days 10-40+, Stage 4). Morphological analysis of organoids was performed on days 5, 7, 10 and 40, which are the endpoints of Stages 1-4, respectively. Organoids at day 40 were analyzed by RT-qPCR or cryosectioned and processed for immunofluorescence (>3 organoids per analysis; 2 human embryonic stem cell (hESC) lines, n = 2, and 2 induced pluripotent stem cell (iPSC) lines, n = 2).

IntestiCult™ OGM (Human)

Spherical human intestinal organoid structures could be identified within 2 days of initial seeding. Organoids analyzed by immunohistochemical and qRT-PCR analyses for intestinal stem (LGR5 and AXIN2), Paneth (LYZ), enteroendocrine (CHGA), goblet (MUC2) and enterocyte (VIL1) cell marker expression were found to comprise both stem and differentiated cell types, demonstrating that the cultured human organoids closely resembled the structure of the human intestine. Organoids expanded for >1 year in culture with an average passaging ratio of 1:6 every 8 to 12 days.

Conclusions

STEMCELL Technologies supports the organoid culture system revolution in biological research by developing products like IntestiCult™ OGM (Mouse and Human) and the STEMdiff™ Cerebral Organoid Kit that enable the generation of organoids in a highly reproducible manner, thus reducing experimental variability and increasing the accuracy of results for researchers to advance their scientific discoveries.
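As a small numerical footnote to the IntestiCult™ mouse protocol above: the plating arithmetic (6,000 crypts/mL, 50 µL domes, ~300 crypts per dome) generalizes to a simple seeding calculator. The helper below is purely illustrative; its name and defaults are not part of any STEMCELL protocol.

```python
def dome_seeding(total_crypts, crypts_per_ml=6000.0, dome_volume_ul=50.0):
    """Plan Matrigel-dome plating from a counted crypt suspension.
    Returns (suspension volume in µL, crypts per dome, number of domes).
    Defaults mirror the mouse IntestiCult protocol described above."""
    crypts_per_ul = crypts_per_ml / 1000.0
    suspension_ul = total_crypts / crypts_per_ul
    crypts_per_dome = crypts_per_ul * dome_volume_ul  # 6/µL * 50 µL = 300
    n_domes = int(suspension_ul // dome_volume_ul)
    return suspension_ul, crypts_per_dome, n_domes

# e.g. 3,000 counted crypts -> 500 µL suspension, 300 crypts/dome, 10 domes
print(dome_seeding(3000))
```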
2020-02-27T09:36:05.191Z
2020-02-12T00:00:00.000
{ "year": 2020, "sha1": "a4105c5e48e1691bdb1b270fc8d088fb82b6617e", "oa_license": "CCBYNC", "oa_url": "https://pagepress.org/technology/index.php/bse/article/download/102/71", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "3ebb78316b4fce846b0b2149b068852822907491", "s2fieldsofstudy": [ "Biology", "Engineering", "Medicine" ], "extfieldsofstudy": [ "Engineering" ] }
119268116
pes2o/s2orc
v3-fos-license
Limits of the quantum SO(3) representations for the one-holed torus

For $N \geq 2$, we study a certain sequence $(\rho_p^{(c_p)})$ of $N$-dimensional representations of the mapping class group of the one-holed torus arising from SO(3)-TQFT, and show that the conjecture of Andersen, Masbaum, and Ueno \cite{1} holds for these representations. This is done by proving that, in a certain basis and up to a rescaling, the matrices of these representations converge as $p$ tends to infinity. Moreover, the limits describe the action of $SL_{2}(\mathbb{Z})$ on the space of homogeneous polynomials of two variables of total degree $N-1$.

Introduction

Quantum topology was born from the physical interpretation of the Jones polynomial made by E. Witten. An interesting problem in quantum topology is to study the asymptotics of quantum objects by linking them to classical objects. In this paper we focus on the quantum representations of the mapping class group of the one-holed torus arising from the Witten-Reshetikhin-Turaev SO(3) Topological Quantum Field Theory (TQFT). For any odd $p$ and any $c \in \{0, \ldots, \frac{p-3}{2}\}$, the SO(3) TQFT built in [2] associates to the one-holed torus a finite-dimensional complex vector space $V_p(T_c)$ of dimension $\frac{p-1}{2} - c$. Denoting by $\Gamma_{1,1}$ the mapping class group of the one-holed torus, $V_p(T_c)$ carries a projective representation of $\Gamma_{1,1}$ which depends on a choice of a primitive $2p$-th root of unity $A_p$. It is known (see [4]) and easy to see that in the case of $\Gamma_{1,1}$, this projective representation lifts to a linear representation which we denote
$$\rho_p^{(c)} : \Gamma_{1,1} \longrightarrow \mathrm{Aut}(V_p(T_c)).$$
On the other hand, $\Gamma_{1,1}$ maps onto $SL_2(\mathbb{Z})$. For $N \geq 2$, the latter group acts naturally on $H_N$, the space of homogeneous polynomials of two variables of total degree $N-1$. So we have a representation
$$h_N : SL_2(\mathbb{Z}) \longrightarrow \mathrm{Aut}(H_N).$$
Remark that if $p$ is odd and $p \geq 2N+1$ we can set $c_p = \frac{p-1}{2} - N$, so that $\dim(V_p(T_{c_p})) = N$. This creates a sequence of $N$-dimensional representations $\rho_p^{(c_p)}$ of $\Gamma_{1,1}$. It turns out that those representations are closely related to $h_N$. Indeed, up to rescaling, the quantum representations can be viewed as deformations of $h_N$. Here is a precise statement of what we mean:

Main theorem: Let $\mathbb{Q}(X)$ be the field of rational functions in an indeterminate $X$. Fix an integer $N \geq 2$. There exists a representation $\rho : \Gamma_{1,1} \longrightarrow GL_N(\mathbb{Q}(X))$ which does not depend on $p$, and a character $\chi_p : \Gamma_{1,1} \longrightarrow \mathbb{C}^*$ (which depends on the choice of root of unity $A_p$), such that:
• All the matrices in $\rho(\Gamma_{1,1})$ can be evaluated at $X = A_p$ and $X = -1$; those evaluations are denoted respectively $\rho^{[A_p]}$ and $\rho^{[-1]}$ (which are representations into $GL_N(\mathbb{C})$).

Let $t_y$ and $t_z$ be the Dehn twists along the canonical meridian and longitude on the one-holed torus. We choose the map $\Gamma_{1,1} \to SL_2(\mathbb{Z})$ such that $t_y$ maps to $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $t_z$ maps to $\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$. Since $t_y$ and $t_z$ generate $\Gamma_{1,1}$, the main theorem is implied by the following one:

Theorem 1: Let $\{Q_n^{(c_p)}\}$ be the basis of $V_p(T_{c_p})$ defined in [3], and let $T_p$ and $T_p^*$ be the matrices of $\mu_{c_p}^{-1}\rho_p^{(c_p)}(t_y)$ and $\mu_{c_p}^{-1}\rho_p^{(c_p)}(t_z)$ in this basis. Then there exist $T(X), T^*(X) \in GL_N(\mathbb{Q}(X))$, independent of $p$, which can be evaluated at $X = A_p$ and $X = -1$, such that:
• $T(A_p) = T_p$ and $T^*(A_p) = T_p^*$;
• The matrices $T(-1)$ and $T^*(-1)$ are the matrices of $h_N(t_y)$ and $h_N(t_z)$ in the basis (1).

Remark: Concretely, the previous theorem implies that if $\varphi \in \Gamma_{1,1}$ and $M_p$ denotes the matrix of $\rho_p^{(c_p)}(\varphi)$ (in the basis of Theorem 1), we have, as $A_p \to -1$,
$$\chi_p(\varphi)^{-1} M_p \longrightarrow M,$$
where $M$ is the matrix of $h_N(\varphi)$ in the basis (1).
We can also use this theorem to prove the following version of the AMU conjecture (see [1]) in the case of the one-holed torus:

Theorem 2: For any fixed $N \geq 2$, if $\varphi \in \Gamma_{1,1}$ is pseudo-Anosov then there exists $p_0(\varphi)$ such that for any odd $p \geq p_0(\varphi)$ the automorphism $\rho_p^{(c_p)}(\varphi)$ has infinite order.

Acknowledgments. I would like to thank Gregor Masbaum who gave me this problem and who helped me to write a precise statement of my result.

Review of SO(3)-TQFT

We are going to recall the basic notions we need. We refer to [3] for more details. In what follows, $N \geq 2$ will be fixed. We set $c = d - N$ where $d = \frac{p-1}{2}$. We saw that $V_p(T_c)$ is $N$-dimensional and it has a basis $\{L_{c,n}\}_{0 \leq n \leq N-1}$ given by colored graphs in the solid torus (see [3]), which can be described pictorially by diagrams in which $L_{c,n}$ is a colored graph with edge colors $n+c$ and $2c$. $T_c$ can be viewed as a torus $T^2$ equipped with a banded point $x$ with color $2c$. We can think of $T_c$ as the boundary of a tubular neighborhood of the graph. This tubular neighborhood is a solid torus, and the univalent vertex of the graph is "attached" to the banded point $x$. The twist $\rho_p^{(c)}(t_y)$ has a nice expression in this basis: for $0 \leq n \leq N-1$, each $L_{c,n}$ is an eigenvector of $\rho_p^{(c)}(t_y)$. We also denote by $((\,,\,))$ the Hopf pairing on $V_p(T_c)$. It is a symmetric nondegenerate bilinear form (see [3]).

Curve operators

For any multicurve (disjoint union of simple closed curves) $\gamma$ on the one-holed torus, we consider the skein $C_\gamma = \gamma \times \{\tfrac{3}{4}\} \cup x \times I$ in $T_c \times I$, where $x \times I$ has color $2c$. By the axioms of TQFT, $C_\gamma$ defines an operator $Z_p(\gamma) \in \mathrm{End}(V_p(T_c))$. Let $y$ and $z$ be respectively the meridian and the longitude curves on the one-holed torus. The action of $Z_p(y)$ and $Z_p(z)$ in the basis $\{L_{c,n}\}$ is given by diagrams which can be evaluated using skein theory. We also have the following basic facts:
• $Z_p(z)$ and $Z_p(y)$ are transposed by the Hopf pairing.

Remark: The last property is obtained by applying the skein relation.

We can now recall the definition of the basis used in Theorem 1. Following [3], for $0 \leq n \leq N-1$, let $Q_n^{(c)}$ be defined as in [3]. The interest of this basis is that it is orthogonal with respect to the Hopf pairing; for $n, m$ the pairings $((Q_n^{(c)}, Q_m^{(c)}))$ are computed in [3]. Let also $(y_{m,n})$ be the matrix of $Z_p(y)$, $(z_{m,n})$ be the matrix of $Z_p(z)$ and $(\tilde z_{m,n})$ be the matrix of $Z_p(t_y(z))$, where $Z_p(z)$ and $Z_p(y)$ are transposed with respect to the Hopf pairing. In [3] there are already explicit expressions for $(a_{m,n})$ and $(b_{n,m})$, but here we use new formulas which are more helpful for our purpose.

The limit of the representations

In this section we will prove Theorem 1 and Theorem 2.

Proof of Theorem 1

To make the proof of this theorem easier we need the following two lemmas:

Lemma 1. For any $n$ and $m$, there exists $R_{n,m}(X) \in \mathbb{Q}(X)$, independent of $p$, which can be evaluated at $X = A$ and $X = -1$, such that $R_{n,m} = R_{n,m}(A)$.

Proof: If $n = m$ the result is clear. By symmetry, we just have to prove it for $n \geq m+1$. In this case, since $(-A)^p = 1$ and $2c = p - 1 - 2N$, we have an expression valid for any integer $x$ in terms of the quantities $\{a\}$ and $\{a\}_+$, where $\{a\}$ is the evaluation of $(-X)^a - (-X)^{-a} \in \mathbb{Q}(X)$ at $X = A$, and $\{a\}_+$ is the evaluation of $(-X)^a + (-X)^{-a} \in \mathbb{Q}(X)$ at $X = A$. We see that there exists $R_{n,m}(X) \in \mathbb{Q}(X)$ (which clearly does not depend on $p$) such that $R_{n,m} = R_{n,m}(A)$. We also know that for all $a$, when $A$ tends to $-1$, $\frac{\{a\}}{\{1\}} \to a$ and $\{a\}_+ \to 2$, so we deduce the expression of $R_{n,m}(-1)$.

Lemma 2. There exists a matrix $M^{(n)}(X) = (M^{(n)}_{m,l}(X)) \in GL_N(\mathbb{Q}(X))$, independent of $p$, which can be evaluated at $X = A$ and $X = -1$, such that for all $m, l$ its entries have the stated form, where $\delta_{k,l}$ is the Kronecker symbol.

Proof: Since $Z_p(z)$ and $Z_p(y)$ are transposed by the Hopf pairing, we have $(y_{m,l}) = (R^{(c)}_{l,m} z_{l,m})$.
By the proposition in Section 1 we also know the matrix $(\tilde{z}_{m,l})$, so for all $m, l$ the entries $M^{(n)}_{m,l}(A)$ can be written in terms of the $R^{(c)}_{l,m}$. These expressions are obtained easily (by just writing the definition of the $\{Q_n^{(c)}\}$ and of the Hopf pairing), first for $l \neq m$ and finally when $l = m$. And by Lemma 1, as $A \to -1$, every entry converges, so we can conclude that $M^{(n)}(X)$ can be evaluated at $X = -1$.

Key observation: The idea to prove Theorem 1 is very simple. Observe that if $n \leq N-2$, then (see the proposition in Section 1), in the basis $\{Q_n^{(c)}\}$, applying the matrix $M^{(n)}$ to the $n$-th column of the matrix $(a_{m,k})$ gives the $(n+1)$-st column of $(a_{m,k})$. In other words, if we denote $(a_{m,k}) = (a_0, \dots, a_{N-1})$, where $a_i$ is the $i$-th column, we have
$$a_{n+1} = M^{(n)} a_n$$
(recall that $(a_{k,l})$ is the matrix of $\mu_c^{-1}\rho_p^{(c)}(t_y)$ in the basis $\{Q_n^{(c)}\}$). From this key observation we are going to prove Theorem 1 in three steps. First we prove the existence of $T(X), T^*(X) \in GL_N(\mathbb{Q}(X))$, independent of $p$, such that $T(A) = T_p$ and $T^*(A) = T_p^*$; then we compute $T(-1)$ and $T^*(-1)$; finally we give an interpretation of $T(-1)$ and $T^*(-1)$.

Step 1: Existence of $T(X)$ and $T^*(X)$. We define the columns $a_0(X) = e$ and $a_{n+1}(X) = M^{(n)}(X)\, a_n(X)$. By Lemma 2, these vectors are independent of $p$. Since each $M^{(n)}(X)$ (and each inverse $(M^{(n)}_{n,m}(X))^{-1}$) is independent of $p$, so is the resulting matrix; the same construction applies on the $t_z$ side. We have therefore found two matrices $T(X), T^*(X) \in GL_N(\mathbb{Q}(X))$, independent of $p$, such that $T(A) = T_p$ and $T^*(A) = T_p^*$.

Step 2: Expression of $T(-1)$ and $T^*(-1)$. We will prove that, for all $n$, the entries of the $n$-th column of $T(-1)$ are nonzero when $m \leq n$ and $0$ otherwise; this follows from the values $R_{n,m}(-1)$ and Lemma 1. We compute $T(-1)$ by induction on $n$ (the index of the column). If $n = 0$, $a_0 = e$, so the limit is as expected; the inductive step follows from the relation $a_{n+1} = M^{(n)} a_n$ evaluated at $X = -1$.

Step 3: Interpretation of $T(-1)$ and $T^*(-1)$. Recall the action of $SL_2(\mathbb{Z})$ on $H_N$ (the space of homogeneous polynomials of two variables $X$ and $Y$ of total degree $N-1$). It gives, in the basis (1) (see Theorem 1), explicit matrices for the images of the generators. Since $t_y$ maps to $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $t_z$ maps to $\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$ in $SL_2(\mathbb{Z})$, we conclude that $T(-1)$ and $T^*(-1)$ are the matrices of $h_N(t_y)$ and $h_N(t_z)$ in the basis (1), which completes the proof of Theorem 1.

Remark 1: Using the previous techniques, one can get explicit formulas for $T$ and $T^*$, but they are quite complicated and we do not need them to compute the limits.

Since the $\rho_p^{(c_p)}$ are representations, the braid relation $T_p T_p^* T_p = T_p^* T_p T_p^*$ holds after every evaluation (for any primitive $2p$-th root of unity $A$). Since a nonzero rational function has only a finite number of roots, in $GL_N(\mathbb{Q}(X))$ we have:
$$T\, T^*\, T = T^*\, T\, T^*.$$
Since $\Gamma_{1,1}$ has a presentation $\langle t_y, t_z \mid t_y t_z t_y = t_z t_y t_z \rangle$, the previous relation ensures that there exists a unique representation $\rho \colon \Gamma_{1,1} \to GL_N(\mathbb{Q}(X))$ such that $\rho(t_y) = T$ and $\rho(t_z) = T^*$. By the same argument, for all odd $p$ and all primitive $2p$-th roots of unity $A$, there exists a unique character $\chi_p \colon \Gamma_{1,1} \to \mathbb{C}^*$ such that $\chi_p(t_y) = \mu_{c_p}$ and $\chi_p(t_z) = \mu_{c_p}$. Then, if we choose the same $A$ (a primitive $2p$-th root of unity) to define $\rho_p^{(c_p)}$ and $\chi_p$, by Theorem 1, in the bases considered above we have $\rho_p^{(c_p)} = \chi_p \otimes \rho^{[A]}$.

Proof of Theorem 2

We know that $\Gamma_{1,1}$ acts on $H_1(T^2, \mathbb{C})$ (the first homology of the torus with coefficients in $\mathbb{C}$). So we have a representation:
2012-02-08T20:51:07.000Z
2012-02-08T00:00:00.000
{ "year": 2012, "sha1": "5b828a0b4f71a3b5be7b279682162b675737d92d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1202.1813", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5b828a0b4f71a3b5be7b279682162b675737d92d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
54501373
pes2o/s2orc
v3-fos-license
A METHODOLOGY FOR THE DETECTION AND DIAGNOSTIC OF LOCALIZED FAULTS IN GEARS AND ROLLING BEARINGS SYSTEMS

In this work, an effective methodology to detect early stage faults in rotating machinery is proposed. The methodology is based on the analysis of cyclostationarity, which is inherent to the vibration signals generated by rotating machines. Of particular interest are the second and higher order cyclostationary components, since they contain valuable information which can be used for the early detection of faults in rolling bearings and gear systems. The first step of the methodology consists in the separation of the first-order periodicity components from the raw signal, in order to focus the analysis on the residual part of the signal, which contains the second and higher order periodicities. Then, the residual signal is filtered and demodulated, using the frequency range of highest importance. Finally, the demodulated residual signal is auto-correlated, obtaining an enhanced signal that may contain clear spectral components related to the presence of a prospective localized fault. The methodology is validated by analyzing experimental vibration data for two different cases. The first case is related to the detection of a crack in one of the teeth of a gearbox system, and the second case is related to the detection of a pitting fault in the inner race of a rolling bearing. The results show that the proposed method for the condition monitoring of rotating machines is a useful tool for the task of fault diagnosis, which can complement the analysis made using traditional diagnostic techniques.

INTRODUCTION

Vibration signals generated by rotating machines may be considered as non-stationary processes that present periodic (i.e. cyclic) variations in the time domain in some of their statistics [1], which is the main characteristic of the type of signals named cyclostationary signals. A vibratory signal x(t) is said to be nth-order cyclostationary with period T if its nth-order moment exists and is periodic in the time domain with the period T.
Typical examples of first order periodicity (FOP) vibration signals are generated by rotating machines with misaligned couplings and/or unbalanced rotors, whereas modulated vibratory signals generated by wear mechanisms, friction and impact forces are examples of second order periodicity (SOP) processes. In order to analyze FOP signals and to extract the required information for the fault detection tasks, classical spectral analysis is an adequate and practical tool that may be used in most of these cases. However, when SOP signals have to be analyzed (e.g. signals with amplitude and/or frequency modulations), the analysis should be carried out using more sophisticated tools, in order to be able to identify variations in the statistics of the signals containing meaningful information about the system under analysis [2]. In some cases, demodulation techniques may be satisfactorily used to analyze SOP signals, as long as either the resonant zones or the main frequency ranges of the expected faults are known in advance. However, the efficacy of the demodulation techniques diminishes when the signal contains higher orders of cyclostationarity, as well as random noise components. The basic idea behind the theory of cyclostationary analysis is to apply an appropriate quadratic transformation to a SOP signal in order to obtain a modified signal of FOP [3]. Then, the modified signal can be analyzed with traditional diagnostic techniques applied to the mechanical components under study.

In this framework, the methodology proposed in this study incorporates time-frequency analysis of SOP signals in combination with traditional techniques typically used in fault detection (e.g. spectral analysis, envelope analysis, etc.). First, the components of FOP are reduced using an adaptive filtering method. In this way, a residual signal containing the SOP components is also obtained. Then, the residual signal is filtered in order to highlight its SOP components. In this stage, the cutting frequencies of the filter are estimated by using a time-frequency transformation based on the cyclic autocorrelation function. Finally, the filtered residual signal is demodulated and auto-correlated, obtaining a resultant signal with useful information for fault diagnostic purposes. In summary, from the time-frequency analysis, appropriate filters are configured; then, using an envelope detector, the residual signal is demodulated; and finally, in order to improve the signal-to-noise ratio (SNR), a matched filter based on the autocorrelation function is used. This procedure has been tested in two cases using experimental vibration data from two test rigs used to simulate faults in gears and rolling bearings, respectively (Figure 1). The results show that the proposed methodology is an effective tool for the early detection and diagnosis of faults in rotating machinery.

This work is organized as follows: first, the principles of cyclostationarity and the basics of the proposed method are presented and validated using experimental vibration data of a faulty gearbox and a faulty rolling bearing; finally, the main conclusions are drawn.
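Before turning to the formal definitions, the "quadratic transformation" idea can be made concrete with a minimal Python sketch (NumPy/SciPy assumed; the signal parameters are invented, loosely echoing the gearbox frequencies used later). An amplitude-modulated carrier shows no spectral line at the modulating frequency, but its squared envelope — a quadratic transformation — does:

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000                       # sampling frequency [Hz]
t = np.arange(0, 1.0, 1 / fs)     # one second of signal
f_mod, f_car = 17.0, 289.0        # hypothetical modulating / carrier frequencies

# Second-order periodic (SOP) signal: amplitude-modulated carrier plus noise.
x = (1 + 0.8 * np.cos(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f_car * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Quadratic transformation: the squared envelope turns the hidden modulation
# into a first-order periodic component with a clear spectral line at f_mod.
env2 = np.abs(hilbert(x)) ** 2

spectrum = np.abs(np.fft.rfft(env2 - env2.mean()))
freqs = np.fft.rfftfreq(env2.size, 1 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[spectrum.argmax()])
```

Direct spectral analysis of x would show only the carrier at 289 Hz with sidebands; after the quadratic step, the 17 Hz modulation appears as an ordinary spectral line and can be handled with classical tools.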
BASICS OF CYCLOSTATIONARITY AND PROPOSED METHOD

A well detailed tutorial on the principles of cyclostationarity, focused on mechanical applications, is given in [4]. However, for the completeness of this work, and to motivate the use of cyclostationarity in the proposed method, the basics of cyclostationarity are included here. A non-stationary signal can be considered as cyclostationary with FOP and SOP components only if its moments of first and second order are periodic, in other words, if the moments satisfy equations (1) and (2) [5]:

E[x(t)] = E[x(t + T)]    (1)
E[x(t1) x(t2)] = E[x(t1 + T) x(t2 + T)]    (2)

where E is the expectation operator and T is the period or cycle of the signal x(t). An auto-correlation function with variation in time can be associated to the signal x(t), which is given by:

r_x(t, τ) = E[x(t + τ/2) x(t − τ/2)]    (3)

where τ is the time lag. The function of equation (3) is also known as the instantaneous auto-correlation function (ACF).

In general, it is possible to assume that a vibration signal x(t) is composed of FOP, SOP and random noise, as shown in equation (4):

x(t) = x_FOP(t) + x_SOP(t) + n(t)    (4)

Considering that the focus of the analysis is on the SOP components of the signal x(t), the first stage of the procedure consists of using an LMS (least mean squares) adaptive filter [6] in order to reduce the FOP components of the signal to be analyzed. In this way, the FOP components are separated from the raw signal and a residual signal (i.e. error signal) containing the SOP components and random noise is obtained. If a typical vibration signal containing amplitude modulations (i.e. SOP components) is assumed, the residual signal can be expressed as in equation (5):

x_SOP(t) = Σ_{i=1..N} b [1 + cos(2π f_ci t)] cos(2π f_0 t) + n(t)    (5)

where i = 1, 2, …, N indexes the modulating components, f_ci and f_0 are the frequencies of the modulating and modulated signals respectively, b is a constant and n(t) is white noise with unknown variance. Since the main interest here is to extract the information of the modulating signals from the signal that includes the SOP components (x_SOP), a simple demodulator (i.e. a low pass filter with cut frequency f_0) can be used. In order to obtain a good estimation of the cut frequency f_0, a time-frequency distribution of the Cohen class [7] is used, which is given by equation (6):

C_x(t, f) = ∫∫∫ φ(ξ, τ) r_x(u, τ) e^{j2πξ(t − u)} e^{−j2πfτ} du dξ dτ    (6)

where φ is an arbitrary function (kernel) and r_x corresponds to the instantaneous ACF given by equation (3). The type of time-frequency distribution is determined by the selected function φ. For instance, if φ is equal to 1, the Wigner-Ville distribution is obtained, which is given by (7):

W_x(t, f) = ∫ x(t + τ/2) x(t − τ/2) e^{−j2πfτ} dτ    (7)

whereas, if φ is a cone-shaped (cubic-type) function, which helps to reduce the cross terms in frequency, the Zhao-Atlas-Marks (ZAM) distribution (8) is obtained [8]. Finally, in order to enhance the SOP signal, the autocorrelation function of the filtered signal is computed.
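As a computational illustration of equation (7), the discrete Wigner-Ville distribution can be built directly from the instantaneous ACF of equation (3). The helper below is a deliberately plain, unoptimized sketch in Python (the function name is ours, not from the paper):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real 1-D signal.

    Row t is the FFT over the lag variable of the instantaneous
    autocorrelation r_x(t, tau) = x(t + tau) * x(t - tau), the discrete
    counterpart of equations (3) and (7). O(n^2) time; frequency bin k
    corresponds to k * fs / (2 n) Hz because the lag is implicitly doubled.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    wvd = np.zeros((n, n))
    for t in range(n):
        h = min(t, n - 1 - t)          # largest usable half-lag at time t
        taus = np.arange(-h, h + 1)
        r = np.zeros(n)
        r[taus % n] = x[t + taus] * x[t - taus]
        wvd[t] = np.fft.fft(r).real    # real because r is even in the lag
    return wvd
```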
Table 1. Main characteristic frequencies of the gearbox.

Component: Frequency [Hz]
Motor (rotational speed): 17
Main mesh frequency: 289
ith harmonic of mesh frequency: i × 289
Pinion (rotational speed): 17
Gear (rotational speed): 10.32

The autocorrelation function of a signal x_c(t) is given by:

R_{x_c}(τ) = E[x_c(t) x_c(t + τ)]    (9)

In summary, the main steps of the proposed method are listed below:
- LMS adaptive filtering: to separate the FOP components from the measured vibration signal, obtaining a residual signal with the SOP components.
- Time-frequency transformation: the estimation of f_0 (required for the subsequent digital filtering) is done by using the ZAM distribution.
- Digital filtering: the residual signal is filtered using the cutting frequencies identified in the previous stage.
- Noise reduction: the filtered residual signal, and especially its SOP components, are enhanced by using the autocorrelation function (matched filter).
- Detection and diagnosis of faults: the spectrum of the enhanced residual signal is analyzed, looking for spectral components that might be related to the presence of a fault.

EXPERIMENTAL VALIDATION

In this section, the proposed method is validated by analyzing experimental vibration data from two different cases (see Figures 2 and 3). The first case corresponds to the detection of a fault in a one-stage gearbox, and the second case corresponds to the detection of a localized pitting fault in a rolling bearing.

Case 1: A faulty one-stage gearbox

In this case, the experimental data are taken from a test rig which consists of an asynchronous electrical motor controlled by a frequency converter and coupled to a single-stage spur gear transmission through a flexible coupling. The pinion has 17 teeth and the wheel has 28 teeth. The system is under a constant load, which is supplied by a DC generator, as illustrated in the sketch of Figure 4. The rotational frequencies and mesh frequencies of the gearbox are listed in Table 1.

When a local fault of the cracked tooth type occurs in one of the gears of the system, it is expected to produce a vibration signature containing amplitude modulations of the fundamental gear mesh frequency and its harmonics, with a modulating frequency equal to the rotational frequency of the faulty gear [9]. Therefore, in this particular case, if spectral components at a frequency of 17 Hz and its multiples are identified in the spectrum of the vibration signal, they can be associated to a fault in the pinion. In contrast, if the spectral components are at the frequency of 10.32 Hz and its multiples, the fault can be associated to the gear. The following analysis is done for vibration data taken from the test rig with a faulty pinion.
The vibration data were acquired from two piezoelectric accelerometers mounted on the supports A and B shown in Figure 4, using a data acquisition system configured with a sampling frequency of 30 kHz. The time waveform and the frequency spectrum of the acquired vibration signal are shown in Figures 2a and 2b, respectively. The main mesh frequency and some of its harmonics can be identified in the spectrum of Figure 2b. In order to separate the FOP components, an LMS adaptive filter with 500 coefficients and a learning rate of 0.01 was used. The time waveform and the spectrum of the filtered signal, which contains the FOP cyclostationary components, are shown in Figure 5. In the same manner, the time waveform and the spectrum of the residual signal, which contains the SOP cyclostationary components and signal noise, are shown in Figure 6. From the spectra of Figures 5 and 6, it can be observed that the FOP components are predominant when compared to the other components, which is generally expected [1].

Analyzing the spectrum of Figure 6, two possible resonant zones of the system can be identified, with frequency ranges between approximately 1200 to 2600 Hz and between 2800 to 5800 Hz. To complement the analysis, and before filtering the signal containing the SOP components, the ZAM transform was applied to the residual signal, obtaining the time-frequency distribution shown in Figure 7. Although the frequency resolution is somewhat coarse (Δf ≈ 208 Hz), it is enough to visualize the variation in time of the two main resonant zones. It can be observed that the resonant zones are excited approximately every 0.003 s, which corresponds closely to the period of the fundamental mesh frequency (289 Hz). In Figure 7, it can be seen that the impulsive variations are more clearly defined in the second frequency range (2800-5800 Hz); therefore, these frequencies are selected as the cutting frequencies for the subsequent filtering stage of the residual signal.
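The FOP/SOP separation step just described can be sketched in a few lines of Python. The filter length and step size follow the text (500 coefficients, 0.01), but the exact LMS variant used by the authors is not specified, so the delayed self-prediction scheme below is one plausible implementation: the filter sees only a delayed copy of the signal, so it can track the periodic (FOP) part, and the prediction error is the residual SOP-plus-noise signal.

```python
import numpy as np

def lms_fop_separation(x, n_taps=500, mu=0.01, delay=64):
    """Split x into a periodic estimate (FOP) and a residual (SOP + noise)."""
    x = np.asarray(x, dtype=float)
    x = x / (np.std(x) + 1e-12)                    # scale so mu behaves
    w = np.zeros(n_taps)
    fop = np.zeros_like(x)
    residual = np.zeros_like(x)
    for i in range(n_taps + delay, x.size):
        u = x[i - delay - n_taps:i - delay][::-1]  # delayed regressor
        y = w @ u                                  # prediction = FOP part
        e = x[i] - y                               # error = SOP + noise
        w += 2 * mu * e * u / (u @ u + 1e-12)      # normalized LMS update
        fop[i], residual[i] = y, e
    return fop, residual
```

The delay of 64 samples is an arbitrary illustrative choice; in practice it should exceed the correlation length of the random components, so that only the deterministic periodic part remains predictable.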
In order to filter the residual signal, a finite impulse response (FIR) filter was used. The implemented band-pass filter has 400 coefficients, with a low cut frequency of 2500 Hz and a high cut frequency of 6500 Hz. The time waveform and the spectrum of the filtered signal are shown in Figure 8. This signal has the typical pattern of an amplitude-modulated signal found in a mechanical system. The filtered signal is then demodulated, with the results shown in Figure 9. The demodulation technique applied is as follows: first, a high pass filter with a cut frequency of 2800 Hz is applied; second, the signal is rectified and the mean value of the signal is subtracted; and third, a low pass filter with a cut frequency of 200 Hz is applied. Both of the filters used for the demodulation are of the infinite impulse response (IIR) type, with 5 coefficients. From Figure 9, a fault in the pinion can be confirmed, since clear spectral components at 17 Hz and its first harmonics are present in the spectrum. Additionally, in order to enhance the main components of interest in the residual signal, the autocorrelation function can be used; the result is shown in Figure 10. This last step of the methodology could be avoided in cases such as this one, where the spectral components were already identified from the filtered and demodulated residual signal (Figure 9); however, in other cases, where the vibration signals contain higher noise components, the use of the autocorrelation function is very useful to clean the signal and should therefore be included in the analysis.

Case II: A faulty rolling bearing

In this case, the method is applied to experimental vibration data taken from a test rig with an incipient localized fault in the inner race of one of the bearings that support the shaft. The test rig consists of an asynchronous electrical motor controlled by a frequency converter, which drives a rotor shaft supported by two radial ball bearings. A schematic drawing of the test rig is shown in Figure 11. A static load can be indirectly applied to the bearings by using a tensor pulley system mounted at the centre of the shaft. The vibration data were acquired using piezoelectric accelerometers mounted on the supports A and B, and a data acquisition system with a sampling frequency of 30 kHz. The vibration data analyzed in this case are for a faulty bearing located at the motor side (bearing A). The main fault frequencies for the bearing under study are listed in Table 2. It has been shown by several studies that, similarly to the case of faulty gears, localized faults in bearings generate spectral sidebands around the resonant frequencies, which are related to the source frequency of the fault [10]. However, when a fault is located in the inner race, it is a real challenge to detect it at an early stage, due to the low amplitude of the spectral vibration components related to this fault (at the BPFI), which may be hidden by the background noise of the signal and by the cyclostationary components of FOP. The time waveform and the frequency spectrum of the vibration signal taken from the accelerometer mounted on bearing A are shown in Figures 3a and 3b, respectively. From the time waveform, some impulsive events can be identified, which seem to be modulated and periodic; however, it is not possible, either from the waveform or from the spectrum, to identify precisely their periodicity and/or frequency of repetition, which should be equal to the BPFI frequency listed in Table 2.
Table 2. Main fault frequencies of the radial ball bearing. Ball pass frequency of the inner race (BPFI): 44 Hz.

Following the application of the proposed method, the FOP components were separated from the raw signal using an adaptive filtering scheme with the same filter parameters used in Case I. The time waveform and the spectrum of the filtered signal, which contains the FOP cyclostationary components, are shown in Figure 12, and the time waveform and the spectrum of the residual signal, which contains the SOP cyclostationary components and signal noise, are shown in Figure 13.

In contrast to the results obtained in Case I for the filtered and residual signals (see Figures 5 and 6), in this case the cyclostationary components of SOP are predominant when compared to the FOP components of the signal. This behavior can be due to several factors: the modulation of the load zone over the localized fault in the inner race (i.e. the inner race is rotating at the rotational shaft speed), slip motion between the rolling elements and the races, and random vibration components generated by friction mechanisms (e.g. the friction in the tensor pulley system).

In order to identify an appropriate frequency range for the subsequent filtering stage, the ZAM transform was applied to the residual signal, obtaining the time-frequency distribution shown in Figure 14, computed with Δf ≈ 110 Hz. In this figure, the presence of short-duration events can be identified in the frequency range between 800 and 2000 Hz, which may involve resonant frequencies of the bearing races. Therefore, this frequency range is selected for the configuration of the band-pass filter used to filter the residual signal. The time waveform and the spectrum of the filtered signal, using a FIR filter with 400 coefficients, are shown in Figure 15. Even though the impulsive events are noticeable in the time waveform of the filtered signal, it is still not possible to determine their periodicity clearly. However, when the filtered signal is demodulated, the fault frequency at the BPFI (44 Hz) can be precisely identified in the spectrum, as shown in Figure 16. Finally, in order to confirm the results obtained, the autocorrelation function is applied to the demodulated signal to enhance the main components of interest even further, as can be seen in the results shown in Figure 17. In this way, the diagnosis of the fault is confirmed, and it is very precise, since the frequency of the main spectral component found is very close to the theoretical value of the BPFI frequency. The results obtained from the analysis of these two cases show the effectiveness of the proposed methodology when applied to the detection and diagnosis of localized faults, particularly in gear systems and rolling bearings.
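Putting the remaining stages together, the Case I chain described above (400-tap band-pass over 2500-6500 Hz, rectification with mean removal, 200 Hz low-pass, then autocorrelation) might look as follows. The SciPy design routines (firwin, butter, filtfilt) are our choices, since the paper only specifies coefficient counts and cut frequencies:

```python
import numpy as np
from scipy.signal import firwin, filtfilt, butter

fs = 30_000  # sampling frequency used on both test rigs [Hz]

def enhanced_envelope(residual, band=(2500.0, 6500.0), f_lp=200.0):
    """Band-pass, demodulate and autocorrelate a residual signal,
    returning the enhanced signal and its spectrum (method steps 3-5)."""
    # 1) 400-tap FIR band-pass around the excited resonant zone.
    bp = firwin(400, band, pass_zero=False, fs=fs)
    x = filtfilt(bp, [1.0], residual)
    # 2) Envelope demodulation: rectify, remove the mean, low-pass filter.
    env = np.abs(x)
    env -= env.mean()
    b, a = butter(4, f_lp, btype="low", fs=fs)
    env = filtfilt(b, a, env)
    # 3) SNR enhancement via the autocorrelation (matched filter, eq. (9)).
    acf = np.correlate(env, env, mode="full")[env.size - 1:]
    freqs = np.fft.rfftfreq(acf.size, 1.0 / fs)
    return acf, freqs, np.abs(np.fft.rfft(acf))
```

For the bearing of Case II one would pass band=(800.0, 2000.0) and look for the line near the 44 Hz BPFI in the returned spectrum.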
The results of applying the proposed method have been validated with the analysis of vibration data taken from two different laboratory test rigs. In order to extend these results to industrial applications, further aspects of the present work include the analysis of vibration data taken from industrial rotating machinery, such as multi-stage gearboxes and rotors supported on rolling bearings. Additionally, the procedure can be adapted and modified to be applied in cases where the early detection of localized faults is even more challenging, as for instance the detection of faults in the rolling elements of bearings and, in general, in mechanical systems with variable load and speed conditions. Finally, when more than one fault occurs, it is possible to detect the several faults in the time-frequency analysis, but in this case several SOP components are obtained. Nevertheless, this aspect is part of the future research work.

Figure captions: a) Picture of an induced fault (of 10 mm length) on the tooth surface of the pinion of a single-stage spur gear transmission. b) Picture of an induced localized fault in the inner race of a radial ball bearing.
Figure 2. Time waveform and spectrum of the raw vibration signal (Case I).
Figure 5. Time waveform and spectrum of the filtered signal (FOP components) (Case I).
Figure 6. Time waveform and spectrum of the residual signal (SOP components and noise) (Case I).
Figure 14. ZAM time-frequency distribution of the residual signal (Case II).

CONCLUSIONS AND FUTURE ASPECTS

In this work, a practical procedure based on the cyclostationary analysis of vibration signals has been proposed, which can be used for the early detection of localized faults in mechanical components such as gears and bearings. Vibration signals can be assumed to be a combination of FOP and SOP components. FOP signals are generated by rotating machines with misaligned couplings and/or unbalanced rotors, whereas SOP components correspond to the modulated vibratory signals generated by wear mechanisms, friction and impact forces. To analyze FOP signals and to extract the required information for the fault detection tasks, classical spectral analysis is an appropriate tool. For SOP signals, the analysis should be carried out using more sophisticated tools. In this work, a procedure to analyze SOP signals is presented and tested in two practical cases. From the time-frequency analysis, appropriate filters are configured in order to obtain the SOP signal; then, using an envelope detector, the useful signal is obtained. To improve the signal-to-noise ratio (SNR), a matched filter based on the autocorrelation function is used.
2018-12-03T01:24:48.082Z
2010-04-01T00:00:00.000
{ "year": 2010, "sha1": "bb37aba16640d16d4b46b8590b097671b16b8f9a", "oa_license": "CCBY", "oa_url": "http://www.scielo.cl/pdf/ingeniare/v18n1/art06.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bb37aba16640d16d4b46b8590b097671b16b8f9a", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Physics" ] }
8819531
pes2o/s2orc
v3-fos-license
Determinants of never having tested for HIV among MSM in the Netherlands

Objectives: Men who have sex with men (MSM) who are unaware of their HIV infection are more likely to infect others, and unable to receive treatment. Therefore, we aimed to identify the proportion and characteristics of Dutch MSM who never tested for HIV.

Methods: In 2010, the European MSM Internet Survey (EMIS) recruited 174 209 men from 38 countries through an anonymous online questionnaire in 25 languages. We analysed data from participants living in the Netherlands (N=3787). The outcome we investigated was having never (lifetime) been tested for HIV.

Results: A total of 770 MSM (20.4%) had never been tested for HIV. In multivariate regression analyses, not being from Amsterdam (adjusted OR, aOR 1.54, CI 1.17 to 2.03), low education (aOR 1.28, CI 1.04 to 1.57) and low knowledge on HIV testing (aOR 2.23, CI 1.37 to 3.64) were significantly associated with never having tested. Lower sexual risk (including having fewer sexual partners (aOR 2.19, CI 1.57 to 3.04) and no anal intercourse (aOR 5.99, CI 3.04 to 11.77)) and less social engagement (including being less out (aOR 1.93, CI 1.55 to 2.40)) were also associated with having never been tested. Additionally, 36.1% of MSM who never tested for HIV reported high-risk sexual behaviour that may have put them at HIV risk.

Conclusion: MSM make their own risk assessments that inform their choices about HIV testing. Nevertheless, MSM who were never tested may have been at risk for HIV, and remain important to target for HIV interventions.

INTRODUCTION

The 90-90-90 goal of the Joint United Nations Programme on HIV/AIDS (UNAIDS) states that, by 2020, 90% of all people living with HIV should be aware of their status, 90% of all people diagnosed with HIV should receive antiretroviral therapy, and 90% of that group should achieve viral suppression. 1 To reach this goal in the Netherlands, we specifically have to increase the number of people aware of their HIV status. Recent studies found that approximately 25-34% of people living with HIV in the Netherlands are unaware of their HIV status, which is still far from the 10% goal. 2 3 However, this is an improvement compared to 2007, when this percentage was estimated to be 40%. 4 One estimate indeed showed that 71% of people infected with HIV were in care (17 750 of 25 000), of those 84% were on cART (16 081 of 17 750), and of those 91% had reached viral suppression (14 602 of 16 081). 2 Even though there is much to gain in the other steps of the treatment cascade, the percentage of people unaware of their HIV infection is a priority. One thing that could possibly explain the decrease in the proportion of people unaware is the opt-out HIV screening that has been implemented in sexually transmitted infection (STI) clinics since 2010, meaning that HIV testing routinely takes place unless someone refuses. [5][6][7] Also, men who have sex with men (MSM) are encouraged to be tested repeatedly, once every 6 months. 8

Strengths and limitations of this study
▪ The aim of the current study was to get insight into men who have sex with men (MSM) never tested for HIV, as mobilising them could potentially reduce the number of people unaware of their HIV status.
▪ A strength of this study was the topic, as recent estimation studies show that there are still many MSM unaware of their HIV infection in the Netherlands.
▪ A limitation of this study, as with most studies on MSM, is that it cannot claim to be representative of the total (Dutch) MSM population (self-selection bias). We think it is likely that the proportion of MSM in the Netherlands who are not tested for HIV is actually higher than in our analytic sample.
▪ Our findings showed that more MSM with lower sexual risks were never tested for HIV, suggesting that MSM made risk assessments that informed their choices about testing for HIV.
▪ Promoting HIV testing is important, as MSM who never tested for HIV showed sexual behaviour that may put them at risk for HIV.

However, these interventions mainly reach MSM who have already found their way to the STI clinic and might miss a population of men who have never tested. As there were 278 new HIV infections diagnosed in STI clinics in 2014 among MSM in the Netherlands, a 12% decline compared to 2013, 5 new perspectives should be added to keep up this trend. This corresponds to the general trend among MSM. 2 As such, people in the most at-risk populations (such as MSM) who have never tested for HIV can be seen as the ultimate unaware group, assuming that at least some of them have been at risk for HIV. People who are unaware of their HIV infection have been estimated to contribute up to 90% of new HIV infections. 9 Therefore, mobilising the never-tested group to test for HIV could potentially reduce the number of people unaware of their HIV status and onward transmission. Whereas much research focuses on the risk factors associated with contracting HIV and STIs, less research investigates risk factors associated with never testing for HIV. [10][11][12] In addition, although the number of HIV diagnoses among MSM decreased between 2013 and 2014, this is probably not explained by a reduction in sexual risk behaviour, as other STIs keep increasing in this group. 5 In 2011, data from the European MSM Internet Survey (EMIS) first became available, with data on demographics, sexual behaviour and psychosocial factors related to sexual health among MSM. 13 As 70% of all new HIV cases diagnosed in the Netherlands in 2014 were among MSM, they are considered an important group to target for prevention efforts. 2 5 In the current study, we investigated risk factors associated with never being tested for HIV among EMIS participants residing in the Netherlands.

METHODS

The EMIS is an anonymous, self-administered, cross-sectional, online study in 25 languages that covered MSM in 38 countries. In the Netherlands, 3917 men completed the EMIS questionnaire between 4 June and 31 August 2010. After data cleaning (excluding respondents with, eg, discrepant answers), the Dutch sample consisted of 3783 MSM. Participants residing in the Netherlands were recruited predominantly via instant messages on internet sites visited by MSM, such as PlanetRomeo (53.0%) and Gaydar (7.8%), via e-mails to Schorer Monitor participants (a Dutch internet survey; 14.6%), 14 as well as via banners on websites that are frequently visited by MSM, through gay community organisations and by using printed materials at locations frequented by MSM (24.7%). An extensive description of the survey methods can be found elsewhere. 13 15 Participants had to confirm that they had read and understood the introductory text, had reached the age of consent (16 years in the Netherlands), and consented to participate in the study before proceeding to the questions.
We analysed the association between never having been tested and the variables age, residence, country of birth, educational level, sexual identity, being out about their sexual attraction to men among family, friends and colleagues (outness or 'being out of the closet'), the proportion of gay friends among all friends, number of non-steady sexual partners in the past 12 months, anal intercourse ever, unprotected anal intercourse (UAI) with any male partner of unknown or discordant HIV serostatus in the past 12 months, ever visiting gay social venues, ever visiting sex venues, ever having had sex abroad, ever and recent (in the past 12 months) use of sex and party drugs (ie, ecstasy, amphetamine, crystal meth, mephedrone, GHB/GBL, ketamine, cocaine) and self-reported STI diagnoses in the past 12 months. We also calculated an approximation of the number of sexually active years, by subtracting age at first anal intercourse from current age. However, if MSM filled out the categories 'younger than 12' or 'older than 30' years of age at first anal intercourse, we qualified this as 12 and 31 years old at first anal intercourse, respectively.

In addition, we analysed two knowledge variables. First, HIV test-related knowledge was measured with five items ('AIDS is caused by a virus called HIV', 'There is a medical test that can show whether or not you have HIV', 'If someone becomes infected with HIV it may take several weeks before it can be detected in a test', 'There is currently no cure for HIV infection', 'HIV infection can be controlled with medicines so that its impact on health is much less'). Men who answered 'I knew this already' to at least four items were classified as high in knowledge on HIV testing. Second, HIV transmission-related knowledge was also measured with five items ('You cannot be confident about whether someone has HIV or not from their appearance', 'Effective treatment of HIV infection reduces the risk of HIV being transmitted', 'HIV cannot be passed during kissing, including deep kissing, because saliva does not transmit HIV', 'You can pick up HIV through your penis while being "active" in unprotected anal or vaginal sex (fucking) with an infected partner, even if you don't ejaculate', 'You can pick up HIV through your rectum while being "passive" in unprotected anal sex (being fucked) with an infected partner').

Univariable and multivariable logistic regression analyses were conducted to investigate associations between the outcome and demographic, psychosocial and behavioural factors. Variables showing an association of p<0.20 (Wald test, univariable analysis) were included in the multivariable analyses. Backward stepwise logistic regression analyses were performed, including variables with p<0.05 for the likelihood ratio test. Associations were examined using adjusted ORs (aOR) and 95% CIs. In addition, we checked the variables in the multivariate model for collinearity, and did not find any, as indicated by tolerance coefficients between 0.625 and 0.984 (below 0.1 indicates collinearity) and variance inflation factors between 1.016 and 1.600 (above 10 indicates collinearity). All statistical analyses were performed using IBM SPSS for Windows V.19.

RESULTS

Respondents residing in the Netherlands were mostly older than 40 years (48.5%, compared to 12.4% younger than 25 years and 39% between 25 and 39 years), of Dutch origin (76.5%), from Amsterdam (28.9%) and highly educated (61.8% with tertiary education).
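As an aside for readers wishing to reproduce this kind of model building outside SPSS, the screening and collinearity checks described in the Methods above can be sketched in Python with statsmodels. The variable names and data below are entirely hypothetical, and the backward stepwise elimination is reduced to a single multivariable fit for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors; outcome 1 = never tested for HIV.
df = pd.DataFrame({
    "low_education": rng.integers(0, 2, n),
    "outside_amsterdam": rng.integers(0, 2, n),
    "low_testing_knowledge": rng.integers(0, 2, n),
})
logit = -1.5 + 0.3 * df["low_education"] + 0.4 * df["outside_amsterdam"]
df["never_tested"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

y = df["never_tested"]
X = df.drop(columns="never_tested")

# Univariable screening: keep predictors with Wald p < 0.20.
keep = [c for c in X.columns
        if sm.Logit(y, sm.add_constant(X[[c]])).fit(disp=0).pvalues[c] < 0.20]

# Collinearity check: VIF above 10 (tolerance below 0.1) flags a problem.
exog = sm.add_constant(X[keep])
vifs = {c: variance_inflation_factor(exog.values, i + 1)
        for i, c in enumerate(keep)}

# Multivariable model: adjusted ORs and 95% CIs from the coefficients.
fit = sm.Logit(y, exog).fit(disp=0)
print(np.exp(fit.params), np.exp(fit.conf_int()), vifs)
```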
The proportion of MSM who reported never having tested for HIV during their lifetime was 20.4% (N=770). The median age of MSM who were never tested was 35 years. Of all MSM who were never tested for HIV, 65.4% had sex with one or more casual partners in the past 12 months and 36.1% had unprotected anal intercourse. Among those who were tested for HIV, 19.7% tested HIV positive and 80.3% were diagnosed HIV negative at their last test. Among MSM who had tested negative at their last test, 53.5% were tested longer than 6 months ago.

Demographics associated with never being tested for HIV in univariable analysis were age group and residence. Untested MSM were particularly likely to be younger than 25 years (27.9%) and to live outside of Amsterdam (table 1). Higher odds of never being tested for HIV were also found among MSM who had low to moderate educational levels and who had low to moderate knowledge about HIV testing and transmission. In the univariable analysis, MSM with a country of birth other than the Netherlands were less likely to have never tested for HIV. This was mainly explained by lower odds of never having been tested for MSM from Europe (OR 0.57, 95% CI 0.43 to 0.77) and from North America, Canada and Australia (OR 0.43, CI 0.21 to 0.86). In addition, some social factors played an important role in never being tested. Notably, MSM who had been sexually active for less than 5 years, those who were not 'out' to nearly everybody, and those with a lower proportion of gay friends had higher odds of never being tested for HIV. Finally, related to sexual behaviour, MSM who had fewer non-steady sexual partners, who never had anal intercourse, who had no recent UAI with any male partner of unknown or discordant HIV serostatus, who had no sex abroad, who did not visit social venues, who did not visit sex venues, who reported never having used sex or party drugs, and who reported no STI diagnoses in the past 12 months had higher odds of never having tested for HIV in the univariable analyses. In multivariable analysis, there was still an association between never being tested and living outside of Amsterdam, lower education, lower knowledge on HIV testing, less than 5 years of sexual activity, low outness (or 'being closeted'), not visiting social venues, having fewer gay friends, having fewer non-steady sexual partners, never having had anal intercourse, no sex abroad and no self-reported STI diagnoses.

DISCUSSION

Our findings show that higher perceived sexual risk in the recent past decreased the odds of never being tested for HIV. It makes sense that MSM who behave in less risky ways, for example those who have not had UAI with any male partner of unknown or discordant HIV status, who had fewer non-steady sexual partners or who have not had sex abroad, also perceived their risk of contracting HIV as lower, and therefore did not feel the need to test for HIV. Importantly, however, 36.1% of untested men reported UAI with a male partner of unknown or discordant HIV serostatus in the past 12 months, underlining that the untested group is an important target for HIV-testing campaigns. Additionally, educational level and knowledge about HIV testing were also related to testing. Knowledge about HIV testing seemed to be especially important in the testing decision: people with less knowledge were more likely to never have been tested or, in other words, people with more knowledge were less likely to have never been tested.
Notably, HIV testing in the Netherlands is predominantly organised in specialised centres (not hospitals or clinics), and differs in this regard from other Western European countries. MSM who are more assimilated into the gay community seemed less likely to never have been tested for HIV, as exemplified by having a higher proportion of gay friends, visiting social venues and being out to nearly everybody. These MSM possibly had more positive examples or role models and more social support. Analysis of EMIS data of Portuguese MSM showed that higher educational level, gay or homosexual sexual identity and the number of sexual partners in the past 12 months were associated with HIV testing. 10 These factors were also associated with HIV testing among Dutch MSM; however, gay or homosexual sexual identity did not reach significance in our multivariable model. An Australian study found that HIV testing was associated with sexual practices as well, and that many of the untested men reported multiple sex partners and unprotected anal intercourse. 12

Insights into the risk factors associated with never having tested for HIV remain important. Specifically, our findings show that, in order to reach the group of MSM who have never tested for HIV, assimilation into the gay community is important. In the current climate, in which mobile applications seem to be replacing gay venues as the primary meeting ground, 16 it could become increasingly difficult for young MSM to build supportive social networks. Healthcare professionals and health promotors should be vigilant that the reduction of gay venues (social or sexual) does not lead to more MSM who never test for HIV, and should try to stay in touch with the population through other means. The current findings correspond with known barriers to testing, namely low risk perception and fear and worries. 17 MSM in our sample also made risk assessments that informed their testing behaviour. However, a low risk perception was not always the same as no risk, emphasising the importance of also reaching lower risk MSM. In addition, assimilation into the gay community appeared to facilitate testing.

A limitation to our study is that the data used were collected in 2010. We have several reasons for still considering these data important. First, a recently published paper indicated that in the Netherlands there is still an unacceptably large group of people who are unaware of their diagnosis. 3 We believe this might partly be explained by MSM who never tested, who, as our paper shows, behave less riskily, but who still are at risk for HIV. Therefore, we think that having better insight into risk factors for never being tested is still important. A strength of this study is its completeness regarding testing behaviour, sexual behaviour and possible variables influencing this behaviour. Second, although more frequent testing is encouraged, in the past couple of years there have not been many initiatives focusing on never-tested MSM. Therefore, we have no reason to believe that their behaviour has changed dramatically. We instead believe that the increased use of applications for meeting sexual partners might have made this group more difficult to find, making insight into risk factors even more important. Although the EMIS data were collected 5 years ago, they offer the most comprehensive data set on this group of men, including most factors that could play a role in not testing for HIV.
A repeat EMIS study would allow us to see whether the proportion of MSM never tested for HIV has already decreased in the Netherlands and other European countries. Another limitation could be recruitment bias, as more than 50% of the men were recruited from PlanetRomeo. Although PlanetRomeo users were rather young, they were less likely to have never tested for HIV (14.9%), compared to men recruited via Gaydar (24.5%), Schorer e-mail (19.7%) and other recruitment methods (31.7%). This might have caused recruitment bias; we think, however, that PlanetRomeo users might be more sexually active, and therefore at higher risk, which explains their smaller odds of never being tested. Moreover, a recent study found that the proportion of MSM never tested for HIV in the EMIS is comparable to the proportion in another internet survey from that same year (Schorer Monitor), which found that 24% of MSM had never tested for HIV. A venue-based recruitment method found that a lower percentage of MSM were never tested (12.9%); however, this could be explained by the venue used, namely STI clinics. 18 Despite the possible recruitment bias, this way of recruiting MSM is probably more generalisable than venue-based sampling frames, specifically for insights into MSM at risk for HIV. Moreover, in the Netherlands over 98% of the population has access to the internet (at home); therefore, we believe that an internet survey does not limit the possible response due to lack of internet access. In this light, we think this way of recruiting MSM could actually be a strength of this study: even though we might reach more sexually active MSM, it is precisely those within this group who have never tested for HIV who are of interest. We find that they still behave riskily and could contract HIV; thus, the determinants influencing never testing are particularly interesting.

CONCLUSION

MSM with lower sexual risks were more likely to have never tested for HIV, suggesting that MSM made risk assessments that informed their choices about whether to test for HIV or not. However, we also showed that MSM who never tested for HIV reported sexual behaviour that may put them at risk for HIV, and they are therefore an important group for targeted HIV interventions. Interventions should encourage regular HIV testing for sexually active MSM. With the evolution of mobile meeting applications that could replace gay venues, it seems important that young MSM in particular develop strong social ties, so that they have role models and social support to inspire testing for HIV. Otherwise, mobile applications could be used for interventions (ie, to increase knowledge or encourage testing) in increasingly individualistic social contexts and among MSM lacking strong social connections in the gay community.

Contributors: CdD led on the data analysis and drafting of the manuscript, supported by EOdC, MD and AJS. AJS coordinated the European MSM Internet Survey. All authors contributed to the design of the study. All authors commented on drafts of the manuscript and approved the final version.

Competing interests: None declared.

Ethics approval: Research Ethics Committee of the University of Portsmouth, UK (REC application number 08/09:21).

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: No additional data are available.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work noncommercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http:// creativecommons.org/licenses/by-nc/4.0/
2016-05-31T19:58:12.500Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "f1ba0441d5c66b5b93f1c9ada451d6eb43f654db", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/6/1/e009480.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b7ec126493ceef3a7d9d2902ff8ab22b1f5cb5c0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257037351
pes2o/s2orc
v3-fos-license
10 Eye donation in palliative and hospice care settings: patient views and missed opportunities

Background: There is a global shortage of donated eye tissue for use in sight-saving and sight-restoring operations such as corneal transplantation. In the UK, the Royal National Institute of Blind People (RNIB) reports that over two million people are currently living with sight loss, with this figure predicted to rise to approximately four million by 2050. Patients who die in palliative and hospice care settings could potentially donate eye tissue; however, the option of eye donation is not routinely raised in end-of-life planning discussions. Research evidence suggests that health care professionals (HCPs) are reluctant to discuss eye donation, as they perceive it as something that will distress patients and family members.

Aim: This presentation will share findings regarding the views of patients and carers, including: their feelings and thoughts about the option of eye donation being raised with them; who they think should raise this issue; when this option should be discussed; and who should be included in the discussion.

Findings: Findings are drawn from the NIHR-funded national study Eye Donation from Palliative and Hospice care contexts: investigating Potential, Practice, Preference and Perceptions (EDiPPPP), conducted in partnership with three palliative care and three hospice care settings in England. Findings indicate high potential for eye donation but very low levels of identification of potential donors; low levels of approach to patients and family members about the option of eye donation; lack of inclusion of eye donation in end-of-life care planning and/or clinical meeting discussions (ie, multi-disciplinary team (MDT) meetings); and very limited awareness-raising initiatives or activity to inform patients and carers of the option of eye donation.

Conclusion: It is imperative that patients who would want to be donors are identified and assessed for eligibility for donation as part of high-quality end-of-life care. It is clear from studies reported over the past 10 years that not a lot has changed regarding the identification, approach and referral of potential donors from palliative and hospice care settings, and this is due in part to perceptions held by HCPs that patients would be unwilling to engage in discussions regarding the option of eye donation in advance of their death. This perception is not substantiated by empirical research.

In the Corona pandemic, the importance of donor health for the supply of patients with high-quality transplants has once again become particularly apparent in the field of cornea donation. And there are further challenges ahead: due to new operation methods such as lamellar techniques, an earlier stage of disease can be treated, hence patients are being operated on at younger ages. At the same time, with demographic change, potential donors are getting older. Therefore, the demand for high-quality transplants without previous operations seems difficult to fulfil in the future. This is particularly important in the highly developed industrialised countries, where the indications for corneal transplantation are different and the expected quality characteristics therefore differ from those in emerging or developing countries, for example. At the same time, the new surgical methods present the tissue banks with new tasks to meet the surgeons' demands.
In the DGFG network, the average age of corneal donors is currently 69.7 years, while requests for transplants with a high endothelial cell density (ECD) increase. The ECD continues to be one of the main criteria for a high-quality cornea and is more likely to be found in younger donors. As mentioned at the beginning, however, the average life expectancy in Germany is currently around 80 years. It seems that it is impossible to find the perfect donor of tomorrow. With the increase in the need for high-quality transplants, the question must be asked whether donor shortage is a home-grown problem in industrialised countries. What developments need to be initiated to counter the trend towards donor shortage? Could greater flexibility at the medical and/or regulatory level be a solution? The presentation aims to shed light on these and other questions and to discuss them with the experts.

SUPPLY OF NON-CLINICAL OCULAR TISSUE FROM A TISSUE AND EYE SERVICES RESEARCH TISSUE BANK

Introduction: NHS Blood and Transplant Tissue and Eye Services (TES) is a human multi-tissue tissue bank supplying tissue for transplant to surgeons throughout the UK. In addition, TES provides a service to scientists, clinicians and tissue bankers by providing a range of non-clinical tissue for research, training and education purposes. A large proportion of the non-clinical tissue supplied is ocular tissue, ranging from whole eyes to corneas, conjunctiva, lenses and the posterior segments remaining after the cornea is excised. The TES Research Tissue Bank (RTB) is based within the TES Tissue Bank in Speke, Liverpool, and is staffed by two full-time staff. Non-clinical tissue is retrieved by Tissue and Organ Donation teams across the United Kingdom. The RTB works very closely with the two eye banks within TES, the David Lucas Eye Bank in Liverpool and the Filton Eye Bank in Bristol. Non-clinical ocular tissues are primarily consented by TES National Referral Centre nurses.

Methods and Results: The RTB receives tissue via two pathways. The first pathway is tissue specifically consented and retrieved for non-clinical use, and the second pathway is tissue that becomes available when tissue is found to be unsuitable for clinical use. The majority of the tissue that the RTB receives from the eye banks comes via the second pathway. In 2021, the RTB issued more than 1000 samples of non-clinical ocular tissue. The majority of the tissue, ~64%, was issued for research purposes (including research into glaucoma, COVID-19, paediatrics and transplant research); ~31% was issued for clinical training purposes (DMEK and DSAEK preparation, especially after the COVID-19 cessation of transplant operations, and training for new eye bank staff); and ~5% was issued for in-house and validation purposes. One of the findings was that corneas are still suitable for training purposes up to 6 months after removal from the eye.
In 2021, the RTB received 43 applications for ocular projects from new customers and supplied 36 different projects, meeting 95% of all orders placed this year.

Discussion: The RTB works to a partial cost-recovery system and in 2021 became self-sufficient. The supply of non-clinical tissue is crucial for advancement in patient care and has contributed to several peer-reviewed publications.

GROWING TOGETHER IN DIVERSITY - INDO-GERMAN COOPERATION ENHANCING EYE DONATION IN NORTH INDIA

In India, the most densely populated state is Uttar Pradesh in the Northern region. This state has a huge base of corneal-blind population due to cornea infections, ocular trauma and (chemical) burns. Successful cornea transplantation using human post-mortem donated cornea is a treatment modality. In India, the lack of availability of donated cornea is a public health challenge. Thus, there is a great need to reduce the huge demand and supply gap by increasing the donations for the supply of cornea to patients.

The Eye Bank at the Dr.
Shroff's Charity Eye Hospital (SCEH) and the German Society for Tissue Transplantation (DGFG) collaborate in a project to enhance cornea donation and the eye bank's infrastructure in Delhi. The project is supported by the Hospital Partnerships funding programme, a joint initiative of Germany's Federal Ministry for Economic Cooperation and Development (BMZ) and the Else Kröner-Fresenius Foundation (EKFS), carried out by the German Society for International Collaboration (GIZ GmbH).

The project aims to increase the number of cornea donations by the SCEH eye bank through establishing two new eye collection centres where donation is coordinated and which are integrated into the existing and well-established eye bank and donation infrastructure of SCEH. Further, data management of the eye bank will be improved by developing a concept for an electronic database system that allows faster monitoring and evaluation of the processes. All activities are carried out according to a defined project plan. The basis of the project is an open-minded analysis and understanding of the processes of both partners in relation to the respective legislations, plus the environment and conditions in both countries.

Aside from intercultural exchange and personal contacts, both partners benefit from mutual on-site visits, exchanging best practices in eye donation and banking, and sharing expertise in research topics.

This project is a great example of how strong and sustainable relationships can be built across the globe, improving the infrastructure for cornea donations to help corneal-blind patients.

Background: There is a need to identify additional routes of supply for ophthalmic tissue in the UK due to deficits between supply and demand. In response to this need, the NIHR-funded study Eye Donation from Palliative and Hospice Care: Investigating Potential, Practice, Preference, and Perceptions (EDiPPPP) was developed in partnership with NHSBT Tissue Services (now Organ Tissue Donation and Transplantation).

Aim: This presentation will report findings from work package one of EDiPPPP, which aimed to scope the size and clinical characteristics of the potential eye donation (ED) population via a large-scale, multi-site retrospective case notes review across England, establishing the size of the potential ED population, describing the clinical characteristics of the potential ED population, and identifying challenges for clinicians in applying the standard ED criteria for assessing patient eligibility.

Results: Retrospective reviews of 1200 deceased patient case notes (600 HPC; 600 HPCS) by reviewers (healthcare professionals) at research sites against current ED criteria were then evaluated by specialists based at National Health Service Blood and Transplant Tissue Services (NHSBT-TS). The note review established that 46% (n=553) of the 1200 deceased patients' notes were agreed as eligible for eye donation (Hospice care settings = 56% (n=337); Palliative care settings = 36% (n=216)), with only 1.2% of potential donors referred to NHSBT-TS for eye donation (Hospice care settings = 1.2% (n=4); Palliative care settings = 1.3% (n=3)).
Application of the eye donation criteria resulted in an 81% agreement rate outcome for all sites (HPC = 79.2%; HPCS = 82.8%). If cases where there was a difference of assessment but where NHSBT evaluation indicated eligibility are included (n=113), the potential donor pool rises from 553 (46.1% of total cases) to 666 (56%) eligible cases.
Conclusions
Significant potential exists for eye donation from the clinical sites in this study. This potential is not currently being realised. In view of the predicted increase in need for ophthalmic tissue, it is essential that the potential route to increase the supply of ophthalmic tissue
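As a quick cross-check of the donor-pool figures reported above (an illustrative recalculation only; variable names are mine, and the abstract rounds 55.5% up to 56%):

```python
# Re-derive the donor-pool percentages reported in the abstract.
reviewed = 1200            # case notes reviewed (600 HPC + 600 HPCS)
agreed_eligible = 553      # cases agreed as eligible on first review
nhsbt_added = 113          # disputed cases deemed eligible by NHSBT evaluation

pool = agreed_eligible + nhsbt_added
print(f"initial eligibility: {agreed_eligible / reviewed:.1%}")  # 46.1%
print(f"expanded pool: {pool} cases ({pool / reviewed:.1%})")    # 666 (55.5%)
```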
WATER DEFICIT IMPACT ON SELECTED PHYSIOLOGICAL PARAMETERS OF THE WOODY PLANT CORNUS MAS L.

Introduction
The urban environment is a natural habitat for plants. Only some species are able to survive in these extreme conditions (drought, salinity, high temperature, etc.). The plants were selected with a focus on species with adequate properties. Nowadays, plants with an ornamental and useful character are becoming extremely popular, especially fruit species that can be grown even in unfavourable environmental conditions (Hričovský and Vargová, 2007). For these reasons, Cornus mas L. was set as a model plant. Cornus mas L. belongs to useful ornamental shrubs that bear tasty fruits. It is used in ornamental horticulture thanks to its early flowering. The fruits of Cornus mas L. can be universally used both in a fresh and processed state as well as in medicine.

Evaluation of the reaction of plants at the physiological level, especially the status of photosynthesis, was chosen. A simple method of measuring photosystem II, which can be affected by stress factors, is the use of chlorophyll fluorescence techniques. The chlorophyll fluorescence technique has become popular among breeders, biotechnologists, plant physiologists, farmers, gardeners, foresters, ecophysiologists and environmentalists (Kalaji et al., 2016). The chlorophyll fluorescence signal is very rich in its content and very sensitive to changes in photosynthesis (Kalaji et al., 2014). Björkman and Demmig (1987) consider the maximum quantum efficiency of PSII (Fv/Fm) a screening indicator of plant response to a particular stress factor. Fv/Fm represents the effectiveness of light utilization under standard conditions of CO2 fixation and the quantum yield of photochemical processes (Björkman and Demmig, 1987). Reactions of Fv/Fm to drought in woody plants were recorded by Bauerle and Dudley (2003), Gallé and Feller (2007), and Percival and Sheriffs (2002).

The effective quantum yield of PSII (ΦPSII) is the real yield of active PSII reaction centres in the processing of absorbed light energy. ΦPSII represents the light used in photochemistry (Genty et al., 1989; Schreiber, 2004). The impact of water stress caused a decrease of ΦPSII in woody plants (Peguero-Pina et al., 2008; Gallé et al., 2007).

The fluorescence decrease ratio (RFD) is considered to be the vitality index of the photosynthetic apparatus (Lichtenthaler and Babani, 2000). Lower values are typical for plants in suboptimal conditions, and higher ones represent higher photosynthetic activity and also the adaptability of woody plants (Lichtenthaler et al., 2005). A decrease of RFD values was observed by Pukacki and Kamińska-Rożek (2005).
Stomatal conductance (gs) of the leaves was used to monitor the reactions of stomata to water deficit. It is a very important defence mechanism against water loss (Tardieu and Davis, 1993, in Živčák, 2006). Water deficit results in the closure of stomata and a decrease in photosynthesis. Lower values of stomatal conductance also represent the adaptation of plants to extreme conditions (Živčák, 2006).

One of the crucial factors for plant growth is the accessibility of water in the soil. Plants have several adaptive mechanisms that help them tolerate adverse environmental conditions. There has been quite a lot of research on the drought response of crops, but more testing of drought-tolerant ornamental plant species is needed in the field of landscape architecture. The impact of water scarcity on the ornamental and useful woody plant Cornus mas L. was tested. The aim of the evaluation was to investigate whether there were any differences in the mean values of chlorophyll fluorescence parameters and stomatal conductance of the leaves of plants in variants with different levels of soil water supply.

Material and Methods
Within a pot experiment, physiological responses of plants in relation to water scarcity were monitored. Non-destructive methods of monitoring the impact of a lack of water in the soil on plants were chosen: measurement of leaf stomatal conductance and modulated chlorophyll fluorescence.
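For reference, the conventional definitions of the three fluorescence parameters named in the introduction (not spelled out in the text itself; F0 and Fm denote the minimal and maximal fluorescence of a dark-adapted leaf, Fs and Fm' the steady-state and maximal fluorescence in the light) are:

```latex
% Standard chlorophyll fluorescence parameter definitions
\[
  \frac{F_v}{F_m} = \frac{F_m - F_0}{F_m}, \qquad
  \Phi_{PSII} = \frac{F_m' - F_s}{F_m'}, \qquad
  R_{FD} = \frac{F_m - F_s}{F_s}
\]
```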
The one-year-old seedlings (in the year 2013) of Cornus mas L. came from generative propagation. Two variants with different soil water supply were established in 3-litre pots. Half of the plants were exposed to 30% soil water supply (a variant with reduced water content in the soil, the stress variant) and the other half were kept at 60% soil water supply (the control variant). The plants were cultivated in a substrate based on white peat, enriched with clay (20 kg/m³), pH 5.5-6.5, with NPK fertilizer 14 : 10 : 18 (Klasmann TS3, Klasmann-Deilmann GmbH, Germany), under a polypropylene cover with 40% shading. The experimental plants were grown in the differentiated water regime from June to September, for a total period of 151 days in the year 2013 and 154 days in the year 2014. Ten plants in the control variant and ten plants in the stress variant were used for the measurements.

Chlorophyll fluorescence was measured on two leaves of each plant with the chlorophyll fluorometer Hansatech FMS 1 (Hansatech Instruments Ltd, United Kingdom) in the morning hours. The software Modfluor was used for the data analysis, and a 21-day period of measurements was set in the two years. The following settings of the chlorophyll fluorescence measurement protocol were used: one-second pulses of red light with an intensity of 895 μmol/m²/s, actinic light intensity of 34 μmol/m²/s and a saturation light pulse of 10 000 μmol/m²/s. The following parameters were measured and used for the statistical analysis: Fv/Fm, ΦPSII and RFD.

When measuring leaf stomatal conductance, the Delta-T leaf porometer AP4 (Delta-T Devices Ltd, United Kingdom) was used. The measurement of water vapour loss through the stomata took place before midday (the best conditions for measurement were between 8:00 and 10:00 am, because stomata close at midday due to higher temperature and light intensity) on two leaves per plant. Leaf stomatal conductance was determined in mm/s, together with a record of the current time, light intensity in μmol/m²/s and the current temperature in degrees Celsius.

For the mathematical and statistical analysis of the data, one-way ANOVA and the Kruskal-Wallis test (P < 0.05) were used. The statistical assessment of the data was conducted using the software Statgraphics Centurion XVII (StatPoint Technologies, USA; licence number: 7805000000722). The differences in the monitored parameters in the woody plant Cornus mas L. with reference to different water content in the soil were tested.
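The control-versus-stress comparison described above can be sketched with standard open-source tools; a minimal example follows, where the numbers are placeholders, not the study's measurements:

```python
# Minimal sketch of the control-vs-stress comparison (one-way ANOVA and
# Kruskal-Wallis at P < 0.05), using placeholder values, not the study data.
from scipy import stats

control = [0.80, 0.82, 0.81, 0.83, 0.79, 0.82, 0.80, 0.81, 0.83, 0.82]  # e.g. Fv/Fm
stress  = [0.77, 0.78, 0.76, 0.79, 0.75, 0.78, 0.77, 0.76, 0.78, 0.77]

f_stat, p_anova = stats.f_oneway(control, stress)   # parametric test
h_stat, p_kw = stats.kruskal(control, stress)       # non-parametric test

alpha = 0.05
print(f"ANOVA:          p = {p_anova:.4f} -> {'significant' if p_anova < alpha else 'ns'}")
print(f"Kruskal-Wallis: p = {p_kw:.4f} -> {'significant' if p_kw < alpha else 'ns'}")
```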
Results and Discussion
The Fv/Fm measurements showed non-significant differences between the two variants with different water supply in the soil. The mean value of Fv/Fm after 21 days of the experiment in the year 2013 was 0.77 in the stress variant and 0.80 in the control variant. In the year 2014, after 84 days of treatment, the mean value was 0.81 (stress variant) and 0.83 (control variant) (Table 1). It is possible that 30% soil water supply was not a critical level of water stress for Cornus mas L. Cornic and Massacci (1996) state that metabolic processes are not affected under moderate water stress, and a short period of water deficit also does not affect Fv/Fm (Papageorgiou and Govindjee, 2004). These findings confirm the results of other authors (Björkman and Demmig, 1987; Johnson et al., 1993; Kalaji et al., 2012) that values around 0.83 are optimal for most plant species in non-stressed conditions. Roháček (2002) likewise states the value 0.832 ± 0.004 as the constant value reached by many different plant species under non-stressed conditions. The results in the woody plants Spiraea japonica L. 'Little Princess' and Cornus stolonifera Michx. 'Kelseyi' (Šajbidorová, 2013), as well as in Pyrus pyraster L. and Sorbus domestica L. (Šajbidorová et al., 2015), confirm that this parameter is insensitive to water deficit.

On the other hand, Fv/Fm and ΦPSII are considered by Maxwell and Johnson (2000) to be sensitive indicators of plant stress under environmental conditions. Both parameters characterize the function of PSII, which reacts significantly to any environmental impact (Swiatek et al., 2001). The values in plants under stressed conditions decline rapidly, so the parameter is considered an indicator of photoinhibition or other damage to PSII (Roháček, 2002). Water deficit decreased the Fv/Fm values in Acer rubrum L. and Acer × freemanii E. Murray (Bauerle and Dudley, 2003) and also in Fagus sylvatica L. (Gallé and Feller, 2007). Percival and Sheriffs (2002) identified drought-tolerant, intermediate, sensitive and very sensitive woody plants based on Fv/Fm measurements after dehydration. ΦPSII seems to be a significant parameter of the state of drought stress in a model plant. ΦPSII is the real yield of active PSII reaction centres in the processing of absorbed light energy and reflects the actual state of the photosynthetic apparatus (Genty et al., 1989; Schreiber, 2004).

The values of ΦPSII in the stress variant were significantly lower in comparison to the control variant (Table 1). In the year 2013, a significant decrease (46%) in the values of ΦPSII in the stress variant (0.06) in comparison to the control variant (0.13) was observed after 41 days of lower water supply in the soil. In the year 2014, the reduction in the values of ΦPSII was more remarkable, a 72% decrease between the variants (control variant 0.11 and stress variant 0.08), after a longer duration of the differentiated irrigation regime (84 days). A similar decline of ΦPSII was observed in Spiraea japonica L. 'Little Princess' and Cornus stolonifera Michx. 'Kelseyi' (Šajbidorová, 2013). Gallé et al. (2007) found a decrease in the values of ΦPSII under water deficit in Quercus pubescens Willd., as did Peguero-Pina et al. (2008) in Quercus coccifera L. Hillová (2016) considered the measurement of ΦPSII a fast and affordable method for sorting herbaceous perennials into five main groups, which do not fully correspond with the traditional sorting of perennials according to Hansen and Stahl (1993).
Within two years, the values of the fluorescence decrease ratio (RFD) were significantly lower in the stress variant for most of the period (Table 1). In the year 2013, after 41 days of a lower level of water supply, a decrease (62%) from 1.42 in the control variant to 0.88 in the stress variant was recorded. In the year 2014, after 84 days of lower water supply, a significant decrease in the values between the two variants (stress = 0.83 and control = 1.42) was measured, which accounts for a decrease of 58%. Pukacki and Modrzyński (1998) considered RFD values ≥ 2.3 typical of plants in optimal conditions, and the impact of stress factors may result in a decrease in values. Pukacki and Kamińska-Rożek (2005) evaluated the impact of drought in the soil on plants of Picea abies L.; after 42 days of water deficit, a decrease in the values of RFD and Fv/Fm by 44% compared to control plants was observed.

When assessing stomatal conductance in the year 2013, lower values in the stress variant were observed after 41 days of the differentiated water regime. In the year 2014, there were lower values in the stress variant for most of the period. A bigger difference between the two variants was recorded after a 42-day period (a decrease of 73%). At the end of the period, after 84 days, the decrease in the values was much lower (35%) (Table 1). Hillová et al. (2016) emphasize the same results for herbaceous perennials, where drought stress led to a considerable decline in stomatal conductance. Galmés et al. (2007) observed a decline in stomatal conductance in ten Mediterranean species when water stress intensified. Zweifel et al. (2009) noted that stomatal regulation is species-specific. Closed stomata reduce transpiration and also photosynthesis and total tree metabolism (Larcher, 2003).

Conclusion
Reliable information about drought-tolerant ornamental woody plants and herbs is needed in the field of landscape architecture, with regard to global warming and the specific environmental conditions in urban areas. Research analysing plants' reactions to extreme environmental conditions is more advanced for crops than for ornamental plants. Testing of drought resistance could also be useful when selecting ornamental plants because of the need for low maintenance of public green spaces in urban areas.

Summary
Studying the resistance of woody plants and herbs to drought has become the subject of various experiments. The paper presents a study of the impact of water scarcity on the chlorophyll fluorescence and stomatal conductance of the woody plant Cornus mas L. The experiment was carried out on plants in two different water regimes (a control variant maintained at 60% soil water supply and a stress variant at 30% soil water supply). Chlorophyll fluorescence was measured with a chlorophyll fluorometer in a 21-day period in the two growing seasons of 2013 and 2014. The following chlorophyll fluorescence parameters were recorded: maximum quantum efficiency of PSII (Fv/Fm), effective quantum yield of PSII (ΦPSII) and chlorophyll fluorescence decrease ratio (RFD). Stomatal conductance (gs) was measured with a leaf porometer in a 21-day period in the two growing seasons, 2013 and 2014. According to the results, the water deficit represented by 30% soil water supply does not affect the values of Fv/Fm. The values of ΦPSII and RFD were significantly affected by the water deficit in the soil in the model plant. Limiting the irrigation of the model plants resulted in a reduction in stomatal conductance (gs).

Table 1: The mean values of the analysed parameters and 95% LSD test for the studied taxon Cornus mas L. and for the two variants of the soil water supply (control/stress) in the year 2013. Values with the same letter are not significantly different.

Table 2: The mean values of the analysed parameters and 95% LSD test for the studied taxon Cornus mas L. and for the two variants of the soil water supply (control/stress) in the year 2014. Values with the same letter are not significantly different.
Thermal reaction characteristics of dioxins on cement kiln dust

Cement kiln dust is commonly recycled back into the production process. This results in elevated concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the flue gases of cement plants. The present study investigated the effects the reaction temperature, oxygen content, and origin of kiln dust had on the thermal reaction characteristics of PCDD/Fs. The concentration of 2,3,7,8-PCDD/Fs that were desorbed from the kiln dust decreased as the reaction temperature was increased, and the higher temperature facilitated the degradation of PCDD/Fs. However, the oxygen content, which ranged from 6-21%, had only a minor impact on the thermal reaction characteristics of PCDD/Fs. Finally, the thermal reaction characteristics of PCDD/Fs were largely affected by the origin of the kiln dust; 1.2 pg I-TEQ g⁻¹ was desorbed from kiln dust originating from a cement plant that co-processed refuse-derived fuel (RDF), and 47.5 pg I-TEQ g⁻¹ was desorbed from kiln dust originating from a cement plant that co-processed hazardous waste. The study also found that PCDD/F formation pathways were dependent on the origin of the kiln dust; precursor synthesis dominated PCDD/F formation on the kiln dust collected from a cement plant that co-processed RDF, while de novo synthesis dominated the formation of PCDD/Fs on the remaining samples of kiln dust.

Introduction
The mass of municipal solid waste (MSW) generated in China reached 191 million tons in 2015, whereas the installed treatment capacity of the MSW incineration plants in operation throughout China was just 80 million tons.[1] For that reason, most of the MSW generated that exceeds the installed incineration capacity is still landfilled. This has motivated researchers to focus on the development of alternative MSW disposal methods. One such method is co-processing MSW in cement kilns. This disposal method is regarded as a viable option for managing MSW, especially in China, where demand for cement is continually growing. Furthermore, more than 24 cement plants have acquired licenses to co-process MSW, creating a daily treatment capacity of 12 000 t of MSW.

One issue with co-processing waste in cement kilns is that polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) are inevitably formed during the process of cement production.[2-4] Moreover, airborne emissions from some cement kilns might even exceed the emission standard of 0.1 ng I-TEQ Nm⁻³ set in China.[5-7] Therefore, significant efforts have been invested in studying the formation, destruction, and desorption characteristics of PCDD/Fs in order to reduce the PCDD/F emissions that cement plants currently generate.

The kiln dust collected from bag filters in cement plants is generally recycled into the first stage of a cyclone preheater, which results in the formation of PCDD/Fs. The recycled kiln dust acts as the basis for PCDD/F formation because it has relatively high contents of chlorine and carbon. Li et al.[8] previously demonstrated that the first stage of a cyclone preheater is the prevailing point at which dioxins form in cement kilns; 12% of the total gaseous PCDD/Fs were generated therein. Furthermore, it is more challenging to reduce PCDD/Fs in the gas phase than it is to reduce them in the solid phase. The expected temperature range for the de novo synthesis of PCDD/Fs is 250-450 °C.[11] Meanwhile, PCDD/Fs can be degraded at temperatures of 200-600 °C.[12]
Therefore, the simultaneous formation and destruction of PCDD/Fs in the recycled kiln dust is possible during the first stage of a cyclone preheater. Furthermore, the thermal reaction characteristics of PCDD/Fs are not only influenced by their concurrent formation and degradation, but also by the initial content of PCDD/Fs in the dust. The temperature and oxygen content of flue gases might also influence the thermal reaction characteristics of PCDD/Fs. For instance, about 94% of the PCDD/Fs contained in the MSWI fly ash studied is found in the gas phase when the reaction temperature is 350 °C.[13] The balance between the formation and degradation effects of PCDD/Fs is largely dependent on the reaction temperature, and 450 °C is regarded as a breakthrough temperature at which the destruction of PCDD/Fs dominates their formation.[14] Also, the physicochemical characteristics of cement kiln dust are different from those of MSWI fly ash, especially the contents of Cu and Cl.[15,16] Furthermore, the types of waste that are co-processed in cement kilns can affect the characteristics of the kiln dust. All these factors highlight the complexity of the thermal reaction behavior of the PCDD/Fs contained in kiln dust. Some previous studies have focused on the formation, degradation, and desorption behavior of PCDD/Fs in relation to MSWI fly ash;[9,10] however, similar studies that focus on kiln dust have not yet been performed. Therefore, there is a need to study the thermal reaction characteristics of PCDD/Fs in kiln dust alongside the key factors that influence such characteristics.

The present study analyzed the impact of several parameters on the thermal reaction characteristics of the PCDD/Fs present in cement kiln dust. In particular, the reaction temperature was modified within 300-400 °C, and the oxygen content was changed within 6-21%. Finally, the impact of the waste co-processed in cement kilns on the thermal reaction characteristics of PCDD/Fs was studied. To comprehensively examine the thermal reaction behavior of PCDD/Fs, both the concentrations of PCDD/Fs and their gas/particle distributions were observed. The concurrent formation and degradation reactions of PCDD/Fs were analyzed by assessing the homologue and congener distributions of PCDD/Fs. The results of the study expose the relative contribution kiln dust makes to the formation of gaseous PCDD/Fs in the flue gas in cement kilns that co-process waste. Furthermore, the results can be used to determine whether the kiln dust can be returned to other parts of cement kilns, e.g., the second stage of a cyclone preheater or a precalciner, to prevent the accumulation of PCDD/Fs during the first stage of a cyclone preheater. By doing so, the emissions of PCDD/Fs could be further reduced.

Fig. 1 illustrates the apparatus used to conduct the thermal reaction experiments. The apparatus comprised a vertical tubular furnace consisting of a heated section and a temperature controller. The reaction temperature inside the vessel was simultaneously controlled with an S-type thermocouple to ensure the high accuracy of a chosen temperature regime. A quartz reactor tube (cylindrical geometry, d = 50 mm and l = 530 mm) was filled with silica balls (d = 4 mm) up to a height of 80 mm. The silica balls were used to stabilize and homogenize the flow of the reaction gas.
A quartz plate containing the reactants (kiln dust) was placed inside the quartz reactor tube directly onto the silica balls at the point at which the temperature set for a specific experiment was reached and stable conditions had been achieved. The reaction gas was injected from the bottom of the reactor and was flushed through the reactants to carry gaseous PCDD/Fs to a collection zone that included XAD-II resin and toluene. The reaction gas was retained for 55 seconds.

Materials
One sample of kiln dust (KD1) was collected from a cement kiln that had a daily clinker capacity of 5000 t. The cement plant employs a dry production process and a state-of-the-art configuration with a preheater/precalciner system containing five cyclones.[8] The kiln dust collected from a bag filter is recycled during the first stage of the cyclone preheater. The cement plant co-processes RDF, which is fed into the precalciner at a constant rate of 15 t h⁻¹. Some primary characteristics of the waste co-processed in the cement kiln from which KD1 was sampled are given in Table 1.

The second sample of kiln dust (KD2) was collected from a cement kiln that had a similar configuration to that from which KD1 was collected. The second kiln had a daily clinker capacity of 2000 t. No waste was co-processed in this cement kiln. The last sample of kiln dust (KD3) was collected from a cement kiln that had a daily capacity of 4000 t; 9 t h⁻¹ of hazardous waste was co-processed in this cement kiln. The waste contained pesticide waste, incineration fly ash, Cr-containing waste, and non-ferrous metal smelting waste. The characteristics of the hazardous waste that was co-processed in the cement kiln from which the KD3 sample was taken are given in Table 2.

Design
Table 3 introduces the conditions of the experiments that were conducted in the present study. The experiments included a reference experiment (R-0), as well as three series of experiments, namely A, B, and C, to study the impact of each chosen parameter on the desorption characteristics of PCDD/Fs. The R-0 reference experiment was conducted using KD1 at a temperature of 350 °C and an oxygen content of 6%. Series A studied the influence of the reaction temperature on the desorption behavior of PCDD/Fs in the kiln dust: the reaction temperature was set to 300 °C for Experiment A-1 and 400 °C for Experiment A-2, while the remaining parameters were the same as those employed for the R-0 reference experiment. Series B studied the influence of the oxygen content of the reaction gas on the desorption behavior of PCDD/Fs: the oxygen content was set to 10% for Experiment B-1 and 21% for Experiment B-2. Series C studied the influence of the origin of the kiln dust on the desorption behavior of PCDD/Fs: KD2 was used in Experiment C-1 and KD3 in Experiment C-2.

In all experiments, the flow rate of the reaction gas was set to 300 ml min⁻¹, and the mass of the reactant was 8 g. Each experiment lasted 30 minutes to ensure completeness of the reactions. Both the kiln dust and the gaseous compounds were collected and analyzed in parallel. Each experiment was replicated to ensure the reliability of the results.
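The experimental matrix can be summarized compactly; this sketch restates the conditions given in the text (Table 3), with field names of my own choosing:

```python
# Experimental matrix restated from the text; field names are illustrative.
BASE = {"dust": "KD1", "temp_C": 350, "O2_pct": 6,
        "flow_mL_min": 300, "mass_g": 8, "duration_min": 30}

experiments = {
    "R-0": dict(BASE),                  # reference run
    "A-1": dict(BASE, temp_C=300),      # series A: reaction temperature
    "A-2": dict(BASE, temp_C=400),
    "B-1": dict(BASE, O2_pct=10),       # series B: oxygen content
    "B-2": dict(BASE, O2_pct=21),
    "C-1": dict(BASE, dust="KD2"),      # series C: kiln dust origin
    "C-2": dict(BASE, dust="KD3"),
}

for name, cond in experiments.items():
    print(name, cond)
```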
PCDD/Fs analysis
The pretreatment and quantification of the collected gaseous samples were performed in accordance with US EPA Method 1613.[17] The pretreatment process included a Soxhlet extraction, concentration in a rotary evaporator, acid washing, cleaning on a mixed acid/basic silica gel chromatographic column, cleaning on an alumina chromatographic column, and concentration in a nitrogen flow. The identification and quantification of PCDD/Fs were accomplished by a high-resolution gas chromatography-high-resolution mass spectrometry (HRGC-HRMS) method using a 6890 Series gas chromatograph (Agilent, USA) with a DB-5ms capillary column (60 m × 0.25 mm I.D., 0.25 µm film thickness) for separation of the PCDD/F congeners, and a JMS-800D mass spectrometer (JEOL, Japan). The temperature program during HRGC was optimized as follows: (a) splitless injection of 1 µl at the initial oven temperature of 150 °C, which was kept for 1 min; (b) temperature increased to 190 °C at a rate of 25 °C min⁻¹; and (c) temperature increased to 280 °C at a rate of 3 °C min⁻¹, with the subsequent duration of the run being 20 min from the point at which that temperature was reached. The mean recoveries of standards for PCDD/Fs ranged from 55-125%, which was within the acceptable range of 25-150%.
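The I-TEQ values quantified throughout are toxic equivalents: each 2,3,7,8-substituted congener concentration is weighted by its international toxic equivalency factor (I-TEF) and the products are summed. A minimal sketch with a few illustrative I-TEF values from the NATO/CCMS scheme (the paper itself does not list them):

```python
# I-TEQ = sum(concentration_i * I-TEF_i) over the 17 toxic 2,3,7,8-congeners.
# Only a few I-TEFs are listed here for illustration (NATO/CCMS scheme).
I_TEF = {
    "2,3,7,8-TeCDD": 1.0,
    "1,2,3,7,8-PeCDD": 0.5,
    "2,3,7,8-TeCDF": 0.1,
    "2,3,4,7,8-PeCDF": 0.5,
    "OCDD": 0.001,
}

def i_teq(concentrations_pg_per_g: dict) -> float:
    """Toxic equivalent (pg I-TEQ per g) of a congener profile."""
    return sum(conc * I_TEF[name] for name, conc in concentrations_pg_per_g.items())

# Hypothetical profile (pg/g), not measured data:
sample = {"2,3,7,8-TeCDD": 2.0, "2,3,4,7,8-PeCDF": 10.0, "OCDD": 150.0}
print(f"{i_teq(sample):.2f} pg I-TEQ/g")  # 2*1 + 10*0.5 + 150*0.001 = 7.15
```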
Concentration of 2,3,7,8-PCDD/Fs
The concentrations of PCDD/Fs and the corresponding I-TEQ values are presented in Fig. 2. The concentration of PCDD/Fs in the KD1 and KD2 samples was 254 ± 20 pg g⁻¹ (9.2 ± 0.3 pg I-TEQ g⁻¹) and 1823 ± 90 pg g⁻¹ (190 ± 6 pg I-TEQ g⁻¹) respectively. A significantly higher concentration of PCDD/Fs of 6455 ± 500 pg g⁻¹ (1399 ± 100 pg I-TEQ g⁻¹) was identified in the KD3 sample. The concentration of PCDD/Fs in KD3 was at the same level as the concentration of PCDD/Fs in MSWI fly ash.[18,19] The significantly higher mass of PCDD/Fs released from KD3 could primarily be attributed to the operating conditions and the configuration of the kiln where the kiln dust was sampled. The characteristics of waste co-processed in cement kilns, however, have previously been found to have less influence on the formation and emissions of PCDD/Fs.[2,20]

The results of the Series A experiments are presented in Fig. 2. As the data indicate, increasing the reaction temperature reduced the concentrations of PCDD/Fs and the I-TEQ values. However, higher concentrations of PCDD/Fs were observed in the R-0 and A-1 experiments than in the original kiln dust KD1. The increase in the PCDD/F concentration observed during Experiments R-0 and A-1 suggested the prevalence of the formation effects of PCDD/Fs over their degradation at the temperatures of 300 and 350 °C. Still, the degradation effect on PCDD/Fs was enhanced at the higher temperature of 400 °C, at which point the concentration of PCDD/Fs fell to 94 pg g⁻¹ and the I-TEQ value decreased to 1.3 ± 0.1 pg I-TEQ g⁻¹, resulting in an 86% reduction efficiency of the PCDD/Fs. Although the concentrations of PCDD/Fs in Experiments R-0 and A-1 increased in comparison to KD1, the I-TEQ values consistently decreased and were 6.9 ± 0.3 pg I-TEQ g⁻¹ in Experiment A-1 and 4.3 ± 0.3 pg I-TEQ g⁻¹ in Experiment R-0. Such results suggest that low chlorinated PCDD/Fs are more easily degraded than highly chlorinated PCDD/Fs. This result was aligned with the findings of a previous study by Yang et al.[21]

The results of the Series B experiments are presented in Fig. 3 and show that increasing the oxygen content in the reaction gas resulted in reduced concentrations of PCDD/Fs and I-TEQ. The concentration of PCDD/Fs decreased from the reference value of 420 ± 40 pg g⁻¹ (4.3 ± 0.3 pg I-TEQ g⁻¹) to 233 ± 30 pg g⁻¹ (3.4 ± 0.9 pg I-TEQ g⁻¹) in Experiment B-1, which corresponded to the oxygen content increase to 10%. A further increase in the oxygen content to 21% in Experiment B-2 decreased the concentration of PCDD/Fs to 103 ± 27 pg g⁻¹ (2.4 ± 0.4 pg I-TEQ g⁻¹). Similarly, the study by Misaka et al.[22] indicated that increasing the oxygen content promotes the thermal degradation of PCDD/Fs, while Shibata et al.[23] indicated that the formation of PCDD/Fs via de novo synthesis weakens at an oxygen content higher than 10%.

The results of the Series C experiments are presented in Fig. 4 and show that the concentration of PCDD/Fs increased from the initial 1823 ± 90 pg g⁻¹ (190 ± 6 pg I-TEQ g⁻¹) identified in the KD2 sample of kiln dust to 2397 ± 200 pg g⁻¹ (276 ± 67 pg I-TEQ g⁻¹) identified in the gas phase of Experiment C-1, when KD2 was heated at 350 °C. The results indicate that PCDD/Fs are inevitably formed even without waste co-processing in cement kilns. In the case of the KD3 kiln dust, the concentration of PCDD/Fs decreased from 6455 ± 500 pg g⁻¹ (1399 ± 100 pg I-TEQ g⁻¹) to 4194 ± 300 pg g⁻¹ (339 ± 40 pg I-TEQ g⁻¹), indicating that the degradation effects on PCDD/Fs were greater than the formation effects. The dominant reactions related to the PCDD/Fs in the kiln dust were determined by comparing the distribution of PCDD/Fs in the original kiln dust with the actual properties of the kiln dust.

Gas/particle distribution of PCDD/Fs
The proportion of PCDD/Fs found in the gas phase increased when increasing the reaction temperature from 300 °C in Experiment A-1 to 400 °C in Experiment A-2. A similar trend was found by Addink et al.,[24] while the corresponding proportions of PCDD/Fs in the gas phase determined in the present study were much lower than the results achieved by Altwicker et al.[13] Such differences could partly be attributed to the characteristics of the reactants and the experimental conditions. The results of the R-0 reference experiment revealed that 25% of the PCDD/Fs contained in the kiln dust was released into the flue gas. Considering the application of air pollution control devices, the impact of the PCDD/Fs originating from the kiln dust on the total emissions could be largely minimized. Furthermore, the raw meal exhibited adsorption and suppression effects on the PCDD/Fs in the flue gas.[25]

In the Series B experiments, the fractions of the 17 toxic PCDD/Fs in the gas phase were 21, 54, and 47% when the oxygen contents were 6, 10, and 21% respectively. Of these, the fractions of the corresponding I-TEQ values were 25, 22 and 22%. A similar trend was observed by Addink et al.,[24] indicating that the oxygen content had a minor effect on the gas/particle distribution of I-TEQ values.

In the Series C experiments, the fractions of the 17 toxic PCDD/Fs in the gas phase were 10% in Experiments C-1 and C-2. However, the I-TEQ values indicated that the origin of kiln dust can affect the gas/particle distribution, since only 3% of PCDD/Fs were discovered in the gas phase when kiln dust KD2 was used, which was much lower than the values for KD1 of 25% and KD3 of 13%.

Homologue distribution of PCDD/Fs
3.3.1. Impact of reaction temperature. Fig. 6 shows the homologue profiles of PCDD/Fs in the Series A experiments, during which the impact of the reaction temperature was studied.
The fractions of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) in the KD1 kiln dust were 27% and 73% respectively. Tetrachlorodibenzofuran (TCDF) was the most abundant homologue, accounting for 63% of the total PCDD/Fs, which is different from that of MSWI fly ash.[26] The total concentration of PCDD/Fs formed in the KD1 kiln dust at 300 °C (tetra- to octachlorinated PCDD/Fs) increased from 1210 pg g⁻¹ to 1270 pg g⁻¹. On the one hand, the fraction of PCDDs increased to 63%, mainly due to the increase in octachlorodibenzo-p-dioxin (OCDD) from 17% to 56%. On the other hand, the fraction of TCDF decreased to 30%, indicating that the chlorination and dechlorination reactions mainly occurred on the surface of the kiln dust.[27] A higher mass of PCDDs in the gas phase in comparison to the mass of PCDFs indicated that precursor synthesis had occurred.[28] Previous findings by Li et al.[29] also demonstrated an abundance of precursors such as chlorobenzenes (CBzs) and polycyclic aromatic hydrocarbons (PAHs).

In the R-0 experiment, the concentration of PCDD/Fs decreased from 1210 pg g⁻¹ to 610 pg g⁻¹. The fraction of PCDDs increased to 71%. The dominant PCDD was OCDD, which accounted for 61% of the total PCDD/Fs. This indicated that OCDD was difficult to degrade and that precursor synthesis also contributed to the high fraction of OCDD. For Experiment A-2, the concentration of PCDD/Fs significantly decreased to 134 pg g⁻¹. The fraction of PCDDs in A-2 was lower than that observed in Experiments A-1 and R-0. Moreover, the low chlorinated PCDD/Fs were relatively easily degraded due to their unstable structure. The weight average level of chlorination of PCDD/Fs increased from the original 4.86 in KD1 to 6.38 in A-1, 6.77 in R-0, and 5.98 in A-2. Such a phenomenon could either be attributed to the strong combination of highly chlorinated precursors or to the degradation of low chlorinated PCDD/Fs.[30]

3.3.2. Effect of oxygen content. Fig. 7 shows the homologue profiles of PCDD/Fs in the Series B experiments, during which the impact of the oxygen content was studied. The total concentration of PCDD/Fs formed in the KD1 kiln dust decreased from 610 pg g⁻¹ to 470 pg g⁻¹ when the oxygen content increased from 6% to 10%. Similarly, the fraction of PCDDs decreased from 71% to 53% during the same experiments. The reason for this could be attributed to the reduction in OCDD, which was mainly formed via precursor synthesis.[31] The increase in the oxygen content decreased the weight average level of chlorination from the initial 6.38 in R-0 to 6.00 in B-1 and 5.46 in B-2, when the oxygen content increased from 6% to 10% and 21% respectively. This indicates that the dechlorination reaction is promoted by the increase in the oxygen content.
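The "weight average level of chlorination" used above can be computed as the chlorine-number-weighted mean of the tetra- to octachlorinated homologue concentrations. A minimal sketch with a hypothetical profile (not measured data):

```python
# Weight-average chlorination level: homologue concentrations weighted by
# their number of chlorine atoms. The profile below is hypothetical.
def avg_chlorination(profile: dict[int, float]) -> float:
    """profile maps chlorine number (4..8) to homologue concentration (pg/g)."""
    total = sum(profile.values())
    return sum(n_cl * conc for n_cl, conc in profile.items()) / total

profile = {4: 400.0, 5: 150.0, 6: 100.0, 7: 80.0, 8: 270.0}
print(f"average chlorination level = {avg_chlorination(profile):.2f}")  # 5.67
```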
3.3.3. Different origins of KD. Fig. 8 shows the homologue profiles of PCDD/Fs in the Series C experiments, in which the impact of the origin of kiln dust was studied. The total concentration of PCDD/Fs decreased from 38 900 pg g⁻¹ to 23 100 pg g⁻¹ for KD3. However, the total concentration of PCDD/Fs in the original KD2 kiln dust increased from 9600 pg g⁻¹ to 14 000 pg g⁻¹. The differences could be attributed to the characteristics of the kiln dust, including the contents of chlorine, metal catalyst, and carbon. The fractions of PCDDs in the KD1, KD2, and KD3 kiln dusts were 27%, 30%, and 10% respectively. The value increased to 71% for KD1 after the experiment, while the opposite behavior was observed when KD2 and KD3 were thermally treated. This suggests that de novo synthesis was the main pathway for the formation of PCDD/Fs on the KD2 and KD3 kiln dusts. The dominant homologue of KD2 was TCDF, the same as that of KD1. In the case of KD3, the fraction of TCDF increased from 0.3% to 52%. Meanwhile, the fraction of pentachlorodibenzofuran (P5CDF) decreased from 72% to 22%, indicating that the TCDF could have been formed during the dechlorination of P5CDF. The weight average level of chlorination decreased from 5.02 in KD2 to 4.76 in C-1. Similarly, the chlorination level of PCDD/Fs decreased from 5.18 in KD3 to 4.88 in C-2. The results revealed that the origin of kiln dust can significantly affect the thermal characteristics of PCDD/Fs.

Congener distribution of PCDD/Fs
3.4.1. Effect of reaction temperature. The congener distributions of PCDDs and PCDFs under different reaction temperatures are presented in Fig. 9. The concentration of the 17 toxic PCDD/Fs in KD1 was 254 pg g⁻¹, and the fraction of PCDDs was 77%. The leading PCDD and PCDF congeners were OCDD and 2,3,7,8-TCDF respectively. 2,3,4,7,8-PeCDF contributed 46% to the I-TEQ value, the most of all congeners. In Experiment R-0, the concentration of PCDD/Fs increased to 420 pg g⁻¹, while the fraction of PCDDs increased to 91%, dominated by OCDD. In terms of the PCDFs, the most abundant congener was also 2,3,7,8-TeCDF. In Experiment A-1, the concentration of PCDD/Fs increased to 759 pg g⁻¹, of which PCDDs accounted for 95%. Unlike the homologue distribution, the fraction of toxic PCDD/Fs in the gas phase was 68%. The leading PCDD and PCDF congeners in A-1 were OCDD and OCDF. Corresponding to 51%, 2,3,4,7,8-PeCDF made the most pronounced contribution to the I-TEQ value.

3.4.2. Effect of oxygen content. In Experiment B-2, the concentration of the 17 toxic PCDD/Fs decreased to 103 pg g⁻¹. The fraction of PCDDs remained the same as that of Experiment B-1, indicating that the oxygen content had a minor effect on the congener distribution of the 17 toxic PCDD/Fs. OCDD was the dominant PCDD congener, accounting for 97%. As for the PCDFs, 2,3,7,8-TeCDF, 1,2,3,4,6,7,8-HpCDF, and OCDF were the most abundant congeners, which reflected the outcomes of Experiments B-1 and R-0. In I-TEQ units, the contributions of each of the PCDD/Fs were similar to Experiments R-0 and B-1, since their congener distributions were almost the same. The results revealed that the oxygen content had no selectivity on the desorption of PCDDs and PCDFs from the kiln dust.

3.4.3. Different origins of KD. The congener profiles of the 17 toxic 2,3,7,8-substituted PCDD/Fs on the different origins of kiln dust are displayed in Fig. 11. The concentration of the 17 toxic PCDD/Fs was 1823 pg g⁻¹, of which PCDFs accounted for 51%. The leading PCDD and PCDF congeners were OCDD and 1,2,3,4,6,7,8-HpCDF respectively. In I-TEQ units, 2,3,4,7,8-PeCDF contributed 57%, the most of all congeners. In Experiment C-1, the concentration of PCDD/Fs increased to 2397 pg g⁻¹, of which PCDFs accounted for 74%. The share of OCDD decreased from 69% in KD2 to 56% in C-1. In terms of PCDFs, the dominant congener was OCDF, accounting for 25%. The high accumulation of PCDD/Fs in the solid phase could be

The concentration of the 17 toxic PCDD/Fs was 6455 pg g⁻¹ in KD3, which was two times higher than that of KD2.
PCDFs accounted for a substantial amount of the I-TEQ value at 90%, indicating greater de novo synthesis during the co-processing of hazardous waste, which could supply more chlorine for the formation of PCDD/Fs. The dominant PCDD and PCDF congeners were OCDD and 2,3,4,7,8-PeCDF respectively. In terms of I-TEQ values, 2,3,4,7,8-PeCDF made the highest contribution of 78%.

Discussion
When kiln dusts KD1 and KD3 were thermally treated, the concentration of PCDD/Fs decreased from the initial values of 9.2 pg I-TEQ g⁻¹ in KD1 and 1339 pg I-TEQ g⁻¹ in KD3 to 4.3 pg I-TEQ g⁻¹ in R-0 and 339 pg I-TEQ g⁻¹ in C-2. However, an increase in the PCDD/F concentration from the initial 190 pg I-TEQ g⁻¹ in KD2 to 276 pg I-TEQ g⁻¹ was observed. Such findings are in line with those of Zhan et al.,[28] who studied raw meal and Soxhlet-extracted fly ash and found that the concentration of PCDD/Fs increased in those cases from 3 to 55 pg I-TEQ g⁻¹ for the raw meal and from 3 to 157 pg I-TEQ g⁻¹ for the Soxhlet-extracted fly ash at the same reaction temperature. Therefore, the origin of the kiln dust had a pronounced impact on the thermal reaction characteristics of PCDD/Fs, and such an impact could be attributed to variations in the properties of the kiln dust; for example, differences in the contents of chlorine or metal catalyst.

The PCDFs/PCDDs ratio of the kiln dusts KD1, KD2, and KD3 constantly exceeded 2.00. However, the same ratio decreased to 0.41 in Experiment R-0 and increased to 6.40 and 17.60 in Experiments C-1 and C-2 respectively. At the same time, a concurrent increase in the weight average level of chlorination from 4.86 to 6.77 in KD1 and a concurrent decrease from 5.02 and 5.18 to 4.76 and 4.88 in KD2 and KD3 respectively were observed. Such behavior can be explained by the higher stability of highly chlorinated PCDD/F congeners compared to the low chlorinated ones. The results indicated that de novo synthesis dominated the formation of PCDD/Fs in the KD2 and KD3 samples, while identifying the main formation pathway of the PCDD/Fs for the KD1 kiln dust was challenging.

As Table 4 highlights, the I-TEQ concentration of PCDD/Fs in the gas phase identified during thermal treatment of kiln dust KD1 decreased from 1.2 pg I-TEQ g⁻¹ in Experiment A-1 to 0.4 pg I-TEQ g⁻¹ in Experiment A-2. On the other hand, the elevated temperature resulted in an increase in the share of I-TEQ identified in the gas phase from 18 to 33% for the same samples. Likewise, the increasing oxygen content resulted in a reduction in the I-TEQ concentration of PCDD/Fs in the gas phase from 1.1 pg I-TEQ g⁻¹ in R-0 to 0.5 pg I-TEQ g⁻¹ in B-2. Li et al.[29] described concentrations of gaseous PCDD/Fs in the flue gas collected at the outlets of the first and the second stages of a cyclone preheater of 101 pg I-TEQ Nm⁻³ and 22 pg I-TEQ Nm⁻³ respectively. In the present study, 1.1 pg I-TEQ of PCDD/Fs per g of kiln dust KD1 was released. Assuming rates of kiln dust and stack gas of 47 t h⁻¹ and 700 000 Nm³ h⁻¹,[8] it was calculated that the concentration of PCDD/Fs in the flue gas at the first stage of a cyclone preheater could increase by 74 pg I-TEQ Nm⁻³ due to the desorption of PCDD/Fs from the kiln dust. Still, the actual contribution the PCDD/Fs desorbed from KD1 made to the total emissions would be lower than the calculated value of 74 pg I-TEQ Nm⁻³ due to the presence of alkaline raw materials, which can inhibit the formation of PCDD/Fs in the kiln dust.[33]
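The 74 pg I-TEQ Nm⁻³ estimate follows directly from the stated desorption yield and the assumed dust and stack gas rates:

```latex
\[
\frac{1.1\,\mathrm{pg\,I\mbox{-}TEQ/g} \times 47\,\mathrm{t/h} \times 10^{6}\,\mathrm{g/t}}
     {700\,000\,\mathrm{Nm^{3}/h}}
\approx 74\,\mathrm{pg\,I\mbox{-}TEQ/Nm^{3}}
\]
```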
Moreover, gaseous PCDD/Fs will be abated when passing through the suspension preheater, raw mill, and bag filter, as described by Li et al.,[29] who reported a reduction in the PCDD/F concentration from 101 to 13 pg I-TEQ Nm⁻³. Therefore, as reported by Li et al.,[29] PCDD/Fs originating from their desorption from kiln dust would make a minor contribution to the overall emissions of PCDD/Fs if a PCDD/F reduction efficiency of 87% was achieved. However, recycling kiln dust that exhibits similar properties to the KD2 and KD3 kiln dusts during the second stage of operation of a cyclone preheater, during which higher temperatures are achieved, may be recommended to effectively destroy PCDD/Fs.

Conclusions
The thermal reaction characteristics of the PCDD/Fs contained in cement kiln dust of varying origins and under varying conditions were investigated. The results of the study suggested that:

(1) The temperature increase from 300 °C to 400 °C reduced the mass of 2,3,7,8-PCDD/Fs desorbed from the kiln dust from 1.2 to 0.4 pg I-TEQ g⁻¹. Likewise, the increase in the oxygen content of the flue gas from 6% to 21% decreased the mass of 2,3,7,8-PCDD/Fs desorbed from the kiln dust from 1.2 to 0.5 pg I-TEQ g⁻¹. This implies that treating kiln dusts at higher temperatures and in gases with higher oxygen contents enhances the PCDD/F degradation effect.

(2) The leading PCDD/F formation pathway was precursor formation on kiln dust KD1, while de novo synthesis dominated the formation mechanisms of PCDD/Fs in kiln dusts KD2 and KD3.

(3) Recycling kiln dust that is similar in properties to the KD1 kiln dust during the first stage of a cyclone preheater would not significantly increase the emission of PCDD/Fs. However, recycling kiln dust that exhibits similar properties to the KD2 and KD3 kiln dusts during the second stage of the operation of a cyclone preheater, during which higher temperatures are achieved, may be recommended to effectively destroy PCDD/Fs.

Conflicts of interest
There are no conflicts to declare.
Uterine artery embolization combined with percutaneous microwave ablation for the treatment of prolapsed uterine submucosal leiomyoma: A case report

BACKGROUND
Vaginal myomectomy is the most common form of radical treatment for prolapsed submucosal leiomyoma and is typically performed under general anesthesia. However, an alternative treatment approach is needed for patients who cannot tolerate general anesthesia. We describe the case of such a patient, who was successfully treated via a minimally invasive method under local anesthesia.

CASE SUMMARY
A 46-year-old female suffered from abnormal uterine bleeding, severe anemia, and a reduced quality of life attributed to a massive prolapsed submucosal leiomyoma. She could not tolerate general anesthesia due to a congenital thoracic malformation and cardiopulmonary insufficiency. A new individualized combined treatment, consisting of uterine artery embolization (UAE), percutaneous microwave ablation (PMWA) of the pedicle and the endometrium, and transvaginal removal of the leiomyoma by twisting, was performed. The lesion was completely and successfully removed under local anesthesia without any major complications. The postoperative follow-up showed complete symptom relief and a significant improvement in the quality of life.

CONCLUSION
UAE combined with PMWA can be performed under local anesthesia and is a promising alternative treatment for patients who cannot tolerate general anesthesia.

INTRODUCTION
Uterine leiomyomas are the most common benign pelvic tumor in reproductive-aged women, with an incidence between 25% and 80% in the literature [1,2]. Submucosal leiomyomas can cause uterine cavity deformation, which usually leads to abnormal uterine bleeding (AUB), even if the lesion is small. Without effective treatment, leiomyomas may eventually protrude through the cervical canal and prolapse into the vagina, in which case they are classified as FIGO type 0 [3]. For isolated prolapsed pedunculated submucosal leiomyomas, transvaginal myomectomy is the mainstream radical treatment, performed by twisting, ligation, or excision. For large lesions with a thick pedicle that cannot be removed by twisting alone, ligation or excision followed by hysteroscopic electrocoagulation under general or epidural anesthesia is usually indicated [4,5]. However, an alternative treatment is needed for patients with severe systemic disease who cannot tolerate hysteroscopic surgery or hysterectomy. Herein, we present a case of a large prolapsed pedunculated submucosal leiomyoma impacted in the cervical canal that was successfully treated with a new method, uterine artery embolization (UAE) combined with percutaneous microwave ablation (PMWA).

Chief complaints
A 46-year-old female complained of prolonged menses, heavy menstrual bleeding (HMB) and severe anemia for 2 years, which led to a severely reduced quality of life.

History of present illness
In July 2019, the patient presented with intermenstrual bleeding and was diagnosed with a 2 cm submucosal leiomyoma at a local hospital but received no treatment. Half a year later, the patient began to experience HMB (sanitary towel changed every 1-2 h), prolonged menses (20 d), and severe anemia (minimal serum hemoglobin level 4.2 g/dL). Blood transfusion and iron supplementation were required on the heaviest days. Unfortunately, her cardiopulmonary function precluded general anesthesia for hysteroscopic surgery. Therefore, the regimen was switched to medical therapy.
Between July 2020 and June 2021, the patient received intramuscular injections of goserelin acetate 36 mg every three months, but the efficacy was unsatisfactory. Her menstrual length was 10-15 d, the pictorial blood-loss assessment chart (PBAC) score was 810, and the secondary anemia had not been corrected. In July 2022, the patient sought help at our clinic. The symptom severity score (SSS) and health-related quality of life (HRQOL) score were 75 and 12.07, respectively, according to the uterine fibroid symptoms and quality of life questionnaire [6]. Considering the patient's strong willingness to undergo radical treatment, we proposed a new plan: (1) correct the anemia with pseudomenopausal therapy using combined oral contraceptive pills (COCs); and (2) determine a way to remove the prolapsed myoma under local anesthesia to permanently eliminate the source of the AUB. After taking COCs for 3 mo, the patient's serum hemoglobin (Hb) level increased to 8.9 g/dL. Then, she was admitted to our hospital for further treatment.

History of past illness
Her medical history mainly included congenital scoliosis and thoracic deformity, pulmonary insufficiency, and pulmonary heart disease. Her heart function had stabilized to New York Heart Association functional class I or II while on long-term cardiotonic (ivabradine hydrochloride tablets, 5 mg bid) and diuretic medication (spironolactone 20 mg bid and hydrochlorothiazide 25 mg bid).

Personal and family history
No family history of AUB or other tumors was identified.

Physical examination
Vaginal examination revealed a 6 cm, dark red mass prolapsed into the vagina without significant mobility. The patient refused bimanual examination.

Laboratory examinations
Laboratory tests revealed mild anemia with an Hb of 9.2 g/dL and an estradiol level below 18.35 U/L, consistent with the previous hormonal therapy. Pregnancy tests, vaginal bacteriology, cervical cytology, and tumor biomarkers were all negative. Studies for systemic coagulation disorders, von Willebrand disease, and thyroid dysfunction were also performed, but the results were unremarkable.

Imaging examinations
Chest X-ray showed that the patient's bilateral thorax was asymmetric, with severe scoliosis and increased, thickened bilateral lung markings (Figure 1). Electrocardiography revealed sinus tachycardia, and Doppler echocardiography showed mild pulmonary hypertension and mild regurgitation of the aortic, mitral, and tricuspid valves. Transabdominal ultrasound (TAUS) imaging revealed a mass prolapsing into the cervical canal (Figure 2A) with a large blood vessel embedded in the pedicle (Figure 2B). Contrast-enhanced ultrasound imaging (CEUS) showed that the two arteries in the pedicle were the main blood supply sources of the lesion (Figure 2C), one measuring 2.7 mm and the other 3 mm in diameter. Pelvic magnetic resonance imaging (MRI) demonstrated that the pedicle was attached to the posterior uterine inner wall (Figure 3A and B), and no evidence of malignancy was found on T2-weighted imaging or enhanced T1-weighted imaging.

MULTIDISCIPLINARY EXPERT CONSULTATION
After systematic evaluation of the patient, a case discussion was conducted by a multidisciplinary collaborative group, consisting of gynecologists, radiologists, and US interventionists, to formulate an optimal radical treatment protocol.
Then, a new combined sequential two-session treatment plan was developed. Session one, UAE, involved blocking the feeding arteries to reduce the risk of massive intraoperative intrauterine bleeding. Session two, PMWA, involved ablation of the pedicle, followed by removal of the lesion by twisting; the pedicle stump (to prevent intrauterine bleeding) and the endometrium (to eliminate potential concurrent endometrial hyperplasia, which might also contribute to heavy menstrual bleeding [7]) were to be ablated at the same time.

FINAL DIAGNOSIS
The diagnosis was clearly defined as a prolapsed pedicled submucosal myoma, as its imaging manifestations were very typical. This was proven by histopathological examination after lesion resection. The final diagnosis was uterine leiomyoma, as shown in the histopathological results (Figure 4).

Figure 1 Chest X-ray examination after admission. Chest X-ray reveals severe scoliosis, thoracic deformity, and thickened lung markings.

TREATMENT
After achieving consensus with the patient in terms of the therapeutic purposes and methods, the individualized therapy was implemented step by step. In session one, a standard UAE procedure was performed by a senior interventional radiologist. An angiographic imaging system (Siemens, Berlin, Germany) was used to perform pelvic digital subtraction angiography. Iopromide at 300 mg iodine/mL (Ultravist 240, Bayer Schering Pharma, Brussels, Belgium) was used to image the blood supply network, and 300-500 µm diameter, nonabsorbable polyvinyl alcohol (PVA) particles (Contour; Boston Scientific, Natick, Massachusetts, United States) were used to embolize the vascular network of the myoma. Before embolization, aortography revealed that the left uterine artery and two radial arteries downstream, which delivered nutrition to the pedicle in the early phase, were dilated (Figure 5A), while the bilateral uterine arteries supplied blood to the myoma in the late phase (Figure 5B). After the location of the opening of the uterine artery was identified with iodinated contrast media injection, sufficient amounts of PVA particles were slowly injected into the feeding artery through a 3F catheter. Post-embolization aortography confirmed that most of the radial arteries were blocked successfully (Figure 5C). The puncture site was locally pressurized with a pressure fixator, and the right lower extremity was immobilized for 6 h. The patient presented with fever, lower abdominal pain, and fatigue within 24 h, indicating postembolization syndrome. However, it was relieved after symptomatic treatment.

Two days after UAE, CEUS revealed significant lesion volume reduction (Figure 2D). Furthermore, significant perfusion was observed in part of the outer myometrium and the upper segment of the pedicle (Figure 2E), indicating collateral recanalization. Thus, session two of the treatment was scheduled on the same day and conducted smoothly under conscious sedation and analgesia. For preoperative analgesia, 40 µg dexmedetomidine (1 µg/kg) was diluted in normal saline to a concentration of 4 µg/mL and slowly pumped into a peripheral vein over the first 10 minutes; after that, injection of the drug was maintained at 0.2 µg/kg/h via a pump. For intraoperative analgesia, 30 mg of ketorolac tromethamine was injected as a slow bolus (> 15 s) via the peripheral vein.
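As a back-of-envelope check of the stated sedation protocol (illustrative only; it assumes the roughly 40 kg body weight implied by the 40 µg = 1 µg/kg loading dose):

```python
# Illustrative re-calculation of the dexmedetomidine infusion rates described
# above; assumes ~40 kg body weight, as implied by the 40 µg = 1 µg/kg dose.
weight_kg = 40
conc_ug_per_ml = 4.0                            # diluted to 4 µg/mL

loading_ug = 1.0 * weight_kg                    # 1 µg/kg -> 40 µg
loading_ml = loading_ug / conc_ug_per_ml        # 10 mL over the first 10 min
maint_ml_h = 0.2 * weight_kg / conc_ug_per_ml   # 0.2 µg/kg/h -> 2 mL/h

print(f"loading: {loading_ml:.0f} mL over 10 min (pump rate {loading_ml * 6:.0f} mL/h)")
print(f"maintenance: {maint_ml_h:.1f} mL/h")
```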
A monopolar water-cooled MWA system (MTI-5A; 2450 MHz; Great Wall Medical Equipment Co. Ltd., Nanjing, China) equipped with a 14-gauge, 18 cm-long monopolar MWA antenna (XR-A2018W; Great Wall Medical Equipment Co. Ltd.) with a 1 cm active tip was used to carry out the PMWA procedure. The output power was set at 50-60 W. After local infiltration anesthesia with 0.1 g lidocaine hydrochloride, the microwave antenna was inserted into the pedicle under real-time TAUS guidance (Figure 2F). The pedicle was then ablated from deep to shallow with the "moving-shot" technique until the entire pedicle was covered by a hyperechoic cloud. After intraoperative CEUS confirmed no enhancement throughout the pedicle, the myoma was clamped with oval forceps and removed by twisting. Finally, the pedicle stump and the endometrium of the upper and middle uterine cavity were ablated. The PMWA procedure was considered complete once B-mode US imaging showed that the whole uterine cavity was covered by a hyperechoic cloud (Figure 2G). CEUS was then performed again, and the results showed no signs of intrauterine bleeding (Figure 2H). Two days later, postoperative MRI revealed that the anatomy of the uterus had returned to normal (Figure 3C), and half of the outer myometrium had regained perfusion (Figure 3D). The postoperative course was uneventful, and the patient was discharged 3 days later. OUTCOME AND FOLLOW-UP: After the treatment, the patient achieved complete symptom relief. As expected, the patient developed amenorrhea between 2 and 5 mo after treatment, and her Hb increased to normal levels at 3 mo (Figure 6A). During this period, the patient had mild lower abdominal pain for a week, which was relieved after traditional Chinese medicine treatment. At the 6-month follow-up, the patient's weight had increased from 37 kg to 42 kg (Figure 6B), the menstrual length had decreased to 6 days (Figure 6C), the PBAC score had decreased from 810 to 38 (Figure 6D), the SSS score had decreased from 75 to 0 (Figure 6E), and the HRQOL score had increased from 12.07 to 92.24 (Figure 6F). No major complications were recorded.
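The follow-up metrics above are easier to compare when tabulated with their relative changes. A minimal sketch: the values are copied from the outcome section, and the direction-of-improvement labels are our own annotation.

```python
# Tabulate the 6-month follow-up metrics reported above and compute the
# relative change for each. Values are taken directly from the case text.
metrics = {
    # name: (baseline, 6-month value, direction of improvement)
    "PBAC score":        (810.0, 38.0,  "lower"),
    "SSS score":         (75.0,  0.0,   "lower"),
    "HRQOL score":       (12.07, 92.24, "higher"),
    "Body weight (kg)":  (37.0,  42.0,  "higher"),
    "Menses length (d)": (15.0,  6.0,   "lower"),  # upper end of the 10-15 d baseline
}

for name, (before, after, better) in metrics.items():
    change_pct = (after - before) / before * 100
    print(f"{name:18s} {before:7.2f} -> {after:7.2f} "
          f"({change_pct:+7.1f}%, {better} is better)")
```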
DISCUSSION: Most international guidelines agree that hysteroscopic myomectomy should be used as the first-line treatment for symptomatic submucosal leiomyomas [8,9]. As surgeons accumulate skill and experience in this field, the clinical indications for hysteroscopic myomectomy are gradually expanding to almost all submucosal leiomyomas [8,10,11], with success rates of 95% reported in the literature [12]. For women with a submucosal leiomyoma who have completed childbearing, endometrial ablation can be combined with hysteroscopic myomectomy for increased efficacy [7,13]. However, for the special case described here, the history of systemic disease limited the spectrum of available treatment options, and traditional radical treatment by hysteroscopic myomectomy was considered unsuitable. As the efficacy of conservative drug treatment was not satisfactory, an alternative treatment was needed. Several alternative, minimally invasive treatments have been developed for uterine leiomyomas in the past 20 years, including transcatheter UAE, MR- or ultrasound (US)-guided high-intensity focused ultrasound (HIFU), US-guided MWA, and radiofrequency ablation (RFA) [14][15][16]. With the exception of HIFU, which was not feasible in our case due to depth limitations, these methods were all candidates for further treatment [17,18]. Their mechanisms are similar in that all of them destroy the blood supply network of the lesion, directly or indirectly producing coagulative necrosis and tumor volume reduction over the months following treatment. They are all promising methods for alleviating the symptoms of AUB, but they naturally carry certain risks of complications [15]. For UAE, posttreatment complications mainly include pain, postembolization syndrome, pelvic infection, amenorrhea, and occasional embolization of the ovarian artery [19]. Minor complications after US-guided in situ thermal ablation (MWA or RFA) are similar and include pain, fever, pelvic infection, and vaginal discharge of necrotic tissue [20,21]. However, our patient had a strong desire to have the leiomyoma removed during a single hospitalization, which could not be achieved by any of the above technologies alone. Our solution was to leverage each technique and devise a new hybrid method mimicking the standard procedure for transcervical myoma removal. In this new hybrid method, real-time US imaging was used to guide and monitor the surgery instead of hysteroscopy, and UAE followed by PMWA was adopted to devascularize and dissect the pedicle, achieve intrauterine hemostasis, and perform endometrial ablation. To our knowledge, this method has not yet been reported. When direct hysteroscopic guidance is not available, real-time US imaging becomes the best choice for guided treatment. Contrast-enhanced MRI and CEUS both play an important role in preoperative evaluation and in assessing the local response after nonsurgical interventional treatment of benign uterine diseases [22]. In this case, CEUS assessment before treatment allowed us to preliminarily characterize the blood supply of the lesion, which provided a basis for assessing the bleeding risk and formulating the radical treatment plan. On the second day after UAE, we observed partial vascular recanalization in the pedicle on CEUS examination, which indirectly confirmed the opening of some collateral branches of the uterine arteries and provided a basis for determining the optimal time for subsequent PMWA treatment. (Figure 6 caption, panels B-F: B: the patient gained weight gradually after ablation; C: menstrual length was 6 d per cycle at 6 mo after treatment; D: the patient had transient amenorrhea from 3 to 5 mo after treatment and returned to normal menses 6 mo after treatment, with a PBAC score < 100; E: the SSS score decreased from 75 to 0 after treatment, indicating complete relief of symptoms; F: the HRQOL score increased from 12.07 to 92.24 at 6 mo after treatment, indicating a large improvement in quality of life. Hb: Hemoglobin; PBAC: Pictorial blood-loss assessment chart; SSS: Symptom severity scale; HRQOL: Health-related quality of life.) Finally, during the PMWA session, CEUS was used instead of hysteroscopy to detect potential intrauterine hemorrhage and to evaluate the local response following thermal coagulation of the pedicle stump and the endometrium. This could inspire future PMWA treatments for patients with AUB caused by leiomyoma or adenomyosis. The reasons we used PMWA to assist in dissecting the thick pedicle in this case were as follows. Electrosurgical energy has been widely used to stop bleeding by inducing thermal coagulation and, at high power, to cut tissue and seal vasculature [23][24][25]. With different output power settings and working durations of electrosurgical instruments, protein denaturation, tissue necrosis, and even explosive vaporization of cells can be induced. Therefore, energy-based surgical devices can be used to cut tissue, achieve intraoperative hemostasis, and directly seal vasculature. US-guided PMWA at 60 W output power can quickly raise the tissue temperature within the electromagnetic field to 60-100°C, which is sufficient to induce tissue necrosis and small-vessel occlusion [20,21,26]. Therefore, if the microwave antenna is held for a long enough duration perpendicular to the long axis of the pedicle, it can also be used to cut tissue.
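The coagulate-versus-cut distinction above ultimately depends on temperature-time history. One standard way to quantify this, common in the ablation literature though not used in this case report, is the cumulative-equivalent-minutes-at-43°C (CEM43) thermal-dose model; the sketch below applies it with the usual textbook constants (R = 0.5 above 43°C, R = 0.25 below) and a commonly quoted soft-tissue necrosis threshold of roughly 240 CEM43 minutes, both assumptions rather than values from the source.

```python
# CEM43 thermal dose: cumulative equivalent minutes at 43 C (Sapareto &
# Dewey model). Shown only to illustrate why sustained 60-100 C tissue
# temperatures coagulate tissue almost immediately. Illustrative; not
# taken from the source case report.

def cem43(temp_c: float, minutes: float) -> float:
    """Equivalent minutes at 43 C for `minutes` spent at `temp_c`."""
    r = 0.5 if temp_c >= 43.0 else 0.25
    return minutes * r ** (43.0 - temp_c)

# Rough necrosis threshold often quoted for soft tissue: ~240 CEM43 min.
THRESHOLD = 240.0
for t in (45.0, 50.0, 60.0):
    one_second = cem43(t, 1.0 / 60.0)
    print(f"{t:5.1f} C for 1 s -> {one_second:10.2f} CEM43 min "
          f"({'above' if one_second > THRESHOLD else 'below'} ~{THRESHOLD:.0f})")
```

Running this shows that one second at 60°C already exceeds the threshold, while one second at 45-50°C falls far short, consistent with the 60-100°C coagulation claim above.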
Li et al [27] reported that RFA at 80-90 W output power effectively blocked the feeding arteries of liver tumors, with a success rate of 100% for arteries ≤ 3 mm in diameter. However, there is no clinical evidence that US-guided in situ thermal ablation can be used to block the feeding artery of a prolapsed submucosal leiomyoma, and further study is needed. There was therefore a potential risk of massive intraoperative bleeding if the PMWA technique alone had been used to block the feeding artery. As UAE has unique advantages in achieving hemostasis, it is often combined with surgery to treat large submucosal myomas with a high risk of intraoperative bleeding [28]. UAE is also recommended by several guidelines as an alternative treatment for symptomatic uterine leiomyomas, including submucosal myomas [19,29,30]. Therefore, UAE was performed preoperatively to create safe conditions for the ultimate radical treatment. The planned sequential treatments were carried out successfully, and the patient was eventually cured. CONCLUSION: This case demonstrates a new combined minimally invasive treatment for large prolapsed submucosal leiomyomas with a thick pedicle that can be performed under local anesthesia. This new method has potential as an alternative treatment for patients who cannot tolerate general anesthesia.
2023-04-27T15:11:31.787Z
2023-05-06T00:00:00.000
{ "year": 2023, "sha1": "c8a26a2bc4a576063958bef4603d4dbbcff78db8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12998/wjcc.v11.i13.3052", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2cdb0223548691265f3bad17d20799970b555959", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
118134422
pes2o/s2orc
v3-fos-license
Knot Topology of QCD Vacuum. We show that one can express the knot equation of Skyrme theory completely in terms of the vacuum potential of SU(2) QCD, in such a way that the equation is viewed as a generalized Lorentz gauge condition which selects one vacuum from each class of topologically equivalent vacua. From this we show that there are three ways to describe the QCD vacuum (and thus the knot): by a non-linear sigma field, by a complex vector field, or by an Abelian gauge potential. This tells us that the QCD vacuum can be classified by an Abelian gauge potential with an Abelian Chern-Simons index. Non-Abelian gauge theory is well known to have a non-trivial topology. In particular, it has infinitely many topologically distinct vacua which can be connected by vacuum tunneling through instantons [1,2]. The existence of topologically distinct vacua and the vacuum tunneling has played a very important role in quantum chromodynamics (QCD) [3,4]. In a totally independent development, the Skyrme theory has been shown to admit a topologically stable knot which can be interpreted as a twisted magnetic vortex ring made of a helical baby skyrmion [5,6,7]. Very interestingly, this knot has been shown to describe the topologically distinct QCD vacua [8,9]. This is puzzling because the knot is a physical object which carries a non-vanishing energy, so it appears strange that the knot can be related to a QCD vacuum. On the other hand, this is understandable, since the Skyrme theory is closely related to QCD, and both the knot and the QCD vacuum are described by the same topology π₃(S³). Under this circumstance one needs to know in exactly what sense the QCD vacuum can be identified with the knot. Since there exists one knot solution for each topological quantum number (up to the trivial space-time translation and the global SU(2) rotation), one might suspect that the knot equation could be viewed as a gauge condition for the topologically equivalent vacua. In fact, it has been suggested that the knot equation can be viewed as a non-local gauge condition which describes the maximal Abelian gauge in SU(2) QCD [9]. The purpose of this Letter is to show that the knot equation is nothing but a generalized Lorentz gauge condition which selects one representative vacuum from each class of topologically equivalent QCD vacua. This allows us to interpret the knot as a complex vector field which couples to an Abelian gauge field, and the knot equation as an Abelian gauge condition for the complex vector field. We first obtain a most general expression of the vacuum, and write the knot equation completely in terms of the vacuum potential. With this we prove that the knot equation is nothing but a generalized Lorentz gauge condition of the QCD vacuum. From this we show that the knot equation can be viewed as an Abelian gauge condition for a complex vector field. Moreover, we show that this complex vector field is uniquely determined by the Abelian gauge potential. This allows a new interpretation of the knot, as a complex vector field or as an Abelian gauge potential. As importantly, it tells us that one can classify the topologically different QCD vacua by an Abelian Chern-Simons index. The best way to describe the QCD vacuum is to introduce a local orthonormal frame in the non-Abelian group space and obtain a potential which parallelizes the local orthonormal frame. Consider SU(2) QCD and let n̂_i (i = 1, 2, 3) be a right-handed local orthonormal frame.
A vacuum potential must be one which parallelizes the local orthonormal frame. Imposing this condition on the gauge potential A⃗_μ, we obtain the most general vacuum potential (2), where n̂ is n̂₃ and C_μ is C³_μ. One can easily check that Ω̂_μ describes a vacuum, which tells us that both Ω̂_μ and (C¹_μ, C²_μ, C³_μ) describe a QCD vacuum. Obviously they are gauge equivalent. Notice that the vacuum is essentially fixed by n̂, because n̂₁ and n̂₂ are uniquely determined by n̂ up to a U(1) gauge transformation which leaves n̂ invariant. A nice feature of (2) is that the topological character of the vacuum is naturally inscribed in it. The topology of the SU(2) QCD vacuum has been described by the non-trivial mapping π₃(S³) from the (compactified) three-dimensional space S³ to the group space S³. But n̂ can also describe the vacuum topology, because it defines the mapping π₃(S²), which can be transformed to π₃(S³) through the Hopf fibering [2]. So one can naturally classify the vacuum topology by n̂, which is manifest in (2). To be explicit, one may choose n̂ (and thus n̂₁ and n̂₂) so that the corresponding C_μ follows. Of course, they are uniquely determined up to the U(1) gauge transformation which leaves n̂ invariant. Notice that, when n̂ becomes the unit radial vector r̂, C_μ describes the well-known Dirac monopole potential. But when n̂ is smooth everywhere, it describes a vacuum. The vacuum (2) is obtained by the three conditions given by (1). Suppose we impose only one condition, the one for i = 3. This singles out the restricted potential Â_μ, which defines the restricted gauge theory [10,11], where A_μ = n̂ · A⃗_μ is the chromoelectric potential. This tells us that the two extra conditions (for i = 1, 2) in (1) uniquely determine A_μ to be C_μ. Indeed, with this choice, (8) becomes (2), which tells us that the restricted QCD has exactly the same multiple vacua. Furthermore, in the absence of (7), one can express the most general SU(2) gauge potential A⃗_μ by [10,11] A⃗_μ = Â_μ + X⃗_μ, (n̂ · X⃗_μ = 0), (10) where X⃗_μ is a gauge-covariant vector field. This is because under the infinitesimal gauge transformation (11) one has the corresponding transformations of Â_μ and X⃗_μ, where α⃗ is the infinitesimal gauge parameter. This means that one can interpret QCD as a restricted gauge theory which has a gauge-covariant valence gluon X⃗_μ as the colored source [10,11]. The importance of the decomposition is that it is gauge independent. Once n̂ is given, the decomposition follows automatically, independent of the choice of gauge. The above analysis shows that Â_μ by itself describes an SU(2) connection which enjoys the full non-Abelian gauge degrees of freedom. More importantly, it has a dual structure [10,11]: this tells us that C_μ in (2) is nothing but the chromomagnetic potential of the field strength H_μν (since H_μν forms a closed two-form, it admits a potential). Moreover, A_μ and C_μ transform equally but oppositely under the gauge transformation. In particular, the Abelian gauge group which leaves n̂ invariant acts on both A_μ and C_μ. This shows that the restricted QCD has a manifest electric-magnetic duality [10,11]. Now, let us review the knot in Skyrme theory [5,7]. Let ω and n̂ (with n̂² = 1) be the Skyrme field and the non-linear sigma field, and define the combination (14). With this one can write the Skyrme Lagrangian as (15), where σ = cos(ω/2). The Lagrangian has an obvious global SU(2) symmetry, but it also has a (hidden) U(1) gauge symmetry [7]. This is because the invariant subgroup of n̂ can be viewed as a U(1) gauge group. Notice that Ĉ_μ is nothing but the magnetic part of the restricted potential (8). This provides the crucial link between QCD and Skyrme theory. From this link one can argue that the Skyrme theory is a theory of the monopole which describes the chromomagnetic dynamics of QCD [7].
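For reference, the quantities invoked above are conventionally written as follows in the Cho decomposition literature [10,11]; signs and the placement of the coupling constant g vary between conventions, so the forms below are a sketch under the standard conventions rather than a verbatim restatement of equations (1), (2), and (8).

```latex
% Standard Cho-decomposition expressions (conventions may differ from the source):
% parallelization condition, restricted potential, vacuum potential, dual field strength.
\begin{align}
  D_\mu \hat{n}_i &= \partial_\mu \hat{n}_i + g\,\vec{A}_\mu \times \hat{n}_i = 0, \\
  \hat{A}_\mu &= A_\mu\,\hat{n} - \frac{1}{g}\,\hat{n} \times \partial_\mu \hat{n},
      \qquad A_\mu = \hat{n} \cdot \vec{A}_\mu, \\
  \hat{\Omega}_\mu &= -\frac{1}{g}\left(\hat{n}_1 \cdot \partial_\mu \hat{n}_2\right)\hat{n}
      - \frac{1}{g}\,\hat{n} \times \partial_\mu \hat{n}, \\
  \hat{F}_{\mu\nu} &= \left(F_{\mu\nu} + H_{\mu\nu}\right)\hat{n}, \qquad
  H_{\mu\nu} = -\frac{1}{g}\,\hat{n} \cdot \left(\partial_\mu \hat{n} \times \partial_\nu \hat{n}\right)
      = \partial_\mu C_\nu - \partial_\nu C_\mu .
\end{align}
```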
It is well known that σ = 0 is a classical solution of (15), independent of n̂. When σ = 0, the Skyrme Lagrangian reduces to (16), whose equation of motion is (17). It is this equation that allows the monopole, the baby skyrmion, and the knot in Skyrme theory [7,8]. With (4), the knot equation is written as (18), where (19). But this can be expressed neatly by the vacuum potential Cⁱ_μ. To see this, notice that the knot equation (17) can be understood as a conservation equation (20) of an SU(2) current j⃗_μ. The origin of this conserved current, of course, is the global SU(2) symmetry of the Skyrme Lagrangian (15). From (19) one can express the knot equation by (21). With this, (21) can be put into a single complex equation (22). This tells us that the knot equation (17) can be expressed completely in terms of the QCD vacuum potential, as an Abelian gauge condition for the complex vector field ω_μ. In this form the U(1) gauge symmetry of the knot equation (and of the Skyrme theory) becomes manifest [7]. Now let us go back to QCD, and consider the constraint equation (23) for the vacuum, where now D̄_μ is the covariant derivative defined by the vacuum potential. This is equivalent to (24), which tells us that equation (23) not only describes the knot, but also fixes the U(1) gauge degree of freedom of the knot equation. This proves that the knot equation can be interpreted as a generalized Lorentz gauge condition of the QCD vacuum which selects one vacuum from each topologically equivalent class of vacua. The knot equation (22) contains both ω_μ and C_μ, but they are not independent. To see this, notice that the vacuum condition (3) relates them through (25) and (26), so ω_μ and C_μ are determined by each other. This tells us that ω_μ alone can describe the knot. Equivalently, the knot can also be described by an Abelian gauge potential C_μ. So we have three different ways to describe the knot, and thus the QCD vacuum: by n̂, by ω_μ, or by C_μ. With (25) we have (27), so that we can simplify (22) to (28). This tells us that the knot equation can be expressed as a covariant Lorentz gauge condition on the complex vector field ω_μ. The knot quantum number is given by the Abelian Chern-Simons index of the magnetic potential C_μ [6,7], which describes the non-trivial topology π₃(S²) defined by n̂. The preimage of the mapping from the compactified space S³ to the target space S² defined by n̂ forms a closed circle, and any two preimages of the mapping are linked together when π₃(S²) is non-trivial. This linking number is given by the Chern-Simons index. Obviously, exactly the same description applies to the QCD vacuum. In particular, this means that the QCD vacuum can also be classified by an Abelian gauge potential with the Abelian Chern-Simons index [2,8]. Conversely, with (25) we can transform the knot quantum number (28) to a non-Abelian form (29), which proves that the knot quantum number can also be expressed by a non-Abelian Chern-Simons index. More significantly, this tells us that the Abelian Chern-Simons index is actually identical to the non-Abelian Chern-Simons index. They have been thought to be two different things, but our analysis shows that they are one and the same, transformable into each other through the vacuum condition (25).
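For concreteness, the Abelian Chern-Simons (Hopf) index of C_μ invoked above is conventionally normalized as follows; this is the standard form from the knot literature, offered as a sketch since normalization conventions differ between papers and the source's own equation (28) is not shown.

```latex
% Hopf / Abelian Chern-Simons index of the magnetic potential C_mu
% (standard normalization; the source's own conventions may differ).
\begin{equation}
  Q \;=\; \frac{1}{32\pi^2} \int \epsilon_{ijk}\, C_i H_{jk}\, d^3x,
  \qquad H_{ij} \;=\; \partial_i C_j - \partial_j C_i ,
\end{equation}
% Q counts the linking number of the preimages of the map
% n-hat : S^3 -> S^2.
```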
The fact that the knot can be described by an Abelian gauge potential C_μ raises a totally unexpected and very interesting possibility: under proper circumstances, one could create the knot in condensed matter. Indeed, it has been conjectured that a superconducting knot could exist in an ordinary superconductor [12]. It has long been assumed that this is impossible, because Abelian gauge theory was thought to be too trivial to allow the knot topology [13]. Our analysis shows that this is not true: there exists a well-defined knot topology in Abelian gauge theory. Our analysis could have important applications in QCD. For example, the decomposition (10) plays an important role in the discussion of Abelian dominance and the confinement of color in QCD [14,15]. Moreover, it plays a crucial role in studying the geometry of the principal fiber bundle, in particular the Deligne cohomology of non-Abelian gauge theory [16,17]. Further interesting applications of our analysis to QCD will be published elsewhere [18]. ACKNOWLEDGEMENT: The author thanks Professor C. N. Yang for illuminating discussions, and G. Sterman for his kind hospitality during the author's visit to the Institute for Theoretical Physics. This work is supported in part by the ABRL Program (Grant R14-2003-012-01002-0) of the Korea Science and Engineering Foundation, and by the BK21 project of the Ministry of Education.
2019-04-14T02:52:56.962Z
2004-09-24T00:00:00.000
{ "year": 2004, "sha1": "73f738a90233a16da2c69addc5192a957292d318", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0409246", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "633dcc511677241e46b98814ac21e7ed179cc6b4", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
7685959
pes2o/s2orc
v3-fos-license
Loss of a unique tumor antigen by cytotoxic T lymphocyte immunoselection from a 3-methylcholanthrene-induced mouse sarcoma reveals secondary unique and shared antigens. Most chemically induced tumors of mice express unique antigens that can be recognized by cytotoxic T lymphocytes (CTL) and thereby mediate tumor rejection. The number of different antigens expressed by a single tumor and their interplay during immunization and rejection are largely unexplored. We used CTL clones specific to individual tumor antigens to examine the number and distribution of CTL antigens expressed by cell lines derived from 3-methylcholanthrene-induced sarcomas of (C57BL/6J × SPRET/Ei)F1 mice. Each tumor cell line expressed one or more antigens that were unique, that is, not detected on cell lines from independent sarcomas. Immunoselection against an immunodominant antigen produced both major histocompatibility complex class I antigen and unique tumor antigen loss variants. Immunization of mice with antigen-negative immunoselected variants resulted in CTL that recognized additional antigens that were also expressed by the progenitor tumor. Some CTL recognized additional unique tumor antigen(s); other CTL recognized a shared antigen expressed not only by the immunizing cell line, but also by independent sarcoma cell lines and untransformed myoblastoid cell lines. CTL that recognized the shared antigen were also recovered from mice immunized in vivo with an untransformed myoblastoid cell line. These findings support a model of immunodominance among chemically induced tumor antigens in which shared antigens are masked by unique immunodominant antigens. Chemically induced, transplantable tumors of mice have been used extensively to investigate immunologically mediated tumor rejection (1), and understanding the cellular mechanisms and molecular antigens responsible for tumor rejection in such mouse models may enable the development of more effective immunotherapies. Historically, studies have used in vivo tumor rejection assays to indirectly investigate antigens expressed by 3-methylcholanthrene (MCA)-induced tumors. (Abbreviations used in this paper: CML, cell-mediated lympholysis; MCA, 3-methylcholanthrene; MLTC, mixed lymphocyte tumor culture.) These studies demonstrated that immunization with a chemically induced tumor usually protects against in vivo challenge by the same tumor, but only rarely or sporadically protects against challenge by independent syngeneic tumors (2)(3)(4). These observations led to the suggestion that the antigens responsible for mediating antitumor immunity to MCA-induced tumors are unique to individual tumors. More recently, CD8+ CTL have been shown to be necessary for immunity to transplantable sarcomas (5,6), and the antigens recognized by tumor-specific CTL have come under investigation. Short-term CTL lines from immunized mice have been used to demonstrate that the CTL antigens of MCA-induced sarcomas consist of MHC class I-bound peptides on the cell surface (7). Each peptide antigen is expressed only by a single sarcoma, correlating with the pattern of in vivo protection. The unique CTL antigens expressed by MCA-induced mouse tumors contrast with the CTL antigens of human melanomas, which are lineage-specific (8,9) or "activation" (10) antigens expressed by multiple tumors.
The contribution of single antigens to tumor rejection has been investigated in mouse models using in vitro immunoselection to generate CTL-resistant tumor variants that lack expression of the selected antigen. For some highly immunogenic tumors, including spontaneously regressing, UV-induced sarcomas and SV-40 large T antigen-induced sarcomas, multiple epitopes independently mediate rejection (11)(12)(13)(14). For other UV-induced and some chemically induced tumors, selected loss of an individual antigen produces variants that display a more malignant phenotype than their antigen-expressing progenitor tumors (15)(16)(17), indicating that the selected antigen is the principal target of tumor rejection. These studies emphasize the need to assess the spectrum of antigens that are available as targets, and immunoselection provides a powerful tool for exposing secondary epitopes to evaluation. The use of monoclonal CTL to study antigenic diversity of MCA-induced tumors has been limited. In this report, we use CTL lines and clones to investigate the antigens expressed by cell lines derived from MCA-induced sarcomas of highly heterozygous (C57BL/6J × SPRET/Ei)F1, termed (B6 × SPE)F1, mice. The antigenic complexity of one tumor cell line was investigated by selecting CTL-resistant variants in vitro and characterizing the immune response to the variants. CTL with new specificities were derived, identifying previously undetected antigens. Some variant-reactive CTL lines defined one or more uniquely expressed antigen(s); other CTL lines defined a shared antigen whose expression could not be inferred from the CTL response against primary cell lines. Materials and Methods. Tumor Cell Lines. The tumors and their derivative cell lines examined in this study have been previously described in detail (18). Briefly, cell lines were derived from MCA-induced sarcomas generated in male and female mice from an F1 cross between C57BL/6J (B6) and SPRET/Ei (SPE). All tumor lines were diagnosed histologically as poorly differentiated sarcomas or rhabdomyosarcomas that grew progressively and could be transplanted into syngeneic hosts. Cell lines derived from tumors were grown in 100-mm tissue culture-treated petri plates (Corning Glass Works, Corning, NY) in DMEM-based Vc medium (18) supplemented with 5% FCS (Hyclone Laboratories, Inc., Logan, UT). Cell lines were passaged weekly by preparing a single-cell suspension using trypsin-EDTA (Sigma Chemical Co., St. Louis, MO) and vigorous pipetting, and reseeding fresh plates. For immunization and cell-mediated lympholysis (CML) assays, cells were harvested from petri plates and washed twice in PBS before use. The male-derived tumor cell lines used in this study were bs2 and its clonal derivative, bs2.1; bs4 and its clone, bs4.1; and bs15 and its clone, bs15.1; the female-derived tumor cell lines were bs9 and its clone, bs9.1. The NK-sensitive Yac-1 cell line was passaged weekly in Vc5 medium. Untransformed Myoblastoid Cell Lines. Untransformed myoblasts were derived from (B6 × SPE)F1 neonates. Muscle tissue was carefully dissected from skin, bone, and fat, minced into fine pieces, and rocked for 1 h at 37°C in 10 ml of HBSS (GIBCO BRL, Gaithersburg, MD) containing 1 mg/ml collagenase and 2.5 U/ml hyaluronidase (Sigma). Large pieces of tissue were allowed to settle out; suspended cells were then washed twice in PBS (GIBCO BRL) and plated in Vc10 medium in a 100-mm tissue culture-treated cell culture dish (Corning).
Myoblastoid cells were passaged weekly into fresh Vc10 (same formulation as Vc5, except with 10% FCS) at a 1:10 dilution. (B6 × SPE)F1 mice (same sex as the immunizing tumor) were immunized by intraperitoneal injection of 3-5 × 10⁶ irradiated (1,000 Gy) tumor cells admixed with 150 µg heat-killed Corynebacterium parvum (19; culture kindly provided by Dr. C. Cummins, Virginia Polytechnic Institute and State University, Blacksburg, VA). Immunized mice were boosted at weekly intervals two or three times with 3 × 10⁶ irradiated tumor cells without C. parvum. 10 d to 2 wk after the final boost, animals were euthanized by carbon dioxide asphyxiation, spleens were removed, and mononuclear splenocyte preparations were obtained. Mice for these experiments were bred and housed at The Jackson Laboratory Research Animal Facility following protocols approved by the institutional Animal Care and Use Committee and conforming to the American Association for Accreditation of Laboratory Animal Care (AAALAC) standards. The (B6 × SPE)F1 CTL lines used in this study (and their cognate tumor lines, used for immunization and restimulation) were BxS/2 (bs2), BxS/4 (bs4), BxS. CML Assay. 4-6-h CML assays were performed as previously described (23). ⁵¹Cr released into the supernatant was determined, and specific lysis was calculated using the following ratio: specific lysis = (experimental − spontaneous) / (maximum − spontaneous). Data are reported as the mean of three wells. SDs (generally <5%) are omitted for clarity of presentation. The ability of various antibodies to inhibit target lysis was determined by the addition of 1/20 vol antibody ascites to microplate wells. In some experiments, tumor target cells were grown in 5 U/ml IFN-γ (a generous gift from Dr. van der Meide, Biomedical Primate Research Center, Rijswijk, The Netherlands) for 48 h before assay to augment surface expression of MHC class I molecules. T cell lymphoblast target cells were cultured from splenocytes in Vc5 supplemented with 25 U/ml IL-2 and 2 µg/ml Con A (Sigma). These cells were used as CML targets 2-5 d later. Results. (B6 × SPE)F1 Tumors Express Unique Tumor Antigens. Tumor cell lines were used to immunize (B6 × SPE)F1 mice, and CTL lines were derived from immunized splenocytes by in vitro restimulation (Fig. 1). Lysis was inhibited by anti-CD8 mAb and anti-H2Kᵇ mAb, but not by anti-CD4 mAb or anti-H2Dᵇ mAb (not shown). These data indicate that tumor cell line bs15.1 expresses a tumor-specific, Kᵇ-restricted CTL antigen. In contrast, the CTL line BxS/A exhibited a broad expression pattern: it lysed cell lines bs15.1, 15V.1, and 15A1, but not Yac-1 cells or untransformed Con A-stimulated T cell blasts. Additionally, CTL BxS/A lysed the independently derived tumor clones bs2.1, bs4.1, and bs9.1, as well as untransformed syngeneic myoblasts, indicating that BxS/A recognized an antigen expressed in common among multiple independent sarcoma cell lines. A CTL clone, BxS/A.11, was derived from the BxS/A CTL line and demonstrated identical target specificity (Fig. 3). The failure of BxS/A to lyse Con A-stimulated splenic T cell blasts, LPS-stimulated splenic B cell blasts (not shown), or Yac-1 cells indicated that the antigen is not expressed as an artifact of in vitro cell growth, and is not expressed by cells of the lymphoid lineage. Moreover, trypsin-EDTA treatment of splenic T cell blasts did not sensitize them to lysis by BxS/A.11, while cell cultures freshly prepared ex vivo from progressing bs15.1 tumors retained sensitivity (not shown), indicating that cell preparation or culture conditions are unlikely to account for the observed cross-reactive antigen.
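The specific-lysis ratio quoted in the CML Assay section above is straightforward to compute; the sketch below does so for one illustrative effector:target condition. The CPM values are placeholders, not data from the paper.

```python
# Percent specific lysis from a 51Cr-release (CML) assay, as defined in
# the Methods: (experimental - spontaneous) / (maximum - spontaneous).
# The CPM values below are illustrative placeholders, not data from the paper.

def specific_lysis(experimental_cpm, spontaneous_cpm, maximum_cpm):
    """Return percent specific lysis for one effector:target condition."""
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

spontaneous = 250.0   # target cells in medium only (spontaneous release)
maximum = 2250.0      # detergent-lysed targets (maximum release)

# Triplicate experimental wells for one effector:target ratio; the paper
# reports the mean of three wells.
wells = [1450.0, 1520.0, 1480.0]
mean_cpm = sum(wells) / len(wells)

print(f"mean specific lysis: {specific_lysis(mean_cpm, spontaneous, maximum):.1f}%")
```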
Shared Antigens Are Weak Elicitors of CTL. To test the relative efficacy of different antigens in eliciting CTL, additional MLTC were derived and tested for lytic activity. Six unimmunized (B6 × SPE)F1 mice were examined. MLTC derived from all unimmunized mice did not proliferate well upon restimulation, and they failed to demonstrate lytic activity against any target tested (e.g., 8/A, Fig. 4). The inability to derive tumor-reactive MLTC from naive (unimmunized) mice indicates the requirement for in vivo priming. Four additional mice were immunized, and their splenocytes were restimulated in vitro using tumor cell line bs15.1. All four MLTC specifically lysed the cognate tumor, bs15.1, but failed to lyse the tumor antigen-loss variant, 15A1 (not shown), indicating that the CTL populations in these additional MLTC recognize the same unique tumor antigen as CTL BxS/15.4. The H2Kᵇ-restricted bs15.1 antigen recognized by all these CTL is therefore immunodominant, in that it is highly effective at eliciting CTL under the culture conditions used. Three additional MLTC were established from splenocytes of mice immunized with tumor cell line 15A1 and restimulated with 15A1 cells (Fig. 4). MLTC 2/A and 3/A demonstrated potent specific lysis of bs15.1 cells and variants 15V.1 and 15A1, but not of tumor cell line bs4.1 or untransformed myoblasts. This pattern of reactivity is identical to that exhibited by the CTL line BxS/V, and indicates recognition of a secondary tumor-specific antigen(s). MLTC 1/A demonstrated relatively high lysis of Yac-1 targets, indicating NK-like or LAK-like nonspecific cytotoxicity, but little additional sarcoma-specific lytic activity. MLTC 2/A and 3/A also initially demonstrated low levels of lysis of sarcoma bs4.1 and untransformed myoblastoid lines (replicate experiments, not shown). This lytic activity could indicate a minor population of CTL that recognized a shared antigen with the same specificity as the BxS/A line. Immunization with Untransformed Myoblasts Primes against a Shared Tumor Antigen. Four mice were immunized with untransformed myoblasts. Splenocytes from these mice were split, and one half was restimulated using untransformed myoblasts while the other half was restimulated using tumor cell line 15A1. All MLTC derived by restimulating in vitro with myoblasts failed to lyse any target tested. However, three of the four MLTC from myoblast-immunized splenocytes that were restimulated with tumor cell line 15A1 demonstrated target cell lysis. All three MLTC exhibited lysis of bs15.1, bs4.1, and untransformed myoblasts, but not of Yac-1 cells or splenic T cell blasts. One example of a pair of MLTC derived from splenocytes of a single mouse, but restimulated using two different antigen sources, is shown in Fig. 5. The antigen specificity exhibited by the myoblast-immunized, tumor-restimulated MLTC recapitulates the target specificity exhibited by CTL BxS/A. Discussion. Despite the widespread use of MCA tumors for investigating immunological tumor rejection (1,17), the use of monoclonal CTL lines to probe tumor antigen expression has been limited.
The results presented here demonstrate that CTL lines and clones derived from (B6 × SPE)F1 mice immunized and restimulated with syngeneic MCA-induced sarcoma cell lines exhibit specific lysis of their cognate (immunizing) tumor cell lines. Lysis of independent tumor cell lines was not observed, indicating that the primary CTL antigens expressed by MCA-induced sarcomas are uniquely expressed by individual tumors. These results recapitulate and extend the extensively replicated observation that chemically induced and UV-induced sarcomas are immunogenic, but fail to elicit cross-protective immunity (7,24). The genetic mechanisms by which tumor cells escape CTL lysis can be studied by applying an in vitro CTL immunoselection approach. We observed that selective pressure on the highly heterozygous tumor cell line bs15.1 resulted in both MHC class I antigen loss variants and unique tumor antigen loss variants. It is interesting that these variant phenotypes reflect mutational alterations also observed in human and mouse tumors in vivo (25)(26)(27), suggesting that CTL selective pressure can shape tumor progression in patients. Because (B6 × SPE)F1 tumor cell lines carry abundant polymorphism between B6 and SPE alleles throughout the genome, they should expedite genetic analysis of mutations accompanying antigen loss after immunoselection and provide a powerful tool for understanding genetic mechanisms of tumor progression. To investigate the antigenic complexity of tumor cell line bs15.1, variants lacking expression of the immunodominant antigen were exploited for detecting additional CTL antigens. Two qualitatively different, secondary antigens of tumor bs15.1 elicit CTL. A secondary tumor-specific antigen is recognized by CTL BxS/V. Expression of multiple tumor-specific antigens by a single tumor has been previously reported, including UV-induced sarcomas with highly immunogenic "regressor" phenotypes (11,17). Expression of multiple tumor-specific antigens by MCA-induced tumors is therefore not unexpected, although it has not been extensively reported. More surprising is the observation that a shared antigen recognized by CTL BxS/A.11 is expressed by independent MCA-induced sarcomas as well as by untransformed myoblastoid cell lines. Shared antigens were unexpected under these experimental circumstances because no cross-reactive lytic activity was generated in bulk MLTC against the progenitor bs15.1 cell line, and shared antigens of UV-induced and MCA-induced tumors are not routinely observed using in vitro approaches or in vivo cross-protection assays. The three CTL antigens expressed by tumor cell line bs15.1 exhibit a gradation of immunodominance under the experimental conditions used to generate MLTC. The primary tumor-specific antigen monopolized the CTL response when it was expressed by the immunizing/restimulating tumor. Most MLTC generated by using primary antigen-loss variants 15A1 and 15V.1 recognized the tumor-specific secondary antigen, although one MLTC recognized the shared antigen. Finally, although the shared antigen appears to be the weakest of the three CTL-defined antigens, under appropriate conditions the shared antigen-specific CTL activity was reproducibly generated. The failure to observe persistent lytic activity aimed at more than a single antigen suggests that these activities may be mutually exclusive.
Thus, these results support the hypothesis that immunodominant, tumor-specific antigens of MCA-induced tumors not only provoke a strong, tumor-specific CTL response, but may also suppress or mask responses against weaker antigens, including both secondary tumor-specific antigens and shared antigens. The coexpression of immunodominant tumor-specific antigens and secondary, shared antigens may explain sporadic reports of in vivo cross-protection in immunized animals. While most studies investigating in vivo rejection indicate that cross-protection between MCA sarcomas is rare (3,4,28), sporadic cases of cross-protection have been reported: Basombrio (4) demonstrated replicable cross-protection with one combination out of 14 MCA-induced tumors, and Prehn and Main (2) showed significant cross-protection in two of the four tumor combinations they tested. This rare in vivo cross-protection may reflect sporadic priming against a shared sarcoma antigen in the context of frequent priming against unique antigens. Further studies are needed to characterize the nature of the shared antigen detected on (B6 × SPE)F1 sarcomas and to evaluate the efficacy of shared antigen-specific CTL for tumor rejection in vivo. The use of untransformed myoblastoid cell lines to prime a CTL response against shared antigens offers a new tool for dissecting the potential role of shared antigens in tumor rejection.
2014-10-01T00:00:00.000Z
1996-08-01T00:00:00.000
{ "year": 1996, "sha1": "570dfb9498d7c511db8a52c4e1f2c41cce33bc35", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/184/2/441.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "570dfb9498d7c511db8a52c4e1f2c41cce33bc35", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }